I am wondering if there exists some kind of clearinghouse of data from usability tests and A/B tests on digital libraries and archives. Or, if such a thing does not exist, whether members of this community would be interested in building one with me.
I’m sure many results have been published in papers in various journals or blog posts. But what I had in mind was an accumulation of many such results in a central place, so that it would be possible to quickly look up answers to questions like “which facets/filters are used most or least?” or “which layouts of complex objects result in more images/bitstreams being viewed/streamed?” and so on. The general goal is to build up an evidence-based set of design patterns for digital library interfaces.
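To make this concrete, here is a rough sketch (in Python) of what a single shared record might look like. Every field name here is just my first guess for the sake of illustration, not a proposed standard:

from dataclasses import asdict, dataclass
import json

# One result from one test at one institution. Field names are
# placeholders for illustration, not a proposed schema.
@dataclass
class TestResult:
    institution: str   # who ran the test
    platform: str      # discovery/repository software under test
    question: str      # what the test tried to answer
    method: str        # "usability test", "A/B test", etc.
    sample_size: int   # participants or sessions
    finding: str       # short plain-language summary of the result
    source_url: str    # link to the full write-up, if published

record = TestResult(
    institution="Example University Library",
    platform="example-discovery-layer",
    question="Which facets are used most or least?",
    method="A/B test",
    sample_size=100,
    finding="Plain-language summary of the observed result goes here.",
    source_url="https://example.org/report",
)
print(json.dumps(asdict(record), indent=2))

With enough of these collected in one place, the lookup questions above become simple filters and aggregations.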
I already have strong opinions about some of these questions, but I would like data to back them up before acting on them. For instance, I think the common practice of using author and subject fields as facets is an antipattern. Any field with more than a few dozen possible terms seems unusable (to me) in faceted search. I think it would be much better to offer type-ahead search on those fields and reserve facets/filters for simpler fields like date, language, or resource type. But these are just opinions, and I would like some evidence.
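To illustrate the distinction I'm drawing, here is a rough Solr sketch (Solr only because it's a common backend for our discovery layers; the same split applies elsewhere). Facets stay on the low-cardinality fields, and a suggester serves type-ahead for author. The core name, field names, and suggester name are all invented, and it assumes a suggester has been configured in solrconfig.xml:

import requests

SOLR = "http://localhost:8983/solr/mycore"  # hypothetical core name

# Facet only on fields with a small, scannable set of values.
facet_params = {
    "q": "*:*",
    "rows": 0,
    "facet": "true",
    "facet.field": ["language", "resource_type"],  # assumed field names
    "facet.limit": 20,
}
facets = requests.get(f"{SOLR}/select", params=facet_params).json()

# For a high-cardinality field like author, query a suggester instead.
# Assumes a suggester named "authorSuggester" in solrconfig.xml.
suggest_params = {
    "suggest": "true",
    "suggest.dictionary": "authorSuggester",
    "suggest.q": "gom",  # what the user has typed so far
}
suggestions = requests.get(f"{SOLR}/suggest", params=suggest_params).json()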
I could run my own tests locally, and I intend to, but I would feel more confident if I saw consistent results from multiple institutions. And I don’t think I need to convince anyone subscribing to this list about the merits of working collaboratively and sharing knowledge.
So if you know of something like this, please point me to it. Or if you are interested in putting something like this together, please get in touch.
Joshua Gomez
Head of Software Development & Library Systems
UCLA Library
[log in to unmask]