I don't know of anywhere to point you, but I am very interested in collaborating on building this. We've been doing a lot of retooling on our digital library site, so I'm really eager to talk usage and best practices.
Editorial Services, Manager & Digital Librarian | PBS Education
O: 703-739-5485 | [log in to unmask]
From: Code for Libraries [[log in to unmask]] on behalf of Andrew L Hickner [[log in to unmask]]
Sent: Thursday, January 03, 2019 11:41 AM
To: [log in to unmask]
Subject: Re: [CODE4LIB] Usability and A/B test results clearinghouse
I have thought about this for years. I would be very much interested in collaborating on such an effort.
Andy Hickner, MSI
Health Sciences Librarian
Seton Hall University | Interprofessional Health Sciences Campus
[log in to unmask] | 1-973-542-6973
From: Code for Libraries <[log in to unmask]> On Behalf Of Gomez, Joshua
Sent: Thursday, January 3, 2019 11:32 AM
To: [log in to unmask]
Subject: [CODE4LIB] Usability and A/B test results clearinghouse
I am wondering if there exists some kind of clearinghouse of data from usability tests and A/B tests on digital libraries and archives. Or, if such a thing does not exist, if members from this community would be interested in building one with me.
I’m sure many results have been published in papers in various journals or blog posts. But what I had in mind was an accumulation of many such results into a central place, so that it would be possible to quickly look up and answer questions like “which facets/filters are used most or least?” or “which layouts of complex objects result in more images/bitstreams being viewed/streamed?” and so on. The general goal is to build up an evidence-based set of design patterns for digital library interfaces.
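To make the idea concrete, here is one hypothetical sketch of what a shared result record in such a clearinghouse might look like. Every field name here is an assumption on my part, not an existing schema; the point is just that results from different institutions would need a common minimal shape to be queryable.

```python
from dataclasses import dataclass

@dataclass
class TestResult:
    institution: str   # who ran the test, e.g. "Example University Library"
    component: str     # interface element tested, e.g. "subject facet"
    metric: str        # what was measured, e.g. "facet clicks per session"
    variant_a: str     # description of the control variant
    variant_b: str     # description of the treatment variant
    sample_size: int   # number of sessions or participants
    winner: str        # "a", "b", or "none" (no significant difference)

# A made-up example record, for illustration only:
r = TestResult("Example University Library", "author facet",
               "facet clicks per session",
               "alphabetical facet list", "type-ahead search box",
               1200, "b")
```

With even this minimal structure, records from many institutions could be filtered by component and metric to see whether results are consistent.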
I already have strong opinions about some of these questions, but I would like data to back them up before acting on them. For instance, I think the consistent use of author and subject fields in faceted search is an antipattern. Any field with more than a few dozen possible terms seems unusable (to me) in faceted search. I think it would be much better to use type-ahead search for data in these fields and use facets/filters only on simpler fields like date, language, or resource type. But these are just opinions and I would like some proof.
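For anyone unfamiliar with the alternative being proposed, the type-ahead idea can be sketched in a few lines. This is only an illustration of the technique, not anyone's production implementation (a real system would query a search index rather than an in-memory list, and the author names below are invented):

```python
from bisect import bisect_left

def typeahead(terms, prefix, limit=10):
    """Return up to `limit` terms that start with `prefix`, case-insensitively.

    Uses binary search on a sorted copy of the terms, so matches are
    found without scanning the whole vocabulary.
    """
    sorted_terms = sorted(t.lower() for t in terms)
    prefix = prefix.lower()
    i = bisect_left(sorted_terms, prefix)  # first term >= prefix
    out = []
    while (i < len(sorted_terms)
           and sorted_terms[i].startswith(prefix)
           and len(out) < limit):
        out.append(sorted_terms[i])
        i += 1
    return out

# Example with a toy author vocabulary:
authors = ["Austen, Jane", "Asimov, Isaac", "Atwood, Margaret", "Baldwin, James"]
print(typeahead(authors, "at"))  # -> ['atwood, margaret']
```

The contrast with a facet list is that the user never has to scan dozens (or thousands) of terms; they narrow the vocabulary as they type.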
I could run my own tests locally, and I intend to, but I would feel more confident if I saw consistent results from multiple institutions. And I don’t think I need to convince anyone subscribing to this list about the merits of working collaboratively and sharing knowledge.
So if you know of something like this, please point me to it. Or if you are interested in putting something like this together, please get in touch.
Head of Software Development & Library Systems | UCLA Library | [log in to unmask]