Hi Josh,
I see this as a complicated project for a number of reasons, but I also think it would be valuable to aggregate our collective UX research, and I'm definitely interested in exploring it.
If we get a critical mass of interested folks at the Code4Lib national conference this year, perhaps we can start to discuss details/goals/challenges in person at one of the Breakout Sessions (https://wiki.code4lib.org/Code4Lib2019_Breakout_Sessions)?
Shaun Ellis
Digital Collections User Interface Developer
Princeton University Library
On 1/3/19, 12:41 PM, "Code for Libraries on behalf of Jeanine E Finn" <[log in to unmask] on behalf of [log in to unmask]> wrote:
I would be interested in collaborating in some way as well. I’d especially like to see documentation for qualitative usability approaches (focus groups, interviews, etc.), since I think those tend to get short shrift in some of the existing material.
Jeanine
-------------------------------------------
Jeanine Finn
Data Science and Digital Scholarship Coordinator
The Claremont Colleges Library
800 North Dartmouth Ave. | Claremont, CA 91711
(909) 607-7958 | [log in to unmask]
Pronouns: she/her/hers
> On Jan 3, 2019, at 8:41 AM, Andrew L Hickner <[log in to unmask]> wrote:
>
> Dear Joshua,
>
> I have thought about this for years. I would be very much interested in collaborating on such an effort.
>
>
>
> Andy Hickner, MSI
> Health Sciences Librarian
> Seton Hall University | Interprofessional Health Sciences Campus
> [log in to unmask] | 1-973-542-6973
> http://library.shu.edu/ihs
>
> -----Original Message-----
> From: Code for Libraries <[log in to unmask]> On Behalf Of Gomez, Joshua
> Sent: Thursday, January 3, 2019 11:32 AM
> To: [log in to unmask]
> Subject: [CODE4LIB] Usability and A/B test results clearinghouse
>
> I am wondering if there exists some kind of clearinghouse of data from usability tests and A/B tests on digital libraries and archives. Or, if such a thing does not exist, whether members of this community would be interested in building one with me.
>
> I’m sure many results have been published in papers in various journals or blog posts. But what I had in mind was an accumulation of many such results into a central place, so that it would be possible to quickly look up and answer questions like “which facets/filters are used most or least?” or “which layouts of complex objects result in more images/bitstreams being viewed/streamed?” and so on. The general goal is to build up an evidence-based set of design patterns for digital library interfaces.
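>
> To make that concrete, here is a rough Python sketch of the kind of record I imagine contributors submitting. Every field name and vocabulary choice below is hypothetical, not a proposed standard:
>
>     # A minimal sketch of what one shared test result might look like.
>     # Field names and controlled values are placeholders for discussion.
>     from dataclasses import dataclass, field
>     from typing import Optional
>
>     @dataclass
>     class TestResult:
>         institution: str                    # e.g. "UCLA Library"
>         platform: str                       # e.g. "Samvera", "DSpace", "custom"
>         test_type: str                      # "usability" or "a/b"
>         question: str                       # e.g. "Which facets are used most?"
>         ui_element: str                     # e.g. "subject facet", "type-ahead search"
>         metric: str                         # e.g. "click-through rate"
>         value: float                        # observed value of the metric
>         sample_size: int                    # participants or sessions
>         source_url: Optional[str] = None    # link to the full write-up, if published
>         tags: list[str] = field(default_factory=list)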
>
> I already have strong opinions about some of these questions, but I would like data to back them up before acting on them. For instance, I think the consistent use of author and subject fields in faceted search is an antipattern. Any field with more than a few dozen possible terms seems unusable (to me) in faceted search. I think it would be much better to use type-ahead search for data in these fields and use facets/filters only on simpler fields like date, language, or resource type. But these are just opinions and I would like some proof.
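>
> As a back-of-the-envelope illustration of that rule of thumb (the cutoff is an arbitrary assumption of mine, which is exactly the kind of number I would want shared data to confirm or refute):
>
>     # A crude version of the heuristic above. The limit of a few dozen
>     # distinct terms is my own assumption, not an established guideline.
>     FACET_CARDINALITY_LIMIT = 36
>
>     def suggest_widget(field_name: str, distinct_terms: int) -> str:
>         """Suggest a search UI treatment based on a field's number of distinct values."""
>         if distinct_terms <= FACET_CARDINALITY_LIMIT:
>             return f"{field_name}: facet/filter"
>         return f"{field_name}: type-ahead search"
>
>     # A language field with a dozen values stays a facet; an author field
>     # with tens of thousands of values gets type-ahead instead.
>     print(suggest_widget("language", 12))      # language: facet/filter
>     print(suggest_widget("author", 40_000))    # author: type-ahead search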
>
> I could run my own tests locally, and I intend to, but I would feel more confident if I saw consistent results from multiple institutions. And I don’t think I need to convince anyone subscribing to this list about the merits of working collaboratively and sharing knowledge.
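>
> If several institutions ran comparable tests and shared their raw counts, even a crude pooling would be more convincing than any single local run. A minimal sketch of what I mean, assuming each site reports simple click/session counts:
>
>     # Naive pooling of A/B click-through counts shared by several institutions.
>     # A real analysis would model per-site differences; this only shows how raw
>     # counts, rather than summary percentages, could be combined.
>     def pooled_rate(results: list[tuple[int, int]]) -> float:
>         """results holds (clicks, sessions) pairs, one per contributing institution."""
>         clicks = sum(c for c, _ in results)
>         sessions = sum(s for _, s in results)
>         return clicks / sessions if sessions else 0.0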
>
> So if you know of something like this, please point me to it. Or if you are interested in putting something like this together, please get in touch.
>
> Joshua Gomez
> Head of Software Development & Library Systems
> UCLA Library
> [log in to unmask]
>