Hi Cindy,
Sure, and thank you for the compliment! [And, thanks Terry for the pointer to our report the other day, as well.]
It is homegrown. Regarding sharing, I'm currently in the process of switching that app (and several related projects) from local subversion to the UNT Libraries' GitHub space, but they're not there yet. Personally, I'm also in the process of making the (long overdue) switch from svn to git, so there's a little bit of a mental shift there on my part. My goal is to have everything moved over by mid-March, in time for the Innovative Users Group conference--I'll be presenting about our local Catalog API project, which the bento box uses for some of its results, and I'd like to have that whole set of apps available for folks to look at, if possible. Though, it is only a few weeks away, so I may only manage to get the Catalog API up by then--we'll see.
The bento box consists of two components: a backend API and a front-end app.
1. The backend API is implemented in Python with Django, using Django REST Framework. It provides a simple interface for the front-end app to query, and it does the job of communicating with the bento box search targets and returning the data needed for display as JSON. New search targets can be added pretty easily by extending a base class and overriding methods that define how to query the target and how to translate results into the output format. Different targets can return different fields, and you can use whatever fields are available in views and templates in the front-end app.
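For anyone curious what that base-class pattern looks like, here's a minimal sketch. To be clear, the class and method names below are made up for illustration--they're not our actual code, just the general shape of the approach:

```python
class SearchTarget:
    """Hypothetical base class: subclasses define how to query a
    target and how to normalize its results for the front-end."""

    name = None  # label shown in the bento box

    def build_query(self, user_query):
        """Translate the user's query into target-specific parameters."""
        raise NotImplementedError

    def fetch(self, query_params):
        """Send the query to the target and return the raw response."""
        raise NotImplementedError

    def parse_results(self, raw_response):
        """Translate the raw response into the common output format."""
        raise NotImplementedError

    def search(self, user_query):
        raw = self.fetch(self.build_query(user_query))
        return {"target": self.name, "results": self.parse_results(raw)}


class GuidesTarget(SearchTarget):
    """Illustrative subclass for a Solr index of LibGuides."""
    name = "Guides"

    def build_query(self, user_query):
        return {"q": user_query, "rows": 5, "wt": "json"}

    def fetch(self, query_params):
        # In real code this would be an HTTP request to Solr;
        # stubbed with a canned response so the sketch is self-contained.
        return {"response": {"docs": [
            {"title": "Citing Sources", "url": "http://example.org/guide"}
        ]}}

    def parse_results(self, raw_response):
        return [{"title": d["title"], "url": d["url"]}
                for d in raw_response["response"]["docs"]]
```

The nice part is that the front-end doesn't care which target produced a result set--every target hands back the same envelope, with whatever fields that target happens to supply.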
2. The front-end is a JS app that uses Backbone.js, RequireJS, and Bootstrap, skinned with our website template. It also ties into Google Analytics, with lots of custom events to record exactly which results people click on; how often "best bets" (from the Summon API) show up, for which queries, and how often they're clicked; how often each target returns no results and for which queries; and fun things like that.
Search targets include:
* "Articles" retrieves results from Summon via their API.
* "Books and More" scrapes our III Web catalog (ouch). That's why that search tends to perform a little slowly compared to the others.
* "Librarians" hits a Solr instance where we've indexed our LibGuides and staff directory data, in an attempt to serve up a relevant librarian for a given query.
* "Journals" and "Databases" both hit our homegrown Catalog API.
* "Website" hits our Google Custom Search that services the Library website search.
* "Guides" hits our local Solr index of LibGuides.
* "Digital Collections" hits the Solr index for our digital library.
* "Background Materials" is another Summon API search, limited to reference materials.
The reason we're scraping our catalog for Books and More instead of pulling results from our Catalog API is that the results the bento box displays need to mirror what the catalog displays, and attempting to replicate III's relevance ranking ourselves wasn't something we wanted to do. Soon we'll be looking at possibly implementing a Blacklight layer on top of the same Solr index our Catalog API uses, at which point we'd switch Books and More so it pulls results from the API instead of scraping the III catalog.
I hope that gives you a good idea, and I'm happy to answer any additional questions on or off list! Thanks for asking.
Jason Thomale
Resource Discovery Systems Librarian
User Interfaces Unit, UNT Libraries
> -----Original Message-----
> From: Code for Libraries [mailto:[log in to unmask]] On Behalf Of
> Harper, Cynthia
> Sent: Tuesday, February 16, 2016 11:01 AM
> To: [log in to unmask]
> Subject: Re: [CODE4LIB] article discovery platforms -- post-
> implementation assessment?
>
> Jason Thomale - can you tell us about your bento-box application? Is it
> homegrown? Is it shareable? I like it a lot.
>
> Cindy Harper
> Virginia Theological Seminary
>
> -----Original Message-----
> From: Code for Libraries [mailto:[log in to unmask]] On Behalf Of
> Terry Reese
> Sent: Thursday, February 11, 2016 1:10 PM
> To: [log in to unmask]
> Subject: Re: [CODE4LIB] article discovery platforms -- post-
> implementation assessment?
>
> I'm not sure if this was exactly what you are looking for -- but a talk
> derived from this report was given at C4L last year.
> http://digital.library.unt.edu/ark:/67531/metadc499075/
>
> --tr
>
> -----Original Message-----
> From: Code for Libraries [mailto:[log in to unmask]] On Behalf Of
> Tom Cramer
> Sent: Thursday, February 11, 2016 12:55 PM
> To: [log in to unmask]
> Subject: [CODE4LIB] article discovery platforms -- post-implementation
> assessment?
>
> I’ve seen many reviews of article discovery platforms (Ebsco Discovery
> Service, Ex Libris Primo Central, Serials Solutions Summon) before an
> implementation as part of a selection process—typically covering things
> like content coverage, API features, integrability with other content /
> sites. I have not seen any assessments done after an implementation.
>
> - what has usage of the article search been like?
> - what is the patron satisfaction with the service?
> - has anyone gone from blended results to bento box, or bento box to
> blended, based on feedback?
> - has anyone switched from one platform to another?
> - knowing what you know now, would you do anything different?
>
> I’m particularly interested in the experiences of libraries who use
> their own front ends (like Blacklight or VUFind), and hit the discovery
> platform via an API.
>
> Does anyone have a report or local experience they can share? On list or
> directly?
>
> It would be great to find some shoulders to stand on here. Thanks!
>
> - Tom