Numerous comments from today's posts.

As to Jonathan on complexity, resources and "we've got it working": indeed you have, and it is a cool-looking UI with real functionality behind it. I did not mean to imply that you had not got a working system, or that anyone else could not do it (see Dan's comments on his available JavaScript). The important word in what I wrote was "scale". Many of the content providers, ILS vendors, and even enterprise software producers who have come to use Muse as a third-party middleware platform did so after they had started on this type of work. (I include federated search, known-item enrichment, record harvesting, record conversion and the like as the same type of work here.) They got it written, got it running, and started expanding. Then they came to see us once the number of Sources reached about two dozen. At that point they found that maintenance (Karen's concern) was starting to consume whole numbers of FTE programmers, and it was becoming costly. The projections to hundreds of Sources were scary financially, yet there was (and is) demand for the capability. So they came to us to take advantage of economies of scale, where we can build and fix...and fix...and fix... just once for all our partners. That way it works. It also works on a small scale with well-known, well-defined Sources. (More horror stories can wait for the pub.)

Integration with ILSs (our system is integrated with two, two more were integrated but have since gone their own way, and we are developing with a fifth, so we have some experience): generally this is technically challenging because the ILS catalogue is display software built to handle results from a single source - the search engine of the catalogue. Integrating other results into the "data stream" is simply not possible without development, so you have to go about it some other way. There are three broad possibilities:
 
	The first possibility is to layer the extra functionality on top of the OPAC. This layer then becomes the OPAC, and a lot of work has to be done to replicate the OPAC functions correctly down to the underlying ILS. And some of the ILS vendors have a hissy fit about this - just saying.

	The second possibility is to do what Dan has done and make the extra functionality a display-level action: create a browser-based client which does what you want in terms of aggregating records (a minimal sketch of this approach follows the list). Again, our experience has been that this does not make ILS vendors feel all warm and cuddly, but there is not a lot they can do about it - they do not own the users' browsers. A version of this approach is what Umlaut does (as I understand it - which could be very wrong :-( ), where the additional functionality is server-based but is an adjunct to the main OPAC.

	The third possibility is to go right to the backend and put the integration between the ILS and the search engine for the OPAC. The OPAC then talks to the federator, which queries the OPAC database and any other Sources, presenting all the results as one stream, as if from the ILS database.
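
To make the second possibility concrete, here is a minimal browser-side sketch in the spirit of Dan's approach. It is not his code: the data-isbn attribute is a placeholder for however your OPAC exposes identifiers, and the response shape follows my reading of the Read API docs at http://openlibrary.org/dev/docs/api/read, so verify both before relying on it.

// Decorate catalogue records with Open Library links, entirely browser-side.
function enhanceWithOpenLibrary() {
  var rows = document.querySelectorAll('[data-isbn]');
  Array.prototype.forEach.call(rows, function (row) {
    var isbn = row.getAttribute('data-isbn');
    var url = 'https://openlibrary.org/api/volumes/brief/json/isbn:' +
        encodeURIComponent(isbn);
    fetch(url)
        .then(function (resp) { return resp.json(); })
        .then(function (data) {
          // The brief JSON response keys each result by its request string;
          // "items" lists the readable copies, each with an itemURL.
          var result = data['isbn:' + isbn];
          if (!result || !result.items || !result.items.length) return;
          var link = document.createElement('a');
          link.href = result.items[0].itemURL;
          link.textContent = 'Read online at Open Library';
          row.appendChild(link);
        })
        .catch(function () { /* no OL data: leave the record untouched */ });
  });
}
document.addEventListener('DOMContentLoaded', enhanceWithOpenLibrary);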

Surprisingly (maybe - it was to me a long while ago), the easiest of these to do - with technical help - is the third. It also seems to be the one which gives the ILS vendors the fewest qualms. (Some caveats there - see the reply to Karen's concerns below.) With most ILSs running as client-server architectures (as far as their DB/search engine are concerned), there is a natural break point to make use of. But this is not just a bit of universally applicable JavaScript - it is unique to each ILS, and only makes sense in a dedicated installation with the technical resources to implement and maintain it, or in a situation like ours, where we can use that one integration to add access to thousands of extra sources, consequently fitting all (so far) user requirements.
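
For illustration only, here is the shape of that third approach. Every name in it (searchLocal, searchRemoteSource, the record shape) is hypothetical: as I said, the real break point is unique to each ILS.

// Stand-in for the ILS's own search engine.
async function searchLocal(query) {
  return [{ source: 'ILS', title: 'Local hit for ' + query }];
}

// Stand-in for one external Source (Open Library, a Z39.50 target, etc.).
async function searchRemoteSource(source, query) {
  return [{ source: source, title: source + ' hit for ' + query }];
}

// The federator: fan the query out, then hand back one merged stream,
// as if every record had come from the ILS database.
async function federatedSearch(query, remoteSources) {
  var searches = [searchLocal(query)].concat(
      remoteSources.map(function (source) {
        // A Source that fails contributes nothing rather than breaking the search.
        return searchRemoteSource(source, query).catch(function () { return []; });
      }));
  var resultSets = await Promise.all(searches);
  return [].concat.apply([], resultSets);
}

// Usage: federatedSearch('moby dick', ['OpenLibrary']).then(console.log);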

Karen's point about approaching vendors, and their concern about the stability of Source(s). (Well, a lot of comment from others as well, but she started it.) This is a real concern and, as I said above, one of the reasons why vendors work with us: we can guarantee a stable API for the vendor's ILS whatever the vagaries of the actual Source, and I think that would be vital. The ILS vendors are not interested in crafting lots of API and parsing code and fixing it continuously. So OL would have to guarantee a stable API which met minimum functionality requirements, and keep it running for at least a dozen years. We are still running the first API we produced some ten years ago, as there are deployed systems out there which use it, and they are not going to be replaced any time soon. The users lose out on new functionality (a lot of it!), but cannot or will not pay for the upgraded ILS. A subsidiary advantage of this "stable third party supplier" scenario is that Karen's last query ("graceful degradation?") becomes our problem, not the ILS's. We have to handle that and notify the ILS, and it just passes on (or quite often doesn't) the message to the user that there is no data from Source "X". And then we have to fix it! A sketch of what that degradation can look like from the browser side follows.
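
A minimal sketch, assuming the display-level approach above: give the remote API a short deadline and treat silence as "no data". The 3-second deadline and the ISBN are illustrative choices, nothing more.

// Graceful degradation: put a deadline on the Open Library call, so a slow
// or dead API never blocks or breaks the catalogue page.
function fetchWithDeadline(url, ms) {
  var controller = new AbortController();
  var timer = setTimeout(function () { controller.abort(); }, ms);
  return fetch(url, { signal: controller.signal })
      .finally(function () { clearTimeout(timer); });
}

var isbn = '0596156715'; // example identifier
fetchWithDeadline('https://openlibrary.org/api/volumes/brief/json/isbn:' + isbn, 3000)
    .then(function (resp) { return resp.json(); })
    .then(function (data) { console.log('Open Library answered:', data); })
    .catch(function () {
      // Timeout, network failure or bad JSON: show nothing, or at most a
      // note that Source "X" returned no data - never a broken page.
    });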

This is long and probably boring to many (apologies), but the actual functionality is, I believe, exactly where the web is going. It is, after all, the real-time use of linked data and all those relations across the web and between catalogues (even if they are implicit). Like Jonathan, I see this as being as exciting now as when we first started on it in '97, but we still have to evangelise about its benefits. Only when you can show users, in their own data, exactly what it can do for them do they become excited.

Peter

P.S. Karen, if you want to, contact me off-list to talk about talking to ILS vendors.

> -----Original Message-----
> From: Code for Libraries [mailto:[log in to unmask]] On Behalf Of Karen Coyle
> Sent: Thursday, June 16, 2011 9:29 AM
> To: [log in to unmask]
> Subject: Re: [CODE4LIB] JHU integration of PD works
> 
> Quoting Jonathan Rochkind <[log in to unmask]>:
> 
> > I think the vast majority of libraries can make javascript-only
> > changes to their OPAC interfaces, which is all Dan's approach
> > requires. Even III libraries do that.
> 
> So maybe what is needed is a very clear, step-by-step set of
> instructions on how to do this? And, of course, how to un-do it.
> 
> Question: what happens when the script fails, e.g. when the OL API
> does not respond? How graceful is that failure?
> 
> kc
> 
> >
> > IF they have any programming staff at all with a bit of time and are
> > willing to do such hacks. That might be an 'if' that's not met. But
> > it's not a huge technical challenge.
> >
> > So that leaves the answers being:
> > a) get libraries to change their attitudes and realize that being a
> > library in the 21st century means having a bit of programming staff,
> > just like being a library in the 20th meant having a bit of
> > cataloging staff.
> >
> > OR
> >
> > b) Get vendors to include an IA/OL integration feature out of the
> > box. (Which only meets IA/OL and not the next thing you're going to
> > want to integrate too, of course).
> >
> >
> > Which of these is harder/less-likely to happen, left as a judgement
> > to the reader.
> >
> > If I were a vendor, I too would have that reluctance Karen mentions,
> > to rely on an external service that may not be stable (both in sense
> > of uptime and in sense of the API not changing without warning in
> > the future), from a third-party service I have no agreement with.
> > Perhaps if IA would sign service level contracts with vendors (with
> > or without payment from the vendor), that would make things
> > smoother. Where they promise not to change their API without X
> > amount of notice, and/or commit to certain uptime.  Not sure that's
> > really feasible for IA though.
> >
> > Jonathan
> >
> > On 6/16/2011 11:44 AM, Karen Coyle wrote:
> >> Yes, I know about this, and I think this is great ... for Evergreen
> >> users. My concern is how we get it out there to the majority of
> >> libraries who aren't on an OS platform and/or cannot make changes
> >> to their UI. As I think your post demonstrates, what we need is to
> >> get through to the system vendors and get them to implement this
> >> kind of linking. I intend to chat up vendors in the exhibits at ALA
> >> to find out what this means to them. I suspect they are reluctant
> >> to rely on a system or feature that may not be stable or persistent
> >> (a reasonable reluctance when you have thousands of installations),
> >> so then the question becomes: how can this be made to work?
> >>
> >> kc
> >>
> >> Quoting Dan Scott <[log in to unmask]>:
> >>
> >>> (Apologies in advance if this looks like crap, I hate trying to
> >>> reply in context in GroupWise)
> >>>
> >>> On Wed, Jun 15, 2011 at 10:55 AM, Karen Coyle <[log in to unmask]> wrote:
> >>>> Quoting Eric Hellman <[log in to unmask]>:
> >>>>
> >>>>
> >>>>> What are the reasons that this sort of integration is not more
> >>>>> widespread? Are they technical or institutional? What can be done by
> >>>>> producers of open access content to make this work better and
> >>>>> easier? Are "unified" approaches being touted by vendors delivering
> >>>>> something really different?
> >>>>
> >>>> I've been struggling with this around the Open Library digital texts:
> >>>> how can we make them available to libraries through their catalogs?
> >>>
> >>> You're aware of the recent addition of the OpenLibrary Read API,
> >>> which is meant to simplify exactly this problem, right?
> >>>
> >>> The official announcement was at
> >>> http://blog.openlibrary.org/2011/06/03/announcing-a-new-read-api/
> >>> ; http://ur1.ca/4g5bd describes how I integrated it into Evergreen
> >>> with a few hours' effort (mostly helping to debug the new
> >>> service); the official documentation is at
> >>> http://openlibrary.org/dev/docs/api/read and I augment those docs
> >>> in the latter half of the presentation I gave last week (available
> >>> in plain text, html, and epub formats at
> >>> http://bzr.coffeecode.net/2011_olita/ ).
> >>>
> >>>> When I look at the install documentation for Umlaut [1] (I was actually
> >>>> hoping to find a "technical requirements" list), it's obvious that it
> >>>> takes developer chops. We're not going to find that in a small,
> >>>> medium, or often even a large public library. It seems to me that this
> >>>> kind of feature will not be widely available until it is included in
> >>>> ILS software, since that's what most libraries have.
> >>>
> >>> The OpenLibrary digital editions enhancement approach I took in
> >>> Evergreen was about 100 lines of JavaScript (around here:
> >>> http://ur1.ca/4g5cm ), most of which could probably be cloned
> >>> (under the GPL v2 or later) to any other library system from which
> >>> you can scrape ISBNs or other identifiers (LCCN, OCLC, or
> >>> OpenLibrary IDs).
> >>>
> >>> Note that the Evergreen-OpenLibrary integration hasn't been merged
> >>> yet, but the branch is there and will hopefully make its way into
> >>> core Evergreen soon.
> >>>
> >>
> >>
> >>
> >
> 
> 
> 
> --
> Karen Coyle
> [log in to unmask] http://kcoyle.net
> ph: 1-510-540-7596
> m: 1-510-435-8234
> skype: kcoylenet