BTW, one other solution comes to mind: III supports an XML format request
from the OPAC. It has to be enabled in the wwwoptions, but you *should*
be able to request bibs/items via:

<catalog_url>/xrecord=b100200
<catalog_url>/xrecord=i102345

e.g.
http://tripod.brynmawr.edu/xrecord=b1005614

I believe it's III's own XML (as opposed to MARCXML), but it might be
preferable (and easier). It's essentially a poor man's REST interface.
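
For example, a quick sketch of pulling one of those records down
(untested, and assuming the wwwoption is turned on; Python 3 purely for
illustration):

    import urllib.request

    # Fetch III's XML view of a bib record (the record number is just
    # the example above; substitute your own).
    url = "http://tripod.brynmawr.edu/xrecord=b1005614"
    with urllib.request.urlopen(url) as resp:
        xml = resp.read().decode("utf-8", errors="replace")

    # The element names are III's own flavor of XML, not MARCXML, so
    # inspect the output before writing any real parsing code.
    print(xml[:500])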

- adam

On Tue, May 01, 2007 at 06:32:02PM -0400, Godmar Back wrote:
> If I may, I'd like to follow up on an earlier discussion [relevant
> parts are included below] regarding how to extract holdings
> information from III or other catalogs.
>
> I have one thing to offer and one thing to request. I'll start with the
> offering: MAJAX. MAJAX is a JavaScript library that screenscrapes III
> catalogs and can include the results so obtained in any document
> served from the same domain. The URL of the current code is
> http://libx.org/majax/majax.html; a demo is at
> http://libx.org/majax/majaxtest4.html.
>
> After an initial, somewhat clumsy approach, we've now adopted an
> approach that's similar to COinS. For instance, to include holdings
> information for a book in a website, all you have to do is put a
> <span class="majax-showholdings" title="iXXXXXXXXX"></span> in your
> HTML and include MAJAX via a single <script> element, which will
> result in that span being replaced with the holdings of the book with
> ISBN XXXXXXXXX. Bib record numbers and titles are also supported.
> It's so easy a cave librarian could do it. It can be done directly
> from the WebBridge management panel for those of you who are damned
> to use WebBridge. Of course, the underlying JavaScript API is still
> available for more advanced users. MAJAX has been released under the
> LGPL.
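>
> For example, usage is roughly the following (the exact script URL is
> whatever the MAJAX page above gives you; majax.js here is just a guess):
>
>     <script type="text/javascript"
>             src="http://libx.org/majax/majax.js"></script>
>     <span class="majax-showholdings" title="iXXXXXXXXX"></span>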
>
> Now for the thing to request. Are there any reusable, open source
> scripts out there that implement a REST interface that screenscrapes
> or otherwise efficiently accesses a III catalog? David and James have
> provided links, but no code. I would be grateful for anything I could
> reuse and don't have to reimplement.
>
> Here's what I envision:
>
> Interface: REST
>
> Input: search terms/type - maybe OpenURL 0.1 syntax, or another
> adopted standard, or something custom, but ideally simple.
>
> Output: XML - maybe MARCXML with 852 (or whatever the number is)
> holdings records - similar to what David's screen-scrape test
> provides. Ideally XML that comes with a schema and validates against
> it. Maybe JSON, like James's scripts?
>
> Implementation: Something that a cave librarian could deploy - good
> candidates are PHP and possibly Perl-based CGI, but one could conceive
> of others. Nothing that requires elaborate server setups or installing
> custom frameworks.
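>
> To make this concrete, here is a very rough sketch of the kind of
> thing I mean (Python CGI purely for illustration; the lookup itself is
> stubbed out and all the names below are made up):
>
>     #!/usr/bin/env python3
>     # Minimal CGI sketch: take an ISBN, look it up in the catalog
>     # (screen scrape, Z39.50, xrecord, whatever), return holdings XML.
>     import cgi
>
>     def lookup_holdings(isbn):
>         # Placeholder for the actual catalog lookup.
>         return [{"location": "Main Stacks",
>                  "call_number": "QA76 .X99",
>                  "status": "AVAILABLE"}]
>
>     form = cgi.FieldStorage()
>     isbn = form.getfirst("isbn", "")
>
>     print("Content-Type: text/xml")
>     print()
>     print('<?xml version="1.0"?>')
>     print('<holdings isbn="%s">' % isbn)
>     for h in lookup_holdings(isbn):
>         print('  <item location="%s" callNumber="%s" status="%s"/>'
>               % (h["location"], h["call_number"], h["status"]))
>     print('</holdings>')
>
> (No escaping or error handling, obviously; this is just to show the
> shape of the request and response.)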
>
> Thank you for any pointers/suggestions you may have.
>
> - Godmar
>
> On 3/4/07, Birkin James Diana <[log in to unmask]> wrote:
> >On Mar 1, 2007, at 5:23 PM, Walker, David wrote:
> >
> >> http://walkertr.csusm.edu/scrape/test.htm
> >
> >Very cool; works on our III catalog!
> >
> >Nathan Mealy -- I also used the screenscrape method to get info we
> >needed for a couple of ISBN-based projects, not knowing at the time
> >about the yaz-z39.50-OPAC option.
> >
> >By implementing this in the form of a web-service, I can switch the
> >work-horse code without affecting other apps, and minimize session
> >concerns.
> >
> ><http://dl.lib.brown.edu/soa_services/josiah_status/examples.php>
> ><http://dl.lib.brown.edu/soa_services/josiah_status/tests/InfoHolderTest.php>
> >
> >(The returned JSON info is more comprehensible via view-source.)
> >
> >---
> >Birkin James Diana
> >Programmer, Web Services
> >Brown University Library
> >[log in to unmask]
> >