LISTSERV mailing list manager LISTSERV 16.5



CODE4LIB Archives


CODE4LIB@LISTS.CLIR.ORG



LISTSERV Archives

CODE4LIB Home

CODE4LIB October 2005

Subject: Re: Catalog Enhancements & Extensions (Re: mylibrary @ockham)
From: Roy Tennant <[log in to unmask]>
Reply-To: Code for Libraries <[log in to unmask]>
Date: Fri, 28 Oct 2005 14:36:23 -0700
Content-Type: text/plain
Parts/Attachments: text/plain (222 lines)

Ross,
Although you need Java to run it (i.e., it requires a Java servlet
container such as Tomcat), it is my understanding that from then on
you basically just do XSLT. Configuration of the indexing and the
rendering of the files is all done with XML or XSLT. See
<http://xtf.sourceforge.net/WebDocs/HTML/XTF_Programming_Guide/XTFProgGuide.html>
for more information.
Roy

On Oct 28, 2005, at 2:22 PM, Ross Singer wrote:

> 800k records really aren't that much, honestly.  Any of the more common
> xml dbs should be able to handle this (exist, xindice, berkeley dbxml,
> etc.).  Zebra is fine, too, for what you're talking about.  You'd just
> need to index the last modified field.
>
> I have not tested either Xapian or XTF.  I opted for Lucene because
> other people in code4lib had experience with it.
>
> XTF seems good... But, Roy, does it require the development to be in
> Java?
>
> <Merging your other email>
>
> No reason for Berkeley DB.  To me it seemed arbitrary as to which one
> to pick, as it was just to be more efficient than storing on the
> filesystem.  If something better came along, we could just dump all the
> records into something else, since it's all just XML files.
>
> I guess the point is that once the data is free, the rest of this stuff
> has more flexibility.
>
> -Ross.
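[Editor's note: the swappable-store idea Ross describes -- opaque XML blobs keyed by record id, so the backend can be replaced later -- can be sketched with Python's stdlib `dbm` module standing in for a Berkeley DB. The record ids and MODS snippets below are invented for illustration.]

```python
import dbm.dumb  # pure-Python stdlib store, standing in for Berkeley DB
import os
import tempfile

# Hypothetical extracted records: id -> XML string (any schema will do).
records = {
    "rec001": "<mods><titleInfo><title>First title</title></titleInfo></mods>",
    "rec002": "<mods><titleInfo><title>Second title</title></titleInfo></mods>",
}

path = os.path.join(tempfile.mkdtemp(), "catalog")

# Load the store. Because the values are opaque XML blobs, swapping the
# backend later means re-running only this loop against a new store.
with dbm.dumb.open(path, "c") as db:
    for rec_id, xml in records.items():
        db[rec_id.encode()] = xml.encode("utf-8")

# Read one record back by id.
with dbm.dumb.open(path, "r") as db:
    first = db[b"rec001"].decode("utf-8")
```

The indexer (Lucene, Zebra, whatever) then reads from this store rather than from the ILS, which is the flexibility Ross is pointing at.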
>
> Andrew Nagy wrote:
>
>
>> Wow!  Thanks for such a detailed reply ... this is awesome.
>>
>> I am thinking about storing the data from the catalog in an XML
>> database as well; however, since I know very little about these, I am
>> greatly concerned about the scalability... can they handle the
>> 800,000+ records we have in our catalog?  If I am just using it as a
>> store, and then use some sort of indexer, this shouldn't be a concern?
>>
>> Lucene seems enticing over Zebra, since Zebra is a Z39.50 interface,
>> which from what I can understand will not let me do fancy searches
>> such as what was recently cataloged in the past 7 days, etc.
>> What about Xapian or XTF, did you test these out at all?  I guess
>> Lucene seems like a better product because it is an Apache project?
>>
>> Thanks for all the Info!
>>
>> Andrew
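[Editor's note: the split Andrew is after -- an XML database as a dumb store, with a separate indexer answering queries like "cataloged in the past 7 days" -- can be sketched in a few lines of stdlib Python. The toy records and field names are made up.]

```python
from collections import defaultdict
from datetime import date, timedelta

# Hypothetical records pulled from the store: (id, title, date cataloged).
records = [
    ("b1", "Introduction to Cataloging", date.today() - timedelta(days=2)),
    ("b2", "Cataloging Rare Books", date.today() - timedelta(days=40)),
    ("b3", "XML Databases in Libraries", date.today() - timedelta(days=5)),
]

# Build a word -> set-of-ids inverted index, plus an id -> date lookup.
index = defaultdict(set)
cataloged = {}
for rec_id, title, day in records:
    cataloged[rec_id] = day
    for word in title.lower().split():
        index[word].add(rec_id)

def search(word, within_days=None):
    """Keyword lookup, optionally limited to recently cataloged records."""
    hits = index.get(word.lower(), set())
    if within_days is not None:
        cutoff = date.today() - timedelta(days=within_days)
        hits = {r for r in hits if cataloged[r] >= cutoff}
    return sorted(hits)

recent = search("cataloging", within_days=7)  # b1 only; b2 is 40 days old
```

The point of the sketch: once the indexer holds a date field per record, "what's new" is just a post-filter on the hit set, regardless of how the XML store scales.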
>>
>>
>> Ross Singer wrote:
>>
>>
>>> This is pretty similar to the project that Art Rhyno and I have been
>>> working on for a couple of months now.  Thankfully, I just got the
>>> go-ahead to make it the top development priority, so hopefully we'll
>>> actually have something to see in the near future.  Like Eric, we
>>> don't have any problem with (and aren't touching) any of the backend
>>> stuff (cataloging, acq, circ), but we have major issues with the
>>> public interface.
>>>
>>> Although the way we're extracting records from our catalog is a
>>> little different (and there are reasons for it), the way I would
>>> recommend getting the data out of the opac is not via Z39.50, but
>>> through whatever sort of marcdump utility your ILS has.  You can then
>>> use marc4j (or something similar) to transform the MARC to XML (we're
>>> going to MODS, for example).  We're currently just writing this dump
>>> to a filesystem (broken up by LCC... again, there are reasons that
>>> don't exactly apply to this project), but I anticipate this will
>>> eventually go into a METS record and a Berkeley xmldb for storage.
>>> For indexing, we're using Lucene (Art is accessing it via Cocoon, I
>>> am going through PyLucene) and we're, so far, pretty happy with the
>>> results.
>>>
>>> If Lucene has issues, we'll look at Zebra (as John mentioned),
>>> although Zebra's indexes are enormous.  The nice thing about Zebra,
>>> though, is that it would forgo the need for the Berkeley DB, since it
>>> stores the XML record itself.  The built-in Z39.50 server is a nice
>>> bonus, as well.  Backups would be XTF
>>> (http://www.cdlib.org/inside/projects/xtf/) and Xapian.  Swish-e
>>> isn't really an option, since it can't index UTF-8.
>>>
>>> The idea then is to be able to make stronger relationships between
>>> our site's content... eliminate the silos.  A search that brings back
>>> a couple of items that are in a particular subject guide would get a
>>> link to the subject... or at least links to the other "top" items
>>> from that guide (good tie-in with MyLibrary, Eric).  Something that's
>>> on reserve would have links to reserve policies or a course guide for
>>> that course or whatever.
>>>
>>> Journals would have links to the databases they are indexed in.
>>>
>>> Yes, there's some infrastructure that needs to be worked out... :)
>>>
>>> But the goal is to have something to at least see by the end of the
>>> year (calendar, not school).
>>>
>>> We'll see :)
>>>
>>> -Ross.
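[Editor's note: the marcdump-then-transform step Ross outlines (marc4j in his stack) starts by walking the ISO 2709 structure of a raw MARC record. A minimal stdlib-Python reading of that structure, using a hand-built one-field sample record rather than real catalog data, looks roughly like this.]

```python
# Minimal ISO 2709 (binary MARC) field extraction -- a sketch of the first
# step a tool like marc4j performs before transforming to MARCXML or MODS.
FT, SF, RT = b"\x1e", b"\x1f", b"\x1d"  # field/subfield/record terminators

def marc_fields(record: bytes):
    """Yield (tag, field-bytes) pairs from one raw MARC record."""
    base = int(record[12:17])            # base address of data, from leader
    directory = record[24:base - 1]      # 12-byte entries, minus terminator
    for i in range(0, len(directory), 12):
        entry = directory[i:i + 12]
        tag = entry[0:3].decode()
        length = int(entry[3:7])
        start = int(entry[7:12])
        yield tag, record[base + start : base + start + length].rstrip(FT)

# Hand-built record with one field (tag 245, indicators "10", subfield $a).
field = b"10" + SF + b"aThe title." + FT                 # 15 bytes
directory = b"245" + b"0015" + b"00000" + FT             # 13 bytes
leader = b"00053" + b"nam  22" + b"00037" + b"   4500"   # 24 bytes
record = leader + directory + field + RT                 # 53 bytes total

fields = dict(marc_fields(record))
title = fields["245"].split(SF + b"a")[1].decode()
```

From here, emitting MARCXML or MODS is a matter of wrapping these (tag, subfield) pairs in the target schema's elements, which is exactly the job Ross hands to marc4j.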
>>>
>>> On Oct 27, 2005, at 5:58 PM, Eric Lease Morgan wrote:
>>>
>>>
>>>> On Oct 27, 2005, at 2:06 PM, Andrew Nagy wrote:
>>>>
>>>>
>>>>>>  http://mylibrary.ockham.org/
>>>>>>
>>>>>
>>>>>
>>>>>
>>>>> I have been thinking of ways, similar to what you have done with
>>>>> the Ockham project that you mentioned below, to allow more modern
>>>>> access to our library catalog.  I have begun to think about
>>>>> devising a way to index/harvest our entire catalog (and allow this
>>>>> indexing process to run every so often) to allow our own custom
>>>>> access methods.  We could then generate our own custom RSS feeds of
>>>>> new books, allow more efficient/enticing search interfaces, etc.
>>>>>
>>>>> Do you know of any existing software for indexing or harvesting a
>>>>> catalog into another datastore (SQL database, XML database, etc.)?
>>>>> I am sure I could fetch all of the records somehow through Z39.50
>>>>> and dump them into a MySQL database, but maybe there is some better
>>>>> method?
>>>>>
>>>>
>>>>
>>>>
>>>> I too have thought about harvesting content from my local catalog
>>>> and providing new interfaces to the content, and I might go about
>>>> this in a number of different ways.
>>>>
>>>> 1. I might use OAI to harvest the content, cache it locally, and
>>>> provide services against the cache. This cache might be saved on a
>>>> file system, but more likely into a relational database.
>>>>
>>>> 2. I might simply dump all the MARC records from my catalog,
>>>> transform them into something more readable, say sets of HTML/XML
>>>> records, and provide services against these files.
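[Editor's note: Eric's first option, OAI harvesting, is mostly a matter of walking ListRecords responses and following resumptionToken until none remains. Here is a stdlib sketch against a canned response; the identifiers and token are invented, and a real harvester would fetch each page over HTTP.]

```python
import xml.etree.ElementTree as ET

OAI = "{http://www.openarchives.org/OAI/2.0/}"

# A canned ListRecords response standing in for what a live OAI-PMH
# endpoint would return one page at a time.
response = """<?xml version="1.0"?>
<OAI-PMH xmlns="http://www.openarchives.org/OAI/2.0/">
  <ListRecords>
    <record>
      <header><identifier>oai:example:rec001</identifier></header>
    </record>
    <record>
      <header><identifier>oai:example:rec002</identifier></header>
    </record>
    <resumptionToken>page2token</resumptionToken>
  </ListRecords>
</OAI-PMH>"""

root = ET.fromstring(response)
ids = [h.text for h in root.iter(OAI + "identifier")]

# An absent or empty resumptionToken means the harvest is complete.
token_el = root.find(".//" + OAI + "resumptionToken")
token = token_el.text if token_el is not None else None
```

Looping on `token` (re-requesting with `resumptionToken=...`) and writing each record into the local cache is the whole of option 1; option 2 differs only in reading a MARC dump instead of fetching pages.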
>>>>
>>>> The weakest link in my chain would be my indexer. Relational
>>>> databases are notoriously ill-equipped to handle free-text
>>>> searching. Yes, you can implement it, and you can use various
>>>> database-specific features to implement free-text searching, but
>>>> they still won't work as well as an indexer. My only experience with
>>>> indexers lies in things like swish-e and Plucene. I sincerely wonder
>>>> whether or not these indexers would be up to the task.
>>>>
>>>> Supposing I could find/use an indexer that was satisfactory, I
>>>> would then provide simple and advanced (SRU/OpenSearch) search
>>>> features against the index of holdings. Search results would then be
>>>> enhanced with features such as borrow, renew, review, put on
>>>> reserve, save as citation, email, "get it for me", put on hold,
>>>> "what's new?", view as RSS, etc. These services would require a list
>>>> of authorized users of the system -- a patron database.
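[Editor's note: the "simple and advanced (SRU/OpenSearch)" layer Eric mentions is, on the wire, just a GET URL. Building an SRU 1.1 searchRetrieve request with the stdlib is a one-liner apart from escaping; the host name and the choice of MODS as record schema below are assumptions.]

```python
from urllib.parse import urlencode

# Hypothetical SRU endpoint sitting in front of the harvested-holdings index.
BASE = "http://opac.example.edu/sru"

def sru_search_url(cql_query, maximum_records=10):
    """Build an SRU 1.1 searchRetrieve URL for a CQL query."""
    params = {
        "operation": "searchRetrieve",
        "version": "1.1",
        "query": cql_query,
        "maximumRecords": str(maximum_records),
        "recordSchema": "mods",  # assumes the server offers MODS records
    }
    return BASE + "?" + urlencode(params)

url = sru_search_url('dc.title = "cataloging"')
```

Because the whole request is a URL, the same index can back an OpenSearch description document, an RSS "what's new" feed, or an advanced-search form with no extra server machinery.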
>>>>
>>>> In short, since I would have direct access to the data, and since I
>>>> would have direct access to the index, I would use my skills to
>>>> provide services against them. For the most part, I don't mind
>>>> back-end, administrative, data-entry interfaces to our various
>>>> systems, but I do have problems with the end-user interfaces. Let me
>>>> use those back-ends to create and store my data, then give me
>>>> unfettered access to the data, and I will provide my own end-user
>>>> interfaces. Another alternative is to exploit (industry-standard)
>>>> Web Services computing techniques against the existing integrated
>>>> library system. In this way you get XML data (information without
>>>> presentation) back, and you can begin to do the same things.
>>>>
>>>> --
>>>> Eric Lease Morgan
>>>> University Libraries of Notre Dame
>>>>
>>>>
>>
>>
>
>
