Hi,
I may be able to assist you with the content mirroring part of this.
The University of Toronto Libraries hosts one of the Internet Archive
scanning operations through the Open Content Alliance, and we host
content originally scanned by the Archive through the OCUL
Scholarsportal project at this URL: http://books.scholarsportal.info
To retrieve content from the IA (since it is sent immediately to San
Francisco as it is scanned), I've written a set of scripts that
download content based on various parameters:
- The starting point is a list of IA identifiers and other metadata
pulled from an advanced search query.
- From that list, the file types to download (*.pdf, *_marc.xml,
*.djvu, *_meta.xml, etc.) can be specified.
- The downloads are then queued and retrieved to specified local file
systems (sketched in rough form below).
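
In rough outline, and leaving out the MySQL queueing, logging, and
retry logic (the file and directory names here are just placeholders),
the retrieval loop amounts to something like this:

  #!/usr/bin/perl
  # Simplified sketch only; the production scripts queue jobs in MySQL,
  # log results, and retry failures.
  use strict;
  use warnings;

  # file-type suffixes to fetch for each identifier (adjust to taste)
  my @suffixes = ('.pdf', '_marc.xml', '_meta.xml', '.djvu');

  # one IA identifier per line
  open my $list, '<', 'identifiers.txt' or die "identifiers.txt: $!";

  while (my $id = <$list>) {
      chomp $id;
      next unless $id;
      for my $suffix (@suffixes) {
          my $file = $id . $suffix;
          my $url  = "http://www.archive.org/download/$id/$file";
          # curl does the transfer: -f fails on HTTP errors, -s keeps it quiet
          system('curl', '-f', '-s', '--create-dirs',
                 '-o', "mirror/$id/$file", $url) == 0
              or warn "download failed: $url\n";
      }
  }
  close $list;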
The system uses a MySQL backend, Perl, and curl for HTTP downloads,
with an option for rsync, and is designed to run on Linux systems. It
contains fairly sophisticated tools for checking download success,
comparing file sizes with the Archive, verifying md5 checksums, and
re-running against the Archive in case content changes, and it can be
adapted to a variety of needs.
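
The md5 checking works because every item on the Archive side publishes
a <identifier>_files.xml listing each file with its size and checksum.
Stripped of the database plumbing, the comparison is essentially this
(again a sketch, assuming the mirror/ layout from the loop above):

  #!/usr/bin/perl
  # Compare a local file's md5 against the Archive's published listing.
  use strict;
  use warnings;
  use Digest::MD5;
  use LWP::Simple qw(get);

  my ($id, $file) = @ARGV;    # e.g. someidentifier someidentifier.pdf

  # per-item file listing with sizes and md5 sums
  my $xml = get("http://www.archive.org/download/$id/${id}_files.xml")
      or die "could not fetch file listing for $id\n";

  # crude regex parse for the sketch; a real XML parser is the better choice
  my ($remote_md5) = $xml =~ /name="\Q$file\E".*?<md5>([0-9a-f]+)<\/md5>/s;
  die "no md5 listed for $file\n" unless $remote_md5;

  open my $fh, '<', "mirror/$id/$file" or die "mirror/$id/$file: $!";
  binmode $fh;
  my $local_md5 = Digest::MD5->new->addfile($fh)->hexdigest;
  close $fh;

  print $local_md5 eq $remote_md5 ? "OK $file\n" : "MISMATCH $file\n";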
So far we've downloaded about 400,000 PDFs and associated metadata
(about 14 TB altogether). It could also be used, for example, to
download just the MARC records for integration into an ILS (a separate
challenge, of course) and to build pointers to the Archive's content
for the full text.
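
For that pointer-building case nothing beyond the identifier is needed,
since the Archive's URLs follow a predictable pattern, e.g.:

  item page:    http://www.archive.org/details/IDENTIFIER
  direct file:  http://www.archive.org/download/IDENTIFIER/IDENTIFIER.pdf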
I've had plans to open source it for some time, but other work always
gets in the way. If you (or anyone else) want to take a look and try it
out, just let me know.
--
Graham Stewart [log in to unmask] 416-550-2806
Network and Storage Services Manager, Information Technology Services
University of Toronto Libraries
130 St. George Street
Toronto, Ontario, Canada M5S 1A5
On 10-05-14 03:34 PM, Eric Lease Morgan wrote:
> We are doing a tiny experiment here at Notre Dame with the Internet Archive; specifically, we are determining whether or not we can supplement a special collection with full-text content.
>
> We are hosting a site colloquially called the Catholic Portal -- a collection of rare, infrequently held, and uncommon materials of a Catholic nature. [1] Much of the content of the Portal is metadata -- MARC and EAD records/files. I think the Portal would be more useful if it contained full-text content. If it did, then indexing would be improved and services against the texts could be implemented.
>
> How can we get full text content? This is what we are going to try:
>
> 1. parse out identifying information from metadata (author names, titles, dates, etc.)
>
> 2. construct a URL in the form of an Advanced Search query and send it to the Archive
>
> 3. get back a list of matches in an XML format
>
> 4. parse the result looking for the "best" matches
>
> 5. save Internet Archive keys identifying full text items
>
> 6. mirror Internet Archive content locally using keys as pointers
>
> 7. update local metadata files pointing to Archive content as well as locally mirrored content
>
> 8. re-index local metadata
>
> If we are (somewhat) successful, then search results would not only have pointers to the physical items, but they would also have pointers to the digitized items. Not only could they have pointers to the digitized items, but they could also have pointers to "services against the texts" such as make word cloud, display concordance, plot word/phrase frequency, etc. These latter services are spaces where I think there is great potential for librarianship.
>
> Frankly, because of the Portal's collection policy, I don't expect to find very much material. On the other hand, the same process could be applied to more generic library collections where more content may have already been digitized.
>
> Wish us luck.
>
> [1] Catholic Portal - http://www.catholicresearch.net/
> [2] Advanced search - http://www.archive.org/advancedsearch.php
>
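
(A note on step 2 above: the advanced search interface [2] will hand
back XML directly if you ask for it. The kind of query URL my harvest
scripts start from looks roughly like this, where the query terms and
field list are only examples:

  http://www.archive.org/advancedsearch.php?q=mediatype%3Atexts+AND+title%3A%28catholic%29&fl[]=identifier&fl[]=title&fl[]=year&rows=50&output=xml

The identifier values that come back are the keys for steps 5 and 6,
and the files themselves live under
http://www.archive.org/download/IDENTIFIER/.)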