On 10.01.19 at 23:24, Fitchett, Deborah wrote:
> To date there’ve been a number of successful open-source catalogue systems – Koha, Evergreen, the current FOLIO project. Discovery layers seem to have been left out of scope of all of these.
There are several open source discovery layers for libraries; widely used
examples are VuFind (https://vufind.org/) and Blacklight
(http://projectblacklight.org/). But these are user interfaces that come
with a search engine component (in both cases the open source search
engine Solr) but without bibliographic metadata.
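To make the division of labour concrete: a discovery layer like VuFind is essentially a front end that translates a user's search into Solr query parameters. A minimal sketch of that translation, assuming a local Solr instance with an invented core name (`biblio`) and illustrative field names:

```python
from urllib.parse import urlencode

def build_solr_query(base_url, query, rows=10, fields=("id", "title", "author")):
    """Build a Solr /select URL the way a discovery layer would issue it."""
    params = {
        "q": query,               # user's search terms
        "rows": rows,             # page size
        "fl": ",".join(fields),   # which stored fields to return
        "wt": "json",             # response format
    }
    return f"{base_url}/select?{urlencode(params)}"

url = build_solr_query("http://localhost:8983/solr/biblio", "title:linked data")
```

The point is that the interface is generic: it works against whatever index you point it at, which is exactly why the index, not the software, is the hard part.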
> My impression is that the main reason for this is the problem of the metadata index. Metadata is hoarded by for-profit vendors; some of them only grudgingly work with the Big Discover Layer companies under strict conditions (and possibly with money changing hands, I don’t know…) so would be unlikely to just hand it over to a community project. No metadata, no discovery layer.
We are a not-for-profit library consortium in Germany, and we run a
"discovery search engine" for our member libraries (with about 200
million records at the moment). This search engine (also implemented
with Solr) has no user interface, but only an API that libraries can use
to connect their discovery layers to it.
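What a library's discovery layer gets back from such an API is structured search results it then renders itself. A sketch of consuming a Solr-style JSON response (the sample payload and field names below are illustrative, not our actual schema):

```python
import json

# Hypothetical excerpt of a Solr-style JSON response from an
# API-only search backend.
raw = """
{
  "response": {
    "numFound": 2,
    "docs": [
      {"id": "rec1", "title": "Introduction to Cataloguing"},
      {"id": "rec2", "title": "Solr in Libraries"}
    ]
  }
}
"""

def extract_hits(payload):
    """Pull the total hit count and result titles out of the response."""
    resp = json.loads(payload)["response"]
    return resp["numFound"], [doc["title"] for doc in resp["docs"]]

total, titles = extract_hits(raw)
```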
From our experience, if you talk to metadata providers (publishing
companies, data aggregators, etc.), they are usually willing to hand over
their metadata for free, at least if libraries buy licences for their
products. In general it is advisable to negotiate free delivery of
metadata for licenced material with publishers; in our experience they
are willing to do that, at least for consortia or other groups of
libraries. It's even in the publishers' interest, because they earn money
with licences, not with metadata. Putting metadata for licenced material
into discovery layers is effectively advertising for the products they
earn money with.
So getting the data is actually doable. But a great deal of work goes
into converting, normalizing and enriching the metadata you get from all
these different sources in different formats (which tend to change over
time). Don't underestimate that: it is extremely cumbersome work, at
least if you want some "data quality" at the end of the processing.
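To give a flavour of that normalization work: every source delivers a different shape, and all of them must be mapped onto one internal schema before indexing. A toy sketch with two invented source shapes (a Crossref-like JSON record and a flattened MARC-like record; the field names are illustrative only):

```python
# Two toy records in different source schemas.
crossref_like = {
    "DOI": "10.1000/xyz",
    "title": ["A Paper"],
    "author": [{"family": "Doe", "given": "J."}],
}
marc_like = {"001": "123", "245a": "A Book /", "100a": "Roe, R."}

def normalize(record):
    """Map either source shape onto one minimal internal schema."""
    if "DOI" in record:
        author = record["author"][0]
        return {
            "id": record["DOI"],
            "title": record["title"][0],
            "creator": f'{author["family"]}, {author["given"]}',
        }
    return {
        "id": record["001"],
        "title": record["245a"].rstrip(" /"),  # strip trailing ISBD punctuation
        "creator": record["100a"],
    }

records = [normalize(r) for r in (crossref_like, marc_like)]
```

Multiply this by dozens of sources, each with its own quirks and its own habit of changing format without notice, and the cost becomes clear.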
> But more and more, metadata is becoming available through other sources. Crossref is the biggest and most obvious, and then there’s hundreds or thousands of institutional repositories. So my thought is, is it now becoming possible to create an index at least tolerably suitable for an open-source discovery layer, using open-access metadata sources?
I know that some libraries or library consortia are using article data
from Crossref. We also looked at it, and it seemed rather "thin". But
still, far better than nothing.
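"Thin" here means that many fields a library index wants are simply absent or empty. One way to quantify that is to measure field coverage against a target schema; the sample record and wanted-field list below are invented for illustration:

```python
# An abridged, illustrative record in the style of a Crossref works item.
sample_record = {
    "DOI": "10.1000/abc",
    "title": ["Some Article"],
    "author": [],   # present but empty
    "subject": [],  # present but empty
}

# Fields a library index would typically want filled (illustrative list).
wanted = ["DOI", "title", "author", "subject", "abstract", "ISSN"]

def coverage(record, fields):
    """Return the fraction of wanted fields that are present and
    non-empty, plus the list of fields that are not."""
    filled = [f for f in fields if record.get(f)]
    missing = [f for f in fields if f not in filled]
    return len(filled) / len(fields), missing

ratio, missing = coverage(sample_record, wanted)
```

Running checks like this over a sample of the data makes "thin" a measurable property rather than a gut feeling, which helps when deciding which enrichment sources to add.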
> And if so… how hard would it be?
It's doable, but it doesn't come for free. Don't underestimate the amount
of work necessary to run a reliable backend for metadata provision for a
discovery layer.
Verbundzentrale des Gemeinsamen Bibliotheksverbundes (VZG)
Platz der Göttinger Sieben 1, D 37073 Göttingen
[log in to unmask], +49 (0) 551 39-31414, http://www.gbv.de/