Maybe it is time to innovate and rethink 'discovery' services rather than
simply providing an open source replication of existing proprietary
approaches? A major long-term study of how readers discover scholarly
content noted that use of library discovery services may have peaked in 2012
and is now in *decline*. (See: 'How Readers Discover Content in Scholarly
Publications' by Tracy Gardner and Simon Inger, Renew Publishing
Consultants, August 2018:
http://renewpublishingconsultants.com/wp-content/uploads/2018/08/How-Readers-Discover-Content-2018-Published-180903.pdf)
There are other approaches.
Library-centric reading/resource list systems
In my work with academic libraries I have noted (and presented and written
about) a big increase in the deployment of library-centric reading/resource
list systems, which, at least for many undergraduates, are now the
'discovery' systems of choice. Unlike (almost?) all catalogue/discovery
systems, they helpfully provide a *patron context* to discovery--i.e. the
course/module the student is on--even the year/week of the course. In the
main, students (notably undergraduates) love this more straightforward and
relevant approach and may never use a library 'discovery' service.
In addition, these reading list solutions integrate closely with the
university's learning management system and often provide access to
'learning resources' not typically found in the library catalogue/discovery
system. Titles can be annotated with additional metadata - typically added
by faculty - such as 'essential' or 'background' [reading]. There is barely
a university in Australia or the UK that doesn't deploy this (complementary)
approach to discovery--though the US and other countries seem to be lagging
behind. There is more information on the reading/resource list page of
Higher Education Library Technology (HELibTech):
https://helibtech.com/reading_resource_lists
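To make the 'patron context' point concrete, here is a rough sketch in
Python of the kind of record a reading list system holds. It is not based
on any particular vendor's schema--the class and field names are my own
invention--but it shows how the course/module, teaching week and faculty
annotation sit alongside the bibliographic data:

# Illustrative only: not any vendor's actual data model.
from dataclasses import dataclass, field
from typing import List, Optional

@dataclass
class ReadingListItem:
    title: str
    identifier: str                  # e.g. an ISBN or DOI
    course_code: str                 # the module the student is enrolled on
    year: int
    week: Optional[int] = None       # position in the teaching schedule
    importance: str = "background"   # faculty annotation: 'essential', 'background', ...
    notes: List[str] = field(default_factory=list)

def items_for(items, course_code, week):
    """Scope 'discovery' to the student's context rather than the whole catalogue."""
    return [i for i in items if i.course_code == course_code and i.week == week]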
AI
Yewno https://www.yewno.com/discover/ is an interesting AI-based approach
to providing a different kind of discovery environment. A number of
libraries are using it. Yewno harvests 'millions of scholarly articles,
books, and databases across virtually all academic fields' to allow users
to 'navigate intuitively across concepts, relationships, and fields,
learning from resources that might have otherwise been overlooked'.
Voice
With voice search becoming ubiquitous (Gartner predicts that by 2020 "30% of
web browsing sessions" will be voice:
https://www.gartner.com/smarterwithgartner/gartner-predicts-a-virtual-world-of-exponential-change/),
is any library doing work on this? E.g. using the growing number of tools to
optimise their website and/or catalogue/discovery service for voice search,
or to develop voice user interfaces (VUIs). It seems library content and
tech providers are working on this (AI and voice search featured at the
recent ConTech conference in London).
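As a purely illustrative example of what 'optimising for voice search' can
mean in practice, one common tactic is embedding schema.org structured data
(JSON-LD) in catalogue record pages so that assistants can parse them. A
minimal Python sketch, with made-up record fields and a hypothetical
catalogue URL:

import json

def book_jsonld(record):
    """Build a schema.org 'Book' JSON-LD snippet for a catalogue record page."""
    data = {
        "@context": "https://schema.org",
        "@type": "Book",
        "name": record["title"],
        "author": {"@type": "Person", "name": record["author"]},
        "isbn": record.get("isbn"),
        "url": record["url"],  # canonical record page in the catalogue
    }
    return '<script type="application/ld+json">%s</script>' % json.dumps(data, indent=2)

print(book_jsonld({
    "title": "Discovery in Libraries",          # invented example record
    "author": "A. N. Example",
    "isbn": "9780000000000",
    "url": "https://catalogue.example.edu/record/123",
}))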
Linked data
Finally, there was a lot of talk a few years back about the opportunity to
enhance discovery using linked data, and a number of catalogues offered a
linked data set - but I'm not aware of anything in place yet that looks
really transformative in terms of the user experience of *discovery*.
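That said, a modest illustration of what linked data could add is pulling
related entities from an open SPARQL endpoint and surfacing them alongside
results. The sketch below queries the (real) Wikidata query service for
works by an author; the 'related works' idea is just an example, not
something any catalogue currently ships:

import requests

SPARQL_ENDPOINT = "https://query.wikidata.org/sparql"

def works_by(author_label, limit=5):
    """Find a few works whose author matches the given label, via Wikidata."""
    query = """
    SELECT ?work ?workLabel WHERE {
      ?author rdfs:label "%s"@en .
      ?work wdt:P50 ?author .            # P50 = author
      SERVICE wikibase:label { bd:serviceParam wikibase:language "en". }
    } LIMIT %d
    """ % (author_label, limit)
    r = requests.get(SPARQL_ENDPOINT,
                     params={"query": query, "format": "json"},
                     headers={"User-Agent": "discovery-sketch/0.1"},
                     timeout=30)
    r.raise_for_status()
    return [b["workLabel"]["value"] for b in r.json()["results"]["bindings"]]

print(works_by("Ursula K. Le Guin"))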
Ken
Ken Chad Consulting Ltd http://www.kenchadconsulting.com Tel:
+44(0)7788727845
Twitter: @kenchad | Skype: kenchadconsulting | LinkedIn:
www.linkedin.com/in/kenchad
Researcher IDs:
Orcid.org/0000-0001-5502-6898
ResearchGate: https://www.researchgate.net/profile/Ken_Chad
-----Original Message-----
From: Code for Libraries [mailto:[log in to unmask]] On Behalf Of Till
Kinstler
Sent: 11 January 2019 15:12
To: [log in to unmask]
Subject: Re: [CODE4LIB] Spit-balling - open-source discovery layer
Hi,
On 10.01.19 at 23:24, Fitchett, Deborah wrote:
> To date there’ve been a number of successful open-source catalogue systems
> – Koha, Evergreen, the current FOLIO project. Discovery layers seem to have
> been left out of scope of all of these.
There are several open source discovery layers for libraries; VuFind
(https://vufind.org/) and Blacklight (http://projectblacklight.org/), for
example, are in wide use. But these are user interfaces that come with a
search engine component (in these two cases the open source search engine
Solr) but without bibliographic metadata.
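To give an idea, the kind of request such a user interface sends to its
Solr backend looks roughly like this Python sketch (the core name 'biblio'
and the field weights are placeholders, not VuFind's or Blacklight's actual
configuration):

import requests

SOLR_SELECT = "http://localhost:8983/solr/biblio/select"  # placeholder core name

def search(user_query, rows=10):
    params = {
        "q": user_query,                      # the user's search terms
        "defType": "edismax",                 # query parser often used for relevance tuning
        "qf": "title^10 author^5 fulltext",   # weighted fields (illustrative)
        "rows": rows,
        "wt": "json",
    }
    r = requests.get(SOLR_SELECT, params=params, timeout=10)
    r.raise_for_status()
    return r.json()["response"]["docs"]

for doc in search("open source discovery"):
    print(doc.get("id"), doc.get("title"))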
> My impression is that the main reason for this is the problem of the
> metadata index. Metadata is hoarded by for-profit vendors; some of them
> only grudgingly work with the Big Discovery Layer companies under strict
> conditions (and possibly with money changing hands, I don’t know…) so would
> be unlikely to just hand it over to a community project. No metadata, no
> discovery layer.
We are a not-for-profit library consortium in Germany and we run a
"discovery search engine" for our member libraries (with about 200 million
records at the moment). This search engine (also implemented with Solr) has
no user interface, but only an API that libraries can use to connect their
discovery layer to this search engine. In our experience, if you talk to
metadata providers (publishing companies, data aggregators etc.) they are
usually willing to give their metadata away for free, at least if libraries
buy licences for their products (in general it is advisable to negotiate
free delivery of metadata for licenced material with publishers; in our
experience they are willing to do that, at least for consortia or other
groups of libraries). It's even in publishers' interest, because they earn
money with licences and not with metadata. Putting metadata for licenced
material into discovery layers is actually like advertising the stuff they
earn money with. So getting the data is doable. But a great deal of work
goes into converting, normalizing and enriching the metadata you get from
all these different sources in different formats (which tend to change over
time). Don't underestimate that; it's extremely cumbersome work, at least
if you want some "data quality" at the end of processing.
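To illustrate the kind of work meant here, a toy sketch of that
normalization step: records arrive from providers in different shapes and
have to be mapped onto one internal schema before indexing (the provider
formats and field names below are invented):

def from_provider_a(rec):
    return {
        "id": "a:" + rec["identifier"],
        "title": rec["article_title"].strip(),
        "authors": [a["name"] for a in rec.get("contributors", [])],
        "year": int(rec["pub_year"]) if rec.get("pub_year") else None,
    }

def from_provider_b(rec):
    return {
        "id": "b:" + rec["doi"],
        "title": rec["title"][0] if isinstance(rec["title"], list) else rec["title"],
        "authors": rec.get("author_string", "").split("; "),
        "year": rec.get("year"),
    }

CONVERTERS = {"provider_a": from_provider_a, "provider_b": from_provider_b}

def normalize(source, rec):
    """Map a provider-specific record onto the common internal schema."""
    doc = CONVERTERS[source](rec)
    if not doc["title"]:               # real pipelines need far more checking than this
        raise ValueError("record without title: %s" % doc["id"])
    return doc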
> But more and more, metadata is becoming available through other sources.
> Crossref is the biggest and most obvious, and then there’s hundreds or
> thousands of institutional repositories. So my thought is, is it now
> becoming possible to create an index at least tolerably suitable for an
> open-source discovery layer, using open-access metadata sources?
I know that some libraries and library consortia are using article data
from Crossref. We also looked at it and the metadata seemed rather "thin".
But still, far better than nothing.
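For anyone who wants to experiment, article metadata can be pulled from the
public Crossref REST API (api.crossref.org) with cursor-based paging,
roughly as in the Python sketch below; the handful of fields selected also
gives an idea of how "thin" the records are:

import requests

API = "https://api.crossref.org/works"

def harvest(issn, mailto="you@example.org"):   # use your own contact address
    cursor = "*"
    while True:
        r = requests.get(API, params={
            "filter": "issn:" + issn,
            "select": "DOI,title,author,container-title,issued",
            "rows": 100,
            "cursor": cursor,
            "mailto": mailto,                  # "polite pool" etiquette
        }, timeout=30)
        r.raise_for_status()
        message = r.json()["message"]
        if not message["items"]:
            break
        yield from message["items"]
        cursor = message["next-cursor"]

for work in harvest("1932-6203"):              # PLOS ONE, as an example ISSN
    print(work["DOI"], (work.get("title") or [""])[0])
    break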
> And if so… how hard would it be?
It's doable, but it doesn't come for free. Don't underestimate the amount
of work necessary to run a reliable backend for metadata provision for a
discovery system.
Till
--
Till Kinstler
Verbundzentrale des Gemeinsamen Bibliotheksverbundes (VZG)
Platz der Göttinger Sieben 1, D 37073 Göttingen
[log in to unmask], +49 (0) 551 39-31414, http://www.gbv.de/