Thanks Jason and Ed,
I suspect within this project we'll keep using OAI-PMH because we've got tight deadlines and the other project strands (which do stuff with the harvested content) need time from the developer. At the moment it looks like we will probably combine OAI-PMH with web crawling (using Nutch) - so use data from the OAI-PMH harvest alongside content gathered by the crawler.
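As an aside, the OAI-PMH side is easy enough to script. Here's a minimal sketch in Python - the endpoint URL is a made-up placeholder rather than one of our repositories, and the real harvest will of course use proper tooling rather than this:

# Minimal OAI-PMH ListRecords harvest - the endpoint below is a
# hypothetical placeholder, not a real repository.
import xml.etree.ElementTree as ET
from urllib.parse import urlencode
from urllib.request import urlopen

OAI = "{http://www.openarchives.org/OAI/2.0/}"
BASE_URL = "http://repository.example.ac.uk/oai"

def list_records(base_url, metadata_prefix="oai_dc"):
    """Yield every <record> element, following resumptionTokens."""
    params = {"verb": "ListRecords", "metadataPrefix": metadata_prefix}
    while True:
        with urlopen(base_url + "?" + urlencode(params)) as resp:
            tree = ET.parse(resp)
        for record in tree.iter(OAI + "record"):
            yield record
        token = tree.find(".//" + OAI + "resumptionToken")
        if token is None or not (token.text or "").strip():
            return
        # Follow-up requests carry only the verb and the token.
        params = {"verb": "ListRecords", "resumptionToken": token.text.strip()}

for rec in list_records(BASE_URL):
    header_id = rec.find(".//" + OAI + "identifier")
    print(header_id.text if header_id is not None else "(no identifier)")

That just walks the ListRecords responses, following resumptionTokens until the set is exhausted.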
However, that said, one of the things we are meant to be doing is offering recommendations or good practice guidelines back to the (repository) community based on our experience. If we have time I would love to tackle the questions (a)-(d) that you highlight here - perhaps especially (a) and (c). Since this particular project is part of the wider JISC 'Discovery' programme (http://discovery.ac.uk, with technical principles at http://technicalfoundations.ukoln.info/guidance/technical-principles-discovery-ecosystem), one of whose main themes might be summarised as 'work with the web', these questions are definitely relevant.
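To give a sense of what a semi-automated survey of (a) and (c) might look like, here's a rough Python sketch - the repository URLs are invented examples, and a real survey would want to be politer and more robust:

# Rough checks for (a) robots.txt crawlability and (c) rel-license
# markup on splash pages. The URLs are invented examples.
from html.parser import HTMLParser
from urllib import robotparser
from urllib.request import urlopen

REPOSITORIES = [
    "http://eprints.example.ac.uk/",   # hypothetical IR splash pages
    "http://dspace.example.ac.uk/",
]

class RelLicenseFinder(HTMLParser):
    """Notice whether any <a rel="license"> appears in the page."""
    def __init__(self):
        super().__init__()
        self.found = False

    def handle_starttag(self, tag, attrs):
        rel = dict(attrs).get("rel") or ""
        if tag == "a" and "license" in rel.split():
            self.found = True

for url in REPOSITORIES:
    # (a) would robots.txt let a generic crawler fetch this page?
    rp = robotparser.RobotFileParser(url.rstrip("/") + "/robots.txt")
    try:
        rp.read()                      # a 404 here is read as "allow all"
        crawlable = rp.can_fetch("*", url)
    except OSError:
        crawlable = None               # couldn't reach the site at all
    # (c) does the splash page carry rel-license markup?
    finder = RelLicenseFinder()
    try:
        finder.feed(urlopen(url).read().decode("utf-8", errors="replace"))
    except OSError:
        pass
    print(url, "crawlable:", crawlable, "rel-license:", finder.found)

A missing robots.txt is treated as permission to crawl, which matches how search engines generally behave.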
I need to look at Jason's stuff again as I think it definitely has parallels with some of the Discovery work, as, of course, does some of the recent discussion on here about search engines indexing library catalogues.
Thanks again to all who have contributed to the discussion - very useful.
Owen
Owen Stephens
Owen Stephens Consulting
Web: http://www.ostephens.com
Email: [log in to unmask]
Telephone: 0121 288 6936
On 1 Mar 2012, at 11:42, Ed Summers wrote:
> On Mon, Feb 27, 2012 at 12:15 PM, Jason Ronallo <[log in to unmask]> wrote:
>> I'd like to bring this back to your suggestion to just forget OAI-PMH
>> and crawl the web. I think that's probably the long-term way forward.
>
> I definitely had the same thoughts while reading this thread. Owen,
> are you forced to stay within the context of OAI-PMH because you are
> working with existing institutional repositories? I don't know if it's
> appropriate, or if it has been done before, but as part of your work
> it would be interesting to determine:
>
> a) how many IRs allow crawling (robots.txt or lack thereof)
> b) how many IRs support crawling with a sitemap
> c) how many IR HTML splash pages use the rel-license [1] pattern
> d) how many IRs support syndication (RSS/Atom) to publish changes
>
> If you could do this in a semi-automated way for the UK, it would be
> great to then apply it to IRs around the world. It would
> also align really nicely with the sort of work that Jason has been
> doing around CAPS [2].
>
> It seems to me that there might be an opportunity to educate digital
> repository managers about better aligning their content w/ the Web ...
> instead of trying to cook up new standards. I imagine this is way out
> of scope for what you are currently doing--if so, maybe this can be
> your next grant :-)
>
> //Ed
>
> [1] http://microformats.org/wiki/rel-license
> [2] https://github.com/jronallo/capsys