So the issue being discussed on AUTOCAT was the availability/fault tolerance of the database, given that it's spread over numerous remote systems, and I suppose local caching and mirroring are the answers there.  

The other issue was skepticism about the feasibility of indexing all these remote sources, which got me thinking about remote indexes. But I see the answer is that this is why we won't be relying on single-site local systems so much, instead using Google-like web-scale indexes. That puts pressure on the old vision of "the library catalog" as "our database".

Is that a fair understanding?

Cindy Harper
[log in to unmask] 

-----Original Message-----
From: Code for Libraries [mailto:[log in to unmask]] On Behalf Of Eric Lease Morgan
Sent: Thursday, February 26, 2015 9:44 AM
To: [log in to unmask]
Subject: Re: [CODE4LIB] linked data question

On Feb 25, 2015, at 3:12 PM, Sarah Weissman <[log in to unmask]> wrote:

> I am kind of new to this linked data thing, but it seems like the real 
> power of it is not full-text search, but linking through the use of 
> shared vocabularies. So if you have data about Jane Austen in your 
> database and you are using the same URI as other databases to 
> represent Jane Austen in your data (say 
>, then you (or rather, your 
> software) can do an exact search on that URI in remote resources vs. a 
> fuzzy text search. In other words, linked data is really
> supposed to be linked by machines and discoverable through URIs. If 
> you
> visit the URL: you can see a 
> human-interpretable representation of the data a SPARQL endpoint would 
> return for a query for triples { ?p ?o}.
> This is essentially asking the database for all 
> subject-predicate-object facts it contains where Jane Austen is the subject.

Again, seweissman++  The implementation of linked data is VERY much like the implementation of a relational database over HTTP, and in such a scenario, the URIs are the database keys. —ELM
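
P.S. The kind of query Sarah describes -- asking a SPARQL endpoint for every { ?p ?o } triple where a shared URI is the subject -- can be sketched in a few lines. The endpoint and subject URI below are illustrative assumptions (DBpedia's public endpoint and its Jane Austen resource, since the original URLs were stripped by the listserv); the point is that any software sharing the URI can build the same exact-match query, no fuzzy text search required.

```python
from urllib.parse import urlencode

# Assumed shared URI for Jane Austen and an assumed public SPARQL
# endpoint; substitute whatever URIs your own data actually shares.
SUBJECT = "http://dbpedia.org/resource/Jane_Austen"
ENDPOINT = "http://dbpedia.org/sparql"

# All predicate-object facts where SUBJECT is the subject --
# the { <uri> ?p ?o } pattern described above.
query = f"SELECT ?p ?o WHERE {{ <{SUBJECT}> ?p ?o }}"

# A SPARQL endpoint is just a database queried over HTTP: the query
# travels as an ordinary URL parameter, and machine-readable JSON
# results come back.
request_url = ENDPOINT + "?" + urlencode(
    {"query": query, "format": "application/sparql-results+json"}
)
print(request_url)

# To actually execute it (network required):
#   import json, urllib.request
#   results = json.load(urllib.request.urlopen(request_url))
#   for b in results["results"]["bindings"]:
#       print(b["p"]["value"], b["o"]["value"])
```

Note that the subject URI functions exactly like a database key, as Eric says: two systems that agree on the URI can join their facts without ever comparing name strings.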