We have taken a somewhat different approach to how we manage our RDF data:
after years of using a native triple store, we found it extremely
impractical for the way we actually used our data. Triple stores are fine
for ad-hoc queries over arbitrary data, but that didn't reflect our usage:
our schema, while flexible, was still predefined, and our joins were always
on the same properties.

We found it made a lot more sense to store all of our data about a given
subject (concise bounded descriptions) as a document in a document-style
database (we use MongoDB). This took care of our SPARQL DESCRIBE queries.
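To make the pattern concrete, here's a minimal sketch in plain Python, with a dict standing in for the MongoDB collection (the collection and function names are illustrative, not our actual schema):

```python
# One document per subject URI, holding that subject's concise bounded
# description (CBD). A dict stands in for a MongoDB collection here;
# with pymongo you'd key the document on _id the same way.

cbd_collection = {}

def store_cbd(subject_uri, triples):
    """Store every triple about a subject as a single document."""
    cbd_collection[subject_uri] = {"_id": subject_uri, "triples": triples}

def describe(subject_uri):
    """A SPARQL DESCRIBE becomes a single key lookup."""
    return cbd_collection.get(subject_uri)

store_cbd("http://example.org/austen", [
    ("http://example.org/austen", "foaf:name", "Austen, Jane"),
])
```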

However, since the majority of the data we use is the equivalent of a SPARQL
CONSTRUCT made up of fixed joins over these CBD graphs, we decided to cache
these joined documents as their own documents in a read-only cache
collection, which we refer to as views. Then, if any of the CBD graphs
change, we invalidate the view and rebuild the cache documents.
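The view cache and its invalidation can be sketched the same way. Again, this is illustrative plain Python rather than our production code, and the names (`build_view`, `update_cbd`) are made up for the example:

```python
# A "view" is a precomputed join over several CBD documents. When any
# source CBD changes, the cached view is dropped and rebuilt on the
# next read. Plain dicts stand in for MongoDB collections.

cbds = {}          # subject URI -> CBD document
views = {}         # view key -> cached joined document
view_sources = {}  # view key -> set of subject URIs it was built from

def build_view(key, subject_uris):
    """Join the named CBDs into one read-only document (a fixed-join CONSTRUCT)."""
    doc = {"_id": key, "graphs": [cbds[u] for u in subject_uris if u in cbds]}
    views[key] = doc
    view_sources[key] = set(subject_uris)
    return doc

def update_cbd(subject_uri, doc):
    """Write a CBD and invalidate any cached view built from it."""
    cbds[subject_uri] = doc
    for key, sources in view_sources.items():
        if subject_uri in sources:
            views.pop(key, None)  # invalidated; rebuilt lazily on next read
```

Tracking which CBDs feed each view makes invalidation a cheap set-membership check, and rebuilding lazily on the next read keeps writes inexpensive.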

I think this usage pattern would pretty closely reflect how a linked
library data system might work as well. The data doesn't actually change
all that much, and there will likely be very few ad hoc queries.

This also addresses a problem with Jeff's suggestion of using your own URIs
with third-party data: managing changes to the graph gets complicated in
that scenario. By storing the original CBDs as-is, and generating graphs
with your URIs over the external data as a view, it's far easier to isolate
what a particular update changes (not to mention that loading large data
updates from various sources into a native triple store is painful).

You can take a look at what we wrote if you want more details:

This still doesn't address how to deal with keeping your local copy of the
external data up to date, and I don't know that there are a lot of good or
standard answers to that yet. That said, I think it's a solvable problem:
we just haven't gotten to that scale yet.

On Tuesday, February 24, 2015, Mixter,Jeff <[log in to unmask]> wrote:

> There are a few issues here that might need to be parsed out. The first is
> indexing Linked Data. It seems to make sense from a performance perspective
> to have a local index for the URIs and their names. For example
>             name: Austen, Jane
> Pretend 'name' is an index field and the URI is the index key or ID. If
> you are using a Lucene index, you can imagine having multiple names based
> on language variation, preferred label variation (i.e. 'Austen, Jane,
> 1775-1817') etc.
> Another issue has to do with what is cached in the index. I would argue
> that nothing other than the lookup values should be cached. The system
> should go off to the key/ID (i.e. the URI) and fetch the data from it.
> This is important because data can change all the time and you do not want
> to rely on having to download monthly data dumps to rebuild your index.
> Plus, the idea of data dumps stands in opposition to the idea of Linked Data
> and the Web (i.e. it's on the Web for a reason, and that is to be accessed on
> the Web, not downloaded and stored in a silo).
> The third issue has to do with using VIAF URIs or coining your own local
> URIs. This is a bit of a toss-up, but I would argue that it would be better
> if you could coin your own URI and simply use a sameAs link to other
> entities, such as VIAF, LCSH, FAST etc. This would allow you to have a
> localized world-view of the entity. Or, to explain it better, it would
> allow you to put a localized lens on the entity and show things like how
> does this entity relate to other things that I have, know about, vend to
> patrons, etc.  There are also practical reasons for this. If I see a
> hot-link in my local Library OPAC for 'Jane Austen' I expect to stay within
> my local OPAC domain when I click on it. I do not want to be taken out to
> VIAF or another place. The reason for clicking it is to learn about it
> within the context of what I am doing on that website. Finally, coining
> your own URI allows you to provide people with a bookmark-able URL. That is
> important for search engine visibility.
> The last issue would require the index example above to not have a VIAF
> URI but rather a local URI that could be retrieved from a local Triple
> Store. In the store you could provide sameAs links to VIAF as well as
> localized information about the entity, such as what he/she has authored
> that you currently have available.
> Thanks,
> Jeff Mixter
> Research Support Specialist
> OCLC Research
> 614-761-5159
> [log in to unmask]
> ________________________________________
> From: Code for Libraries <[log in to unmask]> on
> behalf of Esmé Cowles <[log in to unmask]>
> Sent: Tuesday, February 24, 2015 3:09 PM
> To: [log in to unmask]
> Subject: Re: [CODE4LIB] linked data question
> Yes, I would expect each organization to fetch linked data resources and
> maintain their own local indexes, and probably also cache the remote
> resources to make it easier and faster to work with them.  I've heard
> discussions of caching strategies, shared indexing tools, etc., but haven't
> heard about anyone distributing pre-indexed content.
> Many vocabularies are available as RDF data dumps, which can sometimes be
> very large and unwieldy.  So I could imagine being able to download, e.g.,
> a Solr index of the vocabulary instead of having to index it yourself.  But
> I haven't heard of anybody doing that.
> -Esme
> > On 02/24/15, at 10:56 AM, Harper, Cynthia <[log in to unmask]> wrote:
> >
> > Ann - I thought I'd refer part of your question to Code4lib.
> >
> > As far as having to click to get the linked data: systems that use
> linked data will be built to traverse the link without the user being aware
> - it's the system that will follow that link, find the distributed data,
> and then display it as it is programmed to do.
> >
> > I think Code4libbers will know more about my question about distributed
> INDEXES?  This is my rudimentary knowledge of linked data - that the
> indexing process will have to transit the links, and build a local index to
> the data, even if in displaying the individual "records", it goes again out
> to the source.  But are there examples of distributed systems that have
> distributed INDEXES?  Or am I wrong in envisioning an index as a separate
> entity from the data in today's technology?
> >
> > Cindy Harper
> >
> > -----Original Message-----
> > From: Harper, Cynthia
> > Sent: Tuesday, February 24, 2015 1:20 PM
> > To: [log in to unmask]; 'Williams, Ann'
> > Subject: RE: linked data question
> >
> > What I haven't read, but what I have wondered about, is whether so far,
> linked DATA is distributed, but the INDEXES are local?  Is there any
> example of a system with distributed INDEXES?
> >
> > Cindy Harper
> > [log in to unmask]
> >
> > -----Original Message-----
> > From: AUTOCAT [mailto:[log in to unmask]] On
> Behalf Of Williams, Ann
> > Sent: Tuesday, February 24, 2015 10:26 AM
> > To: [log in to unmask]
> > Subject: [ACAT] linked data question
> >
> > I was just wondering how linked data will affect OPAC searching and
> discovery vs. a record with text approach. For example, we have various 856
> links to publisher, summary and biographical information in our OPAC as
> well as ISBNs linking to ContentCafe. But none of that content is
> discoverable in the OPAC and it requires a further click on the part of
> patrons (many of whom won't click).
> >
> > Ann Williams
> > USJ