I agree entirely that these would need to be collections of triples, each with its own set of attributes/metadata describing the collection - basically a "record" with triples as the data elements.
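As a concrete illustration of such a "record" (just a sketch - the rdflib library and every URI below are my own choices for the example, not anything agreed in this thread), a named graph can hold the descriptive triples while metadata about the collection hangs off the graph's own identifier:

    # Sketch only: a "record" modelled as a named graph, with metadata
    # about the collection kept in the default graph. All URIs and the
    # choice of rdflib are illustrative assumptions.
    from rdflib import Dataset, Literal, Namespace, URIRef
    from rdflib.namespace import DCTERMS, RDF

    EX = Namespace("http://example.org/")

    ds = Dataset()

    # The named graph acting as the bibliographic "record".
    record = ds.graph(URIRef("http://example.org/records/bib-123"))
    record.add((EX["work/123"], RDF.type, EX.Work))
    record.add((EX["work/123"], DCTERMS.title, Literal("Moby-Dick")))

    # Metadata describing the collection itself, attached to the graph URI.
    ds.add((record.identifier, DCTERMS.source, Literal("Library A")))
    ds.add((record.identifier, DCTERMS.modified, Literal("2012-08-27")))

    print(ds.serialize(format="trig"))

The point is only that the collection is itself an addressable thing that can carry provenance, not just a bag of anonymous triples.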

But I see a bigger problem with the direction this thread has taken so far. The notion of versions has been conditioned by the use of something like GitHub as the underlying "versioning platform". GitHub (like all software versioning systems) is built around temporal versions, where each version is, in some way, an evolved state of the same underlying thing - a program or whatever. So the versions are related linearly in time as well as in terms of added, improved, or fixed functionality. Yes, the codebase (the underlying "thing") can fork or split in a number of ways, but the results are all versions of the same thing, progressing through time.

In the existing bibliographic case we have many records which purport to be about the same thing, but contain different data values for the same elements. And these are the "versions" we have to deal with, and eventually reconcile. They are not descendants of a common original; they are independent entities, whether they are recorded as singular MARC records or as collections of LD triples. I would suggest that at all levels, from the triple or key/value field pair up to the triple collection or fielded record, what we have are "alternates", not "versions".
 
Thus the alternates exist at the triple level, and also at the "collection" level (the normal bibliographic unit record we are familiar with). Those alternates could then be allowed versions, which are the attempts to improve the quality (your definition of quality is as good as mine) over time. And within a closed group of alternates (of a single bib unit) these versioned alternates would (in a perfect world) iterate toward a common descendant which had the same agreed, authorized set of triples. Of course this would only be the "authorized form" for those organizations which recognized the arrangement.
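A rough sketch of how that distinction might be modelled (the class and field names here are my own invention, purely for illustration): each bibliographic unit holds a set of alternates, and each alternate carries its own linear chain of versions.

    # Illustrative sketch of the alternates-vs-versions distinction.
    # Class and field names are invented, not a proposal.
    from dataclasses import dataclass, field


    @dataclass
    class Version:
        """One temporal revision of a single alternate."""
        number: int
        triples: set          # the triples as asserted at this revision
        note: str = ""        # why this revision was made


    @dataclass
    class Alternate:
        """An independent description of the same bib entity (e.g. one
        organization's record), with its own version history."""
        source: str
        versions: list = field(default_factory=list)

        def current(self):
            return self.versions[-1] if self.versions else None


    @dataclass
    class BibUnit:
        """The bibliographic entity, gathering all competing alternates."""
        identifier: str
        alternates: list = field(default_factory=list)


    # Example: two organizations' alternates for the same bib unit.
    unit = BibUnit("bib-123")
    lib_a = Alternate("Library A", [Version(1, {("work/123", "title", "Moby Dick")})])
    lib_b = Alternate("Library B", [Version(1, {("work/123", "title", "Moby-Dick; or, The Whale")})])
    unit.alternates.extend([lib_a, lib_b])

In this shape, a "correction" becomes a new Version appended to one organization's Alternate, rather than an untracked overwrite of a shared record.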

But allowing alternates and their versions does provide a way of tracking the original problem of three organizations endlessly copying each other to "correct" their data. In this model it would be an alternate/version spiral of states, rather than a flat circle of ever-changing copies with no history and no idea of which was master. (Try re-reading Stuart's "(a), (b), (c)" below with the idea of alternates as well as versions of the Datasets; I think it becomes clearer what is happening.) There is still no master, but at least the state changes can be properly tracked and checked by software (and/or humans), so the endless cycle can be addressed - probably by an outside (human) decision about the "correct" form of a triple to use for this bib entity.
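To make the "tracked and checked by software" part concrete, here is one way the check might look, building on the invented classes above (again, an illustrative sketch, not real infrastructure): if each revision records which organization it copied its value from, a trivial walk of that provenance chain exposes the A-copies-B-copies-C-copies-A loop.

    # Illustrative only: detect the "endless copying" cycle by walking the
    # copied-from provenance of a single triple's value across alternates.
    def find_copy_cycle(copied_from, start):
        """copied_from maps an organization to the organization it last
        copied a given value from (or None if it asserted it originally).
        Returns the cycle as a list if one exists, else None."""
        seen = []
        current = start
        while current is not None:
            if current in seen:
                return seen[seen.index(current):] + [current]
            seen.append(current)
            current = copied_from.get(current)
        return None


    # Example: A copied from B, B copied from C, C copied from A.
    provenance = {"Org A": "Org B", "Org B": "Org C", "Org C": "Org A"}
    print(find_copy_cycle(provenance, "Org A"))
    # -> ['Org A', 'Org B', 'Org C', 'Org A']

Once the loop is visible, the decision about which form to treat as "correct" can be made deliberately instead of being buried in round after round of silent copying.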

Or this may all prove to be an unnecessary complication.

Peter


> -----Original Message-----
> From: Code for Libraries [mailto:[log in to unmask]] On Behalf Of stuart yeates
> Sent: Monday, August 27, 2012 3:42 PM
> To: [log in to unmask]
> Subject: Re: [CODE4LIB] Corrections to Worldcat/Hathi/Google
> 
> These have to be named graphs, or at least collections of triples which can be processed through
> workflows as a single unit.
> 
> In terms of LD their version needs to be defined in terms of:
> 
> (a) synchronisation with the non-bibliographic real world (i.e. Dataset Z version X was released at
> time Y)
> 
> (b) correction/augmentation of other datasets (i.e. Dataset F version G contains triples augmenting
> Dataset H versions A, B, C and D)
> 
> (c) mapping between datasets (i.e. Dataset I contains triples mapping between Dataset J version K and
> Dataset L version M (and vice versa))
> 
> Note that a 'Dataset' here could be a bibliographic dataset (records of works, etc), a classification
> dataset (a version of the Dewey Decimal Scheme, a version of the Māori Subject Headings, a version of
> Dublin Core Scheme, etc), a dataset of real-world entities to do authority control against (a dbpedia
> dump, an organisational structure in an institution, etc), or some arbitrary mapping between some
> arbitrary combination of these.
> 
> Most of these are going to be managed and generated using current systems with processes that involve
> periodic dumps (or drops) of data (the dbpedia drops of wikipedia data are a good model here). git
> makes little sense for this kind of data.
> 
> github is most likely to be useful for smaller niche collaborative collections (probably no more than
> a million triples) mapping between the larger collections, and scripts for integrating the collections
> into a sane whole.
> 
> cheers
> stuart
> 
> On 28/08/12 08:36, Karen Coyle wrote:
> > Ed, Corey -
> >
> > I also assumed that Ed wasn't suggesting that we literally use github
> > as our platform, but I do want to remind folks how far we are from
> > having "people friendly" versioning software -- at least, none that I
> > have seen has felt "intuitive." The features of git are great, and
> > people have built interfaces to it, but as Galen's question brings
> > forth, the very
> > *idea* of versioning doesn't exist in library data processing, even
> > though having central-system based versions of MARC records (with a
> > single time line) is at least conceptually simple.
> >
> > Therefore it seems to me that first we have to define what a version
> > would be, both in terms of data but also in terms of the mind set and
> > work flow of the cataloging process. How will people *understand*
> > versions in the context of their work? What do they need in order to
> > evaluate different versions? And that leads to my second question:
> > what is a version in LD space? Triples are just triples - you can add
> > them or delete them but I don't know of a way that you can version
> > them, since each has an independent T-space existence. So, are we
> > talking about named graphs?
> >
> > I think this should be a high priority activity around the "new
> > bibliographic framework" planning because, as we have seen with MARC,
> > the idea of versioning needs to be part of the very design or it won't
> > happen.
> >
> > kc
> >
> > On 8/27/12 11:20 AM, Ed Summers wrote:
> >> On Mon, Aug 27, 2012 at 1:33 PM, Corey A Harper
> >> <[log in to unmask]>
> >> wrote:
> >>> I think there's a useful distinction here. Ed can correct me if I'm
> >>> wrong, but I suspect he was not actually suggesting that Git itself
> >>> be the user-interface to a github-for-data type service, but rather
> >>> that such a service can be built *on top* of an infrastructure
> >>> component like GitHub.
> >> Yes, I wasn't saying that we could just plonk our data into Github,
> >> and pat ourselves on the back for a good day's work :-) I guess I was
> >> stating the obvious: technologies like Git have made once hard
> >> problems like decentralized version control much, much easier...and
> >> there might be some giants shoulders to stand on.
> >>
> >> //Ed
> >
> 
> 
> --
> Stuart Yeates
> Library Technology Services http://www.victoria.ac.nz/library/