Thanks for this pointer, Owen.

It's a nice illustration of the fact that what users actually want (well, I know I did back when I actually worked in large information services departments!) is something more like an intranet, where the content I find is weighted towards me, the "audience": the intranet already knows I'm a 2nd year medical student and that Mandarin is one of my registered preferred languages, or it "knows" I'm a rare books cataloguer and I want to see what "nine out of ten" other cataloguers recorded for this obscure and confusing title.

However, this sort of thing is quite demanding for linked data, isn't it? As I understand it, tracking who asserted what would mean lots of quads, named graphs or the like...
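To make that a bit more concrete, here's a crude sketch in Python using rdflib (the graph, agent and ISBN URIs are made up purely for illustration):

    from rdflib import Dataset, Literal, Namespace, URIRef

    DCT = Namespace("http://purl.org/dc/terms/")

    ds = Dataset()

    # Each source's assertions go into its own named graph, so every
    # statement is effectively a quad: (subject, predicate, object, graph).
    library_a = URIRef("http://example.org/graphs/library-a")
    g = ds.graph(library_a)
    g.add((URIRef("urn:isbn:9780000000000"), DCT.title, Literal("Some Title")))

    # Statements *about* the named graph (who supplied it, when) sit in
    # the default graph and carry the provenance for everything inside it.
    ds.add((library_a, DCT.source, URIRef("http://example.org/agents/library-a")))
    ds.add((library_a, DCT.dateSubmitted, Literal("2012-08-28")))

    print(ds.serialize(format="nquads"))

The named graph URI becomes the hook you hang "who said this, and when" on.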

In a parallel world, I'm currently writing up recommendations for aggregating ONIX for Books records. ONIX data can come from multiple sources that potentially assert different things about a given "book" (i.e. something with an ISBN, to keep it simple).

This is why *every single ONIX data element* can carry optional attributes of

@datestamp
@sourcename
@sourcetype [e.g. publisher, retailer, data aggregator... library?]

...and the ONIX message as a whole is set up with "header" and "product record" segments that each include some info about the sender/recipient/data record in question.
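As a rough illustration of how those attributes look in practice, here's a simplified, made-up fragment parsed with Python (real ONIX 3.0 is namespaced, uses coded values for @sourcetype, and nests title data rather more deeply):

    import xml.etree.ElementTree as ET

    # Two sources asserting different things about the "same" book,
    # each element stamped with who said it and when.
    onix = """
    <Product>
      <TitleText datestamp="20120828" sourcename="Example Press"
                 sourcetype="publisher">A Confusing Title</TitleText>
      <TitleText datestamp="20120901" sourcename="Example Aggregator"
                 sourcetype="data aggregator">The Confusing Title</TitleText>
    </Product>
    """

    root = ET.fromstring(onix)
    for el in root.iter("TitleText"):
        print(el.get("sourcename"), el.get("sourcetype"),
              el.get("datestamp"), "->", el.text)

An aggregator can then pick between (or rank) conflicting assertions based on the source type and the datestamp.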

How people in the book supply chain are implementing these is a distinct issue, but could these capabilities have some relevance to what you're discussing?

Do you have any other pointers to "intranet-like" catalogues?

In the museum space, there is of course this: http://www.researchspace.org/

Cheers,

Michael 

-----Original Message-----
From: Code for Libraries [mailto:[log in to unmask]] On Behalf Of Owen Stephens
Sent: 28 August 2012 21:37
To: [log in to unmask]
Subject: Re: [CODE4LIB] Corrections to Worldcat/Hathi/Google

The JISC funded CLOCK project did some thinking around cataloguing processes and tracking changes to statements and/or records - e.g. 
http://clock.blogs.lincoln.ac.uk/2012/05/23/its-a-model-and-its-looking-good/

Not solutions of course, but hopefully of interest

Owen

Owen Stephens
Owen Stephens Consulting
Web: http://www.ostephens.com
Email: [log in to unmask]
Telephone: 0121 288 6936

On 28 Aug 2012, at 19:43, Simon Spero <[log in to unmask]> wrote:

> On Aug 28, 2012, at 2:17 PM, Joe Hourcle wrote:
> 
>> I seem to recall seeing a presentation a couple of years ago from someone in the intelligence community, where they kept all of their intelligence as RDF quads so they could track the source of each statement.
>> 
>> They'd then assign a confidence level to each source, so they could get an overall level of confidence on their inferences.
>> [...]
>> It's possible that it was in the context of provenance, but I'm getting bogged down in too many articles about people storing provenance information using RDF-triples (without actually tracking the provenance of the triple itself)
> 
> Provenance is of great importance in the IC and related sectors.   
> 
> A good overview of the nature of evidential reasoning is David A. Schum, Evidential Foundations of Probabilistic Reasoning (Wiley & Sons, 1994; Northwestern University Press paperback edition, 2001).
> 
> There are usually papers on provenance and associated semantics at the GMU Semantic Technology for Intelligence, Defense, and Security (STIDS) conference. This year's conference is 23-26 October 2012; see http://stids.c4i.gmu.edu/ for more details.
> 
> Simon