See the historical comment in the text below. But, to look forward -
It seems to me that we should be able to design a model with graceful degradation from the full MARC data element set (vocabulary, if you insist) down to a core set, which lets systems fill in what they have and, on the receiving end, extract what they can find. Each system can work with its own schema, if it must, as long as the mapping from its level of detail to whatever designated level of detail it wishes to accept in the exchange format is created first. Obviously greater levels of detail cannot be inferred from lesser, so many systems would be working with less data than they would like, or create locally. But that is the nature of bibliographic data - it is never complete, or at least must be processed on the assumption that it isn't.
Using RDF and entity modeling, it should be possible to devise a (small) number of levels, from a basic core set (akin to DC, if not semantically identical) through to a "2,500-attribute*" person authority record (plus the other bibliographic entities), and to produce pre-parsers that massage these into whatever the ILS (or other repository/system) is comfortable with. Since the receiving system is fixed for any one installation, it does not need the complexity we build into our federated search platforms, and converters would be largely re-usable.
So, what about a Russian doll bibliographic schema? (Who gets to decide on what goes in which level is for years of committee work - unemployment solved!)
* Number obtained from a line count of http://www.loc.gov/marc/authority/ecadlist.html - so rather approximate.
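The "Russian doll" idea above could be sketched roughly as follows. Everything here is invented for illustration - the level names, the field sets, and the `degrade` function are hypothetical stand-ins, not any real schema or committee's decision about what goes in which level:

```python
# Sketch of tiered "Russian doll" levels: each level's field set nests
# inside the next, and a record is degraded to whatever level the
# receiving system accepts. Field names and levels are invented.

# None means "accept every field the sender provides" (the full set).
LEVELS = {
    "core": {"title", "creator", "date", "identifier"},          # DC-like
    "mid":  {"title", "creator", "date", "identifier",
             "edition", "publisher", "subject"},
    "full": None,
}

def degrade(record: dict, target_level: str) -> dict:
    """Keep only the fields the target level understands.

    Greater detail cannot be inferred from lesser, so conversion is
    one-way: fields outside the target level are simply dropped.
    """
    allowed = LEVELS[target_level]
    if allowed is None:
        return dict(record)
    return {k: v for k, v in record.items() if k in allowed}

# A sender with rich data exchanging with a less detailed receiver:
rich = {
    "title": "Example",
    "creator": "Doe, J.",
    "date": "2011",
    "identifier": "urn:isbn:0000000000",
    "edition": "2nd",
    "relator_code": "aut",   # detail only the full level carries
}
print(degrade(rich, "mid"))   # drops relator_code
print(degrade(rich, "core"))  # drops edition and relator_code too
```

Because the receiving level is fixed per installation, a converter like this is written once per level pair, not once per sending system - which is why the pre-parsers would be largely re-usable.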
> -----Original Message-----
> From: Code for Libraries [mailto:[log in to unmask]] On Behalf Of Jonathan Rochkind
> Sent: Monday, December 05, 2011 10:57 AM
> To: [log in to unmask]
> Subject: Re: [CODE4LIB] Models of MARC in RDF
>
> On 12/5/2011 1:40 PM, Karen Coyle wrote:
> >
> > This brings up another point that I haven't fully grokked yet: the use
> > of MARC kept library data "consistent" across the many thousands of
> > libraries that had MARC-based systems.
>
> Well, only somewhat consistent, but, yeah.
>
> > What happens if we move to RDF without a standard? Can we rely on
> > linking to provide interoperability without that rigid consistency of
> > data models?
>
> Definitely not. I think this is a real issue. There is no magic to "linking" or RDF that provides
> interoperability for free; it's all about the vocabularies/schemata -- whether in MARC or in anything
> else. (Note different national/regional library communities used different schemata in MARC, which
> made interoperability infeasible there. Some still do, although gradually people have moved to Marc21
> precisely for this reason, even when Marc21 was less powerful than the MARC variant they started with).
Just a comment about the "good old days", when we had to work with USMARC, UKMARC, DANMARC, MAB1, AUSMARC, and so on: "interoperability infeasible" was not the situation. It was perfectly possible to convert records from one format to another - with some loss of data into the less specific format, of course, which meant that a "round trip" was not possible. But the major elements were present in all of them, and that made conversion practically useful. We did this at the British Library when I was there, and later, as a commercial ILS vendor, we did it as a service for OCLC (remember them?). It did involve specific coding, and an internal database system built to accommodate the variability.
>
> That is to say, if we just used MARC's own implicit vocabularies, but output them as RDF, sure, we'd
> still have consistency, although we wouldn't really _gain_ much. On the other hand, if we switch to a
> new better vocabulary -- we've got to actually switch to a new better vocabulary. If it's just
> "whatever anyone wants to use", we've made it VERY difficult to share data, which is something pretty
> darn important to us.
>
> Of course, the goal of the RDA process (or one of em) was to create a new schema for us to
> consistently use. That's the library community effort to maintain a common schema that is more
> powerful and flexible than MARC. If people are using other things instead, apparently that failed, or
> at least has not yet succeeded.