David, sorry, my message was unclear and I should have given the 
example for this particular case. My objection is not about the need to 
add more complex conditional logic to the MARC holdings parsing code. 
The exception to the rule is, after all, documented.

What I take issue with is that the ambiguity does not provide enough 
guidance or incentive for the cataloger to ensure consistent data. 
Because we have opened the possibility that in some cases it is 
legitimate to put chronological data in enumeration subfields, I am 
encountering cases in which it is far less clear that the shift should 
have taken place.

This is a caption/pattern and data combination that I have encountered:

   854 00 $8 1 $a (year) $b no. $o (suppl.)
   864 40 $8 1.1 $a 1996 $b 6-12

What someone has done here is shift the chronological data (year) into 
the subfield for "Highest level of enumeration" even though there is 
enumeration data (no.). So why has this happened? Is someone trying to 
say that the year data is enumeration because this publisher uses the 
year as its highest level of enumeration? If so, it is ambiguous 
whether that is the intent or whether this is another case of the 
subfield shift, and that makes it extremely hard to detect with code. 
Or is the cataloger just used to putting numbered (as opposed to 
volume) enumeration in subfield $b, so that it felt wrong to put the 
"no." in subfield $a as the highest level? I understand that on some 
level, because I would expect cataloging work to produce habit-forming 
patterns of workflow. We are, after all, trying to impose order upon 
this data.
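
The best I have managed so far is a crude flag for human review: treat 
a field as suspect when a year-like value sits in an enumeration 
subfield while other enumeration is also present and no chronology 
subfields are used at all. Here is a minimal sketch in Python; the 
dict-of-subfields representation and the function name are my own 
invention, not taken from any particular MARC library:

   import re

   # A field here is just a dict of subfield code => value; the 864 above
   # would be {'8': '1.1', 'a': '1996', 'b': '6-12'}. This is a throwaway
   # representation for illustration only.
   ENUM_CODES = set('abcdef')    # enumeration subfields $a-$f
   CHRON_CODES = set('ijklm')    # chronology subfields $i-$m
   YEAR_LIKE = re.compile(r'^(1[6-9]|20)\d{2}$')  # crude "looks like a year"

   def suspect_subfield_shift(subfields):
       """Flag a field where a year-like value sits in an enumeration
       subfield even though other enumeration data is also present and
       no chronology subfields are used."""
       enum = {c: v for c, v in subfields.items() if c in ENUM_CODES}
       year_like = [c for c, v in enum.items() if YEAR_LIKE.match(v.strip())]
       has_chron = any(c in CHRON_CODES for c in subfields)
       return bool(year_like) and len(enum) > len(year_like) and not has_chron

   # The 864 above gets flagged:
   # suspect_subfield_shift({'8': '1.1', 'a': '1996', 'b': '6-12'})  => True

Of course this also flags legitimate publisher numbering that merely 
looks like a year, which is exactly the ambiguity I am complaining 
about.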

In the end, when there is this ambiguity, I suspect people are 
experimenting with their data input and stopping when the data looks 
reasonable in the current OPAC display. Not all of our cataloging is 
done by trained catalogers. Not all of it is even done by librarians 
who once took a cataloging class, maybe decades ago. Some of our 
holdings cataloging is done by student staff who have no vested 
interest in the long-term state of the data.

So I then ask myself: what incentive could the designers of this MARC 
holdings spec possibly have had for encouraging the subfield shift? Is 
it just that it is always desirable to have *something* in subfield $a? 
If that is the case, then enumeration and chronology should not be 
sharing the same MARC fields.

Presently, I have only tackled the parsing issue fully for visual 
display, but there are other significant areas where this ambiguity is 
more problematic. Suppose I want to match my holdings against my 
consortium, HathiTrust, OCLC, or Google Books. That is a case where 
holdings statements need to be exploded from summary ranges (e.g., 
volumes 1-50, 59-100) into individually enumerated volumes and numbers. 
If I can't trust the semantics of the metadata standard, that is a very 
hard thing to do.
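
To make the expansion concrete, here is the sort of transformation I 
mean, sketched in Python against purely numeric, made-up input; a real 
version would have to handle combined enumeration/chronology captions, 
gaps, supplements, and non-numeric numbering:

   def explode_summary(summary):
       """Expand a summary statement like 'volumes 1-50, 59-100' into a
       list of individual volume numbers. Assumes purely numeric ranges
       and a 'volumes'/'v.' caption; nothing else is handled."""
       volumes = []
       for chunk in summary.split(','):
           chunk = chunk.replace('volumes', '').replace('v.', '').strip()
           if '-' in chunk:
               start, end = (int(part) for part in chunk.split('-'))
               volumes.extend(range(start, end + 1))
           else:
               volumes.append(int(chunk))
       return volumes

   # explode_summary('volumes 1-50, 59-100') => [1, 2, ..., 50, 59, ..., 100]

Only after that kind of expansion can you do a set comparison against 
another institution's volume list, and a year masquerading as a volume 
number poisons exactly that comparison.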

-Steve

David Fiander wrote, On 1/28/12 11:22 AM:
> Stephen, regarding the question of ambiguity about chronology vs
> enumeration, this is what I did with my parser:
>
> # If items are identified by chronology only, with no separate
> # enumeration (eg, a newspaper issue), then the chronology is
> # recorded in the enumeration subfields $a - $f.  We can tell
> # that this is the case if there are $a - $f subfields and no
> # chronology subfields ($i-$k), and none of the $a-$f subfields
> # have associated $u or $v subfields, but there's a $w and no $x
>
> So, if there are ONLY enumeration fields, and none of the enumeration
> fields have corresponding frequency or continuity indicators, AND there's a
> publication frequency but no indication of when in the calendar the highest
> level of enumeration changes, THEN the enumerations are really chronology.
>
> Of course, this will still get certain patterns wrong, but it's the best
> one can do.
>
>
> On Sat, Jan 28, 2012 at 11:37, Stephen Meyer <[log in to unmask]> wrote:
>
>> War is hell, right? Lately we have been dealing with a particular
>> combination of two circles of the metadata Inferno: the first (limbo) and
>> sixth (heresy):
>>
>> The limbo I'll define as a poorly designed metadata spec: the MARC
>> holdings standard. The poor design in question is the ambiguity of
>> enumeration/chronology subfield assignment, specifically this rule:
>>
>>   When only chronology is used on an item (that is, the item
>>   carries no enumeration), the chronology is contained in the
>>   relevant enumeration subfield ($a-$h) instead of the chronology
>>   subfields ($i-$m).
>>   http://www.loc.gov/marc/holdings/hd863865.html
>>
>> This means that as a programmer trying to parse enumeration and chronology
>> data from our holdings data *that uses a standard* I cannot reliably know
>> that a subfield which has been defined as containing "First level of
>> enumeration" will in fact contain enumeration rather than chronology.
>> What's a programmer to do? Limbo, limbo.
>>
>> Others in this thread have already described the common heresy involved in
>> MARC cataloging: embedding data in a record intended for a single
>> institution, or worse, a specific OPAC.
>>
>> Due to the ambiguity in the spec and the desire to just make it look the
>> way I want it to look in my OPAC, the temptation is simply too great. In
>> the end, we have data that couldn't possibly meet the standard as it is
>> described, which means we spend more time than we expected parsing it in
>> the next system.
>>
>> In our case we work through these issues with an army of code tests. Our
>> catalogers and reference staff find broken examples of MARC holdings data
>> parsing in our newest discovery system, we gather the real-world MARC
>> records as a test data set and then we write a bunch of RSpec tests so we
>> don't undo previous bug fixes as we deal with the current ones. The
>> challenge is coming up with a fast and responsive mechanism/process for
>> adding a record to the test set once identified.
>>
>> -Steve
>>
>> Bess Sadler wrote, On 1/27/12 8:26 PM:
>>
>>> I remember the "required field" operation of... aught six? aught seven?
>>> It all runs together at my age. Turns out, for years people had been making
>>> shell catalog records for items in the collection that needed to be checked
>>> out but hadn't yet been barcoded. Some percentage of these people opted not
>>> to record any information about the item other than the barcode it left the
>>> building under, presumably because they were "in a hurry". If there was
>>> such a thing as a metadata crime, that'd be it.
>>>
>>> We were young and naive, we thought "why not just index all our catalog
>>> records into solr?" Little did we know what unholy abominations we would
>>> uncover. Out of nowhere, we were surrounded by zombie marc records,
>>> horrible half-created things, never meant to roam the earth or even to
>>> exist in a sane mind. They could tell us nothing about who they were, what
>>> book they had once tried to describe, they could only stare blankly and
>>> repeat in mangled agony "required field!" "required field!" "required
>>> field!" over and over…
>>>
>>> It took us weeks to put them all out of their misery.
>>>
>>> This is the first time I've ever spoken of this publicly. The support
>>> group is helping with the nightmares, but sometimes still, I wake in a cold
>>> sweat, wondering… did we really find them all?????
>>>
>>>
>>> On Jan 27, 2012, at 4:28 PM, Ethan Gruber wrote:
>>>
>>>> EDIT ME!!!!
>>>>
>>>> http://ead.lib.virginia.edu/vivaxtf/view?docId=uva-sc/viu00888.xml;query=;brand=default#adminlink
>>>>
>>>> On Fri, Jan 27, 2012 at 6:26 PM, Roy Tennant <[log in to unmask]> wrote:
>>>>
>>>>> Oh, I should have also mentioned that some of the worst problems occur
>>>>> when people treat their metadata like it will never leave their
>>>>> institution. When that happens you get all kinds of crazy cruft in a
>>>>> record. For example, just off the top of my head:
>>>>>
>>>>> * Embedded HTML markup (one of my favorites is an <img> tag)
>>>>> * URLs to remote resources that are hard-coded to go through a
>>>>> particular institution's proxy
>>>>> * Notes that only have meaning for that institution
>>>>> * Text that is meant to display to the end-user but may only do so in
>>>>> certain systems; e.g., "Click here" in a particular subfield.
>>>>>
>>>>> Sigh...
>>>>> Roy
>>>>>
>>>>> On Fri, Jan 27, 2012 at 4:17 PM, Roy Tennant <[log in to unmask]> wrote:
>>>>>
>>>>>> Thanks a lot for the kind shout-out Leslie. I have been pondering what
>>>>>> I might propose to discuss at this event, since there is certainly
>>>>>> plenty of fodder. Recently we (OCLC Research) did an investigation of
>>>>>> 856 fields in WorldCat (some 40 million of them) and that might prove
>>>>>> interesting. By the time ALA rolls around there may be something else
>>>>>> entirely that I could talk about.
>>>>>>
>>>>>> That's one of the wonderful things about having 250 million MARC
>>>>>> records sitting out on a 32-node cluster. There are any number of
>>>>>> potentially interesting investigations one could do.
>>>>>> Roy
>>>>>>
>>>>>> On Thu, Jan 26, 2012 at 2:10 PM, Johnston, Leslie <[log in to unmask]> wrote:
>>>>>
>>>>>>> Roy's fabulous "Bitter Harvest" paper:
>>>>>>> http://roytennant.com/bitter_harvest.html
>>>>>>>
>>>>>>
>>>>>>> -----Original Message-----
>>>>>>> From: Code for Libraries [mailto:[log in to unmask]] On Behalf Of Walter Lewis
>>>>>
>>>>>> Sent: Wednesday, January 25, 2012 1:38 PM
>>>>>>> To: [log in to unmask]
>>>>>>> Subject: Re: [CODE4LIB] Metadata war stories...
>>>>>>>
>>>>>>> On 2012-01-25, at 10:06 AM, Becky Yoose wrote:
>>>>>>>
>>>>>>>> - Dirty data issues when switching discovery layers or using
>>>>>>>> legacy/vendor metadata (ex. HathiTrust)
>>>>>>>>
>>>>>>>
>>>>>>> I have a sharp recollection of a slide in a presentation Roy
>>>>>>> Tennant offered up at Access (at Halifax, maybe), where he presented
>>>>>>> a range of dates extracted from an array of OAI-harvested records.
>>>>>>> The good, the bad, the incomprehensible, the useless-without-context
>>>>>>> (01/02/03 anyone?), and on and on. In my years of migrating data,
>>>>>>> I've seen most of those variants (except ones *intended* to be BCE).
>>>>>
>>>>>>
>>>>>>> Then there are the fielded data sets without authority control. My
>>>>>>> favourite example comes from staff who nominally worked for me, so
>>>>>>> I'm not telling tales out of school. The classic Dynix product had a
>>>>>>> Newspaper index module that we used before migrating it (PICK
>>>>>>> migrations; such a joy). One title had twenty variations on
>>>>>>> "Georgetown Independent" (I wish I was kidding) and the dates ranged
>>>>>>> from the early ninth century until nearly the 3rd millennium.
>>>>>>> (Apparently there hasn't been much change in local council over the
>>>>>>> centuries.)
>>>>>
>>>>>>
>>>>>>> I've come to the point where I hand-walk the spatial metadata to
>>>>>>> links to geonames.org for the linked open data. Never had to do it
>>>>>>> for a set with more than 40,000 entries though. The good news is
>>>>>>> that it isn't hard to establish a valid additional entry when one is
>>>>>>> required.
>>>>>
>>>>>>
>>>>>>> Walter
>>>>>>>
>>>>>>
>>>>>