We've not done any wholesale, across-the-board RDA work in our catalog, but here and there we bite off a particular element/issue to bring into (better) alignment with current practice.
The main problem is that in many cases you can't automate the transformation without introducing weird errors/newly incorrect data at a level we aren't yet prepared to accept.
I mainly work with the Ruby MARC library and it's basically completely agnostic about what the fields mean---it just gives you an easy way to get at a particular field/subfield. Traject's basic config does know that publisher info can be in 260 or 264. It's always been safer to pull date info from the 008 field, and that's what Traject prefers. Looks like it goes to 260$c if 008 date is missing, but doesn't check 264$c. Of course, with 264 for publication date, you'd need to limit to 2nd indicator = 1 and make sure you were grabbing the subfield from the right 264.1 field, if there were more than one.
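That preference order (008, then 260$c, then a properly coded 264) is easy enough to sketch by hand. Here's a minimal illustration of the logic, using plain hashes to stand in for ruby-marc field objects (with the real `marc` gem you'd go through `record['008'].value`, `record['260']`, `record.fields('264')`, and check `indicator2`); the record data is made up for the example:

```ruby
# Prefer 008 Date 1 (bytes 7-10), fall back to 260$c, then to a
# 264 with 2nd indicator 1 (publication statement).
def pub_year(rec)
  date1 = rec['008'] && rec['008'][7, 4]
  return date1 if date1 && date1.match?(/\A\d{4}\z/)

  f260 = rec['fields'].find { |f| f['tag'] == '260' }
  return f260['c'].gsub(/\D/, '')[0, 4] if f260 && f260['c']

  # Only 264 _1 is a publication statement; _0/_2/_3/_4 mean other things
  f264 = rec['fields'].find { |f| f['tag'] == '264' && f['ind2'] == '1' }
  return f264['c'].gsub(/\D/, '')[0, 4] if f264 && f264['c']

  nil
end

rec = {
  '008'    => '170503nuuuuuuuuxx'.ljust(40),  # Date 1 is 'uuuu': unusable
  'fields' => [
    { 'tag' => '264', 'ind2' => '1', 'c' => 'c2017.' }
  ]
}
puts pub_year(rec)  # => 2017
```

Even this toy version glosses over the real messiness: multiple 264 _1 fields, date ranges, copyright vs. publication dates, and 008 date types other than single dates.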
We currently have a locally created Perl-script to transform the bib data we pull out of our Sierra DNA db (non-MARC, but MARC-ish) into a shared set of properties/dimensions understood by the shared index beneath our consortial shared catalog. I've kept that up to date with changes to RDA, more or less... Some things we don't care about, like we don't feel it's yet useful or important to display the 336/7/8 fields to our users.
> Which leaves me to ask another question, “Why is there so much business logic embedded into the MARC cataloging rules?”
Ha. Because we've ended up using MARC for a purpose VERY different than it was designed for.
And because the typical MARC record is expressing data created following a number of cataloging standards --- the content standard (RDA), various vocabularies, ISBD punctuation, specific MARC input conventions --- and the semantic/content elements vs. the markup/encoding elements are all confused/entangled.
If you haven't seen it, Jason Thomale's article "Interpreting MARC: Where’s the Bibliographic Data?" http://journal.code4lib.org/articles/3832 is an excellent examination of this topic.
-=-
Kristina M. Spurgin -- Cataloging Instructor for 6 years, now Library Data Strategist/catalog-scale MARC wrangler
E-Resources & Serials Management, Davis Library
University of North Carolina at Chapel Hill
CB#3938, Davis Library -- Chapel Hill, NC 27514-8890
919-962-3825 -- [log in to unmask]
> -----Original Message-----
> From: Code for Libraries [mailto:[log in to unmask]] On Behalf Of
> Eric Lease Morgan
> Sent: Wednesday, May 3, 2017 11:14 AM
> To: [log in to unmask]
> Subject: [CODE4LIB] rda
>
> To what degree have any of us done massive RDA work in our catalogs, and
> similarly, to what degree have some of the community's MARC programming
> libraries been modified to account for RDA rules?
>
> For example, has anybody done any large scale find & replace operations
> against their catalogs to create RDA fields with values found in other MARC
> fields? Why or why not? Similarly, RDA seems to define a publication field in
> MARC 264. Correct? Yet the venerable Perl-based MARC::Record module
> (still) pulls publication dates from MARC 260. [1] A colleague found a bit of a
> discussion of this issue from the VuFind community. [2] Which leaves me to
> ask another question, “Why is there so much business logic embedded into
> the MARC cataloging rules?”
>
> Alas. How in the world is the library community ever going to have more
> consistently encoded data so it can actually disseminate information?
>
> [1] MARC::Record - http://bit.ly/2px2sC6
> [2] discussion - https://vufind.org/jira/browse/VUFIND-749
>
> —
> Eric Morgan