Hi Kelley.

The XC Metadata Services Toolkit will do much of what you have outlined
below - I won't go into detail here about what functionality is
available now and what we're still developing.

We have built the MST in such a way that there is a basic platform that
runs metadata services on top of it.  We have written services that
normalize/clean up MARCXML data and that transform MARCXML to our
XC Schema, which is a FRBRized schema.  We are currently working on
another service that will aggregate records at various FRBR levels.  You
could use the MST platform and write your own services for your own
purposes, rather than starting from scratch with everything.  OR, you
could start with the services that we've written and modify them for
your needs - I think this would work rather well for you, actually,
since we've already done a great deal with FRBRization and managing
records for FRBR entities - you could just tweak the mappings so that
they conform to OLAC's transformations, and write additional
normalization steps to do whatever large-scale data cleanup you
need. You could also play around with our services first and then write
your own later if they didn't work for you.

Tweaking the existing services or writing new ones would require some
Java programming expertise, but not an extensive amount.  

There are links on www.eXtensibleCatalog.org for downloading the
software if you are ready for that - I'd be happy to talk with you to
provide more information as well. 

Jennifer

Jennifer Bowen
Assistant Dean, University of Rochester River Campus Libraries
Co-Executive Director, eXtensible Catalog Organization, LLC
585-275-0004    [log in to unmask]    
 

-----Original Message-----
From: Code for Libraries [mailto:[log in to unmask]] On Behalf Of
McGrath, Kelley C.
Sent: Thursday, April 01, 2010 12:51 PM
To: [log in to unmask]
Subject: Re: [CODE4LIB] Looking for advice for project to transform MARC
bib data into work records

Thanks, Jonathan. I had thought about the XC Metadata Toolkit, but I
think perhaps our use case is sufficiently non-standard that it might
not be easier than starting from scratch. 1) The moving image cataloging
community has significant disagreements with RDA's interpretation of the
FRBR model and for this project we are using a modified FRBR model
anyway; I think Rochester is using a more orthodox model. 2) I think we
are trying to squeeze a lot more out of the metadata than most of the
other FRBRizing applications I've seen. But it's probably worth checking
out. I do think XC's OAI toolkit in particular and possibly the
discovery layer could be useful to us when we get further along.

I think we are probably looking at hiring someone to do custom
programming, but I feel that I am a bit in over my head knowing how to
specify what to ask for, much less to come up with a realistic budget
and timeline for a grant. And I guess the reason I was asking about
tools was that I was getting the impression that we would have to
specify something when advertising for a programmer. I obviously have no
particular preferences except for not wanting to end up in a dead end.

Kelley

-----Original Message-----
From: Code for Libraries [mailto:[log in to unmask]] On Behalf Of
Jonathan Rochkind
Sent: Thursday, April 01, 2010 11:47 AM
To: [log in to unmask]
Subject: Re: [CODE4LIB] Looking for advice for project to transform MARC
bib data into work records

I don't have enough experience with that problem area to be sure of
what's out there and what would work, but I was pretty impressed with
the presentation on the XC Metadata Toolkit at the recent Code4Lib
conference. I think it is designed to do at least some, if not all, of
the tasks you outline, and it seems to be pretty solid. It might be
worth contacting Jennifer Bowen to see what she has to say about your
problem case and XC's ability to meet it, or whether she has any other
tool suggestions; she's pretty clever on this stuff. And XC is probably
interested in getting their software used by folks like you.

Another alternative is simply hiring a programmer to write something 
custom to do exactly what you need; that's what most of us probably end 
up doing, because we don't know about suitable general-purpose metadata 
control software that's still customizable enough to do what we need.  
But I think the XC Metadata Toolkit is definitely _intended_ to fill 
that niche. If you can hire someone with experience with library
metadata, and you can have people giving them requirements who
understand what software can and cannot actually do (like yourself,
Kelley), this is not TOO unusual a project in just about any
programming language (I'd do it in ruby, but let's not start THAT
thread, which has thankfully died again).

Jonathan

McGrath, Kelley C. wrote:
> I am hoping someone can help me with my current conundrum. I am
looking for recommendations for tools and methods for a project I am
working on to try to implement some of the Online Audiovisual Catalogers
(OLAC) work on FRBR works and moving images
(http://www.olacinc.org/drupal/?q=node/27). I am not a programmer or
coder, but we are going to have to hire someone to do this and give them
some direction. So I am interested in what tools you would recommend for
this purpose and why, as well as any other advice anyone can give me.
>
> Basically what we want to do is take a large number of MARC
bibliographic records for moving images, extract the information that
might describe the FRBR Work and parse and normalize it. We then want to
use this data to create provisional Work records. I am not so worried
about getting the data out of MARC, but about how to work with the data
once it's out. I have listed the main steps we anticipate needing in
broad outlines below.
>
> 1.      Parsing and Normalizing Data
>
> There are several types of situations, from easiest to hardest, with
examples:
>
> a.       Data that is already in machine-comprehensible form:
>
> Coded language data, e.g., an 041 $h of fre means the movie was
originally in French
>
> A 700 field with a $4 of drt means that the name in that 700
is (hopefully) the authorized form of the name of the director of the
movie
>
> A DateType fixed field of p means that the lower of Date1 and Date2 is
the original date of the movie (technically this should always be Date2,
but some libraries reverse the order to support sorting by original date
in their OPACs)
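>
> To make the coded-data case concrete, the logic we have in mind looks
> roughly like the sketch below (written as Python with the pymarc
> library and a made-up filename purely for illustration; we have not
> committed to any particular language or toolkit):
>
> from pymarc import MARCReader  # pymarc is just one possible MARC library
>
> def coded_work_data(record):
>     """Pull Work-level data out of fields that are already coded."""
>     data = {}
>     # 041 $h = original language of the work (e.g., 'fre')
>     for f041 in record.get_fields('041'):
>         if f041.get_subfields('h'):
>             data['original_language'] = f041.get_subfields('h')[0]
>             break
>     # 700 fields with $4 drt = authorized form of the director's name
>     data['directors'] = [f.get_subfields('a')[0]
>                          for f in record.get_fields('700')
>                          if 'drt' in f.get_subfields('4')
>                          and f.get_subfields('a')]
>     # 008/06 (DateType) of 'p': take the lower of Date1 (008/07-10)
>     # and Date2 (008/11-14) as the original date, since some libraries
>     # reverse the order
>     for f008 in record.get_fields('008'):
>         if len(f008.data) > 14 and f008.data[6] == 'p':
>             years = [int(d) for d in (f008.data[7:11], f008.data[11:15])
>                      if d.isdigit()]
>             if years:
>                 data['original_date'] = min(years)
>     return data
>
> with open('bibs.mrc', 'rb') as fh:
>     for record in MARCReader(fh):
>         if record is not None:  # skip unparseable records
>             print(coded_work_data(record))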
>
> b.      Data that can be extracted using keywords in textual fields
>
> We can often extract an original date from a note field by identifying
the combination of a year (18xx, 19xx, or 200x) and a keyword that
signifies that it is an original production date note, such as
"originally," "release," "broadcast," or "produced."
>
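> Again purely as an illustrative sketch (the keyword list and the
> choice of note fields are placeholders that would need tuning against
> real data), the keyword approach might look like:
>
> import re
>
> DATE_NOTE_KEYWORDS = re.compile(
>     r'originally|release|broadcast|produced', re.IGNORECASE)
> YEAR = re.compile(r'\b(18\d\d|19\d\d|200\d)\b')
>
> def original_date_from_notes(record):
>     """Return a year from a note that looks like an original
>     production/release date note, or None if nothing matches."""
>     for note in record.get_fields('500', '518'):
>         text = note.format_field()
>         if DATE_NOTE_KEYWORDS.search(text):
>             match = YEAR.search(text)
>             if match:
>                 return int(match.group(1))
>     return None
>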
> c.       Data that requires matching between information in more than
one field
>
> In order to identify the authorized form of the name of a person
performing a particular function, in many cases we have to try to match
the authorized form of the name to a transcribed statement including
both the function and the name. Note that functions can be transcribed
in many forms (directed by, director, direction) and languages (Regie,
kantoku). Also the transcribed name may vary from the authorized name
("Andrei Tarkovsky" vs. "Tarkovskii, Andrei Arsenevich"). Neither of
these problems is practical to solve completely, but we would like to
be able to make inferences as follows (probably starting from the 7xx
fields and trying to find a matching transcribed statement).
>
> 245$c includes "directed by Steven Spielberg"
> + 700 Spielberg, Steven, $d 1946-
> = n  79148103 (Spielberg, Steven, $d 1946-) is the director
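>
> We would not expect anything sophisticated here at first. Even a crude
> sketch like the one below (the function-term pattern and the fuzzy
> match threshold are only placeholders) captures the kind of inference
> we want for cases like the Spielberg example above:
>
> import re
> from difflib import SequenceMatcher
>
> # a real pattern would need many more terms, forms, and languages
> DIRECTOR_TERMS = re.compile(r'direct(ed|ion|or)|regie|kantoku',
>                             re.IGNORECASE)
>
> def credited_as_director(record, heading_field):
>     """Guess whether the person in a 7xx heading is named as director
>     in the 245 $c statement of responsibility."""
>     f245 = record.get_fields('245')
>     subfield_c = f245[0].get_subfields('c') if f245 else []
>     statement = subfield_c[0] if subfield_c else ''
>     if not DIRECTOR_TERMS.search(statement):
>         return False
>     names = heading_field.get_subfields('a')
>     if not names:
>         return False
>     # 'Spielberg, Steven' -> 'spielberg'
>     surname = names[0].split(',')[0].lower()
>     words = re.findall(r"[\w'-]+", statement.lower())
>     # allow approximate matches to catch transliteration differences
>     # like "Tarkovsky" vs. "Tarkovskii"
>     return any(SequenceMatcher(None, surname, w).ratio() > 0.8
>                for w in words)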
>
> 2.      Ranking Information Sources Within Records
>
> We have multiple possible methods for extracting most types of data.
We plan to rank these data sources in terms of their probable accuracy.
Some of the ranking we can predict up front, in which case we can
probably skip step 1 for the non-preferred data sources. Some data
sources we can probably rank based on analysis of preliminary results.
Some sources probably can't be ranked, and we would want to know when a
record presents conflicting data (e.g., one original date in a note and
a different one in a fixed field).
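>
> Reusing the sketches above, the ranking step for a single element
> could start as simply as this (the ordering of sources here is a
> placeholder we would expect to revise after preliminary analysis):
>
> def best_original_date(record):
>     """Try data sources in rank order and flag disagreements."""
>     candidates = {
>         'fixed_field': coded_work_data(record).get('original_date'),
>         'date_note': original_date_from_notes(record),
>     }
>     found = {src: year for src, year in candidates.items() if year}
>     conflict = len(set(found.values())) > 1  # report these for review
>     for source in ('fixed_field', 'date_note'):  # preferred first
>         if source in found:
>             return found[source], conflict
>     return None, conflict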
>
> 3.      Clustering Records for the Same Work
>
> In our data pool, we will have cases where multiple bibliographic
records represent the same work. We need to cluster the ones that
represent a given work based on data extracted in the above steps.
Information such as title, original date, director, or production
company is probably useful for this purpose.
>
> 4.      Creating Provisional Work Records by Identifying the Most
Likely Value for Each Data Element from the Work Cluster
>
> Once we have clustered the records for the works, we want to create a
single composite work record from the data in the clustered records. We
will need some algorithm, possibly as simple as a majority vote or
perhaps a majority vote per manifestation rather than per record, to
determine the probable best value for each field in our preliminary work
record.
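>
> A plain per-record majority vote for each element could be as simple
> as the sketch below; a per-manifestation vote would first collapse
> records describing the same manifestation and then vote the same way:
>
> from collections import Counter
>
> def composite_work(cluster, elements=('title', 'original_date',
>                                       'original_language')):
>     """Pick the most common non-empty value for each element across
>     the clustered records."""
>     work = {}
>     for element in elements:
>         values = [rec.get(element) for rec in cluster if rec.get(element)]
>         if values:
>             work[element] = Counter(values).most_common(1)[0][0]
>     return work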
>
> Thanks in advance for any advice on tools or general thoughts on this.
Also, are there any particular skills or qualities we should be looking
for in a programmer?
>
> Kelley McGrath
> [log in to unmask]
>
>