Kelley,
The work you are proposing is interesting and overlaps somewhat both
with work I have already done and with a new project I'm looking into
here at UVa.
I have been the primary contributor to the Marc4j Java project for the
past several years and am the creator of SolrMarc, which extracts data
from MARC records based on a customizable specification to build Solr
index records for rich discovery.
Much of my work on creating and improving these projects has been in
service of my actual job: creating and maintaining the Solr index
behind our Blacklight-based discovery interface. As part of that work
I have created custom SolrMarc routines that extract the format of
items, similar to what is described in Example 3. They look in the
leader, 006, 007 and 008 to determine the format "as-coded", but then
look further in the 245h, 300 and 538 fields to heuristically
determine when the format "as-coded" is incorrect and ought to be
overridden. Most of that heuristic work is targeted at video
material; it began when I found an item that, due to a coding error,
was listed as a "Video in Braille format".
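To give a sense of the shape of that logic, here is a minimal sketch
(not the production routine: the method name, the plain-string
arguments standing in for the Marc4j field lookups, and the particular
labels are only illustrative):

// Illustrative sketch of the "as-coded, then heuristically overridden"
// format logic described above; the real SolrMarc routine reads these
// values from the Marc4j Record object and handles many more cases.
public static String getFormat(String leader, String f007,
                               String gmd245h, String f300a) {
    String coded = "Unknown";
    char typeOfRecord = (leader != null && leader.length() > 6)
            ? leader.charAt(6) : ' ';
    // Format "as-coded": leader/06 = 'g' (projected medium) plus the
    // 007 specific material designation for videorecordings.
    if (typeOfRecord == 'g' && f007 != null && f007.length() > 1
            && f007.charAt(0) == 'v') {
        switch (f007.charAt(1)) {
            case 'd': coded = "Video (videodisc)";     break;
            case 'f': coded = "Video (videocassette)"; break;
            default:  coded = "Video (other)";         break;
        }
    } else if (typeOfRecord == 'a') {
        coded = "Book";
    }
    // Heuristic override: if the descriptive fields (245h, 300) clearly
    // describe a video carrier, trust them over the coded value. This
    // is the kind of check that catches errors like the "Video in
    // Braille format" item mentioned above.
    String desc = ((gmd245h == null ? "" : gmd245h) + " "
            + (f300a == null ? "" : f300a)).toLowerCase();
    if (desc.contains("videodisc") || desc.contains("dvd")) {
        coded = "Video (videodisc)";
    } else if (desc.contains("videocassette") || desc.contains("vhs")) {
        coded = "Video (videocassette)";
    }
    return coded;
}
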
Further, I have developed a set of custom routines that look more
closely at video items, one of which already extracts the runtime from
the 008[18-20] field. Modifying it from its current form, which
returns the runtime in minutes, to instead return HH:MM as specified
in your xls file, and further handling the edge case of
008[18-20] = "000" by returning "over 16:39", would take about fifteen
minutes.
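Concretely, the change would amount to something like this (a sketch
only; the method name and the assumption that the raw 008 string is
already in hand are mine):

// Sketch of the proposed change: 008[18-20] holds the running time in
// minutes for visual materials, and "000" means the running time
// exceeds 999 minutes, i.e. over 16:39.
public static String getRuntime(String field008) {
    if (field008 != null && field008.length() > 20) {
        String raw = field008.substring(18, 21);
        if (raw.equals("000")) return "over 16:39";
        if (raw.matches("\\d{3}")) {
            int minutes = Integer.parseInt(raw);
            // Convert minutes to HH:MM, e.g. 999 -> "16:39".
            return String.format("%d:%02d", minutes / 60, minutes % 60);
        }
    }
    return null;  // too short, "nnn", "---", blanks, etc.
}
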
Another of these custom routines, one that is more fully formed, is
code for extracting the director of a video from the MARC record. It
examines the contents of fields 245c, 508a, 500a, 505a and 505t,
employing heuristics and targeted natural language processing
techniques to attempt to correctly extract the "Director". At this
point I believe it achieves better results than a careful cataloger
would, even one who specializes in film and video.
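To illustrate only the general pattern (the class and method names
here are placeholders, and the single "directed by" regex stands in
for the much larger set of patterns and NLP-based cleanup in the real
routine):

import java.util.regex.Matcher;
import java.util.regex.Pattern;

// Placeholder sketch: scan candidate text (the contents of 245c, 508a,
// 500a, 505a and 505t, in that order of preference) for a
// "directed by ..." style credit.
public class DirectorSketch {
    private static final Pattern DIRECTED_BY = Pattern.compile(
            "(?i:directed\\s+by)\\s+([A-Z][\\w.'-]+(?:\\s+[A-Z][\\w.'-]+){0,3})");

    public static String extractDirector(String[] candidateFields) {
        for (String text : candidateFields) {
            if (text == null) continue;
            Matcher m = DIRECTED_BY.matcher(text);
            if (m.find()) {
                // Trim punctuation left over from the credit string.
                return m.group(1).replaceAll("[.,;:]+$", "");
            }
        }
        return null;
    }
}
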
The other project I have just started investigating is an effort to
create and/or flesh out MARC records for video items based on
heuristic matching of title, director and date against data returned
from publicly accessible movie information sites.
This more recent work may not be relevant to your needs, but the
custom extraction routines seem directly applicable to your goals and
may also provide a template that makes your other goals easier to
achieve.
-Robert Haschart
On 12/2/2013 12:37 AM, Kelley McGrath wrote:
> I wanted to follow up on my previous post with a couple points.
>
> 1. This is probably too late for anybody thinking about applying, but I thought there may be some general interest. I have put up some more detailed specifications about what I am hoping to do at http://pages.uoregon.edu/kelleym/miw/. Data extraction overview.doc is the general overview and the other files contain supporting documents.
>
> 2. I replied some time ago to Heather's offer below about her website that will connect researchers with volunteer software developers. I have to admit that looking for volunteer software developers had not really occurred to me. However, I do have additional things that I would like to do for which I currently have no funding so if you would be interested in volunteering in the future, let me know.
>
> Kelley
> [log in to unmask]
>
>
> On Tue, Nov 12, 2013 at 6:33 PM, Heather Claxton <[log in to unmask]> wrote:
> Hi Kelley,
>
> I might be able to help in your search. I'm in the process of starting a
> website that connects academic researchers with volunteer software
> developers. I'm looking for people to post programming projects on the
> website once it's launched in late January. I realize that may be a
> little late for you, but perhaps the project you mentioned in your PS
> ("clustering based on title, name, date etc.") would be perfect? The
> one caveat is that the website is targeting software developers who wish to
> volunteer. Anyway, if you're interested in posting, please send me an
> e-mail at [log in to unmask]. I would greatly appreciate it.
> Oh and of course it would be free to post :) Best of luck in your
> hiring process,
>
> Heather Claxton-Douglas
>
>
> On Mon, Nov 11, 2013 at 9:58 PM, Kelley McGrath <[log in to unmask]> wrote:
>
>> I have a small amount of money to work with and am looking for two people
>> to help with extracting data from MARC records as described below. This is
>> part of a larger project to develop a FRBR-based data store and discovery
>> interface for moving images. Our previous work includes a consideration of
>> the feasibility of the project from a cataloging perspective (
>> http://www.olacinc.org/drupal/?q=node/27), a prototype end-user interface
>> (https://blazing-sunset-24.heroku.com/,
>> https://blazing-sunset-24.heroku.com/page/about) and a web form to
>> crowdsource the parsing of movie credits (
>> http://olac-annotator.org/#/about).
>> Planned work period: six months beginning around the second week of
>> December (I can be somewhat flexible on the dates if you want to wait and
>> start after the New Year)
>> Payment: flat sum of $2500 upon completion of the work
>>
>> Required skills and knowledge:
>>
>> * Familiarity with the MARC 21 bibliographic format
>> * Familiarity with Natural Language Processing concepts (or
>> willingness to learn)
>> * Experience with Java, Python, and/or Ruby programming languages
>>
>> Description of work: Use language and text processing tools and provided
>> strategies to write code to extract and normalize data in existing MARC
>> bibliographic records for moving images. Refine code based on feedback from
>> analysis of results obtained with a sample dataset.
>>
>> Data to be extracted:
>>
>> Tasks for Position 1:
>>
>> * Titles (including the main title of the video, uniform titles, variant
>> titles, series titles, television program titles and titles of contents)
>> * Authors and titles of related works on which an adaptation is based
>> * Duration
>> * Color
>> * Sound vs. silent
>>
>> Tasks for Position 2:
>>
>> * Format (DVD, VHS, film, online, etc.)
>> * Original language
>> * Country of production
>> * Aspect ratio
>> * Flag for whether a record represents multiple works or not
>> We have already done some work with dates, names and roles and have a
>> framework to work in. I have the basic logic for the data extraction
>> processes, but expect to need some iteration to refine these strategies.
>>
>> To apply please send me an email at kelleym@uoregon explaining why you
>> are interested in this project, what relevant experience you would bring
>> and any other reasons why I should hire you. If you have a preference for
>> position 1 or 2, let me know (it's not necessary to have a preference). The
>> deadline for applications is Monday, December 2, 2013. Let me know if you
>> have any questions.
>>
>> Thank you for your consideration.
>>
>> Kelley
>>
>> PS In the near future, I will also be looking for someone to help with
>> work clustering based on title, name, date and identifier data from MARC
>> records. This will not involve any direct interaction with MARC.
>>
>>
>> Kelley McGrath
>> Metadata Management Librarian
>> University of Oregon Libraries
>> 541-346-8232
>> [log in to unmask]
>>