LISTSERV mailing list manager LISTSERV 16.5

CODE4LIB Archives

CODE4LIB@LISTS.CLIR.ORG

CODE4LIB December 2013

Subject:

Re: CODE4LIB Digest - 9 Dec 2013 to 10 Dec 2013 (#2013-320)

From:

"Williams, Cecilia - HPL" <[log in to unmask]>

Reply-To:

Code for Libraries <[log in to unmask]>

Date:

Wed, 11 Dec 2013 06:47:31 -0600

Content-Type:

text/plain

Parts/Attachments:

text/plain (2130 lines)

-----Original Message-----
From: Code for Libraries [mailto:[log in to unmask]] On Behalf Of CODE4LIB automatic digest system
Sent: Tuesday, December 10, 2013 10:00 PM
To: [log in to unmask]
Subject: CODE4LIB Digest - 9 Dec 2013 to 10 Dec 2013 (#2013-320)

There are 22 messages totaling 2360 lines in this issue.

Topics of the day:

  1. Lorem Ipsum metadata? Is there such a thing? (3)
  2. Mapping LCSH to DDC (9)
  3. problem in old etd xml files (3)
  4. Metadata generator (was: Lorem Ipsum metadata? Is there such a thing?)
  5. Job: DIGITAL INITIATIVES LIBRARIAN at Western Carolina University
  6. LYRASIS Open Source Case Study Call for Proposals
  7. Fwd: FW: Lorem Ipsum metadata? Is there such a thing?
  8. Announcing Code4Lib2014 expected registration date and estimated
     registration cost
  9. Developer House Nominations Close Friday
 10. Call for Proposals: Code4Lib Journal

----------------------------------------------------------------------

Date:    Mon, 9 Dec 2013 22:24:22 -0600
From:    Brian Zelip <[log in to unmask]>
Subject: Re: Lorem Ipsum metadata? Is there such a thing?

Not metadata, but still pretty fun - http://meettheipsums.com - some
curated ipsums.


Brian Zelip

---
Graduate Assistant
Scholarly Commons, University Library
University of Illinois at Urbana-Champaign


On Mon, Dec 9, 2013 at 9:14 PM, Pottinger, Hardy J. <[log in to unmask]> wrote:

> Well it's not a web service, but it does make lots of fake metadata for
> batch loading into DSpace. I will just leave this here:
>
> https://github.com/hardyoyo/random_dspace_batch_metadata
>
> Thanks for the lead on the Faker gem! This was a fun diversion. I
> especially like the titles this script mints. :-)
>
> A possible improvement would be to randomly reuse author names, so author
> facets have more than one item. I'll do that if I ever have to test author
> facets.
>
> --Hardy
>
> Sent from my iPad
>
> On Dec 9, 2013, at 7:36 PM, "Roy Tennant" <[log in to unmask]> wrote:
>
> I ask you, would you want to work all day sitting on top of a huge pile of
> radioactive MARC records? I sure wouldn't...
> Roy
>
>
> On Mon, Dec 9, 2013 at 5:08 PM, Bill Dueber <[log in to unmask]> wrote:
>
> The sad thing is that the Library of Congress spent billions of dollars of
> taxpayer money building a safe storage facility in the stable caves under
> Dublin, OH, but now no one will let them bury them there.
>
>
> On Mon, Dec 9, 2013 at 4:50 PM, Roy Tennant <[log in to unmask]> wrote:
>
> I can't help wondering what the half-life of a radioactive MARC record is.
> My guess is it is either really, really short or really, really long. ;-)
> Roy
>
>
> On Mon, Dec 9, 2013 at 1:39 PM, Peter Binkley <[log in to unmask]> wrote:
>
> Years ago Bill Moen had a set of "radioactive" MARC records with unique
> tokens in all fields, to test Z39.50 retrieval. I don't know whether
> they
> were ever released anywhere, but I see the specs are here:
>
> http://digital.library.unt.edu/ark:/67531/metadc111015/m1/1/
>
> Peter
>
>
> Peter Binkley
> Digital Initiatives Technology Librarian
> Information Technology Services
> [log in to unmask]
>
> 2-10K Cameron Library
> University of Alberta
> Edmonton, Alberta
> Canada T6G 2J8
>
> phone 780-492-3743
> fax 780-492-9243
>
>
> On Mon, Dec 9, 2013 at 12:50 PM, Joshua Welker <[log in to unmask]> wrote:
>
> I checked out the Eclipse option and was not able to get much use out
> of it. Maybe someone else will have better luck? It doesn't seem to
> align very well with a library use case.
>
> Josh Welker
>
>
> -----Original Message-----
> From: Code for Libraries [mailto:[log in to unmask]] On Behalf Of
> Ben Companjen
> Sent: Monday, December 09, 2013 11:14 AM
> To: [log in to unmask]
> Subject: Re: [CODE4LIB] Lorem Ipsum metadata? Is there such a thing?
>
> Hi Josh,
>
> Before you start coding:
>
>
>
> http://stackoverflow.com/questions/17106/how-to-generate-sample-xml-documents-from-their-dtd-or-xsd
> suggests that Eclipse can generate XML from a DTD or XSD file. First
> try with the EAC XSD shows I need to try other options, but it's
> promising.
>
> (It's still an interesting problem to try to tackle yourself, of
> course.)
>
> Ben
>
> On 09-12-13 17:59, "Joshua Welker" <[log in to unmask]> wrote:
>
> It's hard-coded to generate the specific elements. But your way sounds
> a lot cleaner, so I might try to do that instead :) It will be more
> difficult initially but much easier once I start implementing other
> metadata formats.
>
> Josh Welker
>
>
> -----Original Message-----
> From: Code for Libraries [mailto:[log in to unmask]] On Behalf Of
> Ben Companjen
> Sent: Monday, December 09, 2013 10:52 AM
> To: [log in to unmask]
> Subject: Re: [CODE4LIB] Lorem Ipsum metadata? Is there such a thing?
>
> Cool!
> My first thought on this topic was: give the program an XML schema, and
> generate possible documents with the correct datatypes etc. (Something
> like that must exist somewhere, right?) Does it happen to work anything
> like that, or is it hardcoded to generate these specific elements?
>
> Ben
>
> On 09-12-13 17:27, "Joshua Welker" <[log in to unmask]> wrote:
>
> Challenge accepted.
>
> http://library.ucmo.edu/dev/metadata-generator.php
>
> Obviously in the prototype phase, but it works. Only MODS is available
> for now, and you can only select top-level elements (all child
> elements of the top-level selections will be auto-generated). I will
> try to expand it to more than just MODS. Admittedly, I know very
> little about METS, so I will need some assistance if I am going to
> make one of those.
>
> I'll eventually host this somewhere else once it's done, so don't
> bookmark it.
>
> Josh Welker
> Information Technology Librarian
> James C. Kirkpatrick Library
> University of Central Missouri
> Warrensburg, MO 64093
> JCKL 2260
> 660.543.8022
>
> -----Original Message-----
> From: Code for Libraries [mailto:[log in to unmask]] On Behalf Of
> Kevin S. Clarke
> Sent: Sunday, December 08, 2013 12:26 PM
> To: [log in to unmask]
> Subject: Re: [CODE4LIB] Lorem Ipsum metadata? Is there such a
> thing?
>
> When I first read this, I was imagining not having to give it your
> metadata but native support for most of our commonly used metadata
> records... so the interface is: "Give me 100 MODS records" and it
> spits that out... You could get fancy and say, "Give me X number of
> METS records that wrap TIFFs and JPGs and that uses MODS, etc."
> That's not as trivial as hooking into a lorem ipsum machine, but it'd
> be pretty cool, imho.
>
> Kevin
>
>
> On Sat, Dec 7, 2013 at 11:51 PM, Pottinger, Hardy J. <[log in to unmask]> wrote:
>
> Hi, I asked this on Google Plus earlier today, but I figured I'd
> better take this question here: my brain is trying to tell me that
> there's a service or app that makes "fake" metadata, kind of like
> "Lorem Ipsum" but you feed it your fields and it gives you nonsense
> metadata back. But, it looks right enough for testing. Yesterday, I
> had to make up about 50 rows of fake metadata to test some code that
> handles paging in a UI, and I had to make it all up by hand. This
> hurts my soul. Someone please tell me such a service exists, and
> link me to it, so I never have to do this again. Or else, I may just
> make such a service, to save us all. But I don't want to go coding
> some new service if it already exists, because that sort of thing is
> for chumps.
>
>
> --
> HARDY POTTINGER <[log in to unmask]>
> University of Missouri
> Library Systems http://lso.umsystem.edu/~pottingerhj/
> https://MOspace.umsystem.edu/
> "Making things that are beautiful is real fun." --Lou Reed
>
>
>
>
>
>
>
>
> --
> Bill Dueber
> Library Systems Programmer
> University of Michigan Library
>
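For the record, the fifty-rows-by-hand problem Hardy describes takes only a few lines to automate. A stdlib-only Python sketch (the field names and word pools here are invented for illustration; Hardy's actual script uses the Ruby Faker gem):

```python
import random

# Hypothetical word pools; swap in whatever vocabulary fits your domain.
ADJECTIVES = ["Annotated", "Radioactive", "Collected", "Marginal", "Lost"]
NOUNS = ["Records", "Catalogs", "Manuscripts", "Indexes", "Maps"]
SURNAMES = ["Welker", "Dueber", "Binkley", "Tennant", "Clarke"]

def fake_records(n, seed=0):
    # Deterministic nonsense metadata, enough to test paging and facets.
    rng = random.Random(seed)
    return [
        {
            "title": f"{rng.choice(ADJECTIVES)} {rng.choice(NOUNS)} No. {i}",
            "creator": f"{rng.choice(SURNAMES)}, {rng.choice(ADJECTIVES)[0]}.",
            "date": str(rng.randint(1900, 2013)),
        }
        for i in range(1, n + 1)
    ]

for rec in fake_records(3):
    print(rec["title"], "/", rec["creator"], "/", rec["date"])
```

Seeding the generator keeps the fake data reproducible between test runs, which also makes Hardy's author-facet idea easy: a small surname pool guarantees repeated creators.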

------------------------------

Date:    Mon, 9 Dec 2013 23:28:04 -0500
From:    "Kevin S. Clarke" <[log in to unmask]>
Subject: Re: Lorem Ipsum metadata? Is there such a thing?

I was telling Hardy earlier today we needed a metadata ipsum... it would
pull random words from the full-text of the AACR2.

Kevin
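Kevin's metadata ipsum is sketchable in a couple of lines of Python (the AACR2 full text is not freely available, so the vocabulary here is whatever word list you supply):

```python
import random

def metadata_ipsum(vocabulary, n_words=12, seed=None):
    # Pull n_words random words from a source vocabulary -- e.g. a
    # tokenized cataloging manual -- and join them into nonsense text.
    rng = random.Random(seed)
    return " ".join(rng.choice(vocabulary) for _ in range(n_words))

vocab = "title statement of responsibility chief source of information".split()
print(metadata_ipsum(vocab, 8, seed=42))
```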



On Mon, Dec 9, 2013 at 11:24 PM, Brian Zelip <[log in to unmask]> wrote:

> Not metadata, but still pretty fun - http://meettheipsums.com - some
> curated ipsums.
>
>
> Brian Zelip
>
> ---
> Graduate Assistant
> Scholarly Commons, University Library
> University of Illinois at Urbana-Champaign
>

------------------------------

Date:    Tue, 10 Dec 2013 13:18:43 +0000
From:    Irina Arndt <[log in to unmask]>
Subject: Mapping LCSH to DDC

Hi CODE4LIB,

we would like to add DDC classes to a bunch of MARC records that contain only LoC Subject Headings.
Does anybody know whether a mapping between LCSH and DDC exists anywhere (and is available)?

I understand that WebDewey http://www.oclc.org/dewey/versions/webdewey.en.html might provide such a service, but

·         we are not OCLC customers or subscribers to WebDewey

·         even if we were, I'm not sure the service matches our needs

I'm thinking of a tool where I can upload my list of subject headings and get back a list with the matching Dewey classes added (though a 'simple' CSV file of LCSH terms and DDC classes would be helpful as well; I am fully aware that neither LCSH nor DDC is simple at all...). A naïve idea...?

Thanks for any clues,
Irina


-------

Irina Arndt
Max Planck Digital Library (MPDL)
Library System Coordinator
Amalienstr. 33
D-80799 Muenchen, Germany

Tel. +49 89 38602-254
Fax +49 89 38602-290

Email: [log in to unmask]
http://www.mpdl.mpg.de
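Assuming such a crosswalk file existed, applying it would be the easy part. A hypothetical Python sketch (the file name and the `lcsh`/`ddc` column names are invented for illustration; no published LCSH-to-DDC file is implied):

```python
import csv

def load_mapping(path):
    # Hypothetical two-column CSV: an LCSH heading and its DDC class.
    with open(path, newline="", encoding="utf-8") as f:
        return {row["lcsh"]: row["ddc"] for row in csv.DictReader(f)}

def annotate(headings, mapping):
    # Pair each heading with its DDC class, or None if unmapped --
    # unmapped headings stay visible rather than silently dropping out.
    return [(heading, mapping.get(heading)) for heading in headings]
```

The lookup itself is trivial; the labor is in producing the mapping file, which is exactly the part Irina is asking about.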

------------------------------

Date:    Tue, 10 Dec 2013 07:36:29 -0600
From:    Jason Bengtson <[log in to unmask]>
Subject: Re: problem in old etd xml files

Sounds like a good plan to me.

Best regards,

Jason Bengtson, MLIS, MA
Head of Library Computing and Information Systems
Assistant Professor, Graduate College
Department of Health Sciences Library and Information Management
University of Oklahoma Health Sciences Center
405-271-2285, opt. 5
405-271-3297 (fax)
[log in to unmask]
http://library.ouhsc.edu
www.jasonbengtson.com

NOTICE:
This e-mail is intended solely for the use of the individual to whom it is addressed and may contain information that is privileged, confidential or otherwise exempt from disclosure. If the reader of this e-mail is not the intended recipient or the employee or agent responsible for delivering the message to the intended recipient, you are hereby notified that any dissemination, distribution, or copying of this communication is strictly prohibited. If you have received this communication in error, please immediately notify us by replying to the original message at the listed email address. Thank You.

On Dec 9, 2013, at 7:48 PM, Roy Tennant <[log in to unmask]> wrote:

> For my money, the text transform should look only for exact matches (e.g.,
> "&aacute;", "&nbsp;", "&copy;") and replace them with their numeric
> counterparts.
> Roy
>
>
> On Mon, Dec 9, 2013 at 5:41 PM, jason bengtson <[log in to unmask]>wrote:
>
>> For testing purposes I just nixed them. As I noted, to rework the file a
>> person would probably want to use a more critical eye with find and
>> replace. Totally doable.
>>
>>
>> On Dec 9, 2013, at 7:37 PM, Jon Gorman <[log in to unmask]> wrote:
>>
>>> How did you fix the ampersands? I ask, because if you just did a simple
>>> text transform from & to &amp;, it would mask the problem of the entity
>>> escaping I think...
>>>
>>> Not at work, so I don't have a good example and the file is downloading
>>> very slowly here, so I'll try to do one from memory.
>>>
>>> There were several &aacute; in the XML which mapped to an accented
>>> character in the DTD via the entity.
>>>
>>> If you just substituted & with &amp;, you'd get &amp;aacute;, which would
>>> render inline as &aacute;. It would superficially solve the issue, since
>>> browsers would no longer give the errors about the DTD, as it wouldn't be
>>> trying to load entities from the DTDs. And depending on how you did it, you
>>> likely could also replace a correctly encoded one to make &amp;amp;,
>>> leading to some very odd stuff.
>>>
>>> I wouldn't be surprised to find some unescaped ampersands, but the
>>> solution I posted will essentially replace the entities with their text,
>>> hopefully causing most characters to appear correctly. You definitely
>>> still need to fix some of the other stuff. (I suspect it never worked
>>> for most browsers and XML systems, most likely only IE.)
>>>
>>> Jon Gorman
>>> University of Illinois
>>
>> Best regards,
>>
>> Jason Bengtson, MLIS, MA
>> Head of Library Computing and Information Systems
>> Assistant Professor, Graduate College
>> Department of Health Sciences Library and Information Management
>> University of Oklahoma Health Sciences Center
>> 405-271-2285, opt. 5
>> 405-271-3297 (fax)
>> [log in to unmask]
>> http://library.ouhsc.edu
>> www.jasonbengtson.com
>>
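Roy's exact-match replacement can be sketched as a single substitution pass. A Python illustration (language is an assumption; `name2codepoint` covers the standard HTML named entities such as `&aacute;` and `&copy;`, and the five XML-predefined names are left alone):

```python
import re
from html.entities import name2codepoint

# XML predefines these five; replacing them would break well-formed markup.
XML_PREDEFINED = {"amp", "lt", "gt", "quot", "apos"}

def to_numeric_refs(text):
    # Replace known named entities with numeric character references
    # (&aacute; -> &#225;) so the XML no longer depends on a DTD.
    def repl(match):
        name = match.group(1)
        if name in XML_PREDEFINED or name not in name2codepoint:
            return match.group(0)  # leave predefined/unknown names as-is
        return "&#%d;" % name2codepoint[name]
    return re.sub(r"&([A-Za-z][A-Za-z0-9]*);", repl, text)

print(to_numeric_refs("Caf&eacute; &amp; co, &aacute; la carte"))
# -> Caf&#233; &amp; co, &#225; la carte
```

Unknown entity names pass through untouched, so custom entities defined only in a local DTD would still need hand mapping, as Jon notes later in the thread.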

------------------------------

Date:    Tue, 10 Dec 2013 08:30:46 -0600
From:    Joshua Welker <[log in to unmask]>
Subject: Metadata generator (was: Lorem Ipsum metadata? Is there such a thing?)

I really like Ben's idea of programmatically reading the XML schema and
generating the XML structure based on that rather than hard-coding each
metadata schema. I've hit a snag. I'm using the MODS 3.5 schema as a
starting point.

http://www.loc.gov/standards/mods/v3/mods-3-5.xsd

By convention, it seems that a properly formed MODS file starts with a
<modsCollection> element that wraps the whole file and then an individual
<mods> element for each record. However, when you look at the schema file,
there doesn't seem to be anything that specifies that structure. Every
element, including the individual metadata fields and subfields, is a
globally defined top-level element. As a result, I have no idea how I
could tell my program which element to use as my document root without
hard-coding that information for each schema. I couldn't even do something
as simple as saying that the first defined element should be the document
root because, in the case of MODS, the <mods> tag is defined before
<modsCollection>, whereas <modsCollection> is actually the root element.

Any suggestions?

Josh Welker
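The ambiguity Josh describes is visible if you list the schema's global element declarations; a stdlib-only Python sketch (schema text passed in as a string):

```python
import xml.etree.ElementTree as ET

XS = "{http://www.w3.org/2001/XMLSchema}"

def global_elements(xsd_text):
    # Every direct <xs:element> child of <xs:schema> is a global
    # declaration, and any of them is a valid document root -- the
    # schema itself never singles one out.
    schema = ET.fromstring(xsd_text)
    return [el.get("name") for el in schema if el.tag == XS + "element"]
```

Per Josh's description, running this against mods-3-5.xsd would list <mods> and <modsCollection> side by side with no hint as to which is the intended root, which is why that choice ends up hard-coded by convention.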



------------------------------

Date:    Tue, 10 Dec 2013 14:49:57 -0000
From:    [log in to unmask]
Subject: Job: DIGITAL INITIATIVES LIBRARIAN at Western Carolina University

DIGITAL INITIATIVES LIBRARIAN
Western Carolina University
Cullowhee



The Digital Initiatives Librarian provides expertise in creating and managing
library digital collections, such as digital special collections, electronic
theses and dissertations, and other born-digital or retrospectively digitized
materials. The Digital Initiatives Librarian participates in the planning,
implementation, maintenance, expansion, communication and promotion of digital
library and institutional repository services, collections and content. The
Digital Initiatives Librarian will serve as a member of the Digital, Access,
and Technology Services Department's leadership team and, in coordination with
the department head, will be responsible for developing a strategy for the
current and future library's digital initiatives. He/she works closely with
the other units of the library to determine how digital assets should be
leveraged for both scholarly and popular audiences.

Reporting to the head of digital, access, and technology services, this is a
12-month, tenure-track position with an anticipated initial appointment at the
rank of assistant professor. Salary is commensurate with qualifications, with
a minimum salary of $50,000. Responsibilities include:


• Manages the daily operation of the Digital Initiatives unit, consisting of
1.5 FTE technical processing staff and student assistants.

• In collaboration with the DATS department head, the head of special
collections, and other library faculty and staff, identifies potential
collections for digitization and seeks out potential grant sources to
facilitate funding of projects.

• Collaborates with the library's systems unit in the management of CONTENTdm,
the library's hosted digital collections platform.

• Participates in the development of NC-DOCKS, the university's shared
institutional repository, including collection development strategy, policy
development, outreach, technical workflows, digital conversion, and management
and archiving of research datasets.

• Works closely with the metadata librarian and cataloging unit staff to
establish consistent descriptive metadata for digitized collections.

• Collaborates with the library's Web developer to develop and maintain
digital collections Websites. Serves as a member of the library's Web Steering
and Digital Collections Steering Committees.

• Keeps current with trends in digital library technologies, digital curation
practices, preservation formats and standards, and scholarly communication.

• Demonstrates and facilitates effective communication throughout the Library
and across the University with the various colleges, departments, and
faculty.


The University


Western Carolina University (WCU) is one of the 17 senior institutions of the
University of North Carolina (UNC) system. It is a dynamic,
regional comprehensive university with more than 10,000 students and is
dedicated to continuous enhancement of its academic programs and integrating
engaged learning with service to the region. WCU focuses on academic quality
through its Quality Enhancement Plan and through its commitment to integrated
learning and the scholarship of engagement. The university has implemented the
Ernest Boyer model of scholarship and is committed to its stewardship of place
role in serving western North Carolina. Located in
Cullowhee, NC, WCU is situated in a beautiful valley nestled between the Great
Smoky and Blue Ridge Mountains, 52 miles west of Asheville and near the Great
Smoky Mountains National Park, one of the nation's most spectacular and most
visited national parks. While in a small town setting, the university is only
three hours from the vibrant urban centers of Atlanta and Charlotte.




The Library


Hunter Library has a long, respected record of providing excellent service to
the university and region. The library is dedicated to the thoughtful
integration of print and digital resources and to supporting and developing
the general, information, digital, graphic, and visual literacy of students
and the teaching and research needs of faculty and staff. The library has 20
tenure- track faculty members, 29 staff members, and an annual budget of over
$4 million. The library offers approximately 2.6 million
items of intellectual content, including print and electronic titles and
volumes, and provides access to more than 45,000 journals through print
subscriptions and electronic databases. The library is an active partner in
the Western North Carolina Library Network, NC DOCKS, and the Carolina
Consortium.


Required Qualifications:


Master's degree in library and/or information science from an ALA-accredited
program. Must have a demonstrated understanding of the application of
technology as it relates to scholarship and teaching; two years of direct
experience with digital initiatives (digitization projects, digital content
management systems and/or Web-based delivery of digital objects); experience
with archives and archival practices; experience with digital image and text
creation (scanning); experience with digital asset management platforms or
extensible services such as DSpace, DigiTool, CONTENTdm, etc.; experience with
digital image file formats, conversion and software such as Adobe CS;
knowledge of audio and visual applications in the virtual environment.


Preferred Qualifications:


Supervisory experience; experience with grant writing and applications,
knowledge of copyright and licensing issues affecting digitization efforts;
experience with digital project management; experience with scholarly
communication initiatives in the digital humanities or related areas;
experience with Open Source journal publishing platforms such as OJS;
experience with Web programming languages such as Perl or Java; experience in
using metadata standards such as Dublin Core, EAD, or
METS.


To apply, access our online employment system at:
You will be required to attach a letter of application, resume and names and
telephone numbers of three references. For more information, contact Margaret
Watson at 828-227-2325 or [log in to unmask]


Review of applications will begin immediately and will continue until the
position is filled.


WCU is an Affirmative Action / Equal Opportunity Employer committed to
increasing the diversity of its faculty, staff, and students and to
strengthening sensitivity to diversity throughout the institution
(http://www.wcu.edu/28762.asp). Final candidates for employment will be
subject to criminal background checks. Proper documentation of identity and
employability are required at the time of employment. All new employees are
required to provide official transcripts within 30 days of employment. Degrees
must have been awarded by a regionally accredited U.S. college or university.



Brought to you by code4lib jobs: http://jobs.code4lib.org/job/11108/

------------------------------

Date:    Tue, 10 Dec 2013 08:50:08 -0600
From:    Jon Gorman <[log in to unmask]>
Subject: Re: problem in old etd xml files

Right, hence my earlier suggestion of just replacing the entities ;). It's
not exactly the approach you describe, as yours would deal with common
cases that didn't get properly set up in the dtd, but it also would be a
bit more difficult to map for weird custom entities.

My email was a bit rambling, but the magic sauce I recommended was
something like

xmllint --loaddtd --noent --dropdtd FRONT.XML > FRONT_nodtdent.xml

(In reality you'd want to automate that a little more, xmllint uses the
libxml libraries if I remember correctly, so there are likely bindings that
do the same thing.)
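For example, a rough batch wrapper around that one-liner might look like
this (a sketch only: the "etd" directory name is hypothetical, and the
xmllint binary from libxml2 must be installed):

```python
# Batch version of the xmllint one-liner above (a sketch: "etd" is a
# hypothetical directory holding the XML files, and the xmllint binary
# from libxml2 is assumed to be on PATH).
import glob
import os
import shutil
import subprocess

def expand_entities(src_dir="etd"):
    if shutil.which("xmllint") is None:
        raise RuntimeError("xmllint (from libxml2) not found on PATH")
    for path in glob.glob(os.path.join(src_dir, "*.xml")):
        out_path = os.path.splitext(path)[0] + "_nodtdent.xml"
        with open(out_path, "wb") as out:
            # load the DTD, expand the entities, then drop the DTD declaration
            subprocess.run(
                ["xmllint", "--loaddtd", "--noent", "--dropdtd", path],
                stdout=out, check=True)
```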

What that seems to do is load the dtd (which xmllint no longer does unless
it needs to), take any entity and replace it with what's in the dtd, and
then just drop the dtd. I didn't look closely, but it doesn't seem to just
substitute the numeric code (&#255;); it uses the actual unicode
character.

(You still need to fix the several mistakes that have already been observed
and pointed out by folks like Jason, the xml:stylesheet that needs to be
xml-stylesheet, making sure the filenames are actually correct for
case-sensitive OSes.)

Jon G.


On Mon, Dec 9, 2013 at 7:48 PM, Roy Tennant <[log in to unmask]> wrote:

> For my money, the text transform should look only for exact matches (e.g.,
> "&aacute;", "&nbsp;", "&copy;") and replace them with their numeric
> counterparts.
> Roy
>
>
> On Mon, Dec 9, 2013 at 5:41 PM, jason bengtson <[log in to unmask]
> >wrote:
>
> > For testing purposes I just nixed them. As I noted, to rework the file a
> > person would probably want to use a more critical eye with find and
> > replace. Totally doable.
> >
> >
> > On Dec 9, 2013, at 7:37 PM, Jon Gorman <[log in to unmask]>
> wrote:
> >
> > > How did you fix the ampersands? I ask, because if you just did a simple
> > > text transform from & to &amp;, it would mask the problem of the entity
> > > escaping I think...
> > >
> > > Not at work, so I don't have a good example and the file is downloading
> > > very slowly here, so I'll try to do one from memory.
> > >
> > > There were several &aacute; in the XML which mapped to an accent
> > character
> > > in the DTD via the Entity.
> > >
> > > If you just substituted & with &amp;, you'd get &amp;aacute;, which
> would
> > > render inline as &aacute;. It would superficially solve the issue since
> > > browsers would no longer give the errors about the dtd since it
> wouldn't
> > be
> > > trying to load entities from the DTDs. And depending how you did it,
> you
> > > likely could also replace a correctly encoded one to make &amp;amp;,
> > > leading to some very odd stuff.
> > >
> > > I wouldn't be surprised to find some unescaped ampersands, but the
> > solution
> > > I posted will essentially replace the entities with their text,
> hopefully
> > > causing most characters to appear correctly. You definitely still need
> to
> > > fix some of the other stuff. (I suspect it never worked for most
> browsers
> > > and XML systems, most likely only IE).
> > >
> > > Jon Gorman
> > > University of Illinois
> >
> > Best regards,
> >
> > Jason Bengtson, MLIS, MA
> > Head of Library Computing and Information Systems
> > Assistant Professor, Graduate College
> > Department of Health Sciences Library and Information Management
> > University of Oklahoma Health Sciences Center
> > 405-271-2285, opt. 5
> > 405-271-3297 (fax)
> > [log in to unmask]
> > http://library.ouhsc.edu
> > www.jasonbengtson.com
> >
> > NOTICE:
> > This e-mail is intended solely for the use of the individual to whom it
> is
> > addressed and may contain information that is privileged, confidential or
> > otherwise exempt from disclosure. If the reader of this e-mail is not the
> > intended recipient or the employee or agent responsible for delivering
> the
> > message to the intended recipient, you are hereby notified that any
> > dissemination, distribution, or copying of this communication is strictly
> > prohibited. If you have received this communication in error, please
> > immediately notify us by replying to the original message at the listed
> > email address. Thank You.
> >
>

------------------------------

Date:    Tue, 10 Dec 2013 15:10:24 +0000
From:    "Robertson, Wendy C" <[log in to unmask]>
Subject: Re: problem in old etd xml files

Thanks all!  Yes, I was expecting to need to replace those text strings with the numeric entities

Wendy
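That exact-match replacement (quoted below from Roy's message) might be
sketched in Python roughly like this, assuming the files use the standard
HTML named entities:

```python
# A minimal sketch of the exact-match approach: replace only recognized
# named entities with their numeric counterparts, leaving stray ampersands
# and unknown entity names untouched.
import re
from html.entities import name2codepoint

def named_to_numeric(text):
    def repl(match):
        name = match.group(1)
        if name in name2codepoint:
            return "&#%d;" % name2codepoint[name]
        return match.group(0)  # unknown entity: leave as-is
    return re.sub(r"&([A-Za-z][A-Za-z0-9]*);", repl, text)

# "&aacute;" becomes "&#225;"; a bare "&" is not touched at all.
print(named_to_numeric("Caf&eacute; &copy; 2013"))  # Caf&#233; &#169; 2013
```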

-----Original Message-----
From: Code for Libraries [mailto:[log in to unmask]] On Behalf Of Roy Tennant
Sent: Monday, December 09, 2013 7:48 PM
To: [log in to unmask]
Subject: Re: [CODE4LIB] problem in old etd xml files

For my money, the text transform should look only for exact matches (e.g., "&aacute;", "&nbsp;", "&copy;") and replace them with their numeric counterparts.
Roy


On Mon, Dec 9, 2013 at 5:41 PM, jason bengtson <[log in to unmask]>wrote:

> For testing purposes I just nixed them. As I noted, to rework the file
> a person would probably want to use a more critical eye with find and
> replace. Totally doable.
>
>
> On Dec 9, 2013, at 7:37 PM, Jon Gorman <[log in to unmask]> wrote:
>
> > How did you fix the ampersands? I ask, because if you just did a
> > simple text transform from & to &amp;, it would mask the problem of
> > the entity escaping I think...
> >
> > Not at work, so I don't have a good example and the file is
> > downloading very slowly here, so I'll try to do one from memory.
> >
> > There were several &aacute; in the XML which mapped to an accent
> character
> > in the DTD via the Entity.
> >
> > If you just substituted & with &amp;, you'd get &amp;aacute;, which
> > would render inline as &aacute;. It would superficially solve the
> > issue since browsers would no longer give the errors about the dtd
> > since it wouldn't
> be
> > trying to load entities from the DTDs. And depending how you did it,
> > you likely could also replace a correctly encoded one to make
> > &amp;amp;, leading to some very odd stuff.
> >
> > I wouldn't be surprised to find some unescaped ampersands, but the
> solution
> > I posted will essentially replace the entities with their text,
> > hopefully causing most characters to appear correctly. You
> > definitely still need to fix some of the other stuff. (I suspect it
> > never worked for most browsers and XML systems, most likely only IE).
> >
> > Jon Gorman
> > University of Illinois
>
> Best regards,
>
> Jason Bengtson, MLIS, MA
> Head of Library Computing and Information Systems
> Assistant Professor, Graduate College
> Department of Health Sciences Library and Information Management
> University of Oklahoma Health Sciences Center
> 405-271-2285, opt. 5
> 405-271-3297 (fax)
> [log in to unmask]
> http://library.ouhsc.edu
> www.jasonbengtson.com
>
> NOTICE:
> This e-mail is intended solely for the use of the individual to whom
> it is addressed and may contain information that is privileged,
> confidential or otherwise exempt from disclosure. If the reader of
> this e-mail is not the intended recipient or the employee or agent
> responsible for delivering the message to the intended recipient, you
> are hereby notified that any dissemination, distribution, or copying
> of this communication is strictly prohibited. If you have received
> this communication in error, please immediately notify us by replying
> to the original message at the listed email address. Thank You.
>

------------------------------

Date:    Tue, 10 Dec 2013 10:15:17 -0500
From:    Peter Murray <[log in to unmask]>
Subject: LYRASIS Open Source Case Study Call for Proposals

Share your story of implementing an open source system at your library. If selected, you will get paid to develop a case study of your open source system adoption experience and learning.

LYRASIS, in partnership with the Andrew W. Mellon Foundation<https://foss4lib.org/article/2012/nov/second-mellon-grant-for-foss4lib>,[0] is seeking academic and public libraries to share their experiences with open source systems, such as content repositories or institutional repositories, integrated library systems, or public-facing websites. The two selected case studies will be available on FOSS4Lib.org<http://FOSS4Lib.org>. This effort, part of the larger LYRASIS Digital<http://www.lyrasis.org/lyrasisdigital>[1] initiative, is a continuation of LYRASIS working with libraries and other cultural heritage organizations to learn about, evaluate, adopt, and use open source software systems.

More information is available in the formal call-for-proposals document<https://foss4lib.org/sites/default/files/inline/Open%20Source%20Case%20Study%20Call%20for%20Proposals.pdf>.[2] To apply, submit a brief description of the potential case study by email to Peter Murray<mailto:[log in to unmask]> [3] with the name of the proposed primary author as well as names of others at the library who may contribute to creation of the case study. The deadline for submission is Friday, January 11, 2014.

[0] https://foss4lib.org/article/2012/nov/second-mellon-grant-for-foss4lib
[1] http://www.lyrasis.org/lyrasisdigital
[2] https://foss4lib.org/sites/default/files/inline/Open%20Source%20Case%20Study%20Call%20for%20Proposals.pdf
[3] mailto:[log in to unmask]
--
Peter Murray
Assistant Director, Technology Services Development
LYRASIS
[log in to unmask]<mailto:[log in to unmask]>
+1 678-235-2955
800.999.8558 x2955

------------------------------

Date:    Tue, 10 Dec 2013 09:15:55 -0700
From:    Peter Binkley <[log in to unmask]>
Subject: Re: Lorem Ipsum metadata? Is there such a thing?

It all takes me back to the heady days of "MARC Must Die, and Be Disposed
of with Minimal Impact on the Environment". Now we're leaving the next
generation to deal with the North Pacific Subfield Whorl. Sad.

Peter

Peter Binkley
Digital Initiatives Technology Librarian
Information Technology Services
[log in to unmask]

2-10K Cameron Library
University of Alberta
Edmonton, Alberta
Canada T6G 2J8

phone 780-492-3743
fax 780-492-9243


On Mon, Dec 9, 2013 at 6:36 PM, Roy Tennant <[log in to unmask]> wrote:

> I ask you, would you want to work all day sitting on top of a huge pile of
> radioactive MARC records? I sure wouldn't...
> Roy
>
>
> On Mon, Dec 9, 2013 at 5:08 PM, Bill Dueber <[log in to unmask]> wrote:
>
> > The sad thing is that the Library of Congress spent billions of dollars
> of
> > taxpayer money building a safe storage facility in the stable caves under
> > Dublin, OH, but now no one will let them bury them there.
> >
> >
> > On Mon, Dec 9, 2013 at 4:50 PM, Roy Tennant <[log in to unmask]>
> wrote:
> >
> > > I can't help wondering what the half-life of a radioactive MARC record
> > is.
> > > My guess is it is either really, really short or really, really long.
> ;-)
> > > Roy
> > >
> > >
> > > On Mon, Dec 9, 2013 at 1:39 PM, Peter Binkley <
> [log in to unmask]
> > > >wrote:
> > >
> > > > Years ago Bill Moen had a set of "radioactive" MARC records with
> unique
> > > > tokens in all fields, to test Z39.50 retrieval. I don't know whether
> > they
> > > > were ever released anywhere, but I see the specs are here:
> > > >
> > > > http://digital.library.unt.edu/ark:/67531/metadc111015/m1/1/
> > > >
> > > > Peter
> > > >
> > > >
> > > > Peter Binkley
> > > > Digital Initiatives Technology Librarian
> > > > Information Technology Services
> > > > [log in to unmask]
> > > >
> > > > 2-10K Cameron Library
> > > > University of Alberta
> > > > Edmonton, Alberta
> > > > Canada T6G 2J8
> > > >
> > > > phone 780-492-3743
> > > > fax 780-492-9243
> > > >
> > > >
> > > > On Mon, Dec 9, 2013 at 12:50 PM, Joshua Welker <[log in to unmask]>
> > wrote:
> > > >
> > > > > I checked out the Eclipse option and was not able to get much use
> out
> > > of
> > > > > it.
> > > > > Maybe someone else will have better luck? It doesn't seem to align
> > very
> > > > > well
> > > > > with a library use case.
> > > > >
> > > > > Josh Welker
> > > > >
> > > > >
> > > > > -----Original Message-----
> > > > > From: Code for Libraries [mailto:[log in to unmask]] On
> Behalf
> > > Of
> > > > > Ben
> > > > > Companjen
> > > > > Sent: Monday, December 09, 2013 11:14 AM
> > > > > To: [log in to unmask]
> > > > > Subject: Re: [CODE4LIB] Lorem Ipsum metadata? Is there such a
> thing?
> > > > >
> > > > > Hi Josh,
> > > > >
> > > > > Before you start coding:
> > > > >
> > > >
> > >
> >
> http://stackoverflow.com/questions/17106/how-to-generate-sample-xml-documen
> > > > > ts-from-their-dtd-or-xsd suggests that Eclipse can generate XML
> from
> > an
> > > > > DTD
> > > > > or XSD file. First try with the EAC XSD shows I need to try other
> > > > options,
> > > > > but it's promising.
> > > > >
> > > > > (It's still an interesting problem to try to tackle yourself, of
> > > course.)
> > > > >
> > > > > Ben
> > > > >
> > > > > On 09-12-13 17:59, "Joshua Welker" <[log in to unmask]> wrote:
> > > > >
> > > > > >It's hard-coded to generate the specific elements. But your way
> > sounds
> > > > > >a lot cleaner, so I might try to do that instead :) It will be
> more
> > > > > >difficult initially but much easier once I start implementing
> other
> > > > > >metadata formats.
> > > > > >
> > > > > >Josh Welker
> > > > > >
> > > > > >
> > > > > >-----Original Message-----
> > > > > >From: Code for Libraries [mailto:[log in to unmask]] On
> > Behalf
> > > Of
> > > > > >Ben Companjen
> > > > > >Sent: Monday, December 09, 2013 10:52 AM
> > > > > >To: [log in to unmask]
> > > > > >Subject: Re: [CODE4LIB] Lorem Ipsum metadata? Is there such a
> thing?
> > > > > >
> > > > > >Cool!
> > > > > >My first thought on this topic was: give the program an XML
> schema,
> > > and
> > > > > >generate possible documents with the correct datatypes etc.
> > (Something
> > > > > >like that must exist somewhere, right?) Does it happen to work
> > > anything
> > > > > >like that, or is it hardcoded to generate these specific elements?
> > > > > >
> > > > > >Ben
> > > > > >
> > > > > >On 09-12-13 17:27, "Joshua Welker" <[log in to unmask]> wrote:
> > > > > >
> > > > > >>Challenge accepted.
> > > > > >>
> > > > > >>http://library.ucmo.edu/dev/metadata-generator.php
> > > > > >>
> > > > > >>Obviously in the prototype phase, but it works. Only MODS is
> > > available
> > > > > >>for now, and you can only select top-level elements (all child
> > > > > >>elements of the top-level selections will be auto-generated). I
> > will
> > > > > >>try to expand it to more than just MODS. Admittedly, I know very
> > > > > >>little about METS, so I will need some assistance if I am going
> to
> > > make
> > > > > >>one of those.
> > > > > >>
> > > > > >>I'll eventually host this somewhere else once it's done, so don't
> > > > > >>bookmark it.
> > > > > >>
> > > > > >>Josh Welker
> > > > > >>Information Technology Librarian
> > > > > >>James C. Kirkpatrick Library
> > > > > >>University of Central Missouri
> > > > > >>Warrensburg, MO 64093
> > > > > >>JCKL 2260
> > > > > >>660.543.8022
> > > > > >>
> > > > > >>-----Original Message-----
> > > > > >>From: Code for Libraries [mailto:[log in to unmask]] On
> > Behalf
> > > > > >>Of Kevin S. Clarke
> > > > > >>Sent: Sunday, December 08, 2013 12:26 PM
> > > > > >>To: [log in to unmask]
> > > > > >>Subject: Re: [CODE4LIB] Lorem Ipsum metadata? Is there such a
> > thing?
> > > > > >>
> > > > > >>When I first read this, I was imagining not having to give it
> your
> > > > > >>metadata but native support for most of our commonly used
> metadata
> > > > > >>records... so the interface is: "Give me 100 MODS records" and it
> > > > > >>spits that out... You could get fancy and say, "Give me X number
> of
> > > > > >>METS records that wrap TIFFs and JPGs and that uses MODS, etc."
> > > > > >>That's not as trivial as hooking into an lorem ipsum machine, but
> > > it'd
> > > > > >>be pretty cool, imho.
> > > > > >>
> > > > > >>Kevin
> > > > > >>
> > > > > >>
> > > > > >>On Sat, Dec 7, 2013 at 11:51 PM, Pottinger, Hardy J. <
> > > > > >>[log in to unmask]> wrote:
> > > > > >>
> > > > > >>> Hi, I asked this on Google Plus earlier today, but I figured
> I'd
> > > > > >>> better take this question here: my brain is trying to tell me
> > that
> > > > > >>> there's a service or app that makes "fake" metadata, kind of
> like
> > > > > >>> "Lorem Ipsum" but you feed it your fields and it gives you
> > nonsense
> > > > > >>> metadata back. But, it looks right enough for testing.
> > Yesterday, I
> > > > > >>> had to make up about 50 rows of fake metadata to test some code
> > > that
> > > > > >>> handles paging in a UI, and I had to make it all up by hand.
> This
> > > > > >>> hurts my soul. Someone please tell me such a service exists,
> and
> > > > > >>> link me to it, so I never have to do this again. Or else, I may
> > > just
> > > > > >>> make such a service, to save us all. But I don't want to go
> > coding
> > > > > >>> some new service if it already exists, because that sort of
> thing
> > > is
> > > > > >>> for chumps.
> > > > > >>>
> > > > > >>>
> > > > > >>> --
> > > > > >>> HARDY POTTINGER <[log in to unmask]> University of
> > Missouri
> > > > > >>> Library Systems http://lso.umsystem.edu/~pottingerhj/
> > > > > >>> https://MOspace.umsystem.edu/
> > > > > >>> "Making things that are beautiful is real fun." --Lou Reed
> > > > > >>>
> > > > >
> > > > >
> > > >
> > >
> >
> >
> >
> > --
> > Bill Dueber
> > Library Systems Programmer
> > University of Michigan Library
> >
>
>

------------------------------

Date:    Tue, 10 Dec 2013 11:50:57 -0500
From:    Robert Haschart <[log in to unmask]>
Subject: Fwd: FW: Lorem Ipsum metadata? Is there such a thing?

Forwarding a message for someone who's having trouble posting...

-------- Original Message --------

______
From: Roland, Perry (pdr4h)
Sent: Tuesday, December 10, 2013 11:41 AM
To: Code for Libraries
Subject: RE: Lorem Ipsum metadata? Is there such a thing?

oXygen can generate sample XML files from an XSD schema.  See http://www.oxygenxml.com/doc/ug-editorEclipse/topics/xml-schema-instance-generator.html.

I've attached an example of generated MODS.

--
p.

__________________________
Perry Roland
Music Library
University of Virginia
P. O. Box 400175
Charlottesville, VA 22904
434-982-2702 (w)
pdr4h (at) virginia (dot) edu

------------------------------

Date:    Tue, 10 Dec 2013 12:00:16 -0500
From:    Tim McGeary <[log in to unmask]>
Subject: Announcing Code4Lib2014 expected registration date and estimated registration cost

I am happy to announce that we have had a very successful sponsorship
campaign this year, which has had a tremendous impact on keeping the
registration cost low.  While the conference committee is still receiving
bids for the A/V contract, we can provide the estimated registration cost
for you to use to submit your travel requests at your institution.  If the
A/V contract amount is different than what we have budgeted, we will adjust
the registration cost appropriately before the start of registration.

    Expected Registration Date opens: January 13, 2014
    Estimated Registration Cost: $165
    Pre-conference Registration Cost, if attending Code4Lib: $10
    Pre-conference Registration Cost, if not attending Code4Lib: $25

The Pre-conference registration is to be used to offset the A/V costs for
that day.

We are very excited about the plans for the conference and hosting everyone
in North Carolina.
Cheers,
Tim McGeary
Code4Lib 2014 Conference co-Chair

Director of Library & Information Technology
University of North Carolina at Chapel Hill
[log in to unmask]

[log in to unmask]
GTalk/Yahoo/Skype/Twitter: timmcgeary

------------------------------

Date:    Tue, 10 Dec 2013 17:52:02 +0000
From:    "Hostetler,Shelley" <[log in to unmask]>
Subject: Developer House Nominations Close Friday

Hi Everyone,

We're pretty excited about the level of interest in our Developer House event - we've gotten lots of nominations for participants with a broad range of interests and experience, which should make for a terrific event.

It's not too late to nominate someone you know - including yourself! -- to join what will undoubtedly be a great group of library developers and a really fun week of learning, collaborating, and coding.  Nominations close this Friday - but why wait?   Submit a nomination today<http://registration.oclc.org/reg/?pc=Developerhousenominationform>.

Learn more about Developer House in our recent Little House on the Network<http://oc.lc/C87gzw> blog post, or feel free to contact me directly if you have any questions at all.

Shelley

Shelley Hostetler
Community Manager, WorldShare Platform
OCLC | 6565 Kilgour Place, Dublin, Ohio 43017
Phone: 847 701 8932
Email: [log in to unmask]
Skype: shelley_hostetler
http://www.oclc.org/developer

------------------------------

Date:    Tue, 10 Dec 2013 16:18:58 -0500
From:    Edward Summers <[log in to unmask]>
Subject: Re: Mapping LCSH to DDC

Not a naive idea at all. If you have the stomach for it, you could extract the Subject Heading / Dewey combinations out of say the LC Catalog MARC data [1] to use as training data for some kind of clustering [2] algorithm. You might even be able to do something simple like keep a count of the Dewey ranges associated with each subject heading.
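A minimal sketch of that counting idea (with hypothetical, already-extracted rows standing in for real MARC data):

```python
# Count the Dewey "ranges" seen with each subject heading. The rows below
# are hypothetical stand-ins for (082 Dewey, 650 headings) pairs that would
# be extracted from the LC Catalog MARC records.
from collections import Counter, defaultdict

rows = [
    ("305.42", ["Women's rights", "Women"]),
    ("305.42", ["Women's rights"]),
    ("323.34", ["Women's rights"]),
]

counts = defaultdict(Counter)
for dewey, headings in rows:
    dewey_range = dewey.split(".")[0]          # coarse three-digit "range"
    for heading in headings:
        counts[heading][dewey_range] += 1

# Most common Dewey range for each heading
best = {h: c.most_common(1)[0][0] for h, c in counts.items()}
```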

I’m kind of curious myself, so I could work on getting the subject heading / dewey combinations if you want?

//Ed

[1] https://archive.org/details/marc_records_scriblio_net
[2] https://en.wikipedia.org/wiki/Cluster_analysis

On Dec 10, 2013, at 8:18 AM, Irina Arndt <[log in to unmask]> wrote:

> Hi CODE4LIB,
>
> we would like to add DDC classes to a bunch of MARC records, which contain only LoC Subject Headings.
> Does anybody know whether a mapping between LCSH and DDC exists anywhere (and is available)?
>
> I understood, that WebDewey http://www.oclc.org/dewey/versions/webdewey.en.html  might provide such a service, but
>
> ·         we are no OCLC customers or subscribers to WebDewey
>
> ·         even if we were, I'm not sure, if the service matches our needs
>
> I'm thinking of a tool where I can upload my list of subject headings and get back a list where the matching Dewey classes have been added (but a 'simple' csv file with LCSH terms and DDC classes would be helpful as well; I am fully aware that neither LCSH nor DDC is simple at all...). Naïve idea...?
>
> Thanks for any clues,
> Irina
>
>
> -------
>
> Irina Arndt
> Max Planck Digital Library (MPDL)
> Library System Coordinator
> Amalienstr. 33
> D-80799 Muenchen, Germany
>
> Tel. +49 89 38602-254
> Fax +49 89 38602-290
>
> Email: [log in to unmask]<mailto:[log in to unmask]>
> http://www.mpdl.mpg.de

------------------------------

Date:    Tue, 10 Dec 2013 13:26:58 -0800
From:    Karen Coyle <[log in to unmask]>
Subject: Re: Mapping LCSH to DDC

I've often thought that this would be an interesting exercise if someone
would undertake it.

Just a reminder: in theory (IN THEORY) the first subject heading in an
LC record is the one most semantically close to the assigned subject
classification. So perhaps a first pass with the FIRST 6xx might give a
more refined matching. And then it would be interesting to compare that
with the results using all 600-651's.

kc

On 12/10/13, 1:18 PM, Edward Summers wrote:
> Not a naive idea at all. If you have the stomach for it, you could extract the Subject Heading / Dewey combinations out of say the LC Catalog MARC data [1] to use as training data for some kind of clustering [2] algorithm. You might even be able to do something simple like keep a count of the Dewey ranges associated with each subject heading.
>
> I’m kind of curious myself, so I could work on getting the subject heading / dewey combinations if you want?
>
> //Ed
>
> [1] https://archive.org/details/marc_records_scriblio_net
> [2] https://en.wikipedia.org/wiki/Cluster_analysis
>
> On Dec 10, 2013, at 8:18 AM, Irina Arndt <[log in to unmask]> wrote:
>
>> Hi CODE4LIB,
>>
>> we would like to add DDC classes to a bunch of MARC records, which contains only LoC Subject Headings.
>> Does anybody know, if a mapping between LCSH and DDC is anywhere existent (and available)?
>>
>> I understood, that WebDewey http://www.oclc.org/dewey/versions/webdewey.en.html  might provide such a service, but
>>
>> ·         we are no OCLC customers or subscribers to WebDewey
>>
>> ·         even if we were, I'm not sure, if the service matches our needs
>>
>> I'm thinking of a tool, where I can upload my list of subject headings and get back a list, where the matching Dewey classes have been added (but a 'simple' csv file with LCSH terms and DDC classes would be helpful as well- I am fully aware, that neither LCSH nor DDC are simple at all...) . Naïve idea...?
>>
>> Thanks for any clues,
>> Irina
>>
>>
>> -------
>>
>> Irina Arndt
>> Max Planck Digital Library (MPDL)
>> Library System Coordinator
>> Amalienstr. 33
>> D-80799 Muenchen, Germany
>>
>> Tel. +49 89 38602-254
>> Fax +49 89 38602-290
>>
>> Email: [log in to unmask]<mailto:[log in to unmask]>
>> http://www.mpdl.mpg.de

--
Karen Coyle
[log in to unmask] http://kcoyle.net
m: 1-510-435-8234
skype: kcoylenet

------------------------------

Date:    Tue, 10 Dec 2013 16:37:08 -0500
From:    Edward Summers <[log in to unmask]>
Subject: Re: Mapping LCSH to DDC

I was going to try to reduce the space a bit by focusing on 650 fields. Each record with a Dewey number will become a tab-separated line that includes each 650 field in order. So something like:

305.42/0973 <tab> Women's rights -- United States -- History -- Sources. <tab> Women -- United States -- History -- Sources <tab> Manuscripts, American -- Facsimiles.
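Formatting those lines could be sketched like this (assuming the Dewey number and the 650 headings, as lists of subfield strings, are already in hand):

```python
# Emit one tab-separated line per record: Dewey number first, then each
# 650 heading with its parts joined by " -- ".
def tsv_line(dewey, headings):
    cells = [dewey] + [" -- ".join(parts) for parts in headings]
    return "\t".join(cells)

line = tsv_line(
    "305.42/0973",
    [["Women's rights", "United States", "History", "Sources."],
     ["Women", "United States", "History", "Sources"]],
)
```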

I thought it might be a place to start at least … it’s running on an ec2 instance right now :-)

//Ed

On Dec 10, 2013, at 4:26 PM, Karen Coyle <[log in to unmask]> wrote:

> I've often thought that this would be an interesting exercise if someone would undertake it.
>
> Just a reminder: in theory (IN THEORY) the first subject heading in an LC record is the one most semantically close to the assigned subject classification. So perhaps a first pass with the FIRST 6xx might give a more refined matching. And then it would be interesting to compare that with the results using all 600-651's.
>
> kc
>
> On 12/10/13, 1:18 PM, Edward Summers wrote:
>> Not a naive idea at all. If you have the stomach for it, you could extract the Subject Heading / Dewey combinations out of say the LC Catalog MARC data [1] to use as training data for some kind of clustering [2] algorithm. You might even be able to do something simple like keep a count of the Dewey ranges associated with each subject heading.
>>
>> I’m kind of curious myself, so I could work on getting the subject heading / dewey combinations if you want?
>>
>> //Ed
>>
>> [1] https://archive.org/details/marc_records_scriblio_net
>> [2] https://en.wikipedia.org/wiki/Cluster_analysis
>>
>> On Dec 10, 2013, at 8:18 AM, Irina Arndt <[log in to unmask]> wrote:
>>
>>> Hi CODE4LIB,
>>>
>>> we would like to add DDC classes to a bunch of MARC records, which contains only LoC Subject Headings.
>>> Does anybody know, if a mapping between LCSH and DDC is anywhere existent (and available)?
>>>
>>> I understood, that WebDewey http://www.oclc.org/dewey/versions/webdewey.en.html  might provide such a service, but
>>>
>>> ·         we are no OCLC customers or subscribers to WebDewey
>>>
>>> ·         even if we were, I'm not sure, if the service matches our needs
>>>
>>> I'm thinking of a tool, where I can upload my list of subject headings and get back a list, where the matching Dewey classes have been added (but a 'simple' csv file with LCSH terms and DDC classes would be helpful as well- I am fully aware, that neither LCSH nor DDC are simple at all...) . Naïve idea...?
>>>
>>> Thanks for any clues,
>>> Irina
>>>
>>>
>>> -------
>>>
>>> Irina Arndt
>>> Max Planck Digital Library (MPDL)
>>> Library System Coordinator
>>> Amalienstr. 33
>>> D-80799 Muenchen, Germany
>>>
>>> Tel. +49 89 38602-254
>>> Fax +49 89 38602-290
>>>
>>> Email: [log in to unmask]<mailto:[log in to unmask]>
>>> http://www.mpdl.mpg.de
>
> --
> Karen Coyle
> [log in to unmask] http://kcoyle.net
> m: 1-510-435-8234
> skype: kcoylenet

------------------------------

Date:    Tue, 10 Dec 2013 16:02:19 -0600
From:    Bryan Baldus <[log in to unmask]>
Subject: Re: Mapping LCSH to DDC

On Tuesday, December 10, 2013 7:18 AM, Irina Arndt wrote:
>we would like to add DDC classes to a bunch of MARC records, which contains only LoC Subject Headings. Does anybody know, if a mapping between LCSH and DDC is anywhere existent (and available)?
...
>I'm thinking of a tool, where I can upload my list of subject headings and get back a list, where the matching Dewey classes have been added (but a 'simple' csv file with LCSH terms and DDC classes would be helpful as well- I am fully aware, that neither LCSH nor DDC are simple at all...) . Naïve idea...?

Classification Web offers a correlations feature between Dewey and the 1st LCSH, based on usage in LC's database (as well as correlations between LCC and LCSH, and between DDC and LCC). It is of some use in helping the cataloger determine possible classifications or subject headings to use. Unfortunately, I don't believe ClassWeb is easily accessible by automated processes (even for subscribers). Even if it were, I doubt it is possible to fully automate assigning Dewey based on the 1st LCSH. As mentioned, the 1st LCSH and the classification are generally supposed to be similar/linked, but that applies more to LCC/LCSH than to DDC/LCSH, due to the way Dewey works. For example, the ClassWeb correlation for the LCSH "Disease management" (chosen while browsing from Health to Disease, looking for an example with a better variety of Deweys than the first two) shows the DDCs used by LC, with record counts in parentheses:

Disease management [Topical]
         362.1 (4)
         610.285 (1)
         615.1 (1)
         615.5071 (1)
         616.89142 (1)

####

That said, as Ed mentioned, given a large set of records for training, you should be able to develop something to help local catalogers determine possible Deweys record-by-record.

I hope this helps,

Bryan Baldus
Senior Cataloger
Quality Books Inc.
The Best of America's Independent Presses
1-800-323-4241x402
[log in to unmask]

------------------------------

Date:    Tue, 10 Dec 2013 14:15:52 -0800
From:    Karen Coyle <[log in to unmask]>
Subject: Re: Mapping LCSH to DDC

Yeah, Ed! I'm totally looking forward to results. Unlikely as it is, if
there's anything I can do....

and I understand about limiting to 650, but ... well, let's see how it goes.

kc

On 12/10/13, 1:37 PM, Edward Summers wrote:
> I was going to try to reduce the space a bit by focusing on 650 fields. Each record with a Dewey number will be a tab separated line, that will include each 650 field in order. So something like:
>
> 305.42/0973 <tab> Women's rights -- United States -- History -- Sources. <tab> Women -- United States -- History -- Sources <tab> Manuscripts, American -- Facsimiles.
>
> I thought it might be a place to start at least … it’s running on an ec2 instance right now :-)
>
> //Ed
>
> On Dec 10, 2013, at 4:26 PM, Karen Coyle <[log in to unmask]> wrote:
>
>> I've often thought that this would be an interesting exercise if someone would undertake it.
>>
>> Just a reminder: in theory (IN THEORY) the first subject heading in an LC record is the one most semantically close to the assigned subject classification. So perhaps a first pass with the FIRST 6xx might give a more refined matching. And then it would be interesting to compare that with the results using all 600-651's.
>>
>> kc
>>
>> On 12/10/13, 1:18 PM, Edward Summers wrote:
>>> Not a naive idea at all. If you have the stomach for it, you could extract the Subject Heading / Dewey combinations out of say the LC Catalog MARC data [1] to use as training data for some kind of clustering [2] algorithm. You might even be able to do something simple like keep a count of the Dewey ranges associated with each subject heading.
>>>
>>> I’m kind of curious myself, so I could work on getting the subject heading / dewey combinations if you want?
>>>
>>> //Ed
>>>
>>> [1] https://archive.org/details/marc_records_scriblio_net
>>> [2] https://en.wikipedia.org/wiki/Cluster_analysis
>>>
>>> On Dec 10, 2013, at 8:18 AM, Irina Arndt <[log in to unmask]> wrote:
>>>
>>>> Hi CODE4LIB,
>>>>
>>>> we would like to add DDC classes to a bunch of MARC records, which contain only LoC Subject Headings.
>>>> Does anybody know whether a mapping between LCSH and DDC exists anywhere (and is available)?
>>>>
>>>> I understood, that WebDewey http://www.oclc.org/dewey/versions/webdewey.en.html  might provide such a service, but
>>>>
>>>> ·         we are no OCLC customers or subscribers to WebDewey
>>>>
>>>> ·         even if we were, I'm not sure, if the service matches our needs
>>>>
>>>> I'm thinking of a tool where I can upload my list of subject headings and get back a list with the matching Dewey classes added (though a 'simple' CSV file of LCSH terms and DDC classes would be helpful as well; I am fully aware that neither LCSH nor DDC is simple at all...). Naïve idea...?
>>>>
>>>> Thanks for any clues,
>>>> Irina
>>>>
>>>>
>>>> -------
>>>>
>>>> Irina Arndt
>>>> Max Planck Digital Library (MPDL)
>>>> Library System Coordinator
>>>> Amalienstr. 33
>>>> D-80799 Muenchen, Germany
>>>>
>>>> Tel. +49 89 38602-254
>>>> Fax +49 89 38602-290
>>>>
>>>> Email: [log in to unmask]<mailto:[log in to unmask]>
>>>> http://www.mpdl.mpg.de
>>
>> --
>> Karen Coyle
>> [log in to unmask] http://kcoyle.net
>> m: 1-510-435-8234
>> skype: kcoylenet
>
>

--
Karen Coyle
[log in to unmask] http://kcoyle.net
m: 1-510-435-8234
skype: kcoylenet

------------------------------

Date:    Tue, 10 Dec 2013 14:53:37 -0800
From:    Kyle Banerjee <[log in to unmask]>
Subject: Re: Mapping LCSH to DDC

This is my inclination. However, if the algorithm doesn't incorporate
values from the tables used to synthesize Dewey numbers, identifying the
stems of numbers may be tricky. It might be worth calling up someone at a
major Dewey library like UIUC or Northwestern to see if they might be
willing to provide data to add to what you get from LC.

kyle


On Tue, Dec 10, 2013 at 1:18 PM, Edward Summers <[log in to unmask]> wrote:

> Not a naive idea at all. If you have the stomach for it, you could extract
> the Subject Heading / Dewey combinations out of say the LC Catalog MARC
> data [1] to use as training data for some kind of clustering [2] algorithm.
> You might even be able to do something simple like keep a count of the
> Dewey ranges associated with each subject heading.
>
> I’m kind of curious myself, so I could work on getting the subject heading
> / dewey combinations if you want?
>
> //Ed
>
> [1] https://archive.org/details/marc_records_scriblio_net
> [2] https://en.wikipedia.org/wiki/Cluster_analysis

------------------------------

Date:    Tue, 10 Dec 2013 15:11:00 -0800
From:    Roy Tennant <[log in to unmask]>
Subject: Re: Mapping LCSH to DDC

Has anyone looked at using OCLC Classify for this? [1] It doesn't have a
batch mode, but it does offer a machine-readable API [2].
Roy

[1] http://oclc.org/research/activities/classify.html
[2] http://classify.oclc.org/classify2/api_docs/index.html
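For what it's worth, a single lookup against that API might look roughly like the sketch below. The ISBN, and the XML snippet standing in for a live response, are hypothetical; the real response shape should be checked against the API docs linked above:

```python
import urllib.parse
import xml.etree.ElementTree as ET

# Build a Classify2 request URL for one ISBN (summary=true asks for a
# summarized recommendation rather than full edition detail).
params = urllib.parse.urlencode({"isbn": "9780596007973", "summary": "true"})
url = "http://classify.oclc.org/classify2/Classify?" + params

# A simplified, hypothetical response in the general shape Classify
# returns; a real lookup would fetch `url` and parse the body this way.
sample = """<classify xmlns="http://classify.oclc.org">
  <recommendations>
    <ddc><mostPopular nsfa="005.133" sfa="005.133"/></ddc>
  </recommendations>
</classify>"""

ns = {"c": "http://classify.oclc.org"}
root = ET.fromstring(sample)
ddc = root.find(".//c:ddc/c:mostPopular", ns).get("sfa")
print(url)
print(ddc)
```

With no batch mode, a large heading list would mean one request per record, so throttling (and caching responses) would be worth building in from the start.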


On Tue, Dec 10, 2013 at 2:53 PM, Kyle Banerjee <[log in to unmask]>wrote:

> This is my inclination. However, if the algorithm doesn't incorporate
> values from the tables used to synthesize Dewey numbers, identifying the
> stems of numbers may be tricky. It might be worth calling up someone at a
> major Dewey library like UIUC or Northwestern to see if they might be
> willing to provide data to add to what you get from LC.
>
> kyle

------------------------------

Date:    Tue, 10 Dec 2013 18:30:41 -0500
From:    Edward Summers <[log in to unmask]>
Subject: Re: Mapping LCSH to DDC

In case anyone wants to have a go, here’s the ddc/lcsh data I extracted from the LC 2007 retrospective file [1]:

    http://inkdroid.org/data/dewey-lcsh.gz

The file contains ddc/lcsh combinations from 2,909,673 records.

//Ed

[1] https://archive.org/details/marc_records_scriblio_net
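Given that line format (a Dewey number, then each 650 heading, tab-separated), a first pass at tallying heading-to-class counts might look like this sketch. The sample lines are invented, and a real run would stream the gzipped file with `gzip.open` instead:

```python
from collections import Counter, defaultdict

# Invented lines in the described format: DDC <tab> 650 heading <tab> ...
# A real run would iterate gzip.open("dewey-lcsh.gz", "rt") instead.
sample_lines = [
    "305.42/0973\tWomen's rights -- United States -- History -- Sources.",
    "305.42/0973\tWomen's rights -- United States -- History -- Sources."
    "\tManuscripts, American -- Facsimiles.",
    "362.1\tDisease management",
]

# heading -> Counter of three-digit Dewey main classes seen with it
counts = defaultdict(Counter)
for line in sample_lines:
    ddc, *headings = line.rstrip("\n").split("\t")
    main_class = ddc[:3]  # collapse built numbers like 305.42/0973 to 305
    for heading in headings:
        counts[heading][main_class] += 1

print(dict(counts["Women's rights -- United States -- History -- Sources."]))
```

Truncating to the three-digit main class sidesteps the table-built-number problem Kyle raised, at the cost of much coarser suggestions.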

On Dec 10, 2013, at 6:11 PM, Roy Tennant <[log in to unmask]> wrote:

> Has anyone looked at using the Classify web service for this? [1] It
> doesn't have a batch mode, but it has a web service [2].
> Roy
>
> [1] http://oclc.org/research/activities/classify.html
> [2] http://classify.oclc.org/classify2/api_docs/index.html

------------------------------

Date:    Tue, 10 Dec 2013 17:30:58 -0800
From:    Ron Peterson <[log in to unmask]>
Subject: Call for Proposals: Code4Lib Journal

Call for Proposals (and apologies for cross-posting):

The Code4Lib Journal (C4LJ) exists to foster community and share information among those interested in the intersection of libraries, technology, and the future.

We are now accepting proposals for publication in our 24th issue. Don't miss out on this opportunity to share your ideas and experiences. To be included in the 24th issue, which is scheduled for publication in mid-April 2014, please submit articles, abstracts, or proposals at http://journal.code4lib.org/submit-proposal or to [log in to unmask] by Friday, January 10, 2014.  When submitting, please include the title or subject of the proposal in the subject line of the email message.

C4LJ encourages creativity and flexibility, and the editors welcome submissions across a broad variety of topics that support the mission of the journal.  Possible topics include, but are not limited to:

* Practical applications of library technology (both actual and hypothetical)
* Technology projects (failed, successful, or proposed), including how they were done and challenges faced
* Case studies
* Best practices
* Reviews
* Comparisons of third party software or libraries
* Analyses of library metadata for use with technology
* Project management and communication within the library environment
* Assessment and user studies

C4LJ strives to promote professional communication by minimizing the barriers to publication.  While articles should be of a high quality, they need not follow any formal structure.  Writers should aim for the middle ground between blog posts and articles in traditional refereed journals.  Where appropriate, we encourage authors to submit code samples, algorithms, and pseudo-code.  For more information, visit C4LJ's Article Guidelines or browse articles from the first 23 issues published on our website: http://journal.code4lib.org.

Remember, for consideration for the 24th issue, please send proposals, abstracts, or draft articles to [log in to unmask] no later than Friday, January 10, 2014.

Send in a submission.  Your peers would like to hear what you are doing.


------------------------------

End of CODE4LIB Digest - 9 Dec 2013 to 10 Dec 2013 (#2013-320)
**************************************************************
