If all people need is to look up MARC tags, there is also the Cataloging
Calculator: http://calculate.alptown.com/  Unless you want to feel
totally disgusted, avoid looking at the source code; it was my first
JavaScript program, cobbled together in a day (i.e., it is garbage), and
it hasn't gone through a substantial revision since 1997. The good news
is that if you're still on Netscape 4.0, it should work fine...

kyle


On Mon, Sep 30, 2013 at 1:56 PM, Roy Tennant <[log in to unmask]> wrote:

> As seen on Twitter, OCLC also has our version of MARC documentation here:
>
> <http://www.oclc.org/bibformats/en.html>
>
> It's mostly exactly the same, except for the places where we have
> inserted small but effective messages that say "RESISTANCE IS FUTILE,
> YOU WILL BE ASSIMILATED".
> Roy
>
>
> On Mon, Sep 30, 2013 at 1:31 PM, Becky Yoose <[log in to unmask]> wrote:
>
> > FYI - this also means that there's a very good chance that the MARC
> > standards site [1] and the Source Codes site [2] will be down as well. I
> > don't know if there are any mirror sites out there for these pages.
> >
> > [1] http://www.loc.gov/marc/
> > [2] http://www.loc.gov/standards/sourcelist/index.html
> >
> > Thanks,
> > Becky, about to be (forcefully) parted from her standards documentation
> >
> >
> > On Mon, Sep 30, 2013 at 11:39 AM, Jodi Schneider <[log in to unmask]>
> > wrote:
> >
> > > Interesting -- thanks, Birkin -- and tell us what you think when you
> > > get it implemented!
> > >
> > > :) -Jodi
> > >
> > >
> > > On Mon, Sep 30, 2013 at 5:19 PM, Birkin Diana <[log in to unmask]>
> > > wrote:
> > >
> > > > > ...you'd want to create a caching service...
> > > >
> > > >
> > > > One solution for a relevant particular problem (not full-blown
> > > > linked-data caching):
> > > >
> > > > http://en.wikipedia.org/wiki/XML_Catalog
> > > >
> > > > excerpt: "However, if they are absolute URLs, they only work when
> > > > your network can reach them. Relying on remote resources makes XML
> > > > processing susceptible to both planned and unplanned network
> > > > downtime."
> > > >
> > > > We'd heard about this a while ago, but, Jodi, you and David Riordan
> > > > and Congress have caused a temporary retreat from normal sprint-work
> > > > here at Brown today to investigate implementing this!  :/
> > > >
> > > > The particular problem that would affect us: if your processing tool
> > > > checks, say, a loc.gov MODS namespace URL, that processing will fail
> > > > if the loc.gov URL isn't available, unless you've implemented an XML
> > > > Catalog, which is a formal way to resolve such external references
> > > > locally.
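> > > >
> > > > As a rough illustration (not taken from the page above), a minimal
> > > > catalog file for this case might look like the following; the local
> > > > mirror path is a hypothetical placeholder:
> > > >
> > > >     <?xml version="1.0"?>
> > > >     <catalog xmlns="urn:oasis:names:tc:entity:xmlns:xml:catalog">
> > > >       <!-- Redirect anything under the loc.gov MODS prefix to a
> > > >            local mirror (hypothetical path). -->
> > > >       <rewriteURI uriStartString="http://www.loc.gov/standards/mods/"
> > > >                   rewritePrefix="file:///usr/local/share/xml/loc/mods/"/>
> > > >       <rewriteSystem systemIdStartString="http://www.loc.gov/standards/mods/"
> > > >                      rewritePrefix="file:///usr/local/share/xml/loc/mods/"/>
> > > >     </catalog>
> > > >
> > > > libxml2-based tools (xmllint, xsltproc) will pick this up if you
> > > > point the XML_CATALOG_FILES environment variable at the file.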
> > > >
> > > > -b
> > > > ---
> > > > Birkin James Diana
> > > > Programmer, Digital Technologies
> > > > Brown University Library
> > > > [log in to unmask]
> > > >
> > > >
> > > > On Sep 30, 2013, at 7:15 AM, Uldis Bojars <[log in to unmask]> wrote:
> > > >
> > > > > What are best practices for preventing problems in cases like this
> > > > > when an important Linked Data service may go offline?
> > > > >
> > > > > --- originally this was a reply to Jodi, which she suggested
> > > > > posting to the list too ---
> > > > >
> > > > > A safe [pessimistic?] approach would be to say "we don't trust
> > > > > [the reliability of] linked data on the Web, as services can and
> > > > > will go down" and to cache everything.
> > > > >
> > > > > In that case you'd want to create a caching service that would
> > > > > keep updated copies of all important Linked Data sources, and a
> > > > > fall-back strategy for switching to this caching service when
> > > > > needed. Like archive.org for Linked Data.
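> > > > >
> > > > > A rough sketch of that fall-back in Python (the requests library
> > > > > and the cache directory here are assumptions for illustration,
> > > > > not a real service):
> > > > >
> > > > >     import hashlib
> > > > >     import os
> > > > >
> > > > >     import requests
> > > > >
> > > > >     CACHE_DIR = "/var/cache/linked-data"  # hypothetical mirror dir
> > > > >
> > > > >     def cache_path(uri):
> > > > >         # One file per URI, named by hash so any URI maps to a
> > > > >         # safe filename.
> > > > >         name = hashlib.sha1(uri.encode("utf-8")).hexdigest()
> > > > >         return os.path.join(CACHE_DIR, name)
> > > > >
> > > > >     def fetch(uri, timeout=5):
> > > > >         """Prefer the live service; fall back to the local copy
> > > > >         when the service is down."""
> > > > >         path = cache_path(uri)
> > > > >         try:
> > > > >             resp = requests.get(uri, timeout=timeout)
> > > > >             resp.raise_for_status()
> > > > >             with open(path, "wb") as f:  # refresh cache on success
> > > > >                 f.write(resp.content)
> > > > >             return resp.content
> > > > >         except requests.RequestException:
> > > > >             if os.path.exists(path):
> > > > >                 with open(path, "rb") as f:
> > > > >                     return f.read()
> > > > >             raise  # down *and* never cached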
> > > > >
> > > > > Some semantic web search engines might already have subsets of the
> > > > > Linked Data web cached, but I'm not sure how much they cover (e.g.,
> > > > > whether they have all of the LoC data, up to date).
> > > > >
> > > > > If one were to create such a service, how best to update it,
> > > > > considering you'd be requesting *all* Linked Data URIs from each
> > > > > source? An efficient approach would be to regularly load RDF dumps
> > > > > for every major source, if available (e.g., LoC says: here's a full
> > > > > dump of all our RDF data ... and a .torrent too).
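> > > > >
> > > > > The dump-based refresh is simple enough to run from cron; a sketch
> > > > > (the dump URL and local path below are hypothetical placeholders):
> > > > >
> > > > >     import requests
> > > > >
> > > > >     DUMP_URL = "http://example.org/dumps/authorities.nt.gz"
> > > > >     LOCAL_DUMP = "/var/cache/linked-data/authorities.nt.gz"
> > > > >
> > > > >     def refresh_dump():
> > > > >         # Stream the published dump to disk instead of hitting
> > > > >         # the per-URI endpoints one by one.
> > > > >         resp = requests.get(DUMP_URL, stream=True, timeout=60)
> > > > >         resp.raise_for_status()
> > > > >         with open(LOCAL_DUMP, "wb") as out:
> > > > >             for chunk in resp.iter_content(chunk_size=1 << 20):
> > > > >                 out.write(chunk)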
> > > > >
> > > > > What do you think?
> > > > >
> > > > > Uldis
> > > > >
> > > > >
> > > > > On 29 September 2013 12:33, Jodi Schneider <[log in to unmask]>
> > > > > wrote:
> > > > >
> > > > >> Any best practices for caching authorities/vocabs to suggest for
> > > > >> this thread on the Code4Lib list?
> > > > >>
> > > > >> Linked Data authorities & vocabularies at Library of Congress
> > > > >> (id.loc.gov) are going to be affected by the website shutdown --
> > > > >> because of lack of government funds.
> > > > >>
> > > > >> -Jodi
> > > >
> > >
> >
>