CODE4LIB Archives
CODE4LIB@LISTS.CLIR.ORG

Subject: Code4Lib Journal issue 25 is now available!
From: Dan Scott <[log in to unmask]>
Reply-To: Code for Libraries <[log in to unmask]>
Date: Mon, 21 Jul 2014 14:24:39 -0400

The 25th (wow) issue of the Code4Lib Journal is now available at
http://journal.code4lib.org/issues/issue25

Here is what you will find inside:

Editorial introduction: On libraries, code, support, inspiration, and
collaboration
Dan Scott
Reflections on the occasion of the 25th issue of the Code4Lib Journal:
sustaining a community for support, inspiration, and collaboration at the
intersection of libraries and information technology.

Getting What We Paid for: a Script to Verify Full Access to E-Resources
Kristina M. Spurgin
Libraries regularly pay for packages of e-resources containing hundreds to
thousands of individual titles. Ideally, library patrons could access the
full content of all titles in such packages. In reality, library staff and
patrons inevitably stumble across inaccessible titles, but no library has
the resources to manually verify full access to all titles, and basic URL
checkers cannot check for access. This article describes the E-Resource
Access Checker—a script that automates the verification of full access.
With the Access Checker, library staff can identify all inaccessible titles
in a package and bring these problems to content providers’ attention to
ensure we get what we pay for.
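
As a rough illustration of the idea (not the Access Checker itself, which the article describes), the sketch below fetches each title URL and looks for a marker string that only appears when full text is actually available. The input file, column name, and marker text are assumptions for the example.

    # Minimal sketch of an access check, assuming a CSV with a 'url' column
    # and a marker string ("Download PDF") that only appears with full access.
    import csv
    import requests

    FULL_TEXT_MARKER = "Download PDF"  # assumption: present only when access works

    def check_access(url):
        try:
            response = requests.get(url, timeout=30)
        except requests.RequestException:
            return "error"
        if response.status_code != 200:
            return "bad status: %d" % response.status_code
        return "ok" if FULL_TEXT_MARKER in response.text else "no full text"

    with open("titles.csv", newline="") as infile, \
         open("report.csv", "w", newline="") as outfile:
        reader = csv.DictReader(infile)
        writer = csv.writer(outfile)
        writer.writerow(["url", "result"])
        for row in reader:
            writer.writerow([row["url"], check_access(row["url"])])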

Opening the Door: A First Look at the OCLC WorldCat Metadata API
Terry Reese
Libraries have long relied on OCLC’s WorldCat database as a way to
cooperatively share bibliographic data and declare library holdings to
support interlibrary loan services. As curator, OCLC has traditionally
mediated all interactions with the WorldCat database through their various
cataloging clients to control access to the information. As more and more
libraries look for new ways to interact with their data and streamline
metadata operations and workflows, these clients have become bottlenecks
and inhibitors of library innovation. To address some of these concerns,
in early 2013 OCLC announced the release of a set of application
programming interfaces (APIs) supporting read and write access to the
WorldCat database. These APIs offer libraries their first opportunity to
develop new services and workflows that directly interact with the WorldCat
database, and provide opportunities for catalogers to begin redefining how
they work with OCLC and their data.
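
The shape of a read request looks something like the sketch below. The endpoint URL, response format, and authorization header here are placeholders, not OCLC's documented values; the real Metadata API requires its own URLs and WSKey/OAuth signing.

    # Illustrative read request for a single bibliographic record.
    # ENDPOINT and the Authorization header are placeholders (assumptions).
    import requests

    OCLC_NUMBER = "12345"                               # hypothetical OCLC number
    ENDPOINT = "https://example.org/bib/" + OCLC_NUMBER  # placeholder URL
    headers = {
        "Accept": "application/atom+xml",    # assumed response format
        "Authorization": "Bearer <token>",   # placeholder credential
    }

    response = requests.get(ENDPOINT, headers=headers, timeout=30)
    response.raise_for_status()
    print(response.text)   # the MARCXML/Atom payload would be parsed here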

Docker: a Software as a Service, Operating System-Level Virtualization
Framework
John Fink
Docker is a relatively new method of virtualization available natively for
64-bit Linux. Compared to more traditional virtualization techniques,
Docker is lighter on system resources, offers a git-like system of commits
and tags, and can be scaled from your laptop to the cloud.
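
As one way to picture the run/commit/tag workflow the abstract alludes to (not an example from the article), here is a small sketch using the Docker SDK for Python; the image and tag names are arbitrary.

    # Run a container, then commit its filesystem as a new, tagged image
    # (the git-like snapshot behavior mentioned above). Names are illustrative.
    import docker

    client = docker.from_env()                        # talk to the local Docker daemon
    container = client.containers.run(
        "ubuntu", "touch /provisioned", detach=True   # do some work in a container
    )
    container.wait()
    image = container.commit(repository="example/provisioned", tag="v1")
    print(image.id)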

A Metadata Schema for Geospatial Resource Discovery Use Cases
Darren Hardy and Kim Durante
We introduce a metadata schema that focuses on GIS discovery use cases for
patrons in a research library setting. Text search, faceted refinement, and
spatial search and relevancy are among GeoBlacklight’s primary use cases
for federated geospatial holdings. The schema supports a variety of GIS
data types and enables contextual, collection-oriented discovery
applications as well as traditional portal applications. One key limitation
of GIS resource discovery is the general lack of normative metadata
practices, which has led to a proliferation of metadata schemas and
duplicate records. The ISO 19115/19139 and FGDC standards specify metadata
formats, but are intricate, lengthy, and not focused on discovery.
Moreover, they require sophisticated authoring environments and cataloging
expertise. Geographic metadata standards target preservation and quality
measure use cases, but they do not provide for simple inter-institutional
sharing of metadata for discovery use cases. To this end, our schema reuses
elements from Dublin Core and GeoRSS to leverage their normative semantics,
community best practices, open-source software implementations, and
extensive examples already deployed in discovery contexts such as web
search and mapping. Finally, we discuss a Solr implementation of the schema
using a “geo” extension to MODS.
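
To make the indexing step concrete, here is a hedged sketch that pushes a small discovery record into Solr with pysolr. The field names are illustrative stand-ins, not the actual GeoBlacklight schema, and the Solr URL is a local placeholder.

    # Index one discovery record into a local Solr core (names are assumptions).
    import pysolr

    solr = pysolr.Solr("http://localhost:8983/solr/geoblacklight", timeout=10)
    record = {
        "id": "example-layer-001",                  # hypothetical identifier
        "dc_title_s": "Hydrology, Example County",  # Dublin Core-style title field
        "dc_format_s": "Shapefile",
        "georss_box_s": "41.6 -87.9 42.0 -87.5",    # GeoRSS-style bounding box
    }
    solr.add([record])
    solr.commit()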

Ebooks without Vendors: Using Open Source Software to Create and Share
Meaningful Ebook Collections
Matt Weaver
The Community Cookbook project began by asking how to turn the local
cookbooks in the library’s collection into a recipe database. The final
website is both a recipe site and a collection of ebook versions of local
cookbooks. This article discusses the use of open source software at every
stage of the project, demonstrating that an open source publishing model is
possible for any library.

Within Limits: mass-digitization from scratch
Pieter De Praetere
The provincial library of West-Vlaanderen (Belgium) is digitizing a large
part of its iconographic collection. For various technical and financial
reasons, no specialist software was used. FastScan is a set of VBScripts
developed by the author using off-the-shelf software that was either
included in MS Windows (XP & 7) or already installed (ImageMagick,
IrfanView, littlecms, exiv2). This scripting package has sped up the
digitization effort immensely. The article shows what software was used,
the problems that occurred, and how the pieces were scripted together.
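
FastScan itself is written in VBScript; as a rough Python analogue of the same pattern (a thin script gluing off-the-shelf tools together), the sketch below batch-converts TIFFs with ImageMagick's convert. Paths and options are illustrative.

    # Create access derivatives for every scan in a folder (paths are assumptions).
    import glob
    import subprocess

    for tiff_path in glob.glob("scans/*.tif"):
        jpeg_path = tiff_path.rsplit(".", 1)[0] + ".jpg"
        # ImageMagick: downsized, moderately compressed JPEG derivative
        subprocess.run(
            ["convert", tiff_path, "-resize", "50%", "-quality", "85", jpeg_path],
            check=True,
        )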

A Web Service for File-Level Access to Disk Images
Sunitha Misra, Christopher A. Lee and Kam Woods
Digital forensics tools have many potential applications in the curation of
digital materials in libraries, archives and museums (LAMs). Open source
digital forensics tools can help LAM professionals to extract digital
contents from born-digital media and make more informed preservation
decisions. Many of these tools have ways to display the metadata of the
digital media, but few provide file-level access without having to mount
the device or use complex command-line utilities. This paper describes a
project to develop software that supports access to the contents of digital
media without having to mount or download the entire image. The work
examines two approaches to creating this tool: first, a graphical user
interface running on a local machine; second, a web-based application
running in a web browser. The project incorporates existing open source
forensics tools and libraries including The Sleuth Kit and libewf along
with the Flask web application framework and custom Python scripts to
generate web pages supporting disk image browsing.
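
A minimal sketch of the browsing idea (not the project's code): a Flask route that lists the files in the root directory of a raw disk image using pytsk3, The Sleuth Kit's Python bindings. The image path is a placeholder, and EWF (E01) images would additionally need libewf.

    # Serve the file names found in the root of a raw disk image.
    import pytsk3
    from flask import Flask, jsonify

    app = Flask(__name__)
    IMAGE_PATH = "/data/example.img"   # hypothetical raw disk image

    @app.route("/files")
    def list_root_files():
        img = pytsk3.Img_Info(IMAGE_PATH)
        fs = pytsk3.FS_Info(img)
        names = []
        for entry in fs.open_dir(path="/"):
            names.append(entry.info.name.name.decode("utf-8", "replace"))
        return jsonify(files=names)

    if __name__ == "__main__":
        app.run()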

Processing Government Data: ZIP Codes, Python, and OpenRefine
Frank Donnelly
While there is a vast amount of useful US government data on the web, some
of it is in a raw state that is not readily accessible to the average user.
Data librarians can improve accessibility and usability for their patrons
by processing data to create subsets of local interest and by appending
geographic identifiers to help users select and aggregate data. This case
study illustrates how census geography crosswalks, Python, and OpenRefine
were used to create spreadsheets of non-profit organizations in New York
City from the IRS Tax-Exempt Organization Masterfile. This paper
illustrates the utility of Python for data librarians and should be
particularly useful for those who work with address-based data.
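
As a hedged sketch of the crosswalk step, the example below appends a geographic identifier to each record by looking up its ZIP code. The file names, column names, and crosswalk are hypothetical, not the IRS or Census files the article works with.

    # Append a county FIPS code to each organization record via a ZIP crosswalk.
    import csv

    # Load a ZIP-to-county crosswalk into a dictionary
    with open("zip_to_county.csv", newline="") as f:
        crosswalk = {row["zip"]: row["county_fips"] for row in csv.DictReader(f)}

    with open("orgs.csv", newline="") as infile, \
         open("orgs_geocoded.csv", "w", newline="") as outfile:
        reader = csv.DictReader(infile)
        fieldnames = reader.fieldnames + ["county_fips"]
        writer = csv.DictWriter(outfile, fieldnames=fieldnames)
        writer.writeheader()
        for row in reader:
            # Match on the 5-digit ZIP; leave blank when no crosswalk entry exists
            row["county_fips"] = crosswalk.get(row["zip"][:5], "")
            writer.writerow(row)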

Indexing Bibliographic Database Content Using MariaDB and Sphinx Search
Server
Arie Nugraha
Fast retrieval of digital content has become mandatory for library and
archive information systems. Many software applications have emerged to
handle the indexing of digital content, from low-level ones such as Apache
Lucene to more RESTful and web-services-ready ones such as Apache Solr and
ElasticSearch. Solr’s popularity among library software developers makes it
the “de-facto” standard software for indexing digital content. For content
(full-text content or bibliographic description) already stored inside a
relational DBMS such as MariaDB (a fork of MySQL) or PostgreSQL, Sphinx
Search Server (Sphinx) is a suitable alternative. This article will cover
an introduction on how to use Sphinx with MariaDB databases to index
database content as well as some examples of Sphinx API usage.
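
One way to query Sphinx from Python is over its SphinxQL interface, which speaks the MySQL wire protocol (searchd's default SphinxQL port is 9306). The sketch below uses pymysql; the index name and search term are hypothetical, and the index itself would be defined in sphinx.conf against the MariaDB source tables.

    # Full-text query against a Sphinx index via SphinxQL (names are assumptions).
    import pymysql

    conn = pymysql.connect(host="127.0.0.1", port=9306, user="", password="")
    try:
        with conn.cursor() as cur:
            cur.execute(
                "SELECT id FROM biblio_index WHERE MATCH(%s) LIMIT 10",
                ("open access",),
            )
            for (doc_id,) in cur.fetchall():
                print(doc_id)   # look up the full records in MariaDB by these ids
    finally:
        conn.close()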

Solving Advanced Encoding Problems with FFMPEG
Josh Romphf
Previous articles in the Code4Lib Journal touch on the capabilities of
FFMPEG in great detail, and given these excellent introductions, the
purpose of this article is to tackle some of the common problems users
might face, dissecting more complicated commands and suggesting their
possible uses.
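
Not one of the article's commands, but as a baseline for comparison: a common FFmpeg transcode (H.264 video, AAC audio) wrapped in Python via subprocess, with illustrative file names.

    # Transcode a source file to an H.264/AAC MP4 with a constant-quality setting.
    import subprocess

    subprocess.run(
        [
            "ffmpeg",
            "-i", "input.mov",    # source file
            "-c:v", "libx264",    # H.264 video
            "-crf", "23",         # constant-quality rate factor
            "-c:a", "aac",        # AAC audio
            "-b:a", "128k",
            "output.mp4",
        ],
        check=True,
    )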

HathiTrust Ingest of Locally Managed Content: A Case Study from the
University of Illinois at Urbana-Champaign
Kyle R. Rimkus & Kirk M. Hess
In March 2013, the University of Illinois at Urbana-Champaign Library
adopted a policy to more closely integrate the HathiTrust Digital Library
into its own infrastructure for digital collections. Specifically, the
Library decided that the HathiTrust Digital Library would serve as a
trusted repository for many of the library’s digitized book collections, a
strategy that favors relying on HathiTrust over locally managed access
solutions whenever this is feasible. This article details the thinking
behind this policy, as well as the challenges of its implementation,
focusing primarily on technical solutions for “remediating” hundreds of
thousands of image files to bring them in line with HathiTrust’s strict
specifications for deposit. This involved implementing HTFeed, a Perl 5
application developed at the University of Michigan for packaging content
for ingest into HathiTrust, and its many helper applications (JHOVE to
detect metadata problems, ExifTool to detect and repair missing image
metadata, and Kakadu to create JPEG 2000 files), as well as a
file format conversion process using ImageMagick. Today, Illinois has over
1600 locally managed volumes queued for ingest, and has submitted over 2300
publicly available titles to the HathiTrust Digital Library.
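
A sketch of one remediation step only, not the HTFeed pipeline: batch conversion of page TIFFs to JPEG 2000 using ImageMagick's convert (which must be built with JPEG 2000 support). The directory layout is hypothetical, and HathiTrust's actual specifications impose far more requirements than a bare format conversion.

    # Convert every page TIFF in a volume directory to JPEG 2000.
    import glob
    import os
    import subprocess

    for tiff_path in sorted(glob.glob("volume_0001/*.tif")):
        jp2_path = os.path.splitext(tiff_path)[0] + ".jp2"
        subprocess.run(["convert", tiff_path, jp2_path], check=True)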
