The Code4Lib Journal editors are pleased to bring you this latest issue.
You can find it at http://journal.code4lib.org/issues/issue23;
titles and abstracts below.
=======================
Editorial Introduction: Conscious Resolutions
by Shawn Averkamp
URL: http://journal.code4lib.org/articles/9389
Hack your life with 10 New Year’s resolutions from Code4Lib Journal.
=======================
The Road to Responsive: University of Toronto Libraries’ Journey to a New
Library Catalogue Interface
By Lisa Gayhart, Bilal Khalid, Gordon Belray
URL: http://journal.code4lib.org/articles/9195
With the recent surge in the mobile device market and an ever-expanding
patron base with increasingly divergent levels of technical ability, the
University of Toronto Libraries embarked on the development of a new
catalogue discovery layer to fit the needs of its diverse users. The
result: a mobile-friendly, flexible, and intuitive web application, built on
Responsive Web Design principles, that brings the full power of a faceted
library catalogue to users without compromising quality or performance.
=======================
Recipes for Enhancing Digital Collections with Linked Data
by Thomas Johnson and Karen Estlund
URL: http://journal.code4lib.org/articles/9214
Standards-based metadata in digital library collections are commonly less
than standard. Limitations brought on by routine cataloging errors,
sporadic use of authority and controlled vocabularies, and systems that
cannot effectively handle text encoding lead to pervasive quality issues.
This paper describes the use of Linked Data for enhancement and quality
control of existing digital collections metadata. We provide practical
recipes for transforming uncontrolled text values into semantically rich
data, performing automated cleanup on hand-entered fields, and discovering
new information from links between legacy metadata and external datasets.
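
As a rough, hypothetical sketch of the kind of recipe the authors describe
(not taken from the article; the lookup table, URIs, and property choice are
invented here), a few lines of Python with rdflib can reconcile an
uncontrolled subject string against a controlled vocabulary and emit the
result as RDF:

    # Illustrative sketch only; not code from the article.
    # Reconcile a free-text subject string against a controlled vocabulary
    # and emit the result as RDF triples with rdflib.
    from rdflib import Graph, Literal, URIRef
    from rdflib.namespace import DCTERMS

    # Hypothetical reconciliation table; in practice this lookup might be
    # backed by an authority service rather than a hard-coded dict.
    SUBJECT_URIS = {
        "oil wells": URIRef("http://example.org/vocab/oil-wells"),
    }

    def enhance_subject(record_uri, raw_subject, graph):
        """Add a controlled URI when one is known, else keep the raw literal."""
        uri = SUBJECT_URIS.get(raw_subject.strip().lower())
        obj = uri if uri is not None else Literal(raw_subject)
        graph.add((URIRef(record_uri), DCTERMS.subject, obj))

    g = Graph()
    enhance_subject("http://example.org/item/42", "Oil Wells ", g)
    print(g.serialize(format="turtle"))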
=======================
Implementing a Collaborative Workflow for Metadata Analysis, Quality
Improvement, and Mapping
by Mark Phillips, Hannah Tarver, and Stacy Frakes
URL: http://journal.code4lib.org/articles/9199
The University of North Texas (UNT) and the Oklahoma Historical Society
(OHS) are collaborating to digitize, process, and make publicly available
more than one million photographs from the Oklahoma Publishing Company’s
historic photo archive. The project, started in 2013, is expected to span a
year and a half and will result in digitized photographs and metadata
available through The Gateway to Oklahoma History. The project team
developed the workflow described in this article to meet the specific
requirement that all of the metadata work occur in two locations
simultaneously.
=======================
How the WSLS-TV News Digitization Project Helped to Launch a Project
Management Office
by Ivey Glendon and Melinda Baumann
URL: http://journal.code4lib.org/articles/8652
This article discusses how the WSLS-TV News Digitization Project at the
University of Virginia Libraries was the catalyst for creating a more
formalized project workflow and the eventual creation of a Project
Management Office. The project revealed the need for better coordination
between various groups in the library and more transparent processes. By
creating well-documented policies and processes, the new project workflow
clarified roles, improved communication, and created greater transparency.
The new processes enabled staff to understand how decisions are made and
resources are allocated, which allowed them to work more efficiently.
=======================
Use of Cue Sheets in Audio Digitization
by Austin Dixon
URL: http://journal.code4lib.org/articles/9314
Audio digitization is becoming essential to many libraries. As more and
more audio files are being digitally preserved, the workflows for handling
those digital objects need to be examined to ensure efficiency. In some
instances, files are being manually manipulated when it would be more
efficient to manipulate them programmatically. This article describes a
time-saving solution to the problem of how to split master audio files into
sub-item tracks.
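
As a hedged sketch of the general approach (not taken from the article; the
track listing, filenames, and times are invented here), a short Python
script can drive ffmpeg to cut sub-item tracks out of a master file:

    # Illustrative sketch only; not code from the article.
    # Split a master audio file into per-track files with ffmpeg,
    # driven by a simple (start, end, label) listing.
    import subprocess

    MASTER = "master_tape_01.wav"           # hypothetical master file
    TRACKS = [                              # hypothetical track boundaries
        ("00:00:00", "00:04:12", "track01"),
        ("00:04:12", "00:09:30", "track02"),
    ]

    for start, end, label in TRACKS:
        subprocess.run(
            ["ffmpeg", "-i", MASTER, "-ss", start, "-to", end,
             f"{label}.wav"],
            check=True,
        )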
=======================
A Video Digital Library to Support Physicians’ Decision-making About Autism
by Matthew A. Griffin, MLIS, Dan Albertson, Ph.D., and Angela B. Barber,
Ph.D.
URL: http://journal.code4lib.org/articles/9281
A prototype Digital Video Library was developed as part of a project to
assist rural primary care clinics with diagnosis of autism, funded by the
National Network of Libraries of Medicine. The Digital Video Library takes
play sample videos generated by a rural clinic and makes them available to
experts at the Autism Spectrum Disorders (ASD) Clinic at The University of
Alabama. The experts are able to annotate segments of the video using an
integrated version of the Childhood Autism Ratings Scale-Second Edition
Standard Version (CARS2). The Digital Video Library then extracts the
annotated segments, and provides a robust search and browse feature. The
videos can then be accessed by the subject’s primary care physician. This
article summarizes the development and features of the Digital Video
Library.
=======================
Unix Commands and Batch Processing for the Reluctant Librarian or Archivist
by Anthony Cocciolo
URL: http://journal.code4lib.org/articles/9158
The Unix environment offers librarians and archivists high-quality tools
for quickly transforming born-digital and digitized assets, such as
resizing videos, creating access copies of digitized photos, and making
fair-use reproductions of audio recordings. These tools, such as ffmpeg,
lame, sox, and ImageMagick, can apply one or more manipulations to digital
assets without the need to manually process individual items, which can be
error prone, time consuming, and tedious. This article will provide
information on getting started in using the Unix environment to take
advantage of these tools for batch processing.
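
The article itself works directly at the shell, but as a rough illustration
of the same batch idea (the directory names and settings here are invented),
a small Python loop can call ImageMagick's convert to produce access copies
of a whole folder of masters:

    # Illustrative sketch only; not code from the article.
    # Make JPEG access copies of TIFF masters by calling ImageMagick's
    # "convert" for each file, instead of processing items by hand.
    import pathlib
    import subprocess

    MASTERS = pathlib.Path("masters")       # hypothetical input directory
    ACCESS = pathlib.Path("access")         # hypothetical output directory
    ACCESS.mkdir(exist_ok=True)

    for tiff in sorted(MASTERS.glob("*.tif")):
        jpeg = ACCESS / (tiff.stem + ".jpg")
        subprocess.run(
            ["convert", str(tiff), "-resize", "50%", "-quality", "85",
             str(jpeg)],
            check=True,
        )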
=======================
Automated Processing of Massive Audio/Video Content Using FFmpeg
by Kia Siang Hock, Li Lingxia
URL: http://journal.code4lib.org/articles/9128
Audio and video content forms an integral, important and expanding part of
the digital collections in libraries and archives world-wide. While these
memory institutions are familiar with and well-versed in the management of more
conventional materials such as books, periodicals, ephemera and images, the
handling of audio (e.g., oral history recordings) and video content (e.g.,
audio-visual recordings, broadcast content) requires additional toolkits.
In particular, a robust and comprehensive tool that provides a programmable
interface is indispensable when dealing with tens of thousands of hours of
audio and video content.
FFmpeg is a comprehensive and well-established open source software package
that is capable of the full range of audio/video processing tasks (such as
encoding, decoding, transcoding, muxing, demuxing, streaming, and filtering).
It is also capable of handling a wide range of audio and video formats, a
unique challenge in
memory institutions. It comes with a command line interface, as well as a
set of developer libraries that can be incorporated into applications.
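
As a hedged illustration of scripting FFmpeg's command line interface for
bulk work (not taken from the article; the directories and encoding settings
are invented), a short Python loop can transcode a folder of videos to MP4
derivatives:

    # Illustrative sketch only; not code from the article.
    # Batch-transcode a directory of videos to MP4 derivatives by
    # invoking the ffmpeg command line from a script.
    import pathlib
    import subprocess

    SOURCE = pathlib.Path("preservation")    # hypothetical source directory
    DERIV = pathlib.Path("derivatives")      # hypothetical output directory
    DERIV.mkdir(exist_ok=True)

    for video in sorted(SOURCE.glob("*.mov")):
        out = DERIV / (video.stem + ".mp4")
        subprocess.run(
            ["ffmpeg", "-i", str(video),
             "-c:v", "libx264", "-crf", "23", "-c:a", "aac", str(out)],
            check=True,
        )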
=======================
On behalf of the Code4Lib Journal Editorial Committee,
Shawn Averkamp
Code4Lib Journal Coordinating Editor for Issue 23
--
Shawn Averkamp
Interim Head, Digital Research & Publishing
University of Iowa Libraries
319.384.3526