The latest issue of the Code4Lib Journal is now available at
https://journal.code4lib.org/issues/issues/issue53
Here's what you'll find in this issue:
Editorial — New name change policy
<https://journal.code4lib.org/articles/16465>
Ron Peterson
The Code4Lib Journal Editorial Committee is implementing a new name change
policy aimed at facilitating the process and ensuring timely and
comprehensive name changes for anyone who needs to change their name
within the Journal.
Works, Expressions, Manifestations, Items: An Ontology
<https://journal.code4lib.org/articles/16491>
Karen Coyle
The concepts first introduced in the FRBR document and known as “WEMI” have
been employed in situations quite different from the library bibliographic
catalog. This is evidence that similar classes, defined more generally
than those developed for library use, would benefit metadata developers
broadly. This article proposes a minimally constrained set of
classes and relationships that could form the basis for a useful model of
created works.
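For readers who want to picture what such minimally constrained classes
might look like, here is a purely hypothetical TypeScript sketch; it is
one possible reading of the idea, not the ontology the article actually
proposes, and every name in it is illustrative.

```typescript
// Hypothetical sketch only; the article's actual model may differ.
// Minimally constrained WEMI: each class stands on its own, and the
// relationships are optional rather than a mandatory hierarchy.
interface Work { id: string; label?: string }             // the abstract creation
interface Expression { id: string; realizes?: string }    // realizes a Work
interface Manifestation { id: string; embodies?: string } // embodies an Expression
interface Item { id: string; exemplifies?: string }       // exemplifies a Manifestation

// Example: a novel, one translation, a paperback edition, one copy.
const work: Work = { id: "w1", label: "A Novel" };
const expr: Expression = { id: "e1", realizes: work.id };
const mani: Manifestation = { id: "m1", embodies: expr.id };
const item: Item = { id: "i1", exemplifies: mani.id };
```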
Citation Needed: Adding Citations to CONTENTdm Records
<https://journal.code4lib.org/articles/16289>
Jenn Randles & Andrew Bullen
The Tennessee State Library and Archives and the Illinois State Library
identified a need to add citation information to individual image records
in OCLC’s CONTENTdm (https://www.oclc.org/en/contentdm.html). Experience
with digital archives at both institutions showed that citation information
was one of the most requested features. Unfortunately, CONTENTdm does not
natively display citation information about image records; to add this
functionality, custom JavaScript had to be written that would interact with
the underlying React environment and parse out or retrieve the appropriate
metadata to dynamically build record citations. Detailed code and a
description of methods for building two different models of citation
generators are presented.
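The article presents the authors' full code for working inside
CONTENTdm's React environment; as a rough illustration of just the final
step, here is a minimal TypeScript sketch that formats a citation string
from already-retrieved record metadata. The field names are hypothetical,
not CONTENTdm's actual schema.

```typescript
// Hypothetical record shape; real CONTENTdm field names will differ.
interface ImageRecord {
  title: string;
  creator?: string;
  date?: string;
  collection: string;
  institution: string;
  url: string;
}

// Assemble a simple human-readable citation, skipping missing fields.
function buildCitation(r: ImageRecord): string {
  return [
    r.creator,
    r.date ? `(${r.date}).` : undefined,
    `"${r.title}."`,
    `${r.collection}, ${r.institution}.`,
    r.url,
  ]
    .filter(Boolean)
    .join(" ");
}

console.log(
  buildCitation({
    title: "Main Street, 1910",
    date: "1910",
    collection: "Postcard Collection",
    institution: "Tennessee State Library and Archives",
    url: "https://example.org/digital/collection/p123/id/456",
  })
);
```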
Fractal in detail: What information is in a file format identification
report? <https://journal.code4lib.org/articles/16351>
Ross Spencer
A file format identification report, such as those generated by the
digital preservation tools DROID, Siegfried, or FIDO, contains an
incredible wealth of information. Generated by scanning discrete sets of
files comprising part or all of a digital collection, these datasets can
serve as entry points for further activities, including appraisal,
identification of future work efforts, and the transfer of digital
objects into preservation storage.
detail and there are numerous outputs that can be generated from that
detail. This paper describes the purpose of a file format identification
report and the extensive information that can be extracted from one. It
summarizes a number of ways of transforming them into the inputs for other
systems and describes a handful of the tools already doing so. The paper
concludes that the format identification report is a pivotal artefact in
the digital transfer process, and asks the reader to consider how they
might leverage such reports and what benefits doing so might provide.
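To give a flavor of the downstream processing the paper describes, here
is a minimal TypeScript (Node.js) sketch that tallies format
identifications from a DROID-style CSV export. The PUID column name
matches DROID's CSV output, but the sketch assumes a simple CSV; a real
pipeline would use a proper CSV parser.

```typescript
import { readFileSync } from "node:fs";

// Count how many files were identified as each format (by PUID).
function tallyFormats(csvPath: string, column = "PUID"): Map<string, number> {
  const [header, ...rows] = readFileSync(csvPath, "utf8").trim().split("\n");
  const idx = header.split(",").indexOf(column);
  if (idx === -1) throw new Error(`Column ${column} not found`);

  const counts = new Map<string, number>();
  for (const row of rows) {
    const value = row.split(",")[idx] || "(unidentified)";
    counts.set(value, (counts.get(value) ?? 0) + 1);
  }
  return counts;
}

// Print formats from most to least common.
const sorted = [...tallyFormats("droid-report.csv")].sort((a, b) => b[1] - a[1]);
for (const [puid, n] of sorted) console.log(`${puid}\t${n}`);
```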
Automated 3D Printing in Libraries
<https://journal.code4lib.org/articles/16310>
Brandon Patterson, Ben Engel, and Willis Holle
This article highlights an automated 3D printing system created at a
health sciences library at a large research university. As COVID-19
limited in-person interaction with 3D printers, a group of library staff
came together to code a form that takes users’ 3D print files and
connects them to machines automatically. A ticketing system and payment
form were also automated via this system. The only remaining in-person
interaction is dedicated staff members unloading the finished prints.
This article will
describe the journey in getting to an automated system and share code and
strategies so others can try it for themselves.
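The authors share their own code in the article; purely to illustrate
the general pattern, here is a hedged TypeScript (Node.js 18+) sketch
that forwards a submitted model file to a printer running an
OctoPrint-style REST API. The host, endpoint behavior, and API key
handling are assumptions, not a description of the authors' system.

```typescript
// Sketch: push a user's uploaded model into a printer's job queue.
// Assumes an OctoPrint-style API; host and key are placeholders.
const PRINTER_HOST = "http://printer.example.edu";
const API_KEY = process.env.OCTOPRINT_API_KEY ?? "";

async function queuePrint(fileName: string, model: Uint8Array): Promise<void> {
  const form = new FormData();
  form.append("file", new Blob([model]), fileName);
  form.append("print", "true"); // start the job once the upload completes

  const res = await fetch(`${PRINTER_HOST}/api/files/local`, {
    method: "POST",
    headers: { "X-Api-Key": API_KEY },
    body: form,
  });
  if (!res.ok) throw new Error(`Upload failed: HTTP ${res.status}`);
}
```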
Automating reference consultation requests with JavaScript and a Google Form
<https://journal.code4lib.org/articles/16414>
Stephen Zweibel
At the CUNY Graduate Center Library, reference consultation requests were
previously sent to a central email address, then manually directed by our
head of reference to the appropriate subject expert. This process was
cumbersome, and because the inbox was not checked every day, responses
were delayed and messages were occasionally missed. To streamline this
process, I created a form and wrote a script that uses the answers in the
form to automatically forward any consultation requests to the correct
subject specialist. This was done using JavaScript, Google Sheets, and the
Google Apps Script backend. When a patron requesting a consultation fills
out the form, they include their field of research. This field is
associated in my script with a particular subject specialist librarian, who
then receives an email with the pertinent information. Rather than
requiring either that patrons themselves search for the right subject
specialist, or that library faculty spend time distributing messages to the
right liaison, this enables a smoother, more direct interaction. In this
article, I will describe the steps I took to write this script, using only
freely available online software.
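As a compact sketch of this routing idea, here is what the core of such
a script can look like in Google Apps Script (written TypeScript-style,
as with clasp). The question titles, subject list, and addresses are
hypothetical placeholders, not the author's actual configuration.

```typescript
// Ambient declaration for the Apps Script global (provided at runtime).
declare const MailApp: {
  sendEmail(recipient: string, subject: string, body: string): void;
};

// Illustrative mapping of research fields to subject specialists.
const SPECIALISTS: Record<string, string> = {
  History: "history-librarian@example.edu",
  Sociology: "sociology-librarian@example.edu",
  Linguistics: "linguistics-librarian@example.edu",
};

// Attach as an installable "On form submit" trigger on the response
// sheet; e.namedValues maps question titles to the submitted answers.
function onFormSubmit(e: { namedValues: Record<string, string[]> }): void {
  const field = e.namedValues["Field of research"]?.[0] ?? "";
  const patron = e.namedValues["Email address"]?.[0] ?? "unknown";
  const recipient = SPECIALISTS[field] ?? "reference@example.edu"; // fallback

  MailApp.sendEmail(
    recipient,
    `Consultation request: ${field || "general"}`,
    `A patron (${patron}) has requested a consultation.\n` +
      "See the response spreadsheet for the full submission."
  );
}
```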
Lantern: A Pandoc Template for OER Publishing
<https://journal.code4lib.org/articles/16329>
Chris Diaz
Lantern is a template and workflow for using Pandoc and GitHub to create
and host multi-format open educational resources (OER) online. It applies
minimal computing methods to OER publishing practices. The purpose is to
minimize the technical footprint for digital publishing while maximizing
control over the form, content, and distribution of OER texts. Lantern uses
Markdown and YAML to capture an OER’s source content and metadata and
Pandoc to transform it into HTML, PDF, EPUB, and DOCX formats. Pandoc’s
options and arguments are pre-configured in a Bash script to simplify the
process for users. Lantern is available as a template repository on GitHub.
The template repository is set up to run Pandoc with GitHub Actions and
serve output files on GitHub Pages for convenience; however, GitHub is not
a required dependency. Lantern can be used on any modern computer to
produce OER files that can be uploaded to any modern web server.
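Lantern itself drives Pandoc from a Bash script; as a rough sketch of
the kind of command that script wraps, here is a Node.js/TypeScript
equivalent for a single HTML output (the file names are placeholders,
not Lantern's actual layout).

```typescript
import { execSync } from "node:child_process";

// One Pandoc invocation of the sort Lantern's Bash wrapper automates:
// render Markdown source plus YAML metadata into standalone HTML.
const cmd = [
  "pandoc",
  "chapter-01.md",
  "--standalone",                  // emit a complete HTML document
  "--metadata-file=metadata.yml",  // title, authors, license, etc.
  "--output=book.html",
].join(" ");

execSync(cmd, { stdio: "inherit" }); // requires pandoc on the PATH
```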
Strategies for Preserving Digital Scholarship / Humanities Projects
<https://journal.code4lib.org/articles/16370>
Kirsta Stapelfeldt, Sukhvir Khera, Natkeeran Ledchumykanthan, Lara Gomez,
Erin Liu, and Sonia Dhaliwal
The Digital Scholarship Unit (DSU) at the University of Toronto Scarborough
library frequently partners with faculty for the creation of digital
scholarship (DS) projects. However, managing a completed project can be
challenging when it is no longer under active development by the original
project team and resources allocated to its ongoing maintenance are
scarce. Maintaining inactive projects on the live web either bloats staff
workloads or is simply not possible given limited staff capacity. As technical
obsolescence meets a lack of staff capacity, the gradual disappearance of
digital scholarship projects forms a gap in the scholarly record. This
article discusses the Library DSU’s experimentations with using web
archiving technologies to capture and describe digital scholarship
projects, with the goal of accessioning the resulting web archives into the
Library’s digital collections. In addition to comparing some common
technologies used for crawling and replay of archives, this article
describes aspects of the technical infrastructure the DSU is building with
the goal of making web archives discoverable and playable through the
library’s digital collections interface.
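As a small taste of the capture side, here is a hedged TypeScript
(Node.js) sketch that crawls a project site into a WARC file using
wget's WARC support; it stands in for the more capable crawl and replay
tools the article actually compares, and the URL is a placeholder.

```typescript
import { execSync } from "node:child_process";

// Mirror a site and record the crawl as WARC (project.warc.gz).
// wget is one simple option; dedicated crawlers offer richer capture.
const site = "https://project.example.edu/";
const cmd = [
  "wget",
  "--mirror",            // recursive crawl with timestamping
  "--page-requisites",   // include CSS, images, and scripts
  "--warc-file=project", // write project.warc.gz alongside the mirror
  site,
].join(" ");

execSync(cmd, { stdio: "inherit" });
```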
The DSA Toolkit Shines Light Into Dark and Stormy Archives
<https://journal.code4lib.org/articles/16441>
Shawn M. Jones, Himarsha R. Jayanetti, Alex Osborne, Paul Koerbin, Martin
Klein, Michele C. Weigle, Michael L. Nelson
Themed web archive collections exist to make sense of archived web pages
(mementos). Some collections contain hundreds of thousands of mementos.
There are many collections about the same topic. Few collections on
platforms like Archive-It include standardized metadata. Reviewing the
documents in a single collection thus becomes an expensive proposition.
Search engines help find individual documents but do not provide an overall
understanding of each collection as a whole. Visitors need to be able to
understand what individual collections contain so they can make decisions
about them and compare them to each other. The Dark and
Stormy Archives (DSA) Project applies social media storytelling to a subset
of a collection to facilitate collection understanding at a glance. As part
of this work, we developed the DSA Toolkit, which helps archivists and
visitors leverage this capability. As part of our recent International
Internet Preservation Consortium (IIPC) grant, Los Alamos National
Laboratory (LANL) and Old Dominion University (ODU) piloted the DSA Toolkit
with the National Library of Australia (NLA). Collectively we have made
numerous improvements, from better handling of NLA mementos to native Linux
installers to more approachable Web User Interfaces. Our goal is to make
the DSA approachable for everyone so that end-users and archivists alike
can apply social media storytelling to web archives.
Supporting open access, integrating distributed research platforms, and
building a research information management platform
<https://journal.code4lib.org/articles/16479>
Daniel M. Coughlin, Cynthia Hudson Vitale
Academic libraries are often called upon by their university communities to
collect, manage, and curate information about the research activity
produced at their campuses. Proper research information management (RIM)
can be leveraged for multiple institutional contexts, including networking,
reporting activities, building faculty profiles, and supporting the
reputation management of the institution.
In the last ten to fifteen years the adoption and implementation of RIM
infrastructure has become widespread throughout the academic world.
Approaches to developing and implementing this infrastructure have varied,
from commercial and open-source options to locally developed instances.
Each piece of infrastructure has its own functionality, features, and
metadata sources. No single application or data source meets all the
needs of these varying pieces of research information; rather, many of
these systems together create an ecosystem that provides for the diverse
set of needs and contexts.
This paper examines the systems at Pennsylvania State University that
contribute to our RIM ecosystem: how and why we developed another piece
of supporting infrastructure for our Open Access policy, and the
successes and challenges of this work.