The Code4Lib Journal issue 57 has been published!
https://journal.code4lib.org/

Apologies for cross-posting.

Paraphrasing the title of Christine L. Borgman’s inaugural lecture in
Göttingen some years ago, "Big data, little data, open data", I could
say that the current issue of Code4Lib is about big code, little code,
open code, old code. The good side of coding is that effective
contributions can be made with different levels and types of
background knowledge. This issue shows us that even small
modifications, or sharing knowledge about the command-line usage of a
tool, can be very useful to the user community. Let’s see what we have!

Stefano Cossu, Ruven Pillay, Glen Robson and Michael D. Smith
published the results of a classic measurement-based experiment in
"Evaluating HTJ2K as a Drop-In Replacement for JPEG2000 with IIIF".
They compared the effects of using different image formats (TIFF, JPEG
2000 and High-Throughput JPEG 2000, abbreviated as HTJ2K) in the
context of IIIF requests; a toy timing sketch follows the link below.
https://journal.code4lib.org/articles/17596
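
For flavor only, here is a minimal sketch (my illustration, not the
authors' benchmark harness) of how one might time equivalent IIIF
Image API requests against endpoints backed by different source
formats; the endpoint URLs are placeholders.

# Toy timing sketch (not the authors' benchmark code): compare response
# times for the same IIIF Image API request served from different
# source formats. The endpoint URLs below are placeholders.
import time
import urllib.request

ENDPOINTS = {
    "tiff":  "https://iiif.example.org/iiif/sample-tiff",
    "jp2":   "https://iiif.example.org/iiif/sample-jp2",
    "htj2k": "https://iiif.example.org/iiif/sample-htj2k",
}

# A typical IIIF Image API request: full region, 512 px wide,
# no rotation, default quality, JPEG output.
IIIF_PARAMS = "full/512,/0/default.jpg"

def time_request(base_url):
    """Fetch one IIIF image request and return the elapsed seconds."""
    start = time.perf_counter()
    with urllib.request.urlopen(f"{base_url}/{IIIF_PARAMS}") as resp:
        resp.read()
    return time.perf_counter() - start

for fmt, url in ENDPOINTS.items():
    print(f"{fmt}: {time_request(url):.3f}s")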

Jennifer Ye Moon-Chung’s article "Standardization of Journal Title
Information from Interlibrary Loan Data: A Customized Python Code
Approach" shows that if we want to use log files that contain data
entered by users of an interlibrary loan service, we have to clean the
input data: normalize values such as ISSN and ISBN numbers, and make
use of external services to retrieve standardized title information (a
small normalization sketch follows the link below).
https://journal.code4lib.org/articles/17450
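
To give a flavor of that kind of cleaning, here is a minimal sketch
(my illustration, not the article's code) that normalizes ISSN strings
before they are sent to an external lookup service.

# Minimal illustration (not the article's code) of normalizing ISSN
# values harvested from free-text interlibrary loan request logs.
import re

def normalize_issn(raw):
    """Return an ISSN in canonical NNNN-NNNN form, or None if invalid."""
    # Keep only digits and a possible final check character 'X'.
    cleaned = re.sub(r"[^0-9Xx]", "", raw).upper()
    if len(cleaned) != 8:
        return None
    # Verify the ISSN check digit (weights 8..2, modulus 11, 'X' = 10).
    total = sum(int(c) * w for c, w in zip(cleaned[:7], range(8, 1, -1)))
    check = (11 - total % 11) % 11
    expected = "X" if check == 10 else str(check)
    if cleaned[7] != expected:
        return None
    return f"{cleaned[:4]}-{cleaned[4:]}"

print(normalize_issn(" issn: 0378-5955 "))  # -> 0378-5955
print(normalize_issn("1234-5678"))          # -> None (bad check digit)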

Several noteworthy analyses have called attention to the limitations
of text corpora that lack proper metadata for the individual
documents. Erin Wolfe in "ChronoNLP: Exploration and Analysis of
Chronological Textual Corpora" presents a web-based application called
ChronoNLP to support working with corpora in which the historical
aspects are important. ChronoNLP enables the combination of temporal
trend analysis and a variety of natural language processing
approaches.
https://journal.code4lib.org/articles/17502

Do you use (or plan to use) FOLIO in the backend? Then Aaron Neslin
and Jaime Taylor's review "A Very Small Pond: Discovery Systems That
Can Be Used with FOLIO in Academic Libraries" is for you. They survey
the available commercial and open source front-end options. For this
review they talked with library systems administrators to learn about
the practicalities. A plus: information about the accessibility
support of each tool.
https://journal.code4lib.org/articles/17433

Elizabeth Joan Kelly’s "Supporting Library Consortia Website Needs:
Two Case Studies" shows how the central unit of a library consortium
can support partner institutions with customizable central services
even when the details of their requirements differ.
https://journal.code4lib.org/articles/17452

The paper by Vlastimil Krejčíř, Alžbeta Strakošová, and Jan Adler,
"From DSpace to Islandora: Why and How", is the only one from Europe
in this issue and, as far as I remember, the first one from the Czech
Republic. The authors compare the two popular repository systems
(technology stack, data structure, customization, etc.) and describe
the process of migrating several services from one to the other.
https://journal.code4lib.org/articles/17398

"Creating a Full Multitenant Back End User Experience in Omeka S with
the Teams Module" written by Alexander Dryden, Daniel G. Tracy
highlights the problems of content and rights separation in Omeka S
when an institution would like to run a single instance for multiple
projects. The authors not only make their usage scenarios clear, but
provide with a solution, a new, open source module written by
themselves.
https://journal.code4lib.org/articles/17389

"The Forgotten Disc: Synthesis and Recommendations for Viable VCD
Preservation" by Andrew Weaver, and Ashley Blewer introduces the
reader the preservation of contents of video discs: what is this
format, where it was popular, how to save the bitstream from it
(including the metadata), and how we can view and further manipulate
it.
https://journal.code4lib.org/articles/17406

Krista L. Gray’s "Breathing Life into Archon: A Case Study in Working
with an Unsupported System" is a nice account of how a devoted
archivist with some programming knowledge can keep legacy software in
sync with changing requirements. I personally appreciate that the
author is candid about her limitations. I hope it will encourage
others with similar backgrounds to follow a similar path.
https://journal.code4lib.org/articles/17509

How do we select open source software to back our service? There are
some answers to this frequently asked question, but I don’t think the
library community (or, thinking of the research software landscape,
the whole academic community) has considered all possible aspects.
Jenn Colt’s "An introduction to using metrics to assess the health and
sustainability of library open source software projects" sheds light
on four recent metrics that scrutinize the behavior of the development
community behind a piece of software (a toy example of one such signal
follows the link below).
https://journal.code4lib.org/articles/17514
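
As a toy illustration of one such signal (my sketch, not from the
article; it assumes the project is hosted on GitHub), one might count
how many distinct people contributed commits in the last year.

# Toy illustration (not from the article) of one possible
# community-health signal: the number of distinct commit authors over
# the last year of a GitHub-hosted project (first result page only).
import json
import urllib.request
from datetime import datetime, timedelta, timezone

def recent_committers(owner, repo, days=365):
    """Count distinct commit author names over the given period."""
    since = (datetime.now(timezone.utc) - timedelta(days=days)).isoformat()
    url = (f"https://api.github.com/repos/{owner}/{repo}/commits"
           f"?since={since}&per_page=100")
    with urllib.request.urlopen(url) as resp:
        commits = json.load(resp)
    return len({c["commit"]["author"]["name"] for c in commits})

# Example: any public GitHub repository will do.
print(recent_committers("folio-org", "okapi"))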

Finally, let’s party like it’s 2023! Kent Fitch’s "Searching for
meaning rather than keywords and returning answers rather than links"
presents his experiments with Large Language Models (yes, including
ChatGPT). Instead of talking from a bird’s-eye perspective, the paper
discusses the advantages, disadvantages, limitations, and costs of
down-to-earth use cases.
https://journal.code4lib.org/articles/17443

Many thanks to the authors for bringing all these to Code4Lib!

Best regards,
Péter Király
coordinating editor of the issue