LISTSERV mailing list manager LISTSERV 16.5

CODE4LIB Archives

CODE4LIB@LISTS.CLIR.ORG

CODE4LIB February 2020

Subject: Code4Lib Journal issue #47
From: Péter Király <[log in to unmask]>
Reply-To: Code for Libraries <[log in to unmask]>
Date: Mon, 17 Feb 2020 22:30:59 +0100
Content-Type: text/plain
Parts/Attachments: text/plain (213 lines)

Dear Code4Lib list members,

I am happy to announce that the Code4Lib Journal's latest issue (#47)
has just been published.

https://journal.code4lib.org/

Table of contents

Editorial
Péter Király
https://journal.code4lib.org/articles/15055

on diversity and mentoring

Scraping BePress: Downloading Dissertations for Preservation
Stephen Zweibel
https://journal.code4lib.org/articles/15016

This article describes our process of developing a script to automate
the downloading of documents and secondary materials from our library’s
BePress repository. Our objective was to collect the full archive of
dissertations and associated files from our repository onto local
disk for potential future applications and to build out a preservation
system.

Unlike at some institutions, our students submit directly into
BePress, so we did not have a separate repository of the files; and
the backup of BePress content that we had access to was not in an
ideal format (for example, it included “withdrawn” items and did not
effectively isolate electronic theses and dissertations). Perhaps more
importantly, the fact that BePress was not SWORD-enabled and lacked a
robust API or batch export option meant that we needed to develop a
data-scraping approach that would allow us to both extract files and
have metadata fields populated. Using a CSV of all of our records
provided by BePress, we wrote a script to loop through those records
and download their documents, placing them in directories according to
a local schema. We dealt with over 3,000 records and about three times
that many items, and now have an established process for retrieving
our files from BePress. Details of our experience and code are
included.
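The loop the authors describe — read a CSV of records, fetch each document, and file it under a local schema — can be sketched as follows. This is a hedged illustration, not the authors' code: the column names (record_id, fulltext_url, degree) are invented for the example, and a real BePress export will differ.

```python
import csv
import os
import urllib.request

def download_records(csv_path, out_root):
    """Loop over a CSV of repository records and fetch each record's
    document into a directory tree keyed by a local schema.
    Column names (record_id, fulltext_url, degree) are hypothetical."""
    with open(csv_path, newline="", encoding="utf-8") as fh:
        for row in csv.DictReader(fh):
            # One directory per degree program; file named by record id
            dest_dir = os.path.join(out_root, row["degree"])
            os.makedirs(dest_dir, exist_ok=True)
            dest = os.path.join(dest_dir, row["record_id"] + ".pdf")
            if not os.path.exists(dest):  # resumable: skip files already fetched
                urllib.request.urlretrieve(row["fulltext_url"], dest)
```

Skipping files that already exist makes the run resumable, which matters when looping over thousands of records against a remote host.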

Persistent identifiers for heritage objects
Lukas Koster
https://journal.code4lib.org/articles/14978

Persistent identifiers (PIDs) are essential for accessing and
referring to library, archive and museum (LAM) collection objects in a
sustainable and unambiguous way, both internally and externally.
Heritage institutions need a universal policy for the use of PIDs in
order to have an efficient digital infrastructure at their disposal
and to achieve optimal interoperability, leading to open data, open
collections and efficient resource management.

Here the discussion is limited to PIDs that institutions can assign
to objects they own or administer themselves. PIDs for people,
subjects, etc. can be used by heritage institutions, but are generally
managed by other parties.

The first part of this article consists of a general theoretical
description of persistent identifiers. First, I discuss what
persistent identifiers are and what they are not, and what is needed
to administer and use them. The most commonly used existing PID
systems are briefly characterized. Then I discuss the types of
objects PIDs can be assigned to. This section concludes with an
overview of the requirements that apply if PIDs are also to be used
for linked data.

The second part examines current infrastructural practices and
existing PID systems, with their advantages and shortcomings. Based on
these practical issues and the pros and cons of existing PID systems,
a list of requirements for PID systems is presented, which is used to
address a number of practical considerations. This section concludes
with a number of recommendations.
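The core idea — a PID stays stable while the object's location may change, with a resolver maintaining the mapping — can be illustrated with a toy resolver. The identifier and URLs below are invented for the example:

```python
# A PID system separates a stable identifier from the object's
# (changeable) location; a resolver maintains the mapping.
RESOLVER = {
    "hdl:12345/heritage-0001": "https://images.example.org/maps/atlas-1690.tiff",
}

def resolve(pid: str) -> str:
    """Return the current location registered for a PID."""
    try:
        return RESOLVER[pid]
    except KeyError:
        raise KeyError(f"unresolvable PID: {pid}") from None

def relocate(pid: str, new_url: str) -> None:
    """When an object moves, only the resolver entry changes;
    the PID cited in publications stays valid."""
    if pid not in RESOLVER:
        raise KeyError(f"unknown PID: {pid}")
    RESOLVER[pid] = new_url
```

Real systems such as Handle or DOI work on the same principle at scale; the persistence comes from the institutional commitment to keep the resolver table current, not from the technology itself.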

Dimensions & VOSviewer Bibliometrics in the Reference Interview
Brett Williams
https://journal.code4lib.org/articles/14964

The VOSviewer software provides easy access to bibliometric mapping
using data from Dimensions, Scopus and Web of Science. The properly
formatted and structured citation data, and the ease with which it can
be exported, open up new avenues for use during citation searches and
reference interviews. This paper details specific techniques for using
advanced searches in Dimensions, exporting the citation data, and
drawing insights from the maps produced in VOSviewer. These search
techniques and data export practices are fast and accurate enough to
build into reference interviews for graduate students, faculty, and
post-PhD researchers. The search results derived from them are
accurate and allow a more comprehensive view of citation networks
embedded in ordinary complex Boolean searches.
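The kind of network VOSviewer draws is built from co-occurrence counts over exported records. A minimal sketch of that counting step, using invented keyword data rather than a real Dimensions export:

```python
from itertools import combinations
from collections import Counter

def cooccurrence_edges(records):
    """Count pairwise keyword co-occurrences across exported records;
    network maps of the VOSviewer kind are built from weighted edge
    lists like this. `records` is a list of keyword lists, as might be
    parsed from a citation-data export (field names vary by source)."""
    edges = Counter()
    for keywords in records:
        # sorted(set(...)) deduplicates within a record and gives a
        # canonical (a, b) ordering so edges are counted once
        for a, b in combinations(sorted(set(keywords)), 2):
            edges[(a, b)] += 1
    return edges

# Example: three records with overlapping keywords
records = [
    ["libraries", "metadata", "linked data"],
    ["metadata", "linked data"],
    ["libraries", "metadata"],
]
edges = cooccurrence_edges(records)
```

Edge weight then drives link thickness and clustering in the rendered map.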

Automating Authority Control Processes
Stacey Wolf
https://journal.code4lib.org/articles/15014

Authority control is an important part of cataloging since it helps
provide consistent access to names, titles, subjects, and genre/forms.
There are a variety of methods for providing authority control,
ranging from manual, time-consuming processes to automated processes.
However, automated processes often seem out of reach for small
libraries that cannot afford a pricey vendor or an expert cataloger.
This paper introduces ideas on how to handle authority control using a
variety of tools, both paid and free. The author describes how their
library handles authority control; compares vendors and programs that
can be used to provide varying levels of authority control; and
demonstrates authority control using MarcEdit.
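One building block of automated authority work — comparing bibliographic headings against an authority file and flagging non-matches — can be sketched as follows. This mimics the kind of report a MarcEdit task or vendor service produces; the headings below are invented examples, not the author's data:

```python
def check_headings(bib_headings, authority_file):
    """Flag bibliographic headings with no match in an authority file,
    after light normalization (case, trailing punctuation)."""
    def norm(h):
        # MARC headings often differ only by a trailing period or case
        return h.rstrip(" .").lower()
    authorized = {norm(h) for h in authority_file}
    return [h for h in bib_headings if norm(h) not in authorized]

# Invented sample data: one authorized form, one unauthorized variant
bibs = ["Twain, Mark, 1835-1910.", "Clemens, Samuel"]
auth = ["Twain, Mark, 1835-1910"]
unmatched = check_headings(bibs, auth)
```

A real workflow would of course match on full MARC fields and subfields rather than bare strings, but the flag-and-review loop is the same.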

Managing Electronic Resources Without Buying into the Library Vendor Singularity
James Fournie
https://journal.code4lib.org/articles/14955

Over the past decade, the library automation market has faced
continuing consolidation. Many vendors in this space have pushed
towards monolithic and expensive Library Services Platforms. Other
vendors have taken “walled garden” approaches which force vendor
lock-in due to lack of interoperability. For these reasons and others,
many libraries have turned to open-source Integrated Library Systems
(ILSes) such as Koha and Evergreen. These systems offer more
flexibility and interoperability options, but tend to be developed
with a focus on public libraries and legacy print resource
functionality. They lack tools important to academic libraries such as
knowledge bases, link resolvers, and electronic resource management
systems (ERMs). Several open-source ERM options exist, including CORAL
and FOLIO. This article analyzes the current state of these and other
options for libraries considering supplementing their open-source ILS,
whether run locally, hosted, or in a consortial environment.

Shiny Fabric: A Lightweight, Open-source Tool for Visualizing and
Reporting Library Relationships
Atalay Kutlay, Cal Murgu
https://journal.code4lib.org/articles/14938

This article details the development and functionality of an
open-source application called Fabric. Fabric is a simple-to-use
application that renders library data in the form of network graphs
(sociograms). Fabric is built in R using the Shiny package and is
meant to offer an easy-to-use alternative to other software, such as
Gephi and UCINET. In addition to being user-friendly, Fabric can run
locally as well as on a hosted server. This article discusses the
development process and functionality of Fabric, use cases at the New
College of Florida’s Jane Bancroft Cook Library, as well as plans for
future development.

Analyzing and Normalizing Type Metadata for a Large Aggregated Digital Library
Joshua D. Lynch, Jessica Gibson, and Myung-Ja Han
https://journal.code4lib.org/articles/14995

The Illinois Digital Heritage Hub (IDHH) gathers and enhances metadata
from contributing institutions around the state of Illinois and
provides this metadata to the Digital Public Library of America (DPLA)
for greater access. The IDHH helps contributors shape their metadata
to the standards recommended and required by the DPLA in part by
analyzing and enhancing aggregated metadata. In late 2018, the IDHH
undertook a project to address a particularly problematic field, Type
metadata. This paper walks through the project, detailing the process
of gathering and analyzing metadata using the DPLA API and OpenRefine,
data remediation through XSL transformations in conjunction with local
improvements by contributing institutions, and the DPLA ingestion
system’s quality controls.
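The remediation step — mapping messy free-text Type values onto a controlled vocabulary such as DCMI Type — can be sketched in miniature. The mapping table below is a simplified invention for illustration, not the IDHH's actual crosswalk (which the article implements in XSLT):

```python
def normalize_type(raw, vocab_map):
    """Map a free-text Type value onto a controlled vocabulary term,
    falling back to a sentinel for values needing human review."""
    key = raw.strip().lower()  # tolerate stray whitespace and case
    return vocab_map.get(key, "unmapped")

# Invented sample crosswalk onto DCMI Type terms
DCMI_MAP = {
    "photograph": "Image",
    "image/jpeg": "Image",
    "pdf": "Text",
    "text": "Text",
    "audio recording": "Sound",
}
```

The "unmapped" sentinel is the useful part in practice: it turns an open-ended cleanup problem into a finite review list for contributing institutions.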

Scaling IIIF Image Tiling in the Cloud
Yinlin Chen, Soumik Ghosh, Tingting Jiang, James Tuttle
https://journal.code4lib.org/articles/14933

The International Archive of Women in Architecture, established at
Virginia Tech in 1985, collects books, biographical information, and
published materials from nearly 40 countries that are divided into
around 450 collections. In order to provide public access to these
collections, we built an application using the IIIF APIs to
pre-generate image tiles and manifests which are statically served in
the AWS cloud. We established an automatic image processing pipeline
using a suite of AWS services to implement microservices in Lambda and
Docker. By doing so, we reduced the processing time for terabytes of
images from weeks to days.

In this article, we describe our serverless architecture design and
implementation, elaborate on the technical solution that integrates
multiple AWS services with other techniques into the application, and
describe our streamlined and scalable approach to handling extremely
large image datasets. Finally, we show the significantly improved
performance compared to traditional processing architectures, along
with a cost evaluation.
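The tiling step at the heart of such a pipeline — cutting a large image into fixed-size regions that a IIIF Image API client can request — reduces to simple arithmetic. A sketch with invented dimensions (the authors' actual Lambda/Docker code is not shown here):

```python
def tile_regions(width, height, tile_size=512):
    """Enumerate the (x, y, w, h) regions a tiler would cut from a
    source image at full resolution; edge tiles are clipped to the
    image boundary. Pre-generating these regions and serving them
    statically avoids running an image server at request time."""
    regions = []
    for y in range(0, height, tile_size):
        for x in range(0, width, tile_size):
            w = min(tile_size, width - x)   # clip right-edge tiles
            h = min(tile_size, height - y)  # clip bottom-edge tiles
            regions.append((x, y, w, h))
    return regions

# A 1200x800 image with 512px tiles yields a 3x2 grid of regions
regions = tile_regions(1200, 800)
```

Because each region is independent, the work parallelizes naturally across serverless workers, which is what makes the weeks-to-days speedup plausible.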

Where Do We Go From Here: A Review of Technology Solutions for
Providing Access to Digital Collections
Kelli Babcock, Sunny Lee, Jana Rajakumar, Andy Wagner
https://journal.code4lib.org/articles/15000

The University of Toronto Libraries is currently reviewing technology
to support its Collections U of T service. Collections U of T provides
search and browse access to 375 digital collections (and over 203,000
digital objects) at the University of Toronto Libraries. Digital
objects typically include special collections material from the
university as well as faculty digital collections, all with unique
metadata requirements. The service is currently supported by
IIIF-enabled Islandora, with one Fedora back end and multiple Drupal
sites per parent collection (see attached image). Like many
institutions making use of Islandora, UTL is now confronted with
Drupal 7 end of life and has begun to investigate a migration path
forward. This article will summarise the Collections U of T functional
requirements and lessons learned from our current technology stack. It
will go on to outline our research to date for alternate solutions.
The article will review both emerging micro-service solutions, as well
as out-of-the-box platforms, to provide an overview of the digital
collection technology landscape in 2019. Note that our research is
focused on reviewing technology solutions for providing access to
digital collections, as preservation services are offered through
other services at the University of Toronto Libraries.

Best regards,
Péter Király
Coordinating Editor
