On Nov 8, 2018, at 9:26 AM, Andrew Darby <[log in to unmask]> wrote:
> Hello, all (and apologies for cross-posting),
>
> Issue 42 of the Code4Lib Journal is now available for your delectation. Please enjoy responsibly...
:-D
I don't know if it is "responsible", but I fed the URLs from the most recent issue of the Journal into a system I call The Distant Reader. [1] The Reader cached the content, transformed it into plain text, did various natural language processing against the text, saved the results in a database, semantically indexed the whole, created a summary report, and made everything available for download as a .zip file.
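For the curious, the workflow can be sketched in a few lines of Python. This is only a toy illustration of the steps above, not the Reader's actual code; the article URL is a placeholder, and the sketch assumes requests, beautifulsoup4, spaCy, and the en_core_web_sm model are installed:

  import json
  import requests
  import spacy
  from bs4 import BeautifulSoup

  nlp = spacy.load('en_core_web_sm')

  # cache the content; the URL is a placeholder, not a real article
  url = 'https://journal.code4lib.org/articles/example'
  html = requests.get(url, timeout=30).text

  # transform the content into plain text
  text = BeautifulSoup(html, 'html.parser').get_text()

  # do a bit of natural language processing against the text
  features = [(token.text, token.pos_, token.lemma_) for token in nlp(text)]

  # save the result; the real system uses a database and a semantic index
  with open('carrel.json', 'w') as handle:
      json.dump({'url': url, 'features': features[:100]}, handle)
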
Using The Distant Reader I learned a few things about the most recent issue:
* github.com and oclc.org are frequently cited home pages, but there are quite a number of others
* some of the more statistically significant keywords are libraries, data, developers, services, and publishers
* unlike most content I evaluate, the lemma "use" is... used frequently, as are "link", "create", and "include" (see the sketch after this list)
* moreover, the lemma "do", which is usually more common, is lower in the frequency list
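The lemma counting can be approximated with spaCy. A minimal sketch, assuming the plain text of an article has been saved in a file named article.txt (the filename is hypothetical):

  from collections import Counter
  import spacy

  nlp = spacy.load('en_core_web_sm')

  # plain text as produced by the Reader; the filename is hypothetical
  text = open('article.txt').read()

  # count the lemma of every alphabetic noun, verb, and adjective
  frequencies = Counter(
      token.lemma_.lower()
      for token in nlp(text)
      if token.is_alpha and token.pos_ in {'NOUN', 'VERB', 'ADJ'}
  )

  # list the most frequent lemmas, e.g. "use", "link", and "create"
  for lemma, count in frequencies.most_common(10):
      print(count, lemma)
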
Here are a few automatically generated summaries of the articles:
* There are currently fourteen of us listed on the Editorial Committee page, which sounds like a lot, but when you have eleven articles as we do in Issue 42, and each article needs an assigned editor and a “second reader,” we can get stretched thin.
* Many repositories offer the capacity for batch upload via CSV, so our method provides a template Python script that leverages the Habanero library for populating CSV files with existing metadata retrieved from the CrossRef API.
* Using the Python programming language with its collection of modules created specifically for data analysis can help with this task, and ultimately result in better and more useful data customized to the needs of the library using it.
* Northwestern University Libraries is using Jekyll and Bookdown, two open source static site generators, for its digital publishing service.
* By using Python, Alma’s API and much trial and error, the Wartburg College library was able to parse the serial item descriptions into enumeration and chronology data that was uploaded back into Alma.
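As an aside, the second summary above mentions the Habanero library. Here is a minimal, hedged sketch of that general approach, not the article's actual script; the DOI and the CSV columns are my own placeholders:

  import csv
  from habanero import Crossref

  crossref = Crossref()
  dois = ['10.5555/12345678']  # a placeholder DOI, not from the article

  with open('metadata.csv', 'w', newline='') as handle:
      writer = csv.writer(handle)
      writer.writerow(['doi', 'title', 'container'])
      for doi in dois:
          # look up each DOI against the CrossRef API
          message = crossref.works(ids=doi)['message']
          writer.writerow([
              doi,
              (message.get('title') or [''])[0],
              (message.get('container-title') or [''])[0],
          ])

Similarly, the last summary describes parsing item descriptions into enumeration and chronology. Assuming descriptions shaped like "v.12 no.3 (2018)" (my assumption, not Wartburg's actual data), a regular expression does much of the work:

  import re

  description = 'v.12 no.3 (2018)'  # a made-up description string
  pattern = r'v\.(?P<volume>\d+) no\.(?P<issue>\d+) \((?P<year>\d{4})\)'
  match = re.match(pattern, description)
  if match:
      print(match.groupdict())  # {'volume': '12', 'issue': '3', 'year': '2018'}
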
I have temporarily made this evaluation (I call such evaluations "study carrels") available on the Web. [2, 3]
The Reader is far from perfect. While it includes a great deal of latent functionality, it sorely lacks usability. On the other hand, with so much content now available on the Web, I believe just about anyone who reads can benefit from a system which does analysis against a corpus.
[1] The Distant Reader - https://github.com/ericleasemorgan/reader
[2] Code4Lib Journal, Issue 42 "study carrel" - http://dh.crc.nd.edu/tmp/code4lib-journal-042/
[3] summary report - http://dh.crc.nd.edu/tmp/code4lib-journal-042/etc/report.txt
--
Eric Lease Morgan
Digital Initiatives Librarian, Navari Family Center for Digital Scholarship
University of Notre Dame