With JSTOR and HathiTrust I don't think it's an accident. If you want to do TDM with their stuff you need to go through official channels (which you have done with Hathi, I know). 

Take, on the other hand, Wiley vs. ACS for chemistry content. Wiley has a click-through agreement administered (I guess that's the term) by CrossRef. Three seconds later you're on your way with a nice, clean, well-documented API. In contrast, if you can get hold of your ACS rep, you can beg and beg and beg, and then send them a list of DOIs of interest, and they'll package and deliver the full text in XML for TDM. 
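
For what it's worth, the Crossref route can be scripted in a handful of lines. This is only a sketch: the /works route and the link metadata are real parts of the Crossref REST API, but the DOI and token values are placeholders, and the exact header a given publisher honors is something you'd confirm when you sign the click-through agreement.

  # look up a work's text-mining link via the Crossref REST API,
  # then fetch it with a click-through token in the request header
  DOI='10.1002/example.12345'        # placeholder DOI
  TOKEN='your-clickthrough-token'    # placeholder token

  URL=$(curl -s "https://api.crossref.org/works/$DOI" \
    | jq -r '.message.link[] | select(."intended-application" == "text-mining") | .URL' \
    | head -n 1)

  curl -L -H "CR-Clickthrough-Client-Token: $TOKEN" -o fulltext.xml "$URL"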

So... it's not *technology* 

Probably the Theological folks never guessed that's what you want to do. So not malicious, just annoying and short-sighted.

All these are my opinions and do not represent my employer, of course. Also, I am NOT endorsing any for-profit publishers. 

Christina

-----Original Message-----
From: Code for Libraries <[log in to unmask]> On Behalf Of Eric Lease Morgan
Sent: Wednesday, February 05, 2020 6:02 PM
To: [log in to unmask]
Subject: [EXT] [CODE4LIB] getting content

Do you find it difficult to get content? I do, and I sometimes feel as if I've been sold a bill of goods.

With the advent of the Internet (and Google), it is relatively easy to find content, but it is still very difficult to actually get content, especially at scale; content is very often hidden behind obscure links, splash pages, etc.

Take, for example, a mature open access publication with all the right intentions, Theological Librarianship. [1] There you will find a pointer to the current issue and links to the archives. Cool & wonderful. But what are the actual links (URLs) to the articles? What is a link that will actually download an article to my desktop? I want to "save the time of the reader" and share a link with my colleague. Go ahead. Try to figure it out. I'll wait...

"So what?", you might say. Yes, but what if I want to download the whole of Theological Librarianship for the purposes of distant reading? What if I want to study trends in the journal? What if I want to compare & contrast Theological Librarianship with other open access publications? Downloading all of those articles one by one would deter me from ever getting started. In the past I could go to the shelf, see all the bound issues, and begin to read.

Got tired of looking for the links? Well, the links look like this, and there are about 350 of them:

  https://theolib.atla.com/theolib/article/download/14/407
  https://theolib.atla.com/theolib/article/download/17/403
  https://theolib.atla.com/theolib/article/download/18/424
  https://theolib.atla.com/theolib/article/download/19/410
  https://theolib.atla.com/theolib/article/download/20/426
  ...

Given such a list saved in a file, it is trivial to download all 350 PDF documents in less than 60 seconds. [2]
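
A slightly gentler variation on [2] lets wget read the list directly, wait a second between requests, and keep whatever filenames the server suggests (--input-file, --wait, and --content-disposition are all standard wget options):

  # read URLs from the file, pause between requests,
  # and honor server-suggested filenames
  wget --input-file=urls.txt --wait=1 --content-disposition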

Suppose you maintain an institutional repository. Suppose it supports search. Do the search results point to the actual identified items, or do they point to some sort of "splash" page or "about" page? Again, for single items splash pages are not bad things, but what if I want to download all the preprints from a specific author, department, or school? What if I want to use & understand the whole of the College of Arts & Letters dissertation output? What if you wanted to download all those images, drop them into a machine learning process, and output metadata tags? Your research is stymied because, while you can find the content, you cannot actually get it.

The HathiTrust is FULL of content. Do a cool search. List the results. Show me the links to download even the plain text (OCR) versions of the open access content. They don't exist. Instead, one must identify a given book's key and then programmatically download each page of the document, one by one. [3]
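
In outline, that page-by-page process looks like the sketch below. The loop is the point; the URL is only a placeholder, and the hack in [3] is a working version that talks to the actual HathiTrust interface.

  # given a volume identifier and a page count, fetch each page's OCR
  # one request at a time and append it to a single text file
  ID='some.volume.identifier'    # placeholder volume identifier
  PAGES=250                      # placeholder page count

  for SEQ in $(seq 1 $PAGES); do
    curl -s "https://example.org/page-ocr?id=$ID&seq=$SEQ" >> "$ID.txt"
    sleep 1                      # be polite; one request per second
  done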

Our licensed databases are just as bad, if not worse. For example, do a cool search against JSTOR. Get a list of results. Go the extra step and use some sort of browser extension to list all the URLs on a given page. [4] Copy the list. Paste it into a text editor. Sort the list. Remove the duplicates. Remove all the navigation links, and eventually discover that links to documents look like this:

  https://www.jstor.org/stable/j.ctvpg85k6.22

Fine, but when you go there you are presented with a splash page and another link:

  https://www.jstor.org/stable/pdf/j.ctvd58v2r.6.pdf

So you get smart, and you perform a find/replace operation against your links to point to the PDF files, but when you go to these links you are presented with a challenge which is (by design) very difficult to circumvent. By this time you are tired and give up. But still, you have done the perfect search and identified the perfect set of twenty-five articles, and despite all of this cool Internet hype, you cannot get the content.
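
For the record, the clean-up itself is entirely scriptable. Assuming the copied links were pasted into a file named links.txt, something like the following reproduces the sorting, de-duplication, and find/replace steps; whether JSTOR will actually serve the resulting PDF links is, as noted, another matter.

  # sort, de-duplicate, keep only the stable links, and rewrite them
  # into their PDF form (insert 'pdf/' and append '.pdf')
  sort links.txt \
    | uniq \
    | grep 'jstor.org/stable/' \
    | sed 's|/stable/|/stable/pdf/|; s|$|.pdf|' \
    > pdf-links.txt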

Other examples are innumerable. 

With the advent of the Internet I feel as if we have gone one step forward and half a step back. "Look at all the glorious content that is here, but you can't have it... unless you pay." We pay in terms of time, energy, or real money, and even then it is not enough. Intellectually I understand, especially from a for-profit publisher's point of view, but I don't think this makes very much sense when it comes to content from libraries, other cultural heritage institutions, publishers who simply want to get their content out there, or content which was licensed and paid for.

As people who do Internet stuff in libraries, I think we can do better. 


Links

[1] Theological Librarianship - https://theolib.atla.com/theolib

[2] cat urls.txt | parallel wget

[3] A hack to do just this is located at https://github.com/ericleasemorgan/htid2books

[4] For example, Link Grabber at https://chrome.google.com/webstore/detail/link-grabber/caodelkhipncidmoebgbbeemedohcdma  which is simply a really bad URL in and of itself


--
Eric Lease Morgan, Librarian