Thanks, Andromeda.  I take your point about thinking carefully through the names.  Calling it a Tree only makes sense if there are Nodes; if I don't use Nodes, a class named Package (for the package of data and metadata) would be a better fit than Tree.  The name helps clarify the mental model.

You make a good point that knowing how to deal with specific metadata types should be a Node responsibility.  I will have to think about making specific subclasses of Node for different metadata types; the idea had occurred to me before while I was mulling this over.  That would make sense, and it is a good argument for using a Node class.
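
To make that concrete for myself, here is a very rough sketch of what a Node hierarchy might look like.  Every name here (Node, XmlNode, SpreadsheetNode, MarcNode, to_ingest_xml) is a placeholder, not a settled design:

    # Rough sketch only; nothing here is settled.
    class Node
      attr_reader :tree, :path

      def initialize(tree, path)
        @tree = tree
        @path = path
      end

      # Each subclass knows how to turn its own metadata format into
      # the XML that the repository's batch ingest expects.
      def to_ingest_xml
        raise NotImplementedError, "#{self.class} must implement #to_ingest_xml"
      end
    end

    class XmlNode < Node
      def to_ingest_xml
        # already XML; apply the appropriate XSL transform
      end
    end

    class SpreadsheetNode < Node
      def to_ingest_xml
        # read the template spreadsheet and build one record per row
      end
    end

    class MarcNode < Node
      def to_ingest_xml
        # map MARC21 fields into the ingest schema
      end
    end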

Let me give a couple examples of the datasets I'm dealing with.
(A)  Once a week, Springer posts a zipfile of open access articles written by our faculty to an FTP site.  The zipfile contains a directory structure eight layers deep.  For each article, there is one XML file at a middle layer and one PDF file at a deeper layer, plus an optional ninth layer containing supplemental files for the article.
(B)  Faculty, students, and departments submit materials to be entered into the repository.  A person in the department, or a liaison on the library reference staff who handles requests from that department, uses a template to create a spreadsheet of the related metadata, one row per title.  The materials may be PDFs, images, videos, or sound recordings.  Catalogers in the library review the spreadsheet, replacing keyword subjects with controlled vocabulary and providing general quality control.  The spreadsheet, PDFs, and media files are placed in a filesystem directory on a remote server, waiting to be loaded.
(C)  The library purchases an electronic title (text or media) which must be hosted locally rather than on the vendor's server.  A cataloger finds or creates a MARC record in WorldCat and downloads it.  The files are placed on Box until they can be loaded into the repository.
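
Each of these would presumably become its own source type in the rewritten toolkit, built on the container subclasses.  Very roughly, with every class name a placeholder:

    # Placeholder container classes, following the Tree-subclass idea
    class Tree; end
    class ZipfileTree < Tree; end
    class DirectoryTree < Tree; end

    # Hypothetical source types for the three examples above
    class SpringerZipSource < ZipfileTree; end             # (A) weekly Springer zipfile from FTP
    class DepartmentSubmissionSource < DirectoryTree; end  # (B) spreadsheet plus PDFs/media on a remote server
    class PurchasedTitleSource < DirectoryTree; end        # (C) MARC record plus files staged on Box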

I don't know whether the toolkit will be able to pull directly from Box, but that would be interesting.  Similarly, having the toolkit pull files directly from an FTP site would be interesting but not strictly necessary.  I definitely want to handle filesystem directories, zipfiles, tarfiles, and possibly other containers; that would spare staff from having to unpack the files themselves.  The batch ingest process within the repository uses a file browser to select an XML metadata file (a custom schema with elements drawn from several schemas) and a list of data files.  So pre-processing needs to convert the submitted metadata into XML, run it through an XSL transform, collect several metadata files into a single XML file if necessary, and reorganize the data files so they can be conveniently found in the file browser.
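
Put together, the pre-processing for a single source might boil down to something like this.  This is purely a sketch of the shape; none of these classes or methods exist yet, and combine_records is a hypothetical helper:

    # Sketch of the overall shape of pre-processing for one source.
    # All names are illustrative placeholders.
    def prepare_for_ingest(source_tree, ingest_tree)
      # convert/normalize each submitted metadata file into ingest XML
      records = source_tree.glob("**/*.xml").map(&:to_ingest_xml)

      # collect the records into the single XML metadata file that the
      # repository's batch ingest file browser expects
      ingest_tree.write("metadata.xml", combine_records(records))

      # reorganize the data files so they are easy to find in the file browser
      source_tree.glob("**/*.{pdf,mov,wav}").each do |node|
        Tree.cp(node, ingest_tree.node("files/#{node.basename}"))
      end
    end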

The absurdity of writing, maintaining, teaching, and following complicated instructions for manually preparing all these materials, and several more like them, is why we wrote the current toolkit.  Much of the conversion processing already exists in the current version of the toolkit, but it needs to be a lot more modular and flexible.  I expect we will continue to get new sources, each with its own quirks.

Ideally, I would write the toolkit so that someone could write libraries to prepare datasets for a different repository system.  I don't know enough about the ingest processes of systems other than ours (Fedora 4 with a Samvera interface and lots of local code), but I will try to make the core functionality as general as I can and keep the repository-specific pieces in separate libraries (there is a rough sketch of that split in the P.S. below).  Going with Nodes sounds like the right direction.  Thank you for your help!

					Steve McDonald
					[log in to unmask]
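
P.S. The core vs. repository-specific split I have in mind looks roughly like this.  All names are placeholders, and the Fedora 4 / Samvera-specific pieces would live in their own library:

    # Sketch: the core toolkit stays repository-agnostic; anything specific
    # to our Fedora 4 / Samvera ingest lives in a separate library.
    module IngestToolkit
      class Packager
        # builds a generic package: normalized metadata plus organized data files
      end
    end

    module LocalRepository   # hypothetical separate gem for our repository
      class Packager < IngestToolkit::Packager
        # overrides only the pieces our repository cares about, e.g. the custom
        # metadata schema and the directory layout the batch ingest file
        # browser expects
      end
    end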


-----Original Message-----
From: Code for Libraries <[log in to unmask]> On Behalf Of Andromeda Yelton
Sent: Wednesday, November 11, 2020 3:42 PM
To: [log in to unmask]
Subject: Re: [CODE4LIB] modeling data and metadata for repository ingest

I think you will be happiest in the long run if Tree exposes an interface that is the same as other interfaces you are familiar with, and it is entirely reasonable for a Node object to 1) exist and 2) know its own path.
Also I think a "copy" method should only copy, not "copy and instantiate"
(if a function is most accurately described with a phrase containing 'and', it wants to be at least two functions). Keeping its responsibilities small will make it easier to write, test, and maintain.

There's something pulling at my brain about this class structure that I can't quite identify without seeing the data, but it is something about the name and responsibilities of Tree. Knowing how to copy is treelike. But knowing how to deal with specific metadata types is possibly more Nodelike?

You say there are lots of possible input types and output types -- what does the part between them look like? Does everything go through some sort of common state? If so, it would make sense for a Node to know how to transform between its content type and that common form, and for Trees to deal only with the common form. Admittedly I cannot imagine what that common form would look like. But otherwise you're writing a fully-connected graph of transforms between everything and everything and you will be extremely sad as this graph grows.

Anyway. I'm not quite sure where I'm going with this, without having the code in front of me. But I think it's worth being very explicit with yourself about what you expect the responsibilities of each class to be, because then you can look at whether those responsibilities make sense, whether the class names correctly describe those sets of responsibilities, and what interfaces you need to expose to make it harmonize.

On Tue, Nov 10, 2020 at 4:34 PM McDonald, Stephen <[log in to unmask]>
wrote:

> Fellow library code wranglers,
>
> Coding questions don't come up often here, but I think this might be 
> the best group to ask, as my question somewhat involves both coding 
> and the nature of metadata and data.  A considerable amount of my work 
> involves ingesting materials into our institutional repository.  We 
> get this material from many sources in many formats: PDF, QuickTime, 
> WAV, etc., with metadata in XML, MARC21, or even spreadsheets.  It 
> might be organized as filesystem directories, zip files, or images with embedded metadata.
> Before loading into the repository, the metadata must be extracted and 
> transformed, and the data files reorganized for convenient ingest.
>
> To make this easier, we have written a toolkit (in Ruby) which handles 
> the conversion.  You select the source type (e.g. zipfile of 
> electronic theses from ProQuest), specify the 
> directory/zipfile/whatever containing the data, and the toolkit 
> executes all the transforms and organizes into a convenient directory 
> structure, ready to ingest into the repository.  The problem is that 
> the code in the toolkit is clunky, making it difficult to add new sources and the needed transformations.
>
> I am rewriting the toolkit from scratch, with a modular design.  I 
> want a consistent set of methods defined in an abstract class for a 
> package of data (which I am calling a Tree), with subclasses defining 
> the exact behavior of the methods for directories, zipfiles, images 
> with embedded metadata, etc.  I'm sure this is familiar to some of 
> you.  A file or directory (or analog) within a Tree is defined as a 
> path from the root of the Tree.
>
> The question I have is the best model to use for the arguments of the 
> methods of this class.  For instance, I want an analog to the copy 
> method, to copy a file from the input Tree to the new ingest Tree.  
> The Ruby filesystem copy method is FileUtils.cp(src, dest).  An analog method 
> would have to specify the input Tree along with the input path, and 
> the output Tree plus the output path.  So I could define the method as 
> Tree.cp(srctree, srcpath, desttree, destpath).  Or I could go a little 
> more abstract and define a class Node which is a combination of a Tree 
> and a path.  Then I could create Tree.cp(srcnode, destnode), which 
> looks more like the familiar filesystem methods.
>
> Does anyone have an opinion on which would be better?  Using Nodes 
> looks a lot cleaner and appeals to my sense of organization.  I will 
> be defining a Tree.glob method, so that should handle instantiating 
> source Nodes, but output Nodes would need to be instantiated.  The 
> first method avoids the complication of instantiating Nodes before 
> using them in copy and move commands.  I'm not sure which would be 
> easier for writing specific ingest routines for a new data source, 
> since someday someone else will have to write them.  Any thoughts?
>
>
>       Steve McDonald
>
>       [log in to unmask]
>


--
Andromeda Yelton
Web Applications Developer, Berkman Klein Center: https://cyber.harvard.edu
Lecturer, San José State University iSchool
http://andromedayelton.com
@ThatAndromeda <http://twitter.com/ThatAndromeda>