At my previous job we often had to ingest and support several versions of files from multiple outside data providers. One approach we took was the "supportable" pattern: basically, we would create one singleton component per file type/version, with two required methods.
1) boolean supports(file) - the singleton did whatever checks were necessary to determine whether it could handle that file
2) void process(file) - the singleton did its transformation magic. We ended up transforming the files into objects that we then stored in our database; your requirements might be different.
What was really nice about this was that it made the components super easy to unit test, and it let the Spring framework automatically collect a list of these singletons at startup so we could simply loop over them for each file (there's a rough sketch below). When performance became an issue it was also easy to run them in separate threads since the singletons didn't keep any state.
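Here is a rough sketch of what that pattern might look like with Java and Spring. The interface and class names are made up for illustration, not our actual code:

// FileIngester.java - the two-method contract every "supportable" singleton implements
import java.nio.file.Path;

public interface FileIngester {

    // Do whatever is necessary to decide if this ingester can handle the file
    // (check the extension, sniff the header row, look at the root element, etc.)
    boolean supports(Path file);

    // Transform the file; for us that meant mapping it to objects and storing
    // them in the database
    void process(Path file);
}

// PublisherCsvV2Ingester.java - one singleton per file type/version; Spring
// picks it up automatically because of @Component
import java.nio.file.Path;
import org.springframework.stereotype.Component;

@Component
public class PublisherCsvV2Ingester implements FileIngester {

    @Override
    public boolean supports(Path file) {
        // placeholder check: real code would also verify the expected header row
        return file.getFileName().toString().endsWith(".csv");
    }

    @Override
    public void process(Path file) {
        // parse the rows, map them to domain objects, persist them...
    }
}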
I know that XML can conform to a specific schema. Do you have any formal schemas for the files you are ingesting? If so, is there any way to enforce that the files follow them?
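For XML at least, if the provider publishes an XSD you can validate each incoming file against it with the standard javax.xml.validation API before you even try to process it. A minimal sketch (file names are placeholders):

// SchemaCheck.java - returns true only if the XML file conforms to the given XSD
import java.io.File;
import java.io.IOException;
import javax.xml.XMLConstants;
import javax.xml.transform.stream.StreamSource;
import javax.xml.validation.Schema;
import javax.xml.validation.SchemaFactory;
import javax.xml.validation.Validator;
import org.xml.sax.SAXException;

public class SchemaCheck {

    public static boolean conformsTo(File xmlFile, File xsdFile) throws IOException {
        try {
            SchemaFactory factory =
                    SchemaFactory.newInstance(XMLConstants.W3C_XML_SCHEMA_NS_URI);
            Schema schema = factory.newSchema(xsdFile);
            Validator validator = schema.newValidator();
            validator.validate(new StreamSource(xmlFile));
            return true;                    // the file follows the schema
        } catch (SAXException e) {
            return false;                   // validation failed - reject or flag for review
        }
    }
}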
As for needing a human to quality-review the ingest: if you loop through all the singletons and none of them applies, you know the file needs human review. The code could put that file in a list and present it via some web interface (RESTful or GUI based).
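Putting the two ideas together, the dispatch loop plus the human-review fallback might look something like this (again just a sketch; ReviewQueue stands in for whatever list or web interface you surface to humans):

// ReviewQueue.java - hypothetical holding area for files that need manual review,
// e.g. backed by a database table and exposed via a RESTful endpoint or a GUI
import java.nio.file.Path;

public interface ReviewQueue {
    void add(Path file);
}

// IngestDispatcher.java - Spring injects every FileIngester bean into the list
import java.nio.file.Path;
import java.util.List;
import org.springframework.stereotype.Service;

@Service
public class IngestDispatcher {

    private final List<FileIngester> ingesters;
    private final ReviewQueue reviewQueue;

    public IngestDispatcher(List<FileIngester> ingesters, ReviewQueue reviewQueue) {
        this.ingesters = ingesters;
        this.reviewQueue = reviewQueue;
    }

    public void ingest(Path file) {
        for (FileIngester ingester : ingesters) {
            if (ingester.supports(file)) {
                ingester.process(file);
                return;
            }
        }
        // no singleton supported the file, so hand it off for human review
        reviewQueue.add(file);
    }
}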
As for the wiki, we found that a good front page linking to the most-used pages worked best. From what I recall MediaWiki was "ok" at search but not the best. We had an understanding that if ANYONE found something wrong or duplicated, it was up to them to fix it (including new members of the team). But even in the worst case of it being disorganized, I would still take that over only a select few people knowing the information. Communication is key, so if you find the wiki becoming a mess, perhaps that is an indication of a communication gap. To combat this we made it mandatory to be in the "dev" chat room on our IM client. People would often post links to the wiki in the chat room, which let anyone point out duplicates.
Anyway, these are just my ramblings as a developer. Take them with a grain of salt; your mileage may vary.
John
-----Original Message-----
From: Code for Libraries [mailto:[log in to unmask]] On Behalf Of davesgonechina
Sent: Tuesday, March 10, 2015 10:35 PM
To: [log in to unmask]
Subject: Re: [CODE4LIB] Data Lifecycle Tracking & Documentation Tools
Hi John,
Good question - we're taking in XLS, CSV, JSON, XML, and on a bad day PDF, in varying file sizes, each requiring different transformation and audit strategies, on both regular and irregular schedules. New batches often feature schema changes requiring modifications to the ingest procedures, which we're trying to automate as much as possible but which obviously require a human chaperone.
Mediawiki is our default choice at the moment, but then I would still be looking for a good workflow management model for the structure of the wiki, especially since in my experience wikis are often a graveyard for the best intentions.
Dave
On Tue, Mar 10, 2015 at 8:10 PM, Scancella, John <[log in to unmask]> wrote:
> Dave,
>
> How are you getting the metadata streams? Are they actual stream
> objects, or files, or database dumps, etc?
>
> As for the tools, I have used a number of the ones you listed below. I
> personally prefer JIRA (and it is free for non-profits). If you are ok
> with editing in wiki syntax I would recommend MediaWiki (it is what
> powers Wikipedia). You could also take a look at continuous deployment
> technologies like virtual machines (VirtualBox), Linux containers
> (Docker), and rapid deployment tools (Ansible, Salt). Of course if you
> are doing lots of code changes you will want to test all of this continually (Jenkins).
>
> John Scancella
> Library of Congress, OSI
>
> -----Original Message-----
> From: Code for Libraries [mailto:[log in to unmask]] On Behalf
> Of davesgonechina
> Sent: Tuesday, March 10, 2015 6:05 AM
> To: [log in to unmask]
> Subject: [CODE4LIB] Data Lifecycle Tracking & Documentation Tools
>
> Hi all,
>
> One of my projects involves harvesting, cleaning and transforming
> steady streams of metadata from numerous publishers. It's an infinite
> loop but every cycle can be a little bit or significantly different.
> Many issue tracking tools are designed for a linear progression that
> ends in deployment, not a circular workflow, and I've not hit upon a
> tool or use strategy that really fits.
>
> The best illustration I've found so far of the type of workflow I'm
> talking about is the DCC Curation Lifecycle Model
> <http://www.dcc.ac.uk/sites/default/files/documents/publications/DCCLifecycle.pdf>.
>
> Here are some things I've tried or thought about trying:
>
> - Git comments
> - Github Issues
> - MySQL comments
> - Bash script logs
> - JIRA
> - Trac
> - Trello
> - Wiki
> - Unfuddle
> - Redmine
> - Zendesk
> - Request Tracker
> - Basecamp
> - Asana
>
> Thoughts?
>
> Dave
>