How are you getting the metadata streams? Are they actual stream objects, or files, or database dumps, etc?
As for the tools, I have used a number of the ones you listed below. I personally prefer JIRA (and it is free for non-profits). If you are OK with editing in wiki syntax, I would recommend MediaWiki (it is what powers Wikipedia). You could also take a look at continuous-deployment technologies: virtual machines (VirtualBox), Linux containers (Docker), and rapid deployment tools (Ansible, Salt). Of course, if you are making lots of code changes, you will want to test all of this continuously (Jenkins).
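To make the container suggestion concrete: a minimal Dockerfile sketch for packaging a harvesting script so every cycle starts from an identical environment. The script and file names (`harvest.py`, `requirements.txt`) are hypothetical, just for illustration:

```dockerfile
# Hypothetical sketch: pin the pipeline's environment in an image so each
# pass through the harvest/clean/transform loop runs under identical conditions.
FROM python:3
WORKDIR /app
COPY requirements.txt .
RUN pip install -r requirements.txt
COPY harvest.py .
# Each container run is one pass through the cycle.
CMD ["python", "harvest.py"]
```

A CI server like Jenkins can then rebuild and rerun this image on every change, which is where the continuous-testing piece comes in.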
Library of Congress, OSI
From: Code for Libraries [mailto:[log in to unmask]] On Behalf Of davesgonechina
Sent: Tuesday, March 10, 2015 6:05 AM
To: [log in to unmask]
Subject: [CODE4LIB] Data Lifecycle Tracking & Documentation Tools
One of my projects involves harvesting, cleaning, and transforming steady streams of metadata from numerous publishers. It's an infinite loop, but each cycle can differ from the last, slightly or significantly. Many issue-tracking tools are designed for a linear progression that ends in deployment, not a circular workflow, and I haven't hit upon a tool or strategy that really fits.
The best illustration I've found so far of the type of workflow I'm talking about is the DCC Curation Lifecycle Model <http://www.dcc.ac.uk/sites/default/files/documents/publications/DCCLifecycle.pdf>
Here are some things I've tried or thought about trying:
- Git comments
- Github Issues
- MySQL comments
- Bash script logs
- Request Tracker
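Building on the script-logs idea above, one low-tech way to track a circular workflow is to append one structured log line per stage per cycle, so every pass around the loop stays queryable later. A minimal sketch (the `log_stage` helper and its field names are invented for illustration, not from any of the tools listed):

```python
import datetime
import io
import json

def log_stage(stream, cycle_id, stage, detail):
    """Append one JSON line describing a single pipeline stage in a cycle."""
    record = {
        "cycle": cycle_id,
        "stage": stage,
        "detail": detail,
        "at": datetime.datetime.now(datetime.timezone.utc).isoformat(),
    }
    stream.write(json.dumps(record) + "\n")

# One full trip around the loop for a hypothetical publisher feed:
log = io.StringIO()
for stage in ("harvest", "clean", "transform", "load"):
    log_stage(log, cycle_id=42, stage=stage, detail="publisher=acme")

print(len(log.getvalue().splitlines()))  # → 4 lines, one per stage
```

Because each line is self-describing JSON, the same file can record hundreds of non-identical cycles and still be filtered by cycle, stage, or publisher after the fact.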