Thanks for all the input. I've always thought of XML as a transmission
tool, but lately I've been thinking of it more as a
preservation mechanism for data, especially in terms of the institutional
repository movement. However, it seems awkward to deal with it in its
native form, as you also indicate, unless it's completely static and more
closely tied to the content. The projects I've been working on have a
fairly structured format (such as gazetteer and census-type data), so it
seems like sticking with the database for storage might be the best way
to go for now.



Eileen Llona
International Studies Computer Services Librarian
Digital Initiatives, Suzzallo Library
University of Washington Seattle, WA 98195-2900

On Wed, 5 Jan 2005, Automatic digest processor wrote:

> There are 2 messages totalling 123 lines in this issue.
> Topics of the day:
>  1. advice about xml as storage
>  2. pymarc
> ----------------------------------------------------------------------
> Date:    Wed, 5 Jan 2005 10:31:00 -0500
> From:    Eric Lease Morgan <[log in to unmask]>
> Subject: Re: advice about xml as storage
> On Jan 4, 2005, at 5:56 PM, Ed Summers wrote:
>>> Can you offer advice on going/not going fully with XML for the
>>> storage mechanism?
>> I tend to use XML as a transmission format: for serializing the
>> contents of an
>> database for consumption by a third party. Internally I use a
>> relational db as
>> a foundation for services I want to provide. So the rdbms serves as the
>> primary data source.
> I tend to agree with Ed, although this debate has been going on since
> the inception of XML. There is no one correct answer.
> I like using XML as the archival format for my text-based documents.
> This means I like to use TEI and XHTML as the basis of my writings and
> electronic texts. This technique allows me to separate my content from
> a specific application and/or operating system. I should be able to
> read these XML documents for years to come.
> Ironically, I use database applications to build the XML files. I use
> MySQL but stay away from MySQL-isms such as the
> auto-increment feature. This allows me to use things like mysqldump to
> create .sql files. Ideally I should be able to import these .sql
> files into other database applications. In reality, this is not always
> the case unless I tweak some of the SQL commands in the files, but
> since I'm dealing with plain text, this is not too difficult.
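Eric's dump-to-plain-text-SQL habit can be sketched with Python's standard sqlite3 module (substituted here for MySQL so the example is self-contained); `iterdump()` plays the role of mysqldump, and the table and values are invented for illustration:

```python
import sqlite3

# Build a small in-memory database, avoiding vendor-specific
# features (no AUTOINCREMENT) so the dump stays portable.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE texts (id INTEGER, title TEXT)")
conn.execute("INSERT INTO texts VALUES (1, 'Walden')")
conn.commit()

# Serialize the whole database as plain-text SQL statements,
# much as mysqldump writes a .sql file.
dump = "\n".join(conn.iterdump())
print(dump)
```

Because the dump is ordinary SQL text, it can be versioned, archived, or loaded into another engine, with the same caveat Eric notes: some statements may need tweaking for the target database.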
> Like Ed, I use databases to provide services against the data. More
> importantly, databases make it easier to do things like global
> changes and updates. It is more difficult to read bunches o' XML files,
> parse them, update them accordingly, and write them out again.
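The asymmetry Eric points to can be sketched with the standard library (the element names here are invented): a global change against XML files means parse, walk, update, and re-serialize every file, where a database does the same job with a single UPDATE statement:

```python
import io
import xml.etree.ElementTree as ET

# One file's worth of the read-parse-update-write cycle.
doc = "<record><publisher>Acme</publisher></record>"
tree = ET.parse(io.StringIO(doc))
for el in tree.iter("publisher"):
    el.text = "Acme Press"  # the actual "global change"
out = ET.tostring(tree.getroot(), encoding="unicode")
print(out)

# The SQL equivalent is one line, run once for the whole collection:
#   UPDATE records SET publisher = 'Acme Press' WHERE publisher = 'Acme';
```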
> In the XML world there are essentially two types of XML files:
> mixed-content files and non-mixed-content files. Good examples of
> mixed-content files are narrative texts. While still highly structured,
> narrative texts contain a large mixture of XML elements; there is
> relatively little repeating of elements in the same order. Non-mixed
> content has more of a pattern. These files are more akin to data files
> with much more structure. In these cases the content is intended for
> statistical analysis. Average this. Sum that. Etc. Analysis of this
> data is better done in a database application, not necessarily in an
> XML file through XSLT. Put another way, if your data is narrative in
> nature, like stories, consider more strongly saving your data as XML.
> On the other hand, if your data is more statistical in nature, think
> more strongly about using a database.
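The two shapes can be shown with a pair of toy documents (element names and figures invented for illustration); the data-centric one maps directly onto the kind of averaging and summing Eric mentions:

```python
import xml.etree.ElementTree as ET

# Mixed content: text and elements interleave, with little repetition.
mixed = "<p>In <title>Walden</title>, <name>Thoreau</name> writes...</p>"
ET.fromstring(mixed)  # valid, but awkward to treat as rows and columns

# Non-mixed ("data-centric") content: the same elements repeat in the
# same order, like rows in a table.
data = ("<census>"
        "<row><place>Seattle</place><pop>563374</pop></row>"
        "<row><place>Tacoma</place><pop>193556</pop></row>"
        "</census>")
pops = [int(row.findtext("pop")) for row in ET.fromstring(data)]
print(sum(pops))  # "Sum that."
```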
> In short, if you want to preserve your data, then use XML. If you want
> to do a lot of maintenance on the data (changing values, adding new
> content), then use databases. If you want to transform the data into
> other things (reports, printed documents, analysis), then you will
> probably want to use a combination of both technologies.
> HTH.
> --
> Eric Lease Morgan
> ------------------------------
> Date:    Wed, 5 Jan 2005 17:21:25 -0600
> From:    Ed Summers <[log in to unmask]>
> Subject: pymarc
> Over the past few months I've been exploring python, and given my
> background with the perl module MARC::Record I set about writing pymarc
> which (I hope) extracts the essence of MARC::Record but provides a
> pythonic interface.
> The module works (ahem), and has a test suite, but I'm looking for feedback,
> so here's the URL:
> Here are a couple of examples from the docs:
> 1. Reading a batch of records and printing out the 245 subfield a:
>    from pymarc import MARCReader
>    reader = MARCReader( 'test/marc.dat' )
>    for record in reader:
>       print record['245']['a']
> 2. Print multiple fields from a record:
>    print record.fields( '600', '610', '650' )
> 3. Creating a record and writing it out to a file:
>    from pymarc import Record, Field
>    record = Record()
>    record.addField(
>        Field(
>            tag = '245',
>            indicators = ['0','1'],
>            subfields = [
>                'a', 'The pragmatic programmer : ',
>                'b', 'from journeyman to master /',
>                'c', 'Andrew Hunt, David Thomas.' ] ) )
>    out = file( 'file.dat', 'wb' )
>    out.write( record.asMARC21() )
> There's a TODO list of things I'd like the module to do eventually. If
> anyone is interested in helping develop the library please let me know
> and I'll get you cvs (soon to be svn) access.
> Many thanks to Dan Chudnov for the python tips, advice and guidance which
> helped me get this far.
> //Ed
> ------------------------------
> End of CODE4LIB Digest - 4 Jan 2005 to 5 Jan 2005 (#2005-2)
> ***********************************************************