Regarding the latest discussion on db's vs. xml, I have some more questions
(almost on the newbie level). I have some data that currently resides in a
relational db, and I'd like to make it available over the web. In the
past, I've used MySQL and PHP, or Perl and flat text files. Given present
infrastructure constraints, I'd like to use XML for the storage mechanism
instead of a database, but I'm wondering what the constraints are of
having the data stored in XML. I've been tinkering with Perl to do the
searching of the XML file, and can see how this would work in conjunction
with the CGI module; however, I'm wondering if Postgres or MySQL with
PHP/Perl is still the way to go. What I'm gleaning from the recent
discussion is that at least some of you are using a combination of the db
and xml. Can you offer advice on going/not going fully with XML as the
storage mechanism?

Thanks much for any advice.

Eileen Llona
International Studies Computer Services Librarian
Digital Initiatives, Suzzallo Library
University of Washington Seattle, WA 98195-2900

On Mon, 20 Dec 2004, Automatic digest processor wrote:

> There are 4 messages totalling 137 lines in this issue.
>
> Topics of the day:
>
>  1. 2 db || ~2 db [indexing] (4)
>
> ----------------------------------------------------------------------
>
> Date:    Mon, 20 Dec 2004 12:42:42 -0500
> From:    Eric Lease Morgan <[log in to unmask]>
> Subject: Re: 2 db || ~2 db [indexing]
>
> On Dec 17, 2004, at 12:50 PM, Clay Redding wrote:
>
>> What you describe is very close to what I've done with my Postgres
>> solution to search some EAD docs using Perl/CGI.  The XML starts on
>> the filesystem.  I then index it with swish-e and insert the XML blob
>> into Postgres, since swish-e isn't entirely XML aware.  In case I need
>> extra ability to deliver XML text fragments to enrich the output of
>> my HTML in the CGI, I use the Postgres/Throwingbeans XPath
>> functionality with a simple SQL select.  The database really does very
>> little in my app (it's only one table, actually) -- it's swish-e that
>> drives it, and it's really fast.
>
> This is interesting, very.
>
> Yes, I intend to index entire works with swish-e. Searches against
> swish-e indexes return pointers to entire documents or keys to
> databases. Consequently, unless I index bunches o' paragraphs as
> individual documents, it will be difficult to use swish-e as my indexer
> as well as return paragraphs/lines from my texts. The idea of using
> XPATH queries to extract particular paragraphs from texts is
> intriguing. 'Food for thought. Thank you.
>
> --
> Eric Morgan
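
[Editor's aside: the XPath-extraction idea Eric finds intriguing can be
sketched in a few lines. The thread's actual stack was Perl plus the
Postgres XPath contrib; this is a minimal Python stand-in, with a made-up
two-paragraph document, showing how a search hit could come back as a
single paragraph rather than the whole work.]

```python
# Minimal sketch: pull one paragraph out of an XML document by attribute,
# so a search result can return a fragment instead of the entire text.
# The document and its element names are hypothetical.
import xml.etree.ElementTree as ET

doc = """<text>
  <p n="1">Call me Ishmael.</p>
  <p n="2">Some years ago, never mind how long precisely.</p>
</text>"""

root = ET.fromstring(doc)
# ElementTree supports a limited XPath subset; [@n='2'] selects by attribute.
para = root.find(".//p[@n='2']")
print(para.text)
```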
>
> ------------------------------
>
> Date:    Mon, 20 Dec 2004 12:18:52 -0600
> From:    Chuck Bearden <[log in to unmask]>
> Subject: Re: 2 db || ~2 db [indexing]
>
> On Mon, 20 Dec 2004 12:42:42 -0500, Eric Lease Morgan <[log in to unmask]> wrote:
>> [quoted message snipped]
>
> One could think of the XPath expressions pointing to retrievable
> chunks of XML as analogous to database keys.  That's how I was viewing
> them in my hypothetical (Lucene && (eXist || ThrowingBeans)) solution.
>
> Chuck
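
[Editor's aside: Chuck's analogy -- an XPath expression stored per hit,
playing the role of a database key -- can be sketched like this. The
document and the `fetch` helper are hypothetical; in the thread's
hypothetical stack the index would be Lucene and the store eXist or the
Postgres XPath contrib.]

```python
# Sketch of "XPath expression as database key": the index stores an
# XPath-like string for each indexed chunk; retrieval evaluates the
# stored path against the source document to get the fragment back.
import xml.etree.ElementTree as ET

doc = ET.fromstring(
    "<ead><did><unittitle>Papers, 1900-1950</unittitle></did>"
    "<dsc><c01><did><unittitle>Correspondence</unittitle></did></c01></dsc></ead>"
)

# The "index": each hit id maps to the path key of its chunk.
index = {
    "hit-1": ".//dsc/c01/did/unittitle",
}

def fetch(hit_id):
    """Resolve a stored path key back to the text of its XML fragment."""
    return doc.find(index[hit_id]).text

print(fetch("hit-1"))
```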
>
> ------------------------------
>
> Date:    Mon, 20 Dec 2004 15:45:30 -0500
> From:    Art Rhyno <[log in to unmask]>
> Subject: Re: 2 db || ~2 db [indexing]
>
> One other variation with Lucene is to use a relational database underneath
> Cocoon and index a view of the content that pulls out the XML in the
> blob, along with any other data in the database tables that fits. I think
> this would let you use Cocoon's scheduler to keep the index up to date,
> use database pooling and caching for throughput, and insert other kinds
> of content into the pipeline if it made sense, e.g. comments from a
> website. It used to be 30 to 40% slower to deliver images from MySQL as a
> blob than directly from disk, which might argue for the need for pooling
> and caching for whatever blob-like field holds something like EAD
> content, though I haven't seen figures on this in a long time, and
> network latency probably obliterates all other factors anyway.
>
> art
>
> ------------------------------
>
> Date:    Mon, 20 Dec 2004 21:17:26 -0500
> From:    Walter Lewis <[log in to unmask]>
> Subject: Re: 2 db || ~2 db [indexing]
>
> Chuck Bearden wrote:
>
> > One could think of the XPath expressions pointing to retrievable
> > chunks of XML as analogous to database keys.  That's how I was viewing
> > them in my hypothetical (Lucene && (eXist || ThrowingBeans)) solution.
>
> For one of my solutions using TEI documents and swish, the public
> interface is designed to deliver "chapters" or "sections" of the work to
> the public.  The "key" to the HTML page is the "ID" of the appropriate
> <div> in TEI.
>
> I wrote a routine in Perl that simply extracted that part of the XML
> document and fed it to swish-e, with the appropriate ID anchoring the
> Swishpath/URL string. (There was, in fact, an XML config file that told
> the routine where to find the documents in the file system and supplied
> a couple of other useful values.)
>
> If you
>     give the paragraph tags an ID attribute,
>     feed it to swish-e one unit at a time,
>     set up the path as http://yoururl#p=[id attribute]
>     add <a name="[id attribute]" /> inside your <p>s
> then you should have a swish index with a path pointing to anchors
> attached to individual paragraphs of the document.
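
[Editor's aside: Walter's recipe can be sketched as follows. This Python
stand-in (the original routine was Perl) builds the per-paragraph records
you would feed to swish-e; the input document and the `http://yoururl`
base are placeholders from the message, and swish-e itself is not invoked.]

```python
# Sketch of the recipe above: each <p> carries an id attribute; emit one
# record per paragraph whose path uses a URL fragment anchoring to that
# paragraph, so a hit points at the paragraph rather than the document.
import xml.etree.ElementTree as ET

BASE = "http://yoururl"  # placeholder base URL, as in the message

doc = ET.fromstring(
    '<div><p id="p1">First paragraph.</p><p id="p2">Second paragraph.</p></div>'
)

records = []
for p in doc.findall(".//p[@id]"):
    records.append({
        "path": f"{BASE}#p={p.get('id')}",  # the Swishpath/URL string
        "text": p.text,                     # the unit indexed on its own
    })

for r in records:
    print(r["path"], "->", r["text"])
```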
>
> One of the issues is that the weights for relevance will be different at
> the paragraph level than at the article/chapter/section levels.  On the
> other hand, this might not be a bad thing.
>
> Walter Lewis
> Halton Hills
>
> ------------------------------
>
> End of CODE4LIB Digest - 17 Dec 2004 to 20 Dec 2004 (#2004-60)
> **************************************************************
>