> From: Code for Libraries [mailto:[log in to unmask]] On Behalf Of
> Bill Dueber
> Sent: Friday, March 05, 2010 05:22 PM
> To: [log in to unmask]
> Subject: Re: [CODE4LIB] Q: XML2JSON converter
> 
> This is my central point. I'm actually saying that JSON streaming is
> painful and rare enough that it should be avoided as a requirement for
> working with any new format.

OK, in principle we are in agreement here.

> I guess, in sum, I'm making the following assertions:
> 
> 1. Streaming APIs for JSON, where they exist, are a pain in the ass. And
> they don't exist everywhere. Without a JSON streaming parser, you have to
> pull the whole array of documents up into memory, which may be impossible.
> This is the crux of my argument -- if you disagree with it, then I would
> assume you disagree with the other points as well.

Agree that streaming APIs for JSON are a pain and are not universal across all clients.

Agree that without a streaming API you are limited by memory constraints on the client. 
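
To make the memory constraint concrete, here is a minimal sketch in Python of what a non-streaming parse looks like; the file name and record key are hypothetical, and the point is simply that the whole collection must be materialized before you can touch the first record:

    import json

    # Without a streaming parser the only option is to load the whole
    # document: every record in the collection sits in memory before we
    # can look at even the first one.
    with open("marc_records.json", "r", encoding="utf-8") as fh:
        records = json.load(fh)   # memory use grows with the file size

    for record in records:
        # record-level processing only starts after the full load succeeds
        print(record.get("leader"))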

> 2. Many people -- and I don't think I'm exaggerating here, honestly --
> really don't like using MARC-XML but have to because of the length
> restrictions on MARC-binary. A useful alternative, based on dead-easy
> parsing and production, is very appealing.

I cannot address this concern.  MARC (ISO 2709) and MARC-XML are library community standards.  It doesn't matter whether I like them or you like them; this is what the library community has agreed to as a communications format for interoperability between systems.

> 2.5 Having to deal with a streaming API takes away the "dead-easy" part.

My assumption is that 2.5 refers to using a streaming API with MARC-XML.  I agree that using SAX on MARC-XML is a pain, but that is an issue with processing large XML datasets in general and has nothing to do with MARC-21.  When I do process large MARC-XML with SAX, I use it to pull out one complete record at a time and then work at the record level; that isn't too bad, but I'll concede it's still a pain.  These days I usually break large datasets into 10,000-record chunks and process them that way, since most XML and XSLT tools cannot effectively deal with documents of 100MB or larger, so I rarely use SAX anymore.
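
For what it's worth, here is a minimal sketch of that record-at-a-time approach in Python, using xml.etree.ElementTree.iterparse rather than raw SAX; the input file name is hypothetical, and the namespace is the standard MARC-XML slim namespace:

    import xml.etree.ElementTree as ET

    MARC_NS = "{http://www.loc.gov/MARC21/slim}"

    def records(path):
        """Yield one complete MARC-XML record element at a time."""
        context = ET.iterparse(path, events=("start", "end"))
        _, root = next(context)                  # the <collection> element
        for event, elem in context:
            if event == "end" and elem.tag == MARC_NS + "record":
                yield elem
                root.clear()                     # drop processed records from the tree

    for rec in records("bibs.xml"):              # hypothetical input file
        leader = rec.find(MARC_NS + "leader")
        # ... record-level processing here ...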

> 3. If you accept my assertions about streaming parsers, then dealing with
> the format you've proposed for large sets is either painful (with a
> streaming API) or impossible (where such an API doesn't exist) due to
> memory constraints.

Large datasets, period, are a pain to deal with.  I work with them all day long and have to contend with tool issues, disk space, processing times, etc.  I don't disagree with you here in principle, but as I previously pointed out, this is an API issue.

If your API never returns a collection of more than 10 records, which is less than 1MB, you are not dealing with large datasets.  If your API is returning a collection of records that is 100MB or larger, then you have a problem and need to rethink your API.
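
As a sketch of what I mean by fixing this at the API level, here is paging in Python against a hypothetical endpoint with hypothetical start/count parameters; the client only ever holds one small, easily parsed JSON collection in memory at a time:

    import json
    import urllib.request

    # Hypothetical endpoint and paging parameters; the shape is what
    # matters, not the specific API.
    BASE = "http://example.org/api/records?start={start}&count={count}"
    PAGE_SIZE = 10   # keep each response small (well under 1MB)

    def pages():
        start = 0
        while True:
            url = BASE.format(start=start, count=PAGE_SIZE)
            with urllib.request.urlopen(url) as resp:
                batch = json.load(resp)
            if not batch:            # an empty collection means we are done
                break
            yield batch
            start += PAGE_SIZE

    for batch in pages():
        for record in batch:
            pass                     # record-level processing here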

This is no different than a large MARC-XML collection.  The entire LC authority dataset, names and subjects, is 8GB of MARC-XML.  Do I process it as one 8GB document?  Heck no!  I break it up into smaller chunks and process the chunks.  This lets me run parallel algorithms on those chunks, or throw them at our cluster, and get the results back more quickly.
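
Here is a minimal sketch of that chunking step in Python, again with iterparse; the input and output file names are hypothetical, and CHUNK_SIZE matches the 10,000-record chunks mentioned above:

    import xml.etree.ElementTree as ET

    MARC_NS = "http://www.loc.gov/MARC21/slim"
    ET.register_namespace("", MARC_NS)           # serialize with the default namespace
    CHUNK_SIZE = 10000

    def write_chunk(records, n):
        coll = ET.Element("{%s}collection" % MARC_NS)
        coll.extend(records)
        ET.ElementTree(coll).write("chunk-%05d.xml" % n,
                                   encoding="UTF-8", xml_declaration=True)

    def split(path):
        chunk, n = [], 0
        context = ET.iterparse(path, events=("start", "end"))
        _, root = next(context)                  # the big <collection> element
        for event, elem in context:
            if event == "end" and elem.tag == "{%s}record" % MARC_NS:
                chunk.append(elem)               # hold at most CHUNK_SIZE records
                if len(chunk) == CHUNK_SIZE:
                    write_chunk(chunk, n)
                    chunk, n = [], n + 1
                    root.clear()                 # drop the written records from the tree
        if chunk:
            write_chunk(chunk, n)

    split("lc-authorities.xml")                  # hypothetical 8GB input file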

It's the size of the data that is the crux of your argument, not the format of the data, e.g., XML, JSON, CSV, etc.

> 4. Streaming JSON writer APIs are also painful; everything that applies to
> reading applies to writing. Sans a streaming writer, trying to *write* a
> large JSON document also results in you having to have the whole thing in
> memory.

No disagreement here.
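
To illustrate, this is roughly what you end up hand-rolling when no streaming writer exists: emit the brackets and commas yourself and serialize one record at a time.  A minimal Python sketch, assuming records is any iterable of dicts; it works, but it is exactly the kind of fiddly code that makes the whole thing a pain:

    import json

    def write_json_array(records, path):
        """Write records as a single valid JSON array without holding
        the whole collection in memory."""
        with open(path, "w", encoding="utf-8") as out:
            out.write("[")
            for i, record in enumerate(records):
                if i:
                    out.write(",")
                out.write(json.dumps(record))   # one record in memory at a time
            out.write("]")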
 
> 5. People are going to want to deal with this format, because of its
> benefits over marc21 (record length) and marc-xml (ease of processing),
> which means we're going to want to deal with big sets of data and/or
> dump batches of it to a file. Which brings us back to #1, the pain or
> absence of streaming apis.

So we are back to the general argument that large datasets, regardless of format, are a pain to deal with, and that tool sets have trouble handling them.  I don't disagree with these statements; I run into these issues daily, whether dealing with MARC datasets or other large datasets.  One solution is to create batches that can be processed in parallel.  Ever heard of Google and map-reduce? :)
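
A minimal sketch of that batch approach in Python, with multiprocessing standing in for a real map-reduce cluster; the chunk file pattern and the per-chunk work are hypothetical placeholders:

    import glob
    from multiprocessing import Pool

    def process_chunk(path):
        """'Map' step: do the record-level work on one 10,000-record chunk."""
        count = 0
        # ... parse the chunk (e.g. with the iterparse reader sketched above) ...
        return count

    if __name__ == "__main__":
        chunks = sorted(glob.glob("chunk-*.xml"))
        with Pool() as pool:
            counts = pool.map(process_chunk, chunks)   # chunks run in parallel
        print(sum(counts))                             # 'reduce' step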

> "Write a better JSON parser/writer" or "use a different language" seem
> like
> bad solutions to me, especially when a (potentially) useful alternative
> exists.

OK, I'll bite.  You stated:

1. That large datasets are a problem.
2. That streaming APIs are a pain to deal with.
3. That tool sets have memory constraints.

So how do you propose to process large JSON datasets that:

1. Comply with the JSON specification.
2. Can be read by any JavaScript/JSON processor.
3. Do not require the use of a streaming API.
4. Do not exceed the memory limitations of current JSON processors.

I'm open to any suggestions that comply with standards or work within a standard's extension framework. 


Andy.