LISTSERV mailing list manager LISTSERV 16.5



CODE4LIB Archives



CODE4LIB@LISTS.CLIR.ORG



CODE4LIB March 2010

Subject: Re: Q: XML2JSON converter
From: "Houghton,Andrew" <[log in to unmask]>
Reply-To: Code for Libraries <[log in to unmask]>
Date: Sat, 6 Mar 2010 13:57:17 -0500
Content-Type: text/plain
Parts/Attachments: text/plain (99 lines)

> From: Code for Libraries [mailto:[log in to unmask]] On Behalf Of
> Bill Dueber
> Sent: Friday, March 05, 2010 08:48 PM
> To: [log in to unmask]
> Subject: Re: [CODE4LIB] Q: XML2JSON converter
> 
> On Fri, Mar 5, 2010 at 6:25 PM, Houghton,Andrew <[log in to unmask]>
> wrote:
> 
> > OK, I will bite, you stated:
> >
> > 1. That large datasets are a problem.
> > 2. That streaming APIs are a pain to deal with.
> > 3. That tool sets have memory constraints.
> >
> > So how do you propose to process large JSON datasets that:
> >
> > 1. Comply with the JSON specification.
> > 2. Can be read by any JavaScript/JSON processor.
> > 3. Do not require the use of streaming API.
> > 4. Do not exceed the memory limitations of current JSON processors.
> >
> >
> What I'm proposing is that we don't process large JSON datasets; I'm
> proposing that we process smallish JSON documents one at a time by
> pulling
> them out of a stream based on an end-of-record character.
> 
> This is basically what we use for MARC21 binary format -- have a
> defined
> structure for a valid record, and separate multiple well-formed record
> structures with an end-of-record character. This preserves JSON
> specification adherence at the record level and uses a different scheme
> to represent collections. Obviously, MARC-XML uses a different 
> mechanism to define a collection of records -- putting well-formed 
> record structures inside a <collection> tag.
> 
> So... I'm proposing we define what we mean by a single MARC record
> serialized to JSON (in whatever format; I'm not very opinionated 
> on this point) that preserves the order, indicators, tags, data, 
> etc. we need to round-trip between marc21binary, marc-xml, and 
> marc-json.
> 
> And then separate those valid records with an end-of-record character
> -- "\n".

OK, what I see here are divergent use cases and a willingness in the library community to break existing Web standards.  This is how the library community makes its data more difficult to use and erects additional barriers for people and organizations entering its market: library-centric protocols and standards.

If I were to try to sell this idea to the Web community at large, and tell them that when they send an HTTP request with an Accept: application/json header to our services, our services will respond with a 200 HTTP status and deliver malformed JSON, I would be immediately impaled with multiple arrows and daggers :(  Not to mention that OCLC would be disparaged by a certain crowd in their blogs as idiots who cannot follow standards.

OCLC's goal is to use and conform to Web standards so that library data is easier to use for people and organizations outside the library community; otherwise libraries and their data will become irrelevant.  The JSON serialization is a standard, and the Web community expects that when they make HTTP requests with an Accept: application/json header they will get back JSON conforming to that standard.  JSON's main use case is AJAX scenarios, where you are not supposed to be sending megabytes of data across the wire.

Your proposal is asking me to break a widely deployed Web standard that AJAX frameworks use to access millions (OK, many) of Web sites.

> Unless I've read all this wrong, you've come to the conclusion that the
> benefit of having a JSON serialization that is valid JSON at both the
> record and collection level outweighs the pain of having to deal with
> a streaming parser and writer.  This allows a single collection to be
> treated as any other JSON document, which has obvious benefits (which 
> I certainly don't mean to minimize) and all the drawbacks we've been 
> talking about *ad nauseam*.

The goal is to adhere to existing Web standards, and your underlying assumption is that you can or will be retrieving large datasets through an AJAX scenario.  As I pointed out, this is really an API design issue: given the way AJAX works, you should never design an API in that manner.  With a well-designed API you will never retrieve large datasets through AJAX, so you will never be forced into JSON streaming, and your argument from this point of view is moot.

But for argument's sake, let's say you could retrieve a line-delimited list of JSON objects.  You can no longer use any existing AJAX framework to get back that JSON, since it is malformed.  You could use the framework's XMLHTTP object to retrieve the line-delimited list, but that still doesn't help, because the XMLHTTP object keeps the entire response in memory.

So when our service sends the user agent 100 MB of line-delimited JSON objects, the XMLHTTP object will try to slurp the entire 100 MB HTTP response into memory, exceeding the memory limits of the JSON/JavaScript processor or the browser controlling the XMLHTTP object, and the application will never get to process the response one line at a time.

In addition, I wouldn't be surprised if whatever programming libraries or frameworks you use to read a line from the stream have trouble with lines longer than several thousand characters, a limit easily exceeded by MARC-21 records serialized into JSON.
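This is the special-purpose infrastructure each application would have to build: incremental, chunk-by-chunk reading that never holds more than one partial record in memory. A minimal sketch (the function name and chunk size are my own, purely illustrative):

```python
import json
import io

def iter_json_lines(stream, chunk_size=4096):
    """Yield one deserialized JSON object per newline-terminated record,
    buffering only the current partial line, never the whole response."""
    buf = ""
    while True:
        chunk = stream.read(chunk_size)
        if not chunk:
            break
        buf += chunk
        # Emit every complete record accumulated so far.
        while "\n" in buf:
            line, buf = buf.split("\n", 1)
            if line.strip():
                yield json.loads(line)
    if buf.strip():  # final record without a trailing newline
        yield json.loads(buf)

# Simulate a line-delimited response arriving in tiny chunks.
stream = io.StringIO('{"id": 1}\n{"id": 2}\n{"id": 3}\n')
ids = [rec["id"] for rec in iter_json_lines(stream, chunk_size=4)]
assert ids == [1, 2, 3]
```

Note that this only works server-side or in a tool chain; a browser's XMLHTTP object offers no such chunk-level access to the body, which is the crux of the objection above.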

> I go the other way. I think the pain of dealing with a streaming
> API outweighs the benefits of having a single valid JSON structure for 
> a collection, and instead have put forward that we use a combination 
> of JSON records and a well-defined end-of-record character ("\n") to 
> represent a collection.  I recognize that this involves providing 
> special-purpose code which must call for JSON-deserialization on each 
> line, instead of being able to throw the whole stream/file/whatever 
> at your JSON parser. I accept that because getting each line of a 
> text file is something I find easy compared to dealing with streaming
> parsers.

What I see here is divergent use cases:

Use case #1: retrieve a single MARC-21 format record serialized as an object according to the JSON specification.

Use case #2: retrieve a collection of MARC-21 format records serialized as an array according to the JSON specification.

Use case #3: retrieve a collection of MARC-21 format records serializing each record as an object according to the JSON specification with the restriction that all whitespace tokens are converted to spaces and each JSON object is terminated by a newline.
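The practical difference between use cases #2 and #3 can be shown in a few lines (the record layout is again illustrative, not the actual MARC-JSON serialization):

```python
import json

records = [{"tag": "245", "value": "Title one"},
           {"tag": "245", "value": "Title two"}]

# Use case #2: one standards-compliant JSON document, an array of records.
as_array = json.dumps(records)

# Use case #3: one object per line; the collection as a whole is NOT
# a valid JSON document.
as_lines = "\n".join(json.dumps(r) for r in records)

# Any conforming parser accepts #2...
assert json.loads(as_array) == records

# ...but rejects #3 as a single document ("Extra data").
try:
    json.loads(as_lines)
    raise AssertionError("expected the parse to fail")
except json.JSONDecodeError:
    pass
```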

Personally, I have some minor issues with use case #3 in that it requires the entire serialization to be on one line.  Programming libraries and frameworks often have issues when line lengths exceed certain buffer sizes.  In addition, collapsing each object onto a single line makes the stream difficult for humans to read when things eventually do go wrong and need human intervention.  Alternatives to the single-line requirement would be to terminate each serialized object with VT (vertical tab), FF (form feed), or a double newline.

Another issue with use case #3 is that it is primarily a file format, meant to be read by library-centric tool chains that feed the individual objects to a JSON/JavaScript processor.  In AJAX scenarios, use case #3 works no differently from, and provides no advantage over, use case #2: both are limited by the memory constraints of the JSON/JavaScript processor, i.e., if you can keep use case #2 in memory you can keep use case #3 in memory.  A disadvantage of use case #3 is that existing AJAX frameworks cannot deserialize it, so each application must build its own infrastructure for deserializing line-delimited JSON objects.

Use cases #2 and #3 diverge because of standards-compliance expectations.  So the question becomes: how can use case #3 be made standards compliant?  It seems to me that use case #3 defines a different media type than use cases #1 and #2, whose media type is defined by the JSON specification.  A way to fix this is to say that use cases #1 and #2 conform to the media type application/json, while use case #3 conforms to a new media type, say application/marc+json.  This new media type becomes a library-centric standard and avoids breaking a widely deployed Web standard.

Given the above discussion, use cases #1 and #2 are already covered by our MARC-JSON serialization format and meet existing standards; no changes to our existing specification are required.  Our MARC-JSON serialization of an object (a MARC-21 record) could also serve use case #3, with the restriction under your current proposal that all whitespace tokens in serialized objects be spaces.  Use case #3 can then be satisfied by an alternate specification that defines a new media type and suggested file extension, e.g., application/marc+json and .mrj, vs. application/marc and .mrc as defined by RFC 2220.
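Under that split, a service would negotiate on the Accept header roughly as follows. This is a hypothetical sketch: application/marc+json is the proposed (not registered) media type, and the record layout is illustrative.

```python
import json

records = [{"leader": "00000nam a2200000 a 4500"}]  # illustrative record

def serialize(records, accept):
    """Hypothetical content negotiation: application/json clients get a
    standards-compliant JSON array; clients asking for the proposed
    application/marc+json media type knowingly opt in to one object
    per line (use case #3)."""
    if accept == "application/marc+json":
        return "\n".join(json.dumps(r) for r in records)
    return json.dumps(records)  # default: valid JSON array

# application/json responses parse with any conforming JSON parser...
assert json.loads(serialize(records, "application/json")) == records

# ...while marc+json responses are consumed line by line.
lines = serialize(records, "application/marc+json").split("\n")
assert [json.loads(line) for line in lines] == records
```

The point is that neither response violates the expectations attached to its media type, which is exactly what keeps application/json intact.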


Andy.
