That's a pretty reasonable way to approach it; I like Erik's thinking on this. Although I'm not sure a content type in a URL vs. one negotiated via HTTP headers are quite "entirely different," as Erik says -- rather, it's a question of whether you intend to identify/refer/link to a specific version/representation of the document, or to the overall document itself, no?
I'd also note that in an OpenSearch description context, you don't need (and can't have) a "free parameter" for this, because it's baked into the OpenSearch URL template's "type" attribute. It would, however, be appropriate to publish an OpenSearch URL template fixed to type X whose URL has httpAccept=X in it -- where X is _not_ parameterized in the OpenSearch description's URL template, it's just fixed.
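For instance, a fixed-type template might look something like this (a sketch only -- the endpoint, the httpAccept parameter name, and the paths are illustrative, not from any particular SRU server):

```xml
<OpenSearchDescription xmlns="http://a9.com/-/spec/opensearch/1.1/">
  <ShortName>Example SRU search</ShortName>
  <!-- type is fixed to RSS, and httpAccept is hard-coded to match it;
       only searchTerms remains a free template parameter -->
  <Url type="application/rss+xml"
       template="http://example.org/sru?query={searchTerms}&amp;httpAccept=application/rss%2Bxml"/>
</OpenSearchDescription>
```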
I'm curious how/whether Erik's analysis would apply to the other HTTP headers that SRU (at least 2.0?) also makes available as query parameters: charset, encoding, language? I guess those are kind of analogous: there are times when you do need to refer/identify/link to a document in a very specific encoding, charset, or language, and other times when you just want to identify the overall document and let content negotiation pick the appropriate representation (which could (should?) be done by 3xx redirecting to the specific appropriate representation, with the encoding, charset, language, and/or content type fixed into the URL itself).
Accept-Ranges, I have no idea -- I don't understand that header's purpose well enough. But SRU also provides a query param for that; it seems less clear to me whether that one is ever useful or justifiable.
Jonathan
________________________________________
From: Code for Libraries [[log in to unmask]] On Behalf Of Erik Hetzner [[log in to unmask]]
Sent: Tuesday, June 01, 2010 6:35 PM
To: [log in to unmask]
Subject: Re: [CODE4LIB] Inlining HTTP Headers in URLs
Hi Ralph -
At Tue, 1 Jun 2010 15:17:02 -0400,
LeVan,Ralph wrote:
> A simple use case comes from OpenSearch and its use of URL
> templates. To enable the return of RSS as the response to an SRU
> query, we added the parameter "httpAccept=application/rss+xml" to
> the SRU URL in the OpenSearch template and coded for it in the SRU
> server. Had we had a filter in the request, the servlet's life would
> have been easier.
>
> That seemed like a specific solution to what could be a
> generalizable problem.
There have been long discussions on the rest-discuss mailing list
about this issue. (See, e.g., [1].)
Frankly I think that it is wrong to think of your httpAccept param as
equivalent to an HTTP header.
There is a time for a URI that can use content-negotiation (the Accept
header, etc.) to get, e.g., PDF, HTML, or plain text. As an example:
http://example.org/RFC1
And there is a time when we want to explicitly refer to a particular
resource that has only ONE type. For example, the canonical version of
an RFC:
http://example.org/RFC1.txt
But these are different resources. If you want to be able to link to
search results that must be returned in RSS, a query parameter or file
extension is proper.
But this query param or file extension, in my opinion, is quite
different than HTTP content negotiation or the Accept header.
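[A minimal server-side sketch of Erik's distinction -- the paths, media types, and function names here are hypothetical, not from any real SRU or RFC server. One resource negotiates among representations; the other is pinned to exactly one type:]

```python
# Sketch: a negotiable resource (/RFC1) vs. a single-type resource (/RFC1.txt).
# Paths and media types are illustrative only.

REPRESENTATIONS = {
    "text/plain": "/RFC1.txt",
    "text/html": "/RFC1.html",
    "application/pdf": "/RFC1.pdf",
}

def negotiate(accept_header):
    """Crude content negotiation for /RFC1: return the first offered
    media type that appears in the Accept header, else the default."""
    for media_type, path in REPRESENTATIONS.items():
        if media_type in accept_header:
            return media_type, path
    return "text/plain", "/RFC1.txt"

def fixed(path):
    """/RFC1.txt identifies exactly ONE representation; Accept is ignored."""
    return "text/plain" if path.endswith(".txt") else None
```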
At Tue, 1 Jun 2010 15:36:23 -0400,
Joe Hourcle wrote:
>
> On Tue, 1 Jun 2010, Erik Hetzner wrote:
> > I am having a hard time imagining the use case for this.
> >
> > Why should you allow a link to determine things like the User-Agent
> > header? HTTP headers are set by the client for a reason.
>
> I can think of a few cases -- debugging is the most obvious, but possibly
> also to work around cases where someone's browser is sending a header
> that's screwing something up. (which is basically debugging, as I'd have
> the user try a few things, and then once I knew what was going wrong, I'd
> fix it so we didn't have to have workarounds)
Not the solution I would prefer for debugging, but if it is not
exposed to the outside world, OK.
> But all of the cases that I can think of where it'd be useful, there's
> already work arounds --
>
> Cache-Control : add a random query string
> Accept (if using content negotiation) : add a file extension
> Accept-Language : add a language extension
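[Those workarounds are all just URL construction; a quick sketch, with hypothetical parameter names -- the "nocache" param and the extension conventions are illustrative, not standardized:]

```python
import random

def bust_cache(url):
    """Cache-Control workaround: append a random query string so
    caches treat the request as a distinct, fresh resource."""
    sep = "&" if "?" in url else "?"
    return f"{url}{sep}nocache={random.randint(0, 10**9)}"

def with_extension(url, ext):
    """Accept workaround: pin the representation with a file extension."""
    return f"{url}.{ext}"

def with_language(url, lang):
    """Accept-Language workaround: pin the translation with a language
    extension (a common Apache MultiViews convention)."""
    return f"{url}.{lang}"
```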
Yes, with caveats above.
> > Furthermore, as somebody involved in web archiving, I would like to
> > ask you not to do this.
> >
> > It is already hard enough for us to tell that:
>
> [trimmed]
>
> You'll always have those problems when assuming that URL is a good
> identifier.
I don’t assume, I know a URL is a good identifier. :)
> The only good solution would be for webservers to respond back with a sort
> of 'preferred URI' with the response -- some do it via redirection, but
> you're never going to get everyone to agree -- and in the example above
> with the various 'Accept' headers, you have the question about what it is
> that you're trying to identify (the general concept, the specific
> translation, or the translation + packaging? ... and then we get into FRBR
> territory)
As far as I know there are only 2 solutions, a 301 response and
Content-Location [2]. Either one works fine. This is not a contentious
issue, it is just one that a lot of web sites do not handle properly.
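[Both solutions amount to the server naming the chosen representation's URI in the response; a sketch of the two header shapes (the URIs and function names are hypothetical):]

```python
def redirect_response(specific_uri):
    """Solution 1: answer the generic URI with a 301 whose Location
    points at the specific representation actually chosen."""
    return 301, {"Location": specific_uri}

def content_location_response(specific_uri, body):
    """Solution 2: serve the body directly from the generic URI, but
    name the specific representation via Content-Location [2]."""
    return 200, {"Content-Location": specific_uri}, body
```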
I’m not sure what FRBR has to do with it. I think the web architecture
documents have good, practical things to say about what is identified
by a URI.
best, Erik Hetzner
1. http://tech.groups.yahoo.com/group/rest-discuss/message/11508
2. http://www.w3.org/Protocols/rfc2616/rfc2616-sec14.html#sec14.14