On Dec 1, 2013, at 7:57 PM, Barnes, Hugh wrote:

> +1 to all of Richard's points here. Making something easier for you to develop is no justification for making it harder to consume or deviating from well supported standards.
> 
> [Robert]
>> You can't 
>> just put a file in the file system, unlike with separate URIs for 
>> distinct representations where it just works, instead you need server 
>> side processing.
> 
> If we introduce languages into the negotiation, this won't scale.

It depends on what you consider 'scaling'.  You can configure
Apache and some other servers to serve pre-generated files such
as :

	index.en.html
	index.de.html
	index.es.html
	index.fr.html

... It's even the default for some distributions.
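
For instance, a minimal sketch of that setup, assuming mod_mime
and mod_negotiation are loaded (the directory path is made up):

	# Let Apache pick among index.*.html variants based on
	# the client's Accept-Language header.
	<Directory "/var/www/html">
	    Options +MultiViews
	</Directory>

	# Map filename extensions to content languages.
	AddLanguage en .en
	AddLanguage de .de
	AddLanguage es .es
	AddLanguage fr .fr

	# Fallback order when the client states no preference.
	LanguagePriority en de es fr
	ForceLanguagePriority Fallback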

Then, depending on which Accept-Language header is sent,
the server returns the appropriate response.  The only issue
is that the server assumes the 'quality' of all of the
translations is equivalent.

You know that 'q=0.9' stuff?  There's actually a scale in
RFC 2295 that equates the different quality values to how much
content is lost in a particular version:

  Servers should use the following table as a guide when assigning
  source quality values:

     1.000  perfect representation
     0.900  threshold of noticeable loss of quality
     0.800  noticeable, but acceptable quality reduction
     0.500  barely acceptable quality
     0.300  severely degraded quality
     0.000  completely degraded quality
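
Apache can express exactly that with a type map instead of
MultiViews, where the qs= parameter carries the source quality.
A hypothetical index.var, assuming the type-map handler is
enabled (the qs values here are only illustrative):

	URI: index

	URI: index.en.html
	Content-Language: en
	Content-Type: text/html; qs=1.0

	URI: index.de.html
	Content-Language: de
	Content-Type: text/html; qs=0.9

So a 'complete' original and a slightly lossy translation can
be ranked against the client's own q= preferences.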

> [Robert]
>> This also makes it much harder to cache the 
>> responses, as the cache needs to determine whether or not the 
>> representation has changed -- the cache also needs to parse the 
>> headers rather than just comparing URI and content.  
> 
> Don't know caches intimately, but I don't see why that's algorithmically difficult. Just look at the Content-type of the response. Is it harder for caches to examine headers than content or URI? (That's an earnest, perhaps naïve, question.)

See my earlier response.  The problem is that without a 'Vary'
header or other cache-control headers, caches may assume that a
URL is a fixed resource.

If a cache assumes the response is static, then it doesn't matter
what was sent for Accept, Accept-Encoding or Accept-Language ...
the first proxied response gets cached, and subsequent requests
get the cached copy, even if that's not what the server would
have sent.
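
Concretely, a negotiated response needs something like this
before a shared cache can safely store it (the other headers
here are just illustrative):

	HTTP/1.1 200 OK
	Content-Type: text/html
	Content-Language: de
	Vary: Accept-Language

Without that Vary: Accept-Language, a proxy that first serves a
German client may keep handing the German copy to everyone.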


> If we are talking about caching on the client here (not caching proxies), I would think in most cases requests are issued with the same Accept-* headers, so caching will work as expected anyway.

I assume he's talking about caching proxies, where it's a real
problem.


> [Robert]
>> Link headers 
>> can be added with a simple apache configuration rule, and as they're 
>> static are easy to cache. So the server side is easy, and the client side is trivial.
> 
> Hadn't heard of these. (They are on Wikipedia so they must be real.) What do they offer over HTML <link> elements populated from the Dublin Core Element Set?

Wikipedia was the first place you looked?  Not IETF or W3C?
No wonder people say libraries are doomed, if even people who work
in libraries go straight to Wikipedia.
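
(For anyone else who hadn't seen them: on the wire they're just
a response header, and Robert's right that one mod_headers
directive is enough to add one.  The URL here is made up:

	Header add Link "<http://example.org/doc.rdf>; rel=alternate"

... which puts this in every response:

	Link: <http://example.org/doc.rdf>; rel=alternate

Unlike HTML <link> elements, they also work for non-HTML
responses.)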


...


Oh, and I should follow up on my posting from earlier tonight --
upon re-reading the HTTP/1.1 spec, it seems that there *is* a way
to indicate the authoritative URL of the returned representation
without an extra HTTP round-trip, Content-Location :

	http://www.w3.org/Protocols/rfc2616/rfc2616-sec14.html#sec14.14
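
So in theory the negotiated response could carry something like:

	HTTP/1.1 200 OK
	Content-Type: text/html
	Content-Location: http://www.w3.org/Protocols/rfc2616/rfc2616.html

... telling the client the specific URL of the variant it was
actually given.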

Of course, it doesn't look like my web browser does anything with
it:

	http://www.w3.org/Protocols/rfc2616/rfc2616
	http://www.w3.org/Protocols/rfc2616/rfc2616.html
	http://www.w3.org/Protocols/rfc2616/rfc2616.txt

... so you'd still have to use a Location: redirect if you wanted
the canonical URL to show up for the general public.
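
That is, an actual redirect along these lines:

	HTTP/1.1 302 Found
	Location: http://www.w3.org/Protocols/rfc2616/rfc2616.html

... which costs the extra round-trip, but browsers will show the
new URL.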

-Joe