Erik Hetzner wrote:
>
> Accept-Encoding is a little strange. It is used for gzip or deflate
> compression, largely. I cannot imagine needing a link to a version
> that is gzipped.
>
> It is also hard to imagine why a link would want to specify the
> charset to be used, possibly overriding a client’s preference. If my
> browser says it can only support UTF-8 or latin-1, it is probably
> telling the truth.
>   
Perhaps when the client/user-agent is not actually a "web browser" that 
will simply display the document to the user, but some other kind of 
software. Imagine archiving software that, by policy, will only accept 
UTF-8 encoded documents, and you need to supply a URL that is guaranteed 
to deliver such a thing.

Sure, the hypothetical archiving software could (should?) just send an 
actual HTTP header to make sure it gets a UTF-8 charset document. But 
maybe sometimes it makes sense to provide an identifier that actually 
identifies/points to the UTF-8 charset version -- one that, in actual 
real-world practice, is more likely to return that UTF-8 charset version 
from an HTTP request, without relying on content negotiation, which is 
often mis-implemented.
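To make the contrast concrete, here's a sketch in Python of the two approaches -- asking the generic URI for UTF-8 via content negotiation versus fetching a URI that identifies the UTF-8 representation directly. The URLs are hypothetical:

```python
import urllib.request

# Approach 1: content negotiation. Ask the generic URI for UTF-8 via
# an Accept-Charset header. The server *may* honor it, but conneg is
# often mis-implemented, so there is no guarantee.
req_conneg = urllib.request.Request(
    "http://example.org/doc",  # hypothetical generic URI
    headers={"Accept-Charset": "utf-8"},
)

# Approach 2: a URI that identifies the specific representation.
# Whatever this URI returns is, by the publisher's contract, the
# UTF-8 version; no negotiation is involved.
req_specific = urllib.request.Request(
    "http://example.org/doc.utf8.html"  # hypothetical charset-specific URI
)
```

The first request depends on the server's conneg implementation; the second depends only on the publisher keeping the charset-specific URI stable.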

We could probably come up with a similarly reasonable, if edge-case, 
scenario for encoding.

So I'm not thinking so much of "overriding" the conneg -- I'm thinking 
of your initial useful framework: one URI identifies a more abstract 
'document', the other identifies a specific representation. And 
sometimes it's probably useful to identify a specific representation in 
a specific charset, or, more of a stretch, a specific encoding. No?
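That framework is actually how some servers implement conneg in practice. For instance, Apache's mod_negotiation type maps let one generic URI negotiate among variants that each also have their own direct URI; a sketch, with hypothetical filenames:

```
# doc.var -- an Apache mod_negotiation type map (hypothetical filenames).
# A request for the generic URI negotiates among these charset-specific
# variants, but each variant file is also addressable directly.

URI: doc.utf8.html
Content-type: text/html; charset=utf-8

URI: doc.latin1.html
Content-type: text/html; charset=iso-8859-1
```

So the abstract 'document' and the specific representations can coexist as distinct URIs on the same server.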

I notice you didn't mention 'language'; I assume we agree that one is 
even less of a stretch, and has clearer use cases for inclusion in a 
URL, like content-type.

Jonathan