On Dec 1, 2013, at 11:12 PM, Simon Spero wrote:
> On Dec 1, 2013 6:42 PM, "Joe Hourcle" <[log in to unmask]> wrote:
>> So that you don't screw up web proxies, you have to specify the 'Vary'
>> header to tell which parameters you consider significant so that it knows
>> what is or isn't cacheable.
> I believe that if a Vary isn't specified, and the content is not marked as
> non-cacheable, a cache must assume Vary: *, but I might be misremembering.
It would be horrible for caching proxies to assume that nothing's
cacheable unless it explicitly said it was. (as typically only the
really big websites, or those that have hit some obvious problem,
bother with setting cache-control headers.)
I haven't done any exhaustive tests in many years, but I had noticed
that proxies were starting to cache GET requests with query strings,
which bothered me -- it used to be that anything that was an obvious
CGI wasn't cached. (I guess that once enough sites use query strings,
caches have to assume the sites aren't stateful, and that the parameters
in the URL are enough information for hashing.)
>> (who has been managing web servers since HTTP/0.9, and gets annoyed when
>> I have to explain to our security folks each year why I don't reject
>> pre-HTTP/1.1 requests or follow the rest of the CIS benchmark
>> recommendations that cause our web services to fail horribly)
> Old school represent (0.9 could outperform 1.0 if the request headers were
> more than 1 MTU or the first line was sent in a separate packet with Nagle
> enabled). [Accept was a major cause of header bloat].
Don't even get me started on header bloat ...
My main complaint about HTTP/1.1 is that it requires clients to support
chunked encoding, and I've got to support a client that has a buggy
implementation. (and then my CGIs that serve 2GB tarballs start
failing, and they call a program that's not smart enough to look
for SIGPIPE, so I end up with a dozen of 'em going all stupid and
sucking down CPU on one of my servers)
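One way the hung-subprocess problem can be avoided, sketched here in Python as an illustration (the function name and arguments are hypothetical, not from the thread): a CGI that streams a child's output can restore the default SIGPIPE disposition in the child, so the child dies when the client disconnects instead of spinning:

```python
import signal
import subprocess

def stream_command(cmd, out):
    """Run `cmd` writing to `out` (e.g. the CGI's stdout).

    Python ignores SIGPIPE by default, and a child process inherits that
    disposition -- which is how a not-so-smart program ends up looping
    forever on failed writes. Resetting SIGPIPE to SIG_DFL in the child
    makes the OS kill it as soon as the downstream pipe closes.
    """
    proc = subprocess.Popen(
        cmd,
        stdout=out,
        preexec_fn=lambda: signal.signal(signal.SIGPIPE, signal.SIG_DFL),
    )
    return proc.wait()
```

This only helps when you control the wrapper around the program; it doesn't fix a client whose chunked-encoding support is itself buggy.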
Most people don't have to support a community written HTTP client,
though. (and the one alternative HTTP client in IDL doesn't let me
interact w/ the HTTP headers directly, so I can't put a wrapper
around it to extract the tarball's filename from the Content-Disposition
header)
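For what it's worth, the header-parsing half of that wrapper is small in a language that does expose the headers; a sketch using only Python's standard library (the function name and the sample header value are made up for illustration):

```python
from email.message import Message

def filename_from_disposition(value):
    """Pull the filename parameter out of a Content-Disposition value.

    email.message.Message already knows how to parse RFC 2183-style
    'attachment; filename="..."' parameter lists, so we borrow it
    rather than hand-rolling the quoting rules.
    """
    msg = Message()
    msg["Content-Disposition"] = value
    return msg.get_filename()
```

For example, `filename_from_disposition('attachment; filename="data.tar.gz"')` returns `data.tar.gz`.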
ps. yep, still having writer's block on posters.