On 24 Feb 2012, at 18:20, Joe Hourcle wrote:
>
> I see it like the people who request that their pages not be cached elsewhere -- they want to make their object 'discoverable', but they want to control the access to those objects -- so it's one thing for a search engine to get a copy, but they don't want that search engine being an agent to distribute copies to others.
Also meant to say that Google (and others) support a 'noarchive' instruction (not quite sure whether this can be implemented in robots.txt or only via robots meta tags and X-Robots-Tag headers -- if anyone can tell me I'd be grateful), which I think would fulfil this type of instruction: index, but don't keep a copy.
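[For what it's worth: as far as I know noarchive cannot be expressed in robots.txt, which only carries crawl directives; it is set per-page via a robots meta tag, roughly like this:]

```html
<!-- In the page's <head>: let engines index the page,
     but ask them not to serve a cached/archived copy -->
<meta name="robots" content="noarchive">
```

[The equivalent HTTP response header form is `X-Robots-Tag: noarchive`, which also covers non-HTML resources such as PDFs.]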
Owen