http://X and https://X are different URIs. You may fetch a document containing
a serialized graph using TLS, but that is quite separate from the URIs that
may be used as identifiers.
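A minimal sketch of the point (the URI is hypothetical): two URIs that differ only in scheme are distinct identifiers, even if dereferencing either one returns the same document.

```python
from urllib.parse import urlsplit

# Hypothetical URIs: identical except for the scheme.
http_uri = "http://example.org/id/thing"
https_uri = "https://example.org/id/thing"

p, q = urlsplit(http_uri), urlsplit(https_uri)
print(p.scheme, q.scheme)                          # http https
print(p.netloc == q.netloc and p.path == q.path)   # True  -- everything else matches
print(http_uri == https_uri)                       # False -- but as names they differ
```

So a graph that uses the http form and a graph that uses the https form are, as far as identity goes, talking about two different resources, whatever the server does at dereference time.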
In fact, the denotation of an IRI used to name a graph is explicitly
unspecified. (I am having to craft a semantics for applying PROV-O to such an
IRI, so that provenance can be asserted over a bunch of related statements
with reification implicit, and without having to accept an entire graph.)
TLS does not provide all the security properties you may need (the privacy it
offers is weak, and non-repudiation is not available).
Connecting two recent c4l threads... It seems that the web is rapidly
moving toward https. I'm tempted to wonder how soon https will become the
default protocol when you type a bare domain name into your browser.
With linked data we want cool URIs, where one element of coolness is
persistence. If it is likely that http URIs will come to be seen as "unclean"
in the near future, that would surely be a pressure to change them.
Should we just go ahead and always use https URIs for linked data now?
Of course you can do this yourself much of the time with HTTPS Everywhere
<https://www.eff.org/https-everywhere>, but I really mean: when is it so much
the norm that chrome/firefox/safari/etc. do that expansion out of the box,
instead of assuming http?
Perhaps the snoopability of http traffic doesn't matter in the bulk harvest
case, but in the case of an individual following a link, any use of an http
URI could leak significant information about what is being looked at, even if
the server immediately redirects to an SSL page.
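The leak above can be sketched as follows (the URL and the helper function are hypothetical, for illustration): over plain http the full request line, path and query included, is readable by an on-path observer before any redirect happens; over https only the hostname is exposed (via DNS and the TLS SNI extension).

```python
from urllib.parse import urlsplit

def cleartext_exposure(url):
    """Rough model of what an on-path observer can read when this URL is fetched."""
    parts = urlsplit(url)
    if parts.scheme == "http":
        # The whole request line and Host header travel unencrypted,
        # even if the server's only response is a redirect to https.
        return {"host": parts.hostname, "path": parts.path, "query": parts.query}
    # Over https, roughly only the hostname leaks (DNS lookup, TLS SNI).
    return {"host": parts.hostname}

print(cleartext_exposure("http://example.org/catalog/secret-topic?q=x"))
# {'host': 'example.org', 'path': '/catalog/secret-topic', 'query': 'q=x'}
print(cleartext_exposure("https://example.org/catalog/secret-topic?q=x"))
# {'host': 'example.org'}
```

The point being that the redirect closes the barn door after the interesting part, the path, has already crossed the wire in the clear.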