Robert Sanderson <[log in to unmask]> wrote:
> c) I've never used a Topic Maps application. (and see (a))
How do you know?
> There /are/ challenges with RDF [...]
> But for the vast majority of cases, the problems are solved (JSON-LD) or no
> one cares any more (httpRange14).
What are you trying to say here? That httpRange14 somehow solves some
issue, and we no longer need to worry about it?
>> Having said that, there's tuples of many kinds, it's only that the
>> triplet is the most used under the W3C banner. Many are moving to a
>> more expressive quad, a few crazies, for example, even though that
> ad hominem? really? Your argument ceased to be valid right about here.
I think you're a touch sensitive, mate. "Crazies" as in, few and
knowledgeable (most RDF users these days don't know what tuples are,
and how they fit into the representation of data) but not mainstream.
I'm one of those crazies. It was meant in jest.
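To make the triple/quad distinction concrete, here's a minimal sketch in plain Python, with no RDF library and invented example URIs: a quad is just a triple plus a fourth element naming the graph or context the statement was asserted in, which is what a bare triple cannot express.

```python
# Triple: (subject, predicate, object) -- the RDF model under the W3C banner.
triple = ("http://example.org/hamlet",
          "http://purl.org/dc/terms/creator",
          "http://example.org/shakespeare")

# Quad: the same statement plus the graph/context it belongs to,
# which is how quad stores track provenance of a statement.
quad = triple + ("http://example.org/graphs/catalogue-2011",)

s, p, o = triple          # a triple says *what* is asserted
s, p, o, graph = quad     # a quad also says *where* it was asserted
print(graph)
```

The fourth element is the whole point of "more expressive": with triples alone, you cannot say which source a statement came from without reifying it.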
>> may or may not be a better way of dealing with it. In the end, it all
>> comes down to some variation over frames theory (or bundles); a
>> serialisation of key/value pairs with some ontological denotation for
>> what the semantics of that might be.
> Except that RDF follows the web architecture through the use of URIs for
> everything. That is not to be under-estimated in terms of scalability and
> long term usage.
So does Topic Maps. Not sure I get your point? This is just a question
of the semantics of the key in tuple serialisation; there's nothing
revolutionary about that, it's just an ontological commitment used by
systems. URIs don't give you some magic advantage; they're still a
string of characters as far as representation is concerned, and I dare
say this points out the flaw in httpRange14 right there: in order to
know the representation you need to resolve the identifier, i.e.
there's a movable, dynamic part to what in most cases needs to be
static. Not saying I have the answer, mind you, but there are some
fundamental problems with knowledge representation in RDF that a lot
of people don't "care about" which I do feel people of a library bent
should.
>> But wait, there's more! [big snip]
> Your point? You don't like an ontology? #DDTT
My point was made in the very first words of the paragraph that followed:
And of course I like ontologies. I've bandied them around these parts
for the last 10 years or so, and I'm very happy with RDA/FRBR
directions of late, taking at least RDF/Linked Data seriously. I'm
thus not convinced you understood what I wrote, and if nothing else,
my bad. I'll try again.
> That's no more a problem of RDF than any other system.
Yes, it is. RDF is promoted as a solution to a big problem of findable
and shareable meta data; however, until you understand and use the full
RDF cake, you're scratching the surface and doing things sloppily (and
I'd argue, badly). The whole idea of strict ontologies is rigor,
consistency and better means of normalising the meta data so we all
can use it to represent the same things we're talking about. But the
question to every piece of meta data is *authority*, which is the part
of RDF that sucks. Currently it's all balanced on Wikipedia and
DBpedia, which isn't a bad thing in itself, but neither of those
two are static nor authoritative in the same way, say, a global
library organisation might be. With RDF, people are slowly being
trained to accept all manner of crap meta data, and we as librarians
should not be so eager to accept that. We can say what we like about
the current library tools and models (and, of course, we do; they're
not perfect), but there's a whole missing chunk of what makes RDF
'work' that is, well, sub-par for *knowledge representation*. And
that's our game, no?
The shorter version: the RDF cake, with its myriad of layers and
standards, is too complex for most people to get right, so Linked Data
comes along and tries to be simpler, making the long-term goal harder
to reach.
I'm not, however, *against* RDF. But I am for pointing out that RDF is
neither easy to work with, nor ideal for any long-term goals we might
have in knowledge representation. RDF could have been made a lot
better; there are better solutions upstream, but most of this RDF talk
is stuck in 1.0 territory, suffering the sins of former versions.
>> And then there's that tedious distinction between a web resource and
>> something that represents the thing "in reality" that RDF skipped (and
>> hacked a 304 "solution" to). It's all a bit messy.
> That RDF skipped? No, *RDF* didn't skip it nor did RDF propose the *303*
> solution. You can use URIs to identify anything.
I think my point was that since representation is so important to any
goal you have for RDF (and the rest of the stack), it was a mistake
not to get it right *first*. OWL has better means of dealing with it,
but then, complexity, yadda, yadda.
> And it's not messy, it's very clean.
Subjective, of course. Have you ever played with an inference machine
that slurps up millions of RDF triples and then tried to map what the
resources are representing? You need a large wallet and a fat pipe to
get that right. Sure, caching, yadda, yadda, but in practical terms
it's a kludge which could have been solved with a better framework for
identification (well, ontological commitments of persistent
identification, really) and some global agreement to what the
semantics of that might be. The W3C could have been that global
entity, but they didn't do it. I've suggested the library world be that, though;
authoritative and dedicated, and well worth funding for the future of
digital knowledge representation.
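To show the kind of machinery being talked about, here's a toy forward-chaining sketch in stdlib Python (the class names and data are invented): it applies two RDFS entailment rules, rdfs:subClassOf transitivity and type propagation, over an in-memory set of triples until a fixed point. A real reasoner over millions of triples also has to dereference the URIs to learn what they represent, which is where the wallet and the fat pipe come in.

```python
SUBCLASS = "rdfs:subClassOf"
TYPE = "rdf:type"

# A tiny, invented triple store.
triples = {
    ("ex:Novel", SUBCLASS, "ex:Book"),
    ("ex:Book", SUBCLASS, "ex:Work"),
    ("ex:moby-dick", TYPE, "ex:Novel"),
}

def infer(triples):
    """Naive fixed-point: repeat two RDFS rules until nothing new appears."""
    closed = set(triples)
    while True:
        new = set()
        for (s, p, o) in closed:
            for (s2, p2, o2) in closed:
                # rdfs11: subClassOf is transitive
                if p == SUBCLASS and p2 == SUBCLASS and o == s2:
                    new.add((s, SUBCLASS, o2))
                # rdfs9: instances belong to every superclass
                if p == TYPE and p2 == SUBCLASS and o == s2:
                    new.add((s, TYPE, o2))
        if new <= closed:
            return closed
        closed |= new

entailed = infer(triples)
print(("ex:moby-dick", TYPE, "ex:Work") in entailed)  # True
```

The quadratic pass here is hopeless at scale, which is the practical point: production reasoners need indexes, caches, and, crucially, resolvable identifiers to decide what any of those strings actually denote.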
> What it is not, is pragmatic. URIs are
> like kittens ... practically free to get, but then you have a kitten to
> look after and that costs money. Thus doubling up your URIs is increasing
> the number of kittens you have. [though likely not, in practice, doubling
> the cost]
Project Wrangler, SOA, Information Alchemist, UX, RESTafarian, Topic Maps
http://shelter.nu/blog | google.com/+AlexanderJohannesen |