Just a quick follow-up to Steve's email. For the bibliographic datastore I've been developing with Redis (https://github.com/jermnelson/redis-library-services-platform), we can use Redis commands to distinguish among these three states (True, False, Unknown/null) for an entity's properties, which are stored in either hash, set, or sorted set data primitives.
For an entity with its properties in a Redis hash data primitive:
HGET 'bf:Book:1' 'title' => Either returns a value (any string, either a literal or a URI) or null
HEXISTS 'bf:Book:1' 'title' => Returns a Boolean, True or False
In the client-side programming logic that handles these three states, we can distinguish a negative value from a null value and handle each case appropriately.
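For example, a minimal client-side sketch with redis-py might look like the following (the property_state helper and the convention of storing an explicit negative as the string 'False' are illustrative assumptions, not necessarily how the datastore actually encodes a negative):

import redis

r = redis.StrictRedis(host='localhost', port=6379)

def property_state(entity_key, field):
    """Return the stored value, False, or None (unknown) for a hash field."""
    if not r.hexists(entity_key, field):
        # No field in the hash at all: the property is unknown/null
        return None
    value = r.hget(entity_key, field).decode('utf-8')
    # Assumed convention: an explicit negative is stored as the string 'False'
    if value == 'False':
        return False
    # Anything else is a real value -- a literal string or a URI
    return value

state = property_state('bf:Book:1', 'title')
if state is None:
    print('title unknown')
elif state is False:
    print('book explicitly has no title')
else:
    print('title:', state)

The key point is that HEXISTS separates "not asserted" from "asserted with some value", and the stored value itself can then carry an explicit negative.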
Jeremy Nelson
Metadata and Systems Librarian
Colorado College
-----Original Message-----
From: Code for Libraries [mailto:[log in to unmask]] On Behalf Of Karen Coyle
Sent: Sunday, September 22, 2013 10:14 AM
To: [log in to unmask]
Subject: Re: [CODE4LIB] Expressing negatives and similar in RDF
Steve, yes, you've nailed it, IMO.
There's a paper from some DERI folk that addresses negations, and it's all so complex that it does make one want to say: fuggetaboudit. Here's a snippet:
************begin snippet
The semantics of RDF(S) is purely monotonic and described in terms of positive inference rules, so even if Charles added instead a new statement
:me myfoaf:doesntknow <http://alice.exa.org/i> .
he would not be able to state that statements with the property myfoaf:doesntknow should single out foaf:knows statements.
Tim Berners-Lee’s Notation 3 (N3) [2] provides to some extent means to express what we are looking for by the ability to declare falsehood over reified statements which would be written as:
{ :me foaf:knows <http://alice.exa.org/i> } a n3:falsehood .
Nonetheless, this solution is somewhat unsatisfactory, due to the lack of formal semantics for N3; N3's operational semantics is mainly defined in terms of its implementation, cwm, only.
The falsehood of Charles knowing Alice can be expressed in OWL, however in a pretty contrived way, as follows (for the sake of brevity we use DL notation here, the reader might translate this to OWL syntax straightforwardly):
{charles} ∈ ∀foaf:knows.¬{alice}
Reasoning with such statements firstly involves OWL reasoning with nominals, which most DL reasoners are not particularly good at, and secondly does not buy us too much, as the simple merge of this DL statement with the information in Bob’s FOAF file would just generate a contradiction, invalidating all, even the useful answers. Para-consistent reasoning on top of OWL, such as for instance proposed in [9] and related approaches, solve this problem of classical inference, but still requiring full OWL DL reasoning.
***********end snippet
Anyway, for one's reading pleasure:
http://ftp.informatik.rwth-aachen.de/Publications/CEUR-WS/Vol-314/55.pdf
I suspect this, like validation, is something that will come up in some STEM communities where they really must solve it, and therefore there will be folks working on a solution. I'm happy to leave it to others, personally. It's way over my head.
kc
On 9/22/13 7:05 AM, Steve Meyer wrote:
> Isn't the issue here that it is very hard to break from the object/property
> model into an RDF/assertion model [1]? It seems to me that the rare book
> cataloger's assertion:
>
> "This book does not have a title"
>
> only looks like it should translate to
>
> example:book1 dc:title someOntology:nil
>
> because of our familiarity with object oriented programming:
>
> BibliographicResource bib = new BibliographicResource();
> bib.getTitleStatement().getTitle(); // returns null
>
> However, does this incorrectly assign the predicate and object of the
> triple? Shouldn't it be:
>
> Subject: This book
> Predicate: does not have a
> Object: title
>
> and therefore look more like:
>
> example:book1 someOntology:lacksProperty dc:title
>
> The statement cannot be about the value of a book's title if it does not
> have one. But the rare book cataloger is not making an assertion about the
> book's title, but an assertion about the book's metadata.
>
> We seem to be misled into thinking that null is a value because most
> programming languages use equivalence syntax as shorthand for determining
> if an object's property is set. That is, this:
>
> object.getTitle == "Semantic Web"
>
> looks a lot like this
>
> object.getTitle == null
>
> But it might be better to understand this problem in terms of a set of
> key/value pairs in a Hash or Map:
>
> object.hasKey("title")
>
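> For instance, a Python dict (a purely hypothetical record, not any
> particular API) makes the two questions visibly different:
>
> record = {"title": "Semantic Web"}
>
> has_title_key = "title" in record    # key-presence check: is anything asserted?
> title_value = record.get("title")    # value lookup: returns None when absent
>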
> -sm
>
> [1] "It is difficult to get a man to understand something, when his salary
> depends upon his not understanding it!"
>
>
> On Wed, Sep 18, 2013 at 10:58 AM, Karen Coyle <[log in to unmask]> wrote:
>
>> On 9/18/13 6:25 AM, [log in to unmask] wrote:
>>
>>>
>>> and without disagreeing with you, I would point out that if you say that
>>> a given type of resource can have at most one dct:title (which is easy to
>>> declare using OWL), and then apply that ontology to an instance that
>>> features a resource of that type with two dct:titles, you're going to get
>>> back useful information from the operation of your reasoner. An
>>> inconsistency in your claims about the world will become apparent. I now
>>> realize I should have been using the word "consistency" and not "validity".
>>>
>>> I suppose what I really want to know, if you're willing to keep "playing
>>> reporter" on the workshop you attended, is whether there was an
>>> understanding present that people are using OWL in this way, and that it's
>>> useful in this way (far more useful than writing and maintaining lots and
>>> lots and lots of SPARQL) and that this is a use case for ontology languages.
>>>
>> The workshop was expressly on validation of data. No one reported using
>> "reasoners" to do validation, and one speaker talked about relying on OWL
>> for their validation rules (but admitted that it was all in their closed
>> world and was a bit apologetic about it). I don't have experience with
>> reasoners, but one of the issues for validation using SPARQL is getting
>> back specific information about what precise part of the query returned
>> "false". I suspect that reasoners aren't good at returning such
>> information, since that is not their purpose. I don't believe that they
>> operate on a T/F basis, but now I'll start looking into them.
>>
>> One thing to remember about OWL is that it affects the semantics of your
>> classes and properties in the open world. OWL intends to describe truths
>> about a world of your design. It should affect not only your use of your
>> data, but EVERYONE's use of your data in the cloud. Yet even you may have
>> more than one application operating on the data, and those applications may
>> have different requirements. Also, remember that the graph grows, so
>> something that may be true at the moment of cataloging, for example, may
>> not be true when your graph combines with other graphs. So you may say that
>> there is one and only one main author to a work title, but that means one
>> and only one URI. If your data combines with data from another source, and
>> that source has used a different author URI, then what should happen? Each
>> OWL rule makes a statement about a supposed reality, yet you may not have
>> much control over that reality. Fewer rules ("least ontological
>> commitment") means more possibilities for re-use and re-combining of your
>> data; more rules makes it very hard for your data to play well in the world
>> graph.
>>
>> There are cases where OWL *increases* the utility of your properties and
>> classes, in particular declaring sub-class/sub-property relations. If we
>> say that RDA:titleProper is a subproperty of dct:title then anyone who
>> "knows" dct:title can make use of RDA:titleProper. But OWL as a way to
>> *restrict* the definition of the world should be used with caution.
>>
>> I would like to see a discussion of what kinds of inferences we would like
>> to make (or see made) of our data in the open world, and then those
>> inferences should inform how we would use OWL. Do we want to infer that
>> every resource has a title? Obviously not, from how this discussion
>> started. How about that every resource has a known creator? (Not) Do we
>> want to limit the number of titles or creators of a resource in the world
>> graph? The number of identities they can have? Does it make sense to say
>> that a FRBR:Work can have an "adaptationOf" relationship *only* with
>> another FRBR:Work (when no one except libraries is defining their
>> resources in terms of FRBR:Work)?
>>
>> On the other hand, if two resources have the same title and the same date,
>> are they the same resource? (maybe, maybe not).
>>
>> Oops. gotta run. I'm going to try to pull all of this together into
>> something more coherent.
>>
>> Thanks,
>> kc
>>
>>
>>
>>
>>> - ---
>>> A. Soroka
>>> The University of Virginia Library
>>>
>>> On Sep 17, 2013, at 11:00 PM, CODE4LIB automatic digest system wrote:
>>>
>>>> From: Karen Coyle <[log in to unmask]>
>>>> Date: September 17, 2013 12:54:33 PM EDT
>>>> Subject: Re: Expressing negatives and similar in RDF
>>>>
>>>>
>>>> Agreed that SPARQL is ugly, and there was discussion at the RDF
>>>> validation workshop about the need for friendly interfaces that then create
>>>> the appropriate SPARQL queries in the background. This shouldn't be
>>>> surprising, since most business systems do not require users to write raw
>>>> SQL or even anything resembling code - often users fill in a form with data
>>>> that is turned into code.
>>>>
>>>> But it really is a mistake to see OWL as a constraint language in the
>>>> sense of validation. An ontology cannot constrain; OWL is solely
>>>> *descriptive* not *prescriptive.* [1]
>>>>
>>>> Inferencing is very different from validation, and this is an area where
>>>> the initial RDF documentation was (IMO) quite unclear. The OWL 2 documents
>>>> are better, but everyone admits that it's still an area of confusion. (In a
>>>> major act of confession at the DC2013 meeting, Ivan Herman, head of the W3C
>>>> semantic web work, said that this was a mistake that he himself made for
>>>> many years. Fortunately, he now helps write the documentation, and it's
>>>> good that he has that perspective.) In effect, inferencing is the
>>>> *opposite* of constraining. Inferencing is:
>>>>
>>>> "All men are liars. Socrates is a man. Therefore Socrates is a liar."
>>>> "Every child has a parent. Johnny is a child. Therefore, Johnny has a
>>>> parent." (whether you can find one or not is irrelevant)
>>>> "Every child has two parents. Johnny is a child. Therefore Johnny has
>>>> two parents. Mary is Johnny's parent." (no contradiction here, we just
>>>> don't know who the other parent is)
>>>> "Every child has two parents. Johnny is a child. Therefore Johnny has
>>>> two parents. Mary is Johnny's parent. Jane is Johnny's parent. Fred is
>>>> Johnny's parent." Here the reasoner detects a contradiction.
>>>>
>>>> The issue of dct:titles is an interesting example. dct:title takes a
>>>> literal value. If you create a dct:title with:
>>>>
>>>> X dct:title http://example.com/junk
>>>>
>>>> with OWL rules that is NOT wrong. It simply provides the inference that "
>>>> http://example.com/junk" is a string - but it can't prevent you from
>>>> creating that triple, because it only operates on existing data.
>>>>
>>>> If you say that every resource MUST have a dct:title, then if you come
>>>> across a resource without a dct:title that is NOT wrong. The reasoner would
>>>> conclude that there is a dct:title somewhere because that's the rule.
>>>> (This is where the Open World comes in) When data contradicts reasoners,
>>>> they can't work correctly, but they act on existing data; they do not
>>>> modify or correct data.
>>>>
>>>> I'm thinking that OWL and constraints would be an ideal training
>>>> webinar, and I think I know who could do it!
>>>>
>>>> kc
>>>>
>>>> [1] http://www.w3.org/TR/2012/REC-owl2-primer-20121211/
>>>> "OWL 2 is not a schema language for syntax conformance. Unlike XML, OWL
>>>> 2 does not provide elaborate means to prescribe how a document should be
>>>> structured syntactically. In particular, there is no way to enforce that a
>>>> certain piece of information (like the social security number of a person)
>>>> has to be syntactically present. This should be kept in mind as OWL has
>>>> some features that a user might misinterpret this way."
>>>>
>>>
>> --
>> Karen Coyle
>> [log in to unmask] http://kcoyle.net
>> m: 1-510-435-8234
>> skype: kcoylenet
>>
--
Karen Coyle
[log in to unmask] http://kcoyle.net
m: 1-510-435-8234
skype: kcoylenet