Ethan, thanks, it's good to have examples.
I'd say that for "simple linking" SPARQL may not be necessary, and perhaps
should be avoided; but if you need something more, say a query where you
have conditions, then you may find that a query language is needed.
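
For instance, something like this (a minimal sketch against a hypothetical
endpoint; the Dublin Core predicates are illustrative, not taken from any
particular dataset) is where simple link-following stops being enough:

  PREFIX dcterms: <http://purl.org/dc/terms/>

  # Find titled resources matching a condition -- the kind of question
  # you can't answer just by dereferencing known URIs.
  SELECT ?work ?title
  WHERE {
    ?work dcterms:title ?title .
    FILTER regex(?title, "linked data", "i")   # case-insensitive match
  }
  LIMIT 10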
kc
On 11/6/13 9:14 AM, Ethan Gruber wrote:
> I think that the answer to #1 is that if you want or expect people to use
> your endpoint, you should document how it works: the ontologies, the
> models, and a variety of example SPARQL queries, ranging from simple to
> complex. The British Museum's SPARQL endpoint (
> http://collection.britishmuseum.org/sparql) is highly touted, but how many
> people actually use it? I understand your point about SPARQL being too
> complicated for an API interface, but the best examples of services built
> on SPARQL are probably the ones you don't even realize are built on SPARQL
> (e.g., http://numismatics.org/ocre/id/ric.1%282%29.aug.4A#mapTab). So on
> one hand, perhaps only the most dedicated and hardcore researchers will
> venture to construct SPARQL queries for your endpoint, but on the other,
> you can build some pretty visualizations based on SPARQL queries conducted
> in the background from the user's interaction with a simple
> HTML/JavaScript-based interface.
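>
> To make that concrete, here is a hedged sketch of the sort of query such an
> interface might run behind the scenes (the WGS84 geo vocabulary is my
> assumption, not necessarily what OCRE actually uses):
>
>   PREFIX geo: <http://www.w3.org/2003/01/geo/wgs84_pos#>
>
>   # Pull coordinates for every located resource, to feed a map widget
>   SELECT ?place ?lat ?long
>   WHERE {
>     ?place geo:lat ?lat ;
>            geo:long ?long .
>   }
>
> The user just clicks a map tab; the SPARQL stays invisible.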
>
> Ethan
>
>
> On Wed, Nov 6, 2013 at 11:54 AM, Ross Singer <[log in to unmask]> wrote:
>
>> Hey Karen,
>>
>> It's purely anecdotal (albeit anecdotes borne from working at a company
>> that offered, and has since abandoned, a SPARQL-based triple store
>> service), but I just don't see the interest in arbitrary SPARQL queries
>> against remote datasets that I see in linking to (and grabbing) known
>> items. I think there are multiple reasons for this:
>>
>> 1) Unless you're already familiar with the dataset behind the SPARQL
>> endpoint, where do you even start with constructing useful queries?
>> 2) SPARQL as a query language manages to be at once too powerful and
>> completely useless in practice: query timeouts are commonplace, endpoints
>> don't support all of 1.1, etc. And, going back to point #1, it's hard to
>> know how to optimize your queries unless you are already pretty familiar
>> with the data (see the sketch after this list).
>> 3) SPARQL is a flawed "API interface" from the get-go (IMHO) for the same
>> reason we don't offer a public SQL interface to our RDBMSes
>>
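>> A sketch of what I mean by too powerful: this query is trivial to write
>> and brutal to execute, because it forces a scan of the entire store.
>>
>>   # Every distinct predicate in the dataset: one line, full scan.
>>   SELECT DISTINCT ?p
>>   WHERE { ?s ?p ?o }
>>
>> Nothing in the language warns you before the endpoint times out.
>>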
>> Which isn't to say it doesn't have its uses or applications.
>>
>> I just think that in most cases domain/service-specific APIs (be they
>> RESTful, based on the Linked Data API [0], whatever) will likely be favored
>> over generic SPARQL endpoints. Are n+1 different APIs ideal? I am pretty
>> sure the answer is "no", but that's the future I foresee, personally.
>>
>> -Ross.
>> 0. https://code.google.com/p/linked-data-api/wiki/Specification
>>
>>
>> On Wed, Nov 6, 2013 at 11:28 AM, Karen Coyle <[log in to unmask]> wrote:
>>
>>> Ross, I agree with your statement that data doesn't have to be "RDF all
>>> the way down", etc. But I'd like to hear more about why you think SPARQL
>>> availability has less value, and if you see an alternative to SPARQL for
>>> querying.
>>>
>>> kc
>>>
>>>
>>>
>>> On 11/6/13 8:11 AM, Ross Singer wrote:
>>>
>>>> Hugh, I don't think you're in the weeds with your question (and, while I
>>>> think that named graphs can provide a solution to your particular problem,
>>>> that doesn't necessarily mean that it doesn't raise more questions or
>>>> potentially more frustrations down the line - like any new power, it can
>>>> be used for good or evil and the difference might not be obvious at first).
>>>>
>>>> My question for you, however, is why are you using a triple store for
>>>> this? That is, why bother with the broad and general model when your
>>>> application, I assume, operates under a closed-world assumption?
>>>>
>>>> We don't generally use XML databases (MarkLogic being a notable
>>>> exception), or MARC databases, or <insert your transmission format of
>>>> choice>-specific databases, because transmission formats are usually
>>>> designed to account for lots and lots of variation and maximum
>>>> flexibility, which is generally the opposite of the modeling that goes
>>>> into a specific app.
>>>>
>>>> I think there's a world of difference between modeling your data so it
>>>> can be represented in RDF (and, possibly, available via SPARQL, but I
>>>> think there is *far* less value there) and committing to RDF all the way
>>>> down. RDF is a generalization so multiple parties can agree on what data
>>>> means, but I would have a hard time swallowing the argument that
>>>> domain-specific data must be RDF-native.
>>>>
>>>> -Ross.
>>>>
>>>>
>>>> On Wed, Nov 6, 2013 at 10:52 AM, Hugh Cayless <[log in to unmask]>
>>>> wrote:
>>>>
>>>>> Does that work right down to the level of the individual triple though?
>>>>> If a large percentage of my triples are each in their own individual
>>>>> graphs, won't that be chaos? I really don't know the answer, it's not a
>>>>> rhetorical question!
>>>>>
>>>>> Hugh
>>>>>
>>>>> On Nov 6, 2013, at 10:40, Robert Sanderson <[log in to unmask]> wrote:
>>>>>> Named Graphs are the way to solve the issue you bring up in that post,
>>>>>> in my opinion. You mint an identifier for the graph, and associate the
>>>>>> provenance and other information with that. This then gets ingested as
>>>>>> the 4th URI into a quad store, so you don't lose the provenance
>>>>>> information.
>>>>>> In JSON-LD:
>>>>>>
>>>>>>   {
>>>>>>     "@id" : "uri-for-graph",
>>>>>>     "dcterms:creator" : "uri-for-hugh",
>>>>>>     "@graph" : [
>>>>>>       // ... triples go here ...
>>>>>>     ]
>>>>>>   }
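>>>>>>
>>>>>> Once that's loaded, a query along these lines gets both the data and
>>>>>> the provenance back out (a sketch only; the two URIs are the
>>>>>> placeholders from the JSON-LD above):
>>>>>>
>>>>>>   PREFIX dcterms: <http://purl.org/dc/terms/>
>>>>>>
>>>>>>   # Provenance lives in the default graph; the data in the named graph.
>>>>>>   SELECT ?creator ?s ?p ?o
>>>>>>   WHERE {
>>>>>>     <uri-for-graph> dcterms:creator ?creator .
>>>>>>     GRAPH <uri-for-graph> { ?s ?p ?o }
>>>>>>   }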
>>>>>>
>>>>>> Rob
>>>>>>
>>>>>>
>>>>>>
>>>>>> On Wed, Nov 6, 2013 at 7:42 AM, Hugh Cayless <[log in to unmask]> wrote:
>>>>>>
>>>>>>> I wrote about this a few months back at
>>>>>>> http://blogs.library.duke.edu/dcthree/2013/07/27/the-trouble-with-triples/
>>>>>>>
>>>>>>> I'd be very interested to hear what the smart folks here think!
>>>>>>>
>>>>>>> Hugh
>>>>>>>
>>>>>>> On Nov 5, 2013, at 18:28, Alexander Johannesen <[log in to unmask]> wrote:
>>>>>>>
>>>>>>>> But the question to every piece of metadata is *authority*, which is
>>>>>>>> the part of RDF that sucks.
>>>>>>>>
>>> --
>>> Karen Coyle
>>> [log in to unmask] http://kcoyle.net
>>> m: 1-510-435-8234
>>> skype: kcoylenet
>>>
--
Karen Coyle
[log in to unmask] http://kcoyle.net
m: 1-510-435-8234
skype: kcoylenet