Of course, after sending that I noticed a mistake; the curl example
should look like:
curl -i --header "Accept: application/json" http://id.library.osu.edu/person/123

HTTP/1.1 303 See Other
Date: Thu, 31 Jan 2013 10:47:44 GMT
Server: Apache/2.2.14 (Ubuntu)
Location: http://id.library.osu.edu/person/123.json
Vary: Accept-Encoding
I didn't have it redirecting to the JSON previously.
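The redirect logic itself is pretty trivial. A minimal sketch in Python (this is hypothetical, not the actual id.library.osu.edu code; q-values in the Accept header are ignored for simplicity):

```python
# Sketch of Accept-driven 303 redirects for a person URI.
# Hypothetical: the extension map and fallback are assumptions,
# not the real server's configuration.

# Map recognized media types to the representation we redirect to.
EXTENSIONS = {
    "application/json": ".json",
    "application/rdf+xml": ".rdf",
    "text/html": ".html",
}

def negotiate(uri, accept_header):
    """Return (status, location) for a request to a person URI."""
    # Naive parsing: take the first media type we recognize,
    # ignoring q-values for the sake of the example.
    for media_type in accept_header.split(","):
        media_type = media_type.split(";")[0].strip()
        if media_type in EXTENSIONS:
            return "303 See Other", uri + EXTENSIONS[media_type]
    # Fall back to the HTML view.
    return "303 See Other", uri + ".html"

status, location = negotiate("http://id.library.osu.edu/person/123",
                             "application/json")
print(status)    # 303 See Other
print(location)  # http://id.library.osu.edu/person/123.json
```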
//Ed
On Wed, Jan 30, 2013 at 4:19 PM, Phillips, Mark <[log in to unmask]> wrote:
> Thanks for the prompt Ed,
>
> We've had a stupid simple vocabulary app for a few years now which we use
> to manage all of our controlled vocabularies [1]. These are represented in our
> metadata editing application as drop-downs and type ahead values as described
> in the first email in this thread. Nothing too exciting. The entire vocabulary app
> is available to our systems as XML, Python, or JSON objects. When we export our
> records as RDF we try to use the links for these values instead of the strings.
>
> We are currently working on another simple app to manage names for our system
> (UNT Name App). It takes into account some of the use cases described in this thread such as
> disambiguation, variant names, and the all-important linking to other vocabularies
> of which VIAF, LC, and Wikipedia are the primary expected targets. Once populated
> it is to be integrated into the metadata editing system to provide auto-complete
> functions to the various name fields in our repository.
>
> As far as technology goes, we've tried to crib off the Chronicling America site as much
> as possible and follow the pattern of using the suggestions extension of OpenSearch [2]
> to provide the API.
>
> Mark
>
>
>
> [1] http://digital2.library.unt.edu/vocabularies/
> [2] http://www.opensearch.org/Specifications/OpenSearch/Extensions/Suggestions/1.1
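For anyone who hasn't used it, the OpenSearch suggestions extension specifies a four-element JSON array: the query string, the completions, optional descriptions, and optional URLs. A rough sketch of what a name-app endpoint might emit (the names and URLs below are invented for illustration, not actual UNT Name App data or code):

```python
import json

def suggestions(query, matches):
    """Build the 4-element array the OpenSearch suggestions
    extension specifies: [query, completions, descriptions, urls]."""
    completions = [m["name"] for m in matches]
    descriptions = [m.get("description", "") for m in matches]
    urls = [m["url"] for m in matches]
    return json.dumps([query, completions, descriptions, urls])

# Invented example record, purely for illustration.
payload = suggestions("twain", [
    {"name": "Twain, Mark, 1835-1910",
     "description": "American author",
     "url": "http://example.org/name/nm0001/"},
])
print(payload)
```

A response like that drops straight into most type-ahead widgets, which is a big part of the appeal of the format.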
>
>
>
> ________________________________________
> From: Code for Libraries [[log in to unmask]] on behalf of Ed Summers [[log in to unmask]]
> Sent: Wednesday, January 30, 2013 2:15 PM
> To: [log in to unmask]
> Subject: Re: [CODE4LIB] Adding authority control to IR's that don't have it built in
>
> On Tue, Jan 29, 2013 at 5:19 PM, Kyle Banerjee <[log in to unmask]> wrote:
>> This would certainly be a possibility for other projects, but the use case
>> we're immediately concerned with requires an authority file that's
>> maintained by our local archives. It contains all kinds of information
>> about people (degrees, nicknames, etc) as well as terminology which is not
>> technically kosher but which we know people use.
>
> Just as an aside really, I think there's a real opportunity for
> libraries and archives to make their local thesauri and name indexes
> available for integration into other applications both inside and
> outside their institutional walls. Wikipedia, Freebase, VIAF are
> great, but their notability guidelines aren't always the greatest match
> for cultural heritage organizations. So seriously consider putting a
> little web app around the information you have, using it for
> maintaining the data, making it available programmatically (API), and
> linking it out to other databases (VIAF, etc.) as needed.
>
> A briefer/pithier way of saying this is to quote Mark Matienzo [1]:
>
> Sooner or later, everyone needs a vocabulary management app.
>
> :-)
>
> //Ed
>
> PS. I think Mark Phillips has done some interesting work in this area
> at UNT. But I don't have anything to point you at; maybe Mark is tuned
> in and can chime in.
>
> [1] https://twitter.com/anarchivist/status/269654403701682176