What process would I need to go through in order to expose sets of RDF  
files as linked data?

My Alex Catalogue includes the full text of approximately 15,000  
electronic texts. Each text is represented in a (MyLibrary) database,  
and because of that I am able to create reams of reports against the  
collection. For example, I use this infrastructure to index the  
content with Solr/Lucene. Recently I have implemented on-the-fly  
concordances. I have created HTML files complete with a floating  
palette supporting various services against the texts. I am currently  
in the process of using statistical analysis to extract keywords and  
two-word phrases from the texts to use as descriptors. In the near  
future I hope to associate a URI with each work and/or author to  
supplement the user experience with content from the Web. I have a  
bunch more ideas too, but they are too numerous to mention.
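The post does not say how the statistical extraction of two-word phrases works, but a common baseline is simply counting bigram frequencies. The sketch below is one such baseline in plain Python, not the author's actual method; the function name and the tokenization regex are my own assumptions.

```python
from collections import Counter
import re

def two_word_phrases(text, n=5):
    """Return the n most frequent two-word phrases in a text.

    A hypothetical baseline: lowercase the text, tokenize on runs of
    letters/apostrophes, and count adjacent word pairs (bigrams).
    """
    words = re.findall(r"[a-z']+", text.lower())
    bigrams = zip(words, words[1:])
    return Counter(bigrams).most_common(n)
```

In practice one would filter stop words and perhaps weight by a significance measure (for example, log-likelihood) before using the phrases as descriptors.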

More than a couple of years ago I created sets of RDF files against  
the documents. Thomas More's Utopia is a good example. [1] The RDF is  
not always perfect, mostly for encoding reasons, nor is it always  
complete or as easily parsable as it could be. For example, names and  
titles would ideally point to URIs or at least be consistently  
formatted. On the whole, though, it is a great first step, since each  
RDF file contains the full text of the work it describes.
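To illustrate the parsability point: assuming the files use Dublin Core elements in RDF/XML (I have not inspected the actual files, and the sample record and URI below are made up), the names and titles come out as bare string literals, which is exactly why pointing them at URIs would help downstream consumers.

```python
import xml.etree.ElementTree as ET

# Dublin Core elements namespace.
DC = "http://purl.org/dc/elements/1.1/"

def dc_fields(rdf_xml):
    """Extract Dublin Core creator and title literals from an RDF/XML
    string. Returns a dict mapping element name to a list of values."""
    root = ET.fromstring(rdf_xml)
    return {
        tag: [e.text for e in root.iter(f"{{{DC}}}{tag}")]
        for tag in ("creator", "title")
    }

# A made-up record in the style described above: the creator is a
# plain literal, not a URI, so two records for the same author cannot
# be linked without string matching.
sample = """<rdf:RDF
    xmlns:rdf="http://www.w3.org/1999/02/22-rdf-syntax-ns#"
    xmlns:dc="http://purl.org/dc/elements/1.1/">
  <rdf:Description rdf:about="http://example.org/more-utopia">
    <dc:creator>Thomas More</dc:creator>
    <dc:title>Utopia</dc:title>
  </rdf:Description>
</rdf:RDF>"""
```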

Given that I have these RDF files and am able to easily update them  
(more or less), what are some of the things I need to do in order to  
expose them more systematically and in a way that can truly be called  
linked data?
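One requirement usually cited for linked data is that the URIs be dereferenceable, returning RDF to machine clients and HTML to browsers via content negotiation. The sketch below shows only the Accept-header decision, with the two filenames borrowed from the pattern in [1]; it is an illustration of the convention, not a recommendation for any particular server setup.

```python
def pick_representation(accept_header):
    """A minimal content-negotiation sketch: serve RDF/XML to clients
    that ask for it, and fall back to HTML for ordinary browsers, so
    one URI can identify the work for both audiences."""
    if "application/rdf+xml" in accept_header:
        return ("application/rdf+xml", "more-utopia-221.rdf")
    return ("text/html", "more-utopia-221.html")
```

A real deployment would also need stable URIs for the works and authors themselves, distinct from the URIs of the documents describing them, plus links out to other datasets.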

[1] http://infomotions.com/etexts/literature/english/1500-1599/more-utopia-221.rdf

-- 
Eric Lease Morgan