Thanks, Shaun--that overview was great. More info is better than less! I
think this is the step that I want to know more about:
In the case of a “compound object” you may need to have a script iterate
over lots of separate content files and add them to the Solr document that
represents a yearbook.

Is it common to add all text content from the multi-page yearbook to one
Solr field? So, the script would essentially extract and concatenate text
from the multiple full-text files that the METS record points to and add it
to one Solr field? That would make sense to me.
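
To make sure I'm picturing that right, here's a rough sketch of what I
imagine the script doing (paths and field names are invented):

    # Concatenate per-page OCR text into one field of a single Solr doc.
    from pathlib import Path

    page_files = sorted(Path("yearbook-1922/ocr").glob("*.txt"))
    full_text = "\n".join(p.read_text(encoding="utf-8") for p in page_files)

    solr_doc = {"id": "yearbook-1922",
                "title": "Commencement Program, 1922",
                "full_text": full_text}  # one big searchable field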

However, when the user selects the item from a results set, they expect to
be taken to the place within the item that contains their search term, or
to not have to do much if any work to figure out where their search term
is. At least, our users are accustomed to that behavior and expect the
application to do that work for them. For example, a search for "ethel
knotts" in OregonDigital
<http://oregondigital.org/catalog/?utf8=%E2%9C%93&search_field=all_fields&q=ethel+knotts>
gives
some results, and a user can select the first item (Commencement Program,
1922) and can see the location pin for the file that contains their term. I
thought some institutions automatically open the item to the first result,
but now that I'm trying to find examples to cite, I'm not seeing that
happen.

Would this probably work by having the application do a second search
(without the user needing to know) within the item after the user selects
it? In the case of Oregon Digital, that search seems to be triggered by the
IA BookReader. Or is something else happening? To get this
functionality, the application would have to know which ranges of text
belong to which files, and I'm curious about how that info would be stored
and provided, whether in METS or Solr or something else.
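
My best guess, for anyone who can confirm or correct it: each page or file
gets its own small Solr document that carries its parent item's ID, and the
viewer re-runs the user's query scoped to that item. Something like this
(core name and field names are invented):

    # Hypothetical page-level index: every page is its own Solr doc, e.g.
    # {"id": "yearbook-1922/page-41", "item_id": "yearbook-1922",
    #  "page_number": 41, "page_text": "...OCR text for this page..."}
    import requests  # third-party: pip install requests

    resp = requests.get("http://localhost:8983/solr/pages/select", params={
        "q": 'page_text:"ethel knotts"',
        "fq": "item_id:yearbook-1922",  # scope the search to one item
        "fl": "page_number",
        "wt": "json"})
    pages = [d["page_number"] for d in resp.json()["response"]["docs"]]
    # -> e.g. [41]; the viewer can jump to or pin these pages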

For some general context on these questions: I'm trying to understand how
things are commonly done so I can better talk with our developer, who is in
campus IT. We will be leaving ContentDM and going with a homegrown system
that uses Solr among other components. We don't have any METS records, but
when I think of structural metadata records, I think METS. If there are other
ways of structuring metadata and content to provide the same functionality,
that's good too.

Thanks again for your help!

On Tue, Jan 26, 2016 at 8:24 PM, Shaun D. Ellis <[log in to unmask]>
wrote:

> Hi Laura,
> Great question.  Unfortunately, I think you’re going to be fairly limited
> when it comes to having granular control over fields and facet indexing in
> ContentDM (someone correct me if I’m wrong).
>
> But to answer your question about general steps involved with indexing the
> metadata AND full text of a METS document…
>
> To have the most control over how your data is indexed, you will want to
> use a search platform.  Apache Solr<http://lucene.apache.org/solr/> is
> used in a majority of library-related software, so I’ll use that in my
> examples, although there are several others.  Solr doesn’t have a concept
> of “metadata” and “content”, just “fields” that you can use to search both.
>
> In the case of your METS data, you will first need to transform it into a
> simpler document (Solr XML) containing the fields that matter for a
> particular search interface and are defined in the schema<
> https://wiki.apache.org/solr/SchemaXml>.  This transform step can be done
> in any number of ways, but XSLT is fairly common.  To index the full-text
> content that your METS document points to, you can build that into your
> transform script/stylesheet, or you can run a separate script/process later
> that updates the record with the full-text.  In the case of a “compound
> object” you may need to have a script iterate over lots of separate content
> files and add them to the Solr document that represents a yearbook.
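>
> A rough sketch of that iteration step in Python (the METS and xlink
> namespaces are real; the filenames and Solr field names are invented):
>
>     # Walk the METS fileSec, read each OCR file it points to, and
>     # fold the text into one Solr document for the whole yearbook.
>     import xml.etree.ElementTree as ET
>
>     NS = {"mets": "http://www.loc.gov/METS/",
>           "xlink": "http://www.w3.org/1999/xlink"}
>
>     tree = ET.parse("yearbook-1922.mets.xml")
>     texts = []
>     for flocat in tree.findall(".//mets:fileSec//mets:FLocat", NS):
>         href = flocat.get("{http://www.w3.org/1999/xlink}href")
>         with open(href, encoding="utf-8") as f:
>             texts.append(f.read())
>
>     solr_doc = {"id": "yearbook-1922", "full_text": "\n".join(texts)}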
>
> There are a few ways to add data to a Solr index, but a common one in
> library-land is to add (and update) records to the Solr index by POSTing
> your freshly “transformed” data via HTTP (here’s the Solr quickstart
> tutorial<http://lucene.apache.org/solr/quickstart.html>).
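>
> A minimal example of that POST in Python (the core name and URL are
> placeholders; the JSON update handler and commit parameter are stock Solr):
>
>     # POST freshly transformed documents to Solr's JSON update handler.
>     import requests  # third-party: pip install requests
>
>     doc = {"id": "yearbook-1922", "title": "Commencement Program, 1922"}
>     requests.post(
>         "http://localhost:8983/solr/yearbooks/update",
>         params={"commit": "true"},
>         json=[doc],  # /update accepts a JSON array of documents
>     )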
>
> Customizing your search results (weighting, stemming, rows per page, etc.)
> can be handled in the Solr config file<
> https://wiki.apache.org/solr/SolrConfigXml>.  For example, you can tweak
> the weight/relevance of the query based on which fields it matches.
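>
> For instance, with the (e)dismax query parser you can boost matches in some
> fields over others at query time; a sketch, with invented field names:
>
>     # Query-time field weighting: a title match here counts ten times
>     # as much as a full-text match when scoring results.
>     import requests
>
>     resp = requests.get("http://localhost:8983/solr/yearbooks/select",
>                         params={"defType": "edismax",
>                                 "q": "ethel knotts",
>                                 "qf": "title^10 author^5 full_text^1",
>                                 "wt": "json"})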
>
> When you query Solr over HTTP, it will return results in XML or JSON that
> you can then render in a display or discovery interface. Blacklight<
> http://projectblacklight.org/> is one example of a discovery interface.
>
> Sorry if I’ve covered stuff you already know.  There are lots of tools,
> applications, and frameworks that will simplify the process (perhaps too
> much in some cases!), but the best ones give you the most control over how
> index and retrieve your data.  I think that covers the basics and hopefully
> answers your question.
>
> Cheers,
> Shaun
> P.S. -  I’m not sure that even Solr will help you locate the Doyle Owl. ;)
>
> On Jan 26, 2016, at 7:30 PM, Laura Buchholz <[log in to unmask]
> <mailto:[log in to unmask]>> wrote:
>
> Hi all,
>
> I'm trying to understand how digital library systems work when there is a
> need to search both metadata and item text content (plain text/full text),
> and when the item is made up of more than one file (so, think a digitized
> multi-page yearbook or newspaper). I'm not looking for answers to a
> specific problem, really, just looking to know what is the current state of
> community practice.
>
> In our current system (ContentDM), the "full text" of something lives in
> the metadata record, so it is indexed and searched along with the metadata,
> and essentially treated as if it were metadata. (Correct?) This causes
> problems in advanced searching and muddies the relationship between what is
> typically a descriptive metadata record and the file that is associated
> with the record. It doesn't seem like a great model for the average digital
> library. True? I know the answer is "it depends", but humor me... :)
>
> If it isn't great, and there are better models, what are they? I was taught
> METS in school, and based on that, I'd approach the metadata in a METS or
> METS-like fashion. But I'm unclear on the steps from having a bunch of METS
> records that include descriptive metadata and pointers to text files of the
> OCR (we don't, but if we did...) to indexing and providing results to
> users. I think another way of phrasing this question might be: how is the
> full text of a compound object (in the sense of a digitized yearbook or
> similar) typically indexed?
>
> The user requirements for this situation are essentially:
> 1. User can search for something and get a list of results. If something
> (let's say a pamphlet) appears in results based on a hit in full text, the
> user selects the pamphlet which opens to the file (or page of the pamphlet)
> that contains the text that was matched. This is pretty normal and does
> work in our current system.
> 2. In an advanced search, a user might search for a name in the "author"
> field and a phrase in the "full text" field, and say they want both
> conditions to be fulfilled. In our current system, this won't return
> results when it should, because the full text is in one record and the
> author's name is in another, so the AND condition can't be met (see the
> sketch after this list).
> 3. Librarians can link descriptive metadata records (DC in our case) to
> particular files, sometimes one to one, sometimes many to one, sometimes
> one to many.
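>
> (In search-engine terms, what I'd want is for a fielded boolean query like
> this one to work, which I gather means the full text and the descriptive
> metadata have to end up in the same index document; field names invented:)
>
>     import requests
>
>     # Only matches if "author" and "full_text" are fields
>     # of the same document in the index.
>     resp = requests.get("http://localhost:8983/solr/items/select", params={
>         "q": 'author:"Buchholz" AND full_text:"commencement program"',
>         "wt": "json"})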
>
> If this is too unclear, let me know...
> Thanks!
>
> --
> Laura Buchholz
> Digital Projects Librarian
> Reed College Library
> 503-517-7629
> [log in to unmask]<mailto:[log in to unmask]>
>
>


-- 
Laura Buchholz
Digital Projects Librarian
Reed College Library
503-517-7629
[log in to unmask]