I'm perfectly willing to be persuaded to the 'light' side, and I'm looking forward to learning more about your project, which is much more mature than mine at this point ... I'm just interested in something that works and is easily tweakable.  I don't hold out much hope that a one-size-fits-all XSL transformation could ever be put together -- I think there are too many minor but significant variations in how people catalog stuff and in how different ILSes stick things like item data into the MARC record.  I could be wrong about that, though.
 
Maybe I've just been traumatized by having to deal with so many bad uses of XSL -- multiple 250KB+ stylesheets with extension functions spaghetti'd throughout -- that I'm disinclined to use it even when it is the best and simplest tool for the job.  I'd love to see how you're getting around the convoluted back-and-forth between XSL and extension functions that has been my experience.
 
As far as performance goes: if your ILS lets you dump all the MARC records in the system at once but gives you no way to incrementally fetch just the records that have changed since you last updated the indexes, then indexing performance matters a lot.  If you can reindex all your records in an hour or two, it's feasible to rebuild your indexes from scratch every night between 3 and 4 AM; it wouldn't be if it took 8 hours.  Fast reindexing also makes the cost of fine-tuning your indexes much lower.
 
Just for some clarification: in my system, you don't need to know a thing about programming or XML, or ever look at a single line of code, to change how an index is created.  There is just one configuration file (in the future this may all be stored in a database and accessible via Django's automatic web admin interface, but for now it's just a text file), and the core indexing code is never modified at all.  The three lines in the config file that define the title index look something like this:
 
title.type = single
title.marcMap = 245$ab,246$ab,240$a
title.stripTrailingPunctuation = 1
 
(The .type setting says that it is not a repeated field in Solr, the .marcMap setting dictates how the title data is extracted, and .stripTrailingPunctuation does what it sounds like.)
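
If you're curious what the indexer does with a marcMap, here's a rough Java sketch of the idea -- the MarcRecord/DataField/Subfield types are just stand-ins for whatever MARC library you use, not my actual code:

    import java.util.ArrayList;
    import java.util.List;

    // Placeholder MARC types standing in for a real library (e.g. marc4j).
    interface MarcRecord { List<DataField> getFields(String tag); }
    class Subfield { char code; String data; }
    class DataField { List<Subfield> subfields = new ArrayList<Subfield>(); }

    class MarcMapExtractor {
        // Applies a spec like "245$ab,246$ab,240$a" to a record.
        static List<String> extract(String marcMap, MarcRecord record) {
            List<String> values = new ArrayList<String>();
            for (String spec : marcMap.split(",")) {
                String tag = spec.substring(0, 3);    // e.g. "245"
                String codes = spec.substring(4);     // e.g. "ab" (skips the '$')
                for (DataField field : record.getFields(tag)) {
                    StringBuilder sb = new StringBuilder();
                    for (Subfield sf : field.subfields) {
                        if (codes.indexOf(sf.code) >= 0) {
                            if (sb.length() > 0) sb.append(' ');
                            sb.append(sf.data);
                        }
                    }
                    if (sb.length() > 0) values.add(sb.toString());
                }
            }
            return values;
        }
    }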
 
Now say you want to include the $n subfields in there as well.  You just change that one line in that one config file to:

title.marcMap = 245$abn,246$abn,240$an
 
Now say you want to introduce a new index in Solr.  You just add a couple of new lines to the config file, run a little script that automatically generates the Solr schema (though I still have a ton of work to do on that piece of it), reindex, and you're done.
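
The schema generator doesn't have to be fancy; roughly speaking it just maps each index's config entries onto a Solr <field> declaration.  A sketch of the idea -- the filename and the type-to-attribute mapping here are my guesses for illustration, not the actual script:

    import java.io.FileInputStream;
    import java.util.Properties;

    // Sketch: turn indexing config entries into Solr schema.xml <field> lines.
    public class SchemaGenSketch {
        public static void main(String[] args) throws Exception {
            Properties conf = new Properties();
            conf.load(new FileInputStream("indexes.properties")); // hypothetical filename
            for (Object key : conf.keySet()) {
                String k = (String) key;
                if (!k.endsWith(".type")) continue;
                String name = k.substring(0, k.length() - ".type".length());
                // "single"/"singleTranslation" = one value per record
                boolean multi = !conf.getProperty(k).startsWith("single");
                System.out.println("<field name=\"" + name + "\" type=\"text\""
                        + " indexed=\"true\" stored=\"true\""
                        + " multiValued=\"" + multi + "\"/>");
            }
        }
    }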
 
Defining an index of the language of the material ("English", "Swahili", etc.) would look like:
 
language.type = singleTranslation
language.marcMap = 008/35:38
language.translationMap = LANGUAGE_CODING_MAP
 
(LANGUAGE_CODING_MAP is a hash map of the three-letter LoC language codes, for example 'eng' => 'English'.)
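
Under the hood that index is just a substring plus a map lookup; something like this, with the map abbreviated and the method name invented:

    import java.util.HashMap;
    import java.util.Map;

    class LanguageIndexSketch {
        static final Map<String, String> LANGUAGE_CODING_MAP = new HashMap<String, String>();
        static {
            LANGUAGE_CODING_MAP.put("eng", "English");
            LANGUAGE_CODING_MAP.put("swa", "Swahili");
            // ... and so on for the rest of the LoC language codes
        }

        // 008/35:38 means characters 35 through 37 of the 008 control field.
        static String languageOf(String field008) {
            String code = field008.substring(35, 38);
            String label = LANGUAGE_CODING_MAP.get(code);
            return label != null ? label : code; // fall back to the raw code
        }
    }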
 
You can handle fields with processors (little bits of code) if you need something more sophisticated than a MARC map or a translation map.  The processor I have for the common-sense format of the item (DVD, book on CD, eMusic -- the kind of thing that is very annoying to get out of a MARC record but very important to patrons) is extremely complex and would be unbelievably tedious to replicate in XSL.  Now, say somebody writes a better processor (which could theoretically be written in any JVM language: Java, JRuby, Jython, JavaScript via Rhino, etc.).  To use it would be as simple as changing one line in a configuration file and dropping the processor code in a particular spot.
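
To give a sense of how cheap that swap is, the plugin mechanism could be as simple as an interface plus Class.forName(); the names here are made up for illustration (MarcRecord is the placeholder type from the earlier sketch), not my actual API:

    import java.util.List;

    // A processor derives index values from a record when a marcMap or
    // translation map isn't enough.
    interface Processor {
        List<String> process(MarcRecord record);
    }

    class ProcessorLoader {
        // Config might say:  format.processor = org.example.BetterFormatProcessor
        static Processor load(String className) throws Exception {
            return (Processor) Class.forName(className).newInstance();
        }
    }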
 
 
--Casey

>>> [log in to unmask] 1/19/2007 2:35 PM >>>
Casey, we have had great success with XSL for MARCXML-to-Solr, so I
can't agree with everything you are saying.  However, I eagerly await
your presentation on your successes with Solr so you can persuade me to
the dark side :)

Casey Durfee wrote:
>
I agree with your argument for abstracting your programming from your
data so that a non-tech-savvy librarian could modify the Solr settings.
But if you modify the Solr settings, you (at this point) need to
reimport all of your data, which means that you have to change either
your XSLT or your transformation application.  I personally feel that a
less tech-savvy individual can pick up XSLT more easily than Java.
Maybe I am understanding you incorrectly, though.
>
> 3) Ease of programming.
>
> a) Heavy-duty string manipulation is a pain in pure XSLT.  To index MARC records you have to do normalization on dates and names, and you probably want to do some translation between MARC codes and their meanings (for the audience and language codes, for instance).  Is it doable?  Yes, especially if you use XSL extension functions.  But if you're going to have huge chunks of your logic buried in extension functions, why not go whole hog and do it all outside of XSLT, instead of having half your programming logic in extension functions and half in the XSLT itself?
>
I can see your argument for this; however, I like to abstract my layers
of application as mentioned above.  So in this respect, I have a script
that runs the XSLT.  Inside the script is also some logic that the XSLT
refers back to for the manipulation and massaging of the data.  I can
keep all XML-related transformation logic in my XSL and all of my coding
logic in my script.  Again, I think it boils down to preference.
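
For a concrete picture of that split: with Xalan the stylesheet can call straight back into a static Java method through its Java extension namespace.  The class and method names here are invented for illustration:

    // The stylesheet side, using Xalan's Java extension namespace:
    //   xmlns:norm="http://xml.apache.org/xalan/java/org.example.Normalizer"
    //   <xsl:value-of select="norm:stripTrailingPunctuation(string(marc:subfield[@code='a']))"/>
    public class Normalizer {
        // Called from the XSLT above; trims trailing punctuation from a heading.
        public static String stripTrailingPunctuation(String s) {
            return s.replaceAll("[.,/;:\\s]+$", "");
        }
    }
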
>
> b) Using XSLT makes object-oriented programming with your data harder.
That's a bold statement.
> Your indexer should be able to give you a nice object representation of a record (so you can use that representation within other code).  If you go the XSLT route, you'd have to parse the MARC record, transform it to your Solr record XML format, then parse that XML and map it to an object.  If you avoid XSLT, you just parse the MARC record and transform it into an object programmatically (with the object having a method to print itself out as a Solr XML record).
>
> Honestly, all this talk of using XSLT for indexing MARC records reminds me of that guy who rode across the United States on a riding lawnmower.  I am looking forward to there being a standard, well-tested MARC record indexer for Solr (and would be excited to contribute to such a project), but I don't think that XSL is the right tool to use.
>
I can agree with your OO style of design, in which you have one Record
object that is responsible for all of the work (converting to Solr and
back again), but again, this all seems to be based on preference.
I have an import script that is completely independent of our Solr
libraries.  I have a main Solr class that is responsible for interacting
with Solr as well as creating Record objects (using XSLT, of course).

Also, I am sure there are plenty of folks attending the Solr
preconference who are not experienced software developers and may have
an easier time developing some XSLT stylesheets -- and an even easier
time if we come up with a standard XSLT doc for MARCXML -> Solr -- than
learning how to do what you are describing, just to create a nice
search engine for their catalog.

I feel that your arguments for not using XSLT come down to preference
and do not add up to a "better" design at this point.

But I love to be proved wrong ... I'm currently finishing up my master's
in Computer Science/Software Engineering, so I love these kinds of
debates, since they're all that's in my head at the moment.

Andrew