CODE4LIB Archives

CODE4LIB@LISTS.CLIR.ORG


CODE4LIB November 2007

Subject:

Re: Getting started with SOLR

From:

"Binkley, Peter" <[log in to unmask]>

Reply-To:

Code for Libraries <[log in to unmask]>

Date:

Thu, 22 Nov 2007 10:11:26 -0700

Content-Type:

text/plain

Parts/Attachments:

text/plain (90 lines)

Some thoughts on at least some of your questions:

Field types: you'll probably want to index things like titles in two
fields, one tokenized (text) and one not (string), so that you can
retrieve and match the full title as well as search for terms within
it. See the way the Solr sample app uses "*_exact" fields. You can use
the copyField setting to avoid having to input the value twice.
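A minimal schema.xml sketch of that pattern (the field and type names
here are illustrative, not taken from the sample app):

```xml
<!-- tokenized field for searching within the title -->
<field name="title" type="text" indexed="true" stored="true"/>
<!-- untokenized copy for exact matching -->
<field name="title_exact" type="string" indexed="true" stored="false"/>
<!-- populate both from a single input value -->
<copyField source="title" dest="title_exact"/>
```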

The same considerations affect whether you want to use multi-valued
fields: if you're going to facet on that field, you want distinct
values, not a concatenated series; if you're only going to do free term
searching, the concatenation might not be a problem (though you risk
getting matches on phrase searches like "James Miller" against the
example you gave below).
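For the faceting case, the multiValued route looks something like this
(a sketch; the field names are made up):

```xml
<!-- schema.xml: one value per author, so facets count distinct names -->
<field name="author" type="string" indexed="true" stored="true"
       multiValued="true"/>

<!-- update message: repeat the field element once per value -->
<add>
  <doc>
    <field name="author">Smith, James</field>
    <field name="author">Miller, Steve</field>
  </doc>
</add>
```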

If you use boost on the date field the way you suggest, remember you'll
have to reindex from scratch every year to adjust the boost as items
age. The sample solrconfig.xml contains an example of date-wrangling to
get the same effect based on distance from the current date, rather than
hard-coding the boost into the index.
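If I remember right, the sample's approach boils down to a boost
function on the dismax handler in solrconfig.xml, something like the
following (the field name and constants here are illustrative, not
quoted from the sample):

```xml
<!-- boost computed at query time from the date field, so recency
     decays on its own without yearly reindexing -->
<str name="bf">recip(rord(pubdate),1,1000,1000)^0.5</str>
```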

The only point of interim commits is to make the new stuff available for
searching. If you're just loading stuff into an index that isn't serving
searches, there's no benefit to committing before everything is loaded;
it just slows things down.
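In other words, post all the add batches first and a single commit at
the end. A sketch of the update messages (the URL is the default from
the Solr example setup, so adjust for your install):

```xml
<!-- POST each batch to http://localhost:8983/solr/update -->
<add>
  <doc>
    <field name="id">rec-1</field>
    <field name="title">Example title</field>
  </doc>
  <!-- ...more docs per batch... -->
</add>

<!-- then one commit once everything is loaded -->
<commit/>
```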

Assuming your data structures are the same and you're not talking
millions of records, I'd be inclined to put everything in one index to
make cross-searching easier, assuming you want cross-searching. If you
don't, there's no reason not to have multiple indexes.

There is a way to pass Solr a path to a file that it can read from disk
rather than posting the file. I hunted a bit in the wiki and couldn't
find it, though; it may still be a patch you have to apply.

Peter


-----Original Message-----
From: Code for Libraries [mailto:[log in to unmask]] On Behalf Of
Michael Lackhoff
Sent: Thursday, November 22, 2007 1:03 AM
To: [log in to unmask]
Subject: [CODE4LIB] Getting started with SOLR

Hello,

I am just getting my feet wet with SOLR and have a couple of questions
about how others have done certain things.

I created a schema.xml where, for now, basically every field is of type
"text". Do you use specialized types for authors or ISBNs or other
fields?
How do you handle multi-value fields? Do you feed everything into a
single field (like "Smith, James ; Miller, Steve", as I have seen in a
colleague's pure Lucene implementation) or do you use the multiValued
feature of SOLR?

What about boosting? I thought of giving the current year a boost="3.0"
and then 0.1 less for every year the title is older, down to 1.0 for a
20-year-old book. The idea is to have a sort that tends to promote
recent titles but still respects other aspects. Does this sound
reasonable, or are there other ideas? I would be very interested in an
actual boosting scheme I could start from.

We have a couple of databases that should eventually be indexed. Do you
build one huge database with an additional "database" field, or is it
better to have every database in its own SOLR instance?

How do you fill the index? Our main database has about 700,000 records,
and I don't know if I should build one huge XML file and feed that into
SOLR, or use a script that sends one record at a time with a commit
after every 1000 records or so. Or do something in between and split it
into chunks of a few thousand records each? What are your experiences?
What if a record gives an error? Will the whole file be rejected, or
just that one record?
Are there alternatives to the HTTP gateway?
Are there any Perl scripts around that could help? I built a little
script that uses LWP to feed my test records into the database. It
works, but I don't have any error handling yet and the XML creation is
very quick and dirty, so if there is something more mature I would like
to use that.

Any other ideas, further reading, experiences...?

I know these are a lot of questions, but after the conference last year
I think there is lots of expertise in this group, and perhaps I can
avoid a few beginner mistakes with your help.

thanks in advance
- Michael
