To me, the big deal about Google's PageRank is often missed. Sure, to
some extent a link from page A to page B is a 'vote' for page B, and
I suppose a holdings count is roughly equivalent to that. But more
importantly, from my point of view, Google realized that the _link
text_ in the link from A to B was descriptive metadata about B. It
was a vote not just for B being "good", but for B being _about_ the
words contained in the incoming link text.

There's really no way to duplicate that with a library catalog. It's
an artifact of the nature of the web, but it's where Google's real
genius lay. Google's algorithm isn't always going to put the web
equivalent of the bible (Google.com itself? :) ) at the top of every
query that happens to contain that page in the result set; it puts it
at the top only of queries whose text matches incoming link text to
that page (among many other things; this is an oversimplification).
There is a lot going on in Google's relevancy rankings beyond just
putting 'popular' pages on top, and most of it is about trying to
gauge the relevancy of the page to the user's query, using techniques
that may not be available in a catalog the way they are on the web.
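To make the link-text point concrete (the pages and link text below
are invented, and this is only a toy sketch of the idea, not Google's
actual pipeline), incoming anchor text can be indexed as metadata
about the *target* page, so a query can match a page that never uses
the query's words itself:

    from collections import defaultdict

    # (source_page, target_page, link_text) triples from a hypothetical crawl
    links = [
        ("pageA", "pageB", "excellent FRBR overview"),
        ("pageC", "pageB", "FRBR explained"),
        ("pageD", "pageE", "library catalog search"),
    ]

    # Index each target page under the words of its incoming link text
    anchor_index = defaultdict(set)
    for _source, target, text in links:
        for word in text.lower().split():
            anchor_index[word].add(target)

    print(anchor_index["frbr"])     # {'pageB'} -- pageB is "about" FRBR
    print(anchor_index["catalog"])  # {'pageE'}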

So I'd say, be careful what lesson you draw from Google.

It seems to me less than clear that creating a new edition of a work
is a 'vote' for it. Worse, the number of records in our catalog that
collate to the same FRBR 'work' is to some extent an artifact of the
cataloging rules. A given work might have been in continuous
publication since 1912 and have sold millions of copies, but have
only one record. Another work might have been published only three
years ago and sold tens of thousands of copies, but have multiple
records in the catalog because the publisher changed just enough in a
new 'edition' every year to trigger the creation of a new record by
catalogers attempting (successfully or not!) to follow standards for
when a new edition is 'different enough' to justify a new record.
(Many college textbooks would end up like this, but most libraries
don't hold college textbooks.) WorldCat, of course, can contain
multiple records for the _exact same_ edition, due to cataloger
error. None of this matters when you are just summing the holdings
count for a ranking, as OCLC is: whether it's one record with 100
holdings or 10 records with 10 holdings each, your total count is the
same. The formula "sum of holdings" isn't affected by the number of
records those holdings are distributed amongst. If you are instead
using a formula where more records for a given work increases your
ranking, all other things being equal, I'm skeptical.
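A few lines of Python make the arithmetic plain (the numbers are
invented): a holdings sum is indifferent to how the records are
split, while a record-count formula rewards the splitting itself.

    # Work A: one record with 100 holdings (in print since 1912, say)
    work_a = [100]

    # Work B: ten records with 10 holdings each (a textbook re-edited
    # just enough each year to trigger a new record)
    work_b = [10] * 10

    def sum_of_holdings(records):
        # OCLC-style score: total holdings, however the records are split
        return sum(records)

    def record_count(records):
        # A formula that rewards having more records per work
        return len(records)

    print(sum_of_holdings(work_a), sum_of_holdings(work_b))  # 100 100 -> a tie
    print(record_count(work_a), record_count(work_b))        # 1 10   -> B "wins"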

--Jonathan

At 2:48 PM -0400 4/11/06, Keith Jenkins wrote:
>A very interesting discussion here... so I'll support its funding with
>my own two cents.
>
>I'd argue that search relevance is a product of two factors:
>   A. The overall popularity of an item
>   B. The appropriateness to a given query
>
>Both are approximate measures with their own difficulties, but a good
>search usually needs to focus on both (unless B is so restrictive that
>we don't need A).
>
>B is always going to be inhibited, to various degrees, by the limited
>nature of the user's input--usually just a couple of words.  If a user
>isn't very specific, then it is indeed quite difficult to determine
>what would be most relevant to that user.  That's where A can really
>help to sort a large number of results (although B can also help
>sorting).  I think Thom makes a good point here:
>
>On 4/10/06, Hickey,Thom <[log in to unmask]> wrote:
>>  Actually, though, 'relevancy' ranking based on where terms occur in the
>>  record and how many times they occur is of minor help compared to some
>>  sort of popularity score.  WorldCat holdings work fairly well for that,
>>  as should circulation data.
>
>In fact, it was this sort of "popularity score" logic that originally
>enabled Google to provide a search engine far better than what was
>possible using just term placement and frequency metrics for each
>document.  Word frequency is probably useless for our short
>bibliographic records that are often cataloged at differing levels of
>completeness.  But I think it could still be useful to give more
>weight to the title and primary author of a book.
>
>The basic mechanism of Google's PageRank algorithm is this: a link
>from page X to page Y is a vote by X for Y, and the number of votes
>for Y determines the power of Y's vote for other pages.  We could
>apply this to FRBR records, if we think of every FRBR relationship as
>a two-way link.  In this way, all the items link to the
>manifestations, which link to the expressions, which link to the
>works.  All manner of derivative works would also be linked to the
>original works.  So the most highly-related works get ranked the
>highest.  (For the algorithmically-minded, I found the article "XRANK:
>Ranked Keyword Search over XML Documents" helpful in understanding how
>the PageRank algorithm can be applied to other situations:
>http://www.cs.cornell.edu/~cbotev/XRank.pdf )  It would be interesting
>to see how such an approach compares to a simple tally of "number of
>versions".
>
>-Keith
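
For concreteness, here is a rough sketch of the approach Keith
describes: PageRank run over FRBR relationships treated as two-way
links. The entities and edges below are invented, and this is only
one plausible way to wire it up, not anyone's actual implementation.

    # Every entity links both "up" and "down" the
    # item -> manifestation -> expression -> work chain,
    # plus derivative-work links (so edges are two-way).
    links = {
        "item1":  ["manif1"],
        "item2":  ["manif1"],
        "item3":  ["manif2"],
        "manif1": ["item1", "item2", "expr1"],
        "manif2": ["item3", "expr1"],
        "expr1":  ["manif1", "manif2", "work1"],
        "work1":  ["expr1", "work2"],   # work2 is a derivative of work1
        "work2":  ["work1"],
    }

    DAMPING = 0.85
    rank = {node: 1.0 / len(links) for node in links}

    for _ in range(50):  # simple power iteration; 50 rounds is plenty here
        new_rank = {}
        for node in links:
            incoming = sum(rank[src] / len(links[src])
                           for src in links if node in links[src])
            new_rank[node] = (1 - DAMPING) / len(links) + DAMPING * incoming
        rank = new_rank

    # Works and expressions with many related records float to the top
    for node, score in sorted(rank.items(), key=lambda kv: -kv[1]):
        print(node, round(score, 3))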