I tried to populate metadata using entity extraction in a project some
years ago and encountered the same issues as you.

To cut straight to the chase, regex normalization routines based on
eyeballing several thousand entries worked as well as anything. We were
unable to come up with an effective weighting system, as I'll explain below,
so the practical outcome was a program that recommended access points that
humans would have to evaluate.
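To give a flavor of what I mean, the rules looked something like the sketch
below. The patterns here are invented for illustration -- the real ones were
built up from eyeballing our own data:

import re

# Illustrative rules only; the real ones came from inspecting thousands of
# entries in our particular data set.
NORMALIZATION_RULES = [
    (re.compile(r'\s+'), ' '),                            # collapse runs of whitespace
    (re.compile(r'[.,;:]+$'), ''),                        # strip trailing punctuation
    (re.compile(r'\b(?:Mr|Mrs|Ms|Dr)\.?\s+', re.I), ''),  # drop honorifics
    (re.compile(r'\s*\(.*?\)\s*'), ' '),                  # drop parenthetical asides
]

def normalize_name(raw):
    """Apply each rule in order and return a tidied name string."""
    name = raw.strip()
    for pattern, replacement in NORMALIZATION_RULES:
        name = pattern.sub(replacement, name)
    return name.strip()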

On Fri, Mar 29, 2019 at 11:13 AM Eric Lease Morgan <[log in to unmask]> wrote:

> My question is now two-fold. First, how do I go about normalizing
> ("cleaning") my names. I could use OpenRefine to normalize things, but
> unfortunately, OpenRefine does not seem to scale very well when it comes to
> .5 million rows of data. OpenRefine's coolest solution for normalizing is
> its clustering functions, and I believe I can rather easily implement a
> version of the Levenshtein algorithm (one of the clustering functions) in
> any number of computer languages including Python or SQL. Using Levenshtein
> I can then fix the various mis-spellings.
>

Since you are using dirty OCR data, Levenshtein may be helpful. However,
I'd still be inclined to look at and preprocess a lot of data before
running Levenshtein, because I think there's a real risk of
different entities getting normalized together.  I wouldn't consider using
anything other than a purpose-written program designed around your specific
needs and data. For example, it's way easier to tune a program to ignore
spurious data than to accomplish this with a general-purpose tool.
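To illustrate the kind of purpose-written pass I have in mind: normalize
first, then only merge names whose edit distance is small relative to their
length. The 0.2 threshold below is a guess for the sake of the example, not a
recommendation:

def levenshtein(a, b):
    """Plain dynamic-programming edit distance; fine for short name strings."""
    if len(a) < len(b):
        a, b = b, a
    previous = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        current = [i]
        for j, cb in enumerate(b, 1):
            current.append(min(previous[j] + 1,                 # deletion
                               current[j - 1] + 1,              # insertion
                               previous[j - 1] + (ca != cb)))   # substitution
        previous = current
    return previous[-1]

def merge_names(names, max_ratio=0.2):
    """Greedily map each name onto an earlier 'canonical' name when the edit
    distance is small relative to the longer name; otherwise start a new
    cluster."""
    canonical = []
    mapping = {}
    for name in sorted(names, key=len, reverse=True):
        for c in canonical:
            if levenshtein(name, c) <= max_ratio * max(len(name), len(c)):
                mapping[name] = c
                break
        else:
            canonical.append(name)
            mapping[name] = name
    return mapping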


> Second, assuming my entities have been normalized, which ones do I
> actually include as metadata? I could simply remove each of the duplicate
> entities associated with a given file, and then add them all. This results
> in a whole lot of names, and just because a name is mentioned one time does
> not necessarily justify inclusion as metadata. I could then say, "If a name
> is mentioned more than once, then it is justified for inclusion", but this
> policy breaks down if a document is really long; the document is still not
> "about" that name.
>

Definitely don't include everything -- this simply duplicates keyword
search functionality in a less effective way.


> Instead, I think I need to implement some sort of weighting system. "Given
> all the entities extracted from a set of documents, only include those
> names which are (statistically) significant." I suppose I could implement
> some version of TF/IDF to derive weight and significance. Hmmmm...
>

This is basically what I found most effective -- which unfortunately wasn't
very. Aside from the problem you observed, that nonhuman entities frequently
share names with humans, I'm sure you've also discovered that the same
human will be referenced in many nonunique ways within the same document. Add
to that the fact that frequency predicts relevance much less well than we'd
like, and the end result is that you miss important stuff and catch a lot of
noise.

Although I thought it a pretty good way to recommend access points, I
didn't feel the accuracy was anywhere near high enough to go on full auto.
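
For what it's worth, the weighting itself was not complicated; conceptually it
was something like the sketch below, with a hand-tuned cutoff (the 0.1 here is
made up, and the function names are mine, not from any particular library):

import math
from collections import Counter

def score_entities(docs_entities, min_score=0.1):
    """docs_entities: mapping of document id -> list of (already normalized)
    entity strings extracted from that document. Returns, per document, the
    entities whose TF-IDF-style score clears a hand-tuned threshold."""
    n_docs = len(docs_entities)
    # Document frequency: in how many documents does each entity appear?
    df = Counter()
    for entities in docs_entities.values():
        df.update(set(entities))

    recommendations = {}
    for doc_id, entities in docs_entities.items():
        counts = Counter(entities)
        total = sum(counts.values()) or 1
        scored = {}
        for entity, count in counts.items():
            tf = count / total
            idf = math.log(n_docs / df[entity])
            scored[entity] = tf * idf
        recommendations[doc_id] = [e for e, s in scored.items() if s >= min_score]
    return recommendations

Whatever came out of that was treated as a recommendation for a human to
accept or reject, never as final metadata.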

If you figure anything out, please do share. If someone can come up with a
better way of dealing with this kind of stuff, it would have a lot of
applications and help a lot of people.

kyle