On Sep 19, 2019, at 11:48 AM, Tim Spalding <[log in to unmask]> wrote:
> I wonder if anyone has thoughts on the best way to identify the source
> of summary/description data (520s) across a large corpus of MARC
> records?
>
> My primary goal is to distinguish between more neutral,
> librarian-written summaries, and the more promotional summaries
> derived from publisher sources, whether typed in from flap copy or
> produced by ONIX-MARC conversion. I can see a number of uses for this
> distinction; one is that members of LibraryThing much prefer short,
> neutral descriptions, and abhor the lengthy purple prose of many
> publisher descriptions...
>
> ...I could compare the MARC descriptions I have with similar data from
> Ingram, Amazon and Bowker, which (mostly) come from publishers. If
> they match, it's probably publisher provided. (All this ignores
> summaries that come from non-library, non-publisher sources.)
>
> 100% accuracy will no doubt elude me, but if I can identify a large
> set of both publisher and non-publisher, I can perhaps use them as a
> training set for a Bayesian filter. It's probably that certain words
> mark something out as publisher-derived—"much-anticipated,"
> "bestselling," "seminal," etc.
Tim, yours is a perfect example of a supervised machine learning classification process. The process works very much like your computer's spam filter. Here's how:
  1. collect a set of data that you know is library-written
  2. collect a set of data that you know is publisher-sourced
  3. count, tabulate, and vectorize the features of your data -- measure the data's characteristics and associate them with a collection
  4. model the data -- use any one of a number of classification algorithms, such as Naive Bayes, to associate the data with one collection or another
  5. optionally, test the accuracy of the model
  6. save the model
Once you have done this, you will have finished the training step in the process -- the "supervised" step of machine learning.
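For example, here is a minimal sketch of the six training steps. The two example strings and the file name (model.bin) are merely illustrations, and step #5 is skipped for brevity:

# a minimal sketch of the training steps; the data here is invented for illustration
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.naive_bayes import MultinomialNB
import pickle

# steps #1 and #2: collect data known to be library-written and data known to be publisher-sourced
data   = [ 'A novel recounting a whaling voyage.', 'The much-anticipated, bestselling masterpiece!' ]
labels = [ 'library', 'publisher' ]

# step #3: count, tabulate, and vectorize the features of the data
vectorizer = CountVectorizer()
vectors    = vectorizer.fit_transform( data )

# step #4: model the data with Naive Bayes
classifier = MultinomialNB()
classifier.fit( vectors, labels )

# step #6: save the model (and the vectorizer) for later use
with open( 'model.bin', 'wb' ) as handle : pickle.dump( ( vectorizer, classifier ), handle )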
You will now want to do some actual classification. Here's how:
  1. open the model
  2. open the data to be classified
  3. count, tabulate, and vectorize the data in the same way the model was created
  4. compare the vectors to the model to get classifications
  5. output the classifications
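And here is an equally minimal sketch of the classification steps, assuming the model.bin file created by the sketch above:

# a minimal sketch of the classification steps; model.bin is assumed to exist
import pickle

# step #1: open the model
with open( 'model.bin', 'rb' ) as handle : ( vectorizer, classifier ) = pickle.load( handle )

# steps #2 and #3: read the unclassified data and vectorize it in the same way the model was created
unclassified = [ 'The seminal, bestselling classic!' ]
vectors      = vectorizer.transform( unclassified )

# steps #4 and #5: compare the vectors to the model and output the classifications
print( classifier.predict( vectors ) )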
Given two or more directories filled with plain text (*.txt) files, the following script (train.py) creates a machine learning classification model based on Naive Bayes, a very popular and well-understood classification algorithm:
#!/usr/bin/env python

# train.py - given a file name and a list of directories, create a model for classifying similar items
# Eric Lease Morgan <[log in to unmask]>

# require
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.model_selection import train_test_split
from sklearn.naive_bayes import MultinomialNB
import glob, os, pickle, sys

# sanity check
if len( sys.argv ) < 4 :
    sys.stderr.write( 'Usage: ' + sys.argv[ 0 ] + " <model> <directory> <another directory> [<another directory> ...]\n" )
    quit()

# get the desired name of the output file
model = sys.argv[ 1 ]

# initialize the data and its associated labels
data        = []
labels      = []
directories = []

# get the directories to process
for i in range( 2, len( sys.argv ) ) : directories.append( sys.argv[ i ] )

# process each given directory
for directory in directories :

    # find all the text files and get the directory's name
    files = glob.glob( directory + "/*.txt" )
    label = os.path.basename( directory )

    # process each file
    for file in files :

        # open the file, read it, and update the lists of texts and labels
        with open( file, 'r' ) as handle :
            data.append( handle.read() )
            labels.append( label )

# divide the data/labels into training and testing sets
data_train, data_test, labels_train, labels_test = train_test_split( data, labels )

# vectorize the training data
vectorizer = CountVectorizer( stop_words='english' )
data_train = vectorizer.fit_transform( data_train )

# model the training data and associated labels
classifier = MultinomialNB()
classifier.fit( data_train, labels_train )

# vectorize the test set and generate classifications
data_test       = vectorizer.transform( data_test )
classifications = classifier.predict( data_test )

# calculate and output accuracy
count = 0
for i in range( len( classifications ) ) :
    if classifications[ i ] == labels_test[ i ] : count += 1
print( "  Accuracy: %s%%\n" % ( int( ( count * 1.0 ) / len( classifications ) * 100 ) ) )

# save and quit
with open( model, 'wb' ) as handle : pickle.dump( ( vectorizer, classifier ), handle )
quit()
Given a previously created model plus a directory of files to be classified, the following script (classify.py) will output label/filename pairs denoting how each file might be... classified:
#!/usr/bin/env python

# classify.py - given a previously generated classification model, classify a set of documents
# Eric Lease Morgan <[log in to unmask]>

# require
import glob, os, pickle, sys

# sanity check
if len( sys.argv ) != 3 :
    sys.stderr.write( 'Usage: ' + sys.argv[ 0 ] + " <model> <directory>\n" )
    quit()

# get input
model     = sys.argv[ 1 ]
directory = sys.argv[ 2 ]

# load the model
with open( model, 'rb' ) as handle : ( vectorizer, classifier ) = pickle.load( handle )

# process each unclassified file
for file in glob.glob( directory + "/*.txt" ) :

    # open, read, classify, and output
    with open( file, 'r' ) as handle : classification = classifier.predict( vectorizer.transform( [ handle.read() ] ) )
    print( "\t".join( ( classification[ 0 ], os.path.basename( file ) ) ) )

# done
quit()
As an example, I created a set of four directories, each containing a number of books written by various American authors. Running train.py against the directories results in a model file named model.bin:
$ ./train.py ./model.bin ./melville ./hawthorne ./emerson ./longfellow
Run a number of times, the accuracy of the model seems to range between 80% and 100%.
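For what it is worth, a single train/test split makes the accuracy estimate noisy; cross-validation averages the score over several splits. Here is a sketch, assuming the data and labels lists already built in train.py, and assuming each label is represented by at least five texts:

# a sketch of five-fold cross-validation; data and labels are assumed to come from train.py
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.model_selection import cross_val_score
from sklearn.naive_bayes import MultinomialNB
from sklearn.pipeline import make_pipeline

# bundle vectorizing and modeling so each fold is vectorized independently
pipeline = make_pipeline( CountVectorizer( stop_words='english' ), MultinomialNB() )

# score the pipeline against five different train/test splits and report the mean
scores = cross_val_score( pipeline, data, labels, cv=5 )
print( scores.mean() )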
I then created a directory containing files to be classified. None of these files were included in the training process, but they all were written by one (and only one) of the American authors. I then ran classify.py, and it output label/filename pairs:
$ ./classify.py ./model.bin ./library
longfellow longfellow-01.txt
longfellow longfellow-02.txt
melville melville-02.txt
hawthorne hawthorne-01.txt
melville melville-03.txt
hawthorne melville-01.txt
hawthorne hawthorne-02.txt
hawthorne hawthorne-03.txt
emerson emerson.txt
As you (may or may not) have noticed, the script worked almost perfectly: eight of the nine files were classified correctly, but melville-01.txt was mislabeled as hawthorne.
Code4Lib community, you too can run these scripts. I have put them, plus sample data, in a zip file, and I made it temporarily available at the following URL:
http://dh.crc.nd.edu/tmp/classification.zip
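Finally, to apply any of this to Tim's original problem, the 520 summaries must first be exported as the sorts of plain text files the scripts expect. Here is a hypothetical sketch, assuming the pymarc library, a file of binary MARC records named records.mrc, and summaries living in subfield $a:

# a hypothetical sketch: write each record's 520$a summaries to a plain text file
from pymarc import MARCReader
import os

# create a place to put the summaries
os.makedirs( 'summaries', exist_ok=True )

# process each record; records.mrc is an assumption, not part of the zip file above
with open( 'records.mrc', 'rb' ) as handle :
    for ( i, record ) in enumerate( MARCReader( handle ) ) :

        # gather all the 520$a values and save them, one file per record
        summaries = [ value for field in record.get_fields( '520' ) for value in field.get_subfields( 'a' ) ]
        if summaries :
            with open( 'summaries/%06d.txt' % i, 'w' ) as output : output.write( ' '.join( summaries ) )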
I will elaborate on this whole process in a subsequent posting...
--
Eric Lease Morgan
University of Notre Dame