UnicodeDecodeError: 'ascii' codec can't decode byte 0xe8 in position 9: ordinal not in range(128)
Having seen my fair share of these encoding errors in Python, I can speculate (without seeing the pymarc source code, so please don't hold me to this) that it's the Python code that isn't set up to handle the UTF-8 strings from your data source. In fact, the error indicates it's using the default 'ascii' codec rather than 'utf-8'; if it said "'utf-8' codec can't decode...", then I'd suspect a problem with the data instead.
If you send the full traceback (all the gobbledygook Python spews when it encounters an error) and the version of pymarc you're using to the program's author(s), they may be able to help you out further.
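A quick way to see which side is at fault, without pymarc at all, is to try decoding the suspect bytes directly. The sample string below is lifted from the record quoted further down the thread (the names and the exact byte string are illustrative):

```python
# Bytes taken from the 245$a of the record quoted below:
# 0xE8 followed by 'u' (0x75).
suspect = b"Die j\xe8udischer Weltpest"

# The default 'ascii' codec fails on any byte >= 0x80 -- this is the
# exception quoted at the top of the thread.
try:
    suspect.decode("ascii")
except UnicodeDecodeError as e:
    print(e)

# But these bytes are not valid UTF-8 either: 0xE8 opens a three-byte
# UTF-8 sequence, and 0x75 ('u') is not a valid continuation byte.
try:
    suspect.decode("utf-8")
except UnicodeDecodeError as e:
    print(e)
```

If the second decode also fails, the record body cannot be UTF-8 no matter what the software does, which points at the data rather than (only) the codec choice.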
From: Code for Libraries [[log in to unmask]] on behalf of Godmar Back [[log in to unmask]]
Sent: Thursday, March 08, 2012 1:02 PM
To: [log in to unmask]
Subject: [CODE4LIB] Q.: MARC8 vs. MARC/Unicode and pymarc and misencoded III records
A few days ago, I showed pymarc to a group of technical librarians to
demonstrate how easily certain tasks can be scripted and automated.
Unfortunately, it blew up on me when I tried to write a record:
UnicodeDecodeError: 'ascii' codec can't decode byte 0xe8 in position 9:
ordinal not in range(128)
Investigation revealed this culprit:
=LDR 00916nam a2200241I 4500
=100 10$aEsser, Hermann,$d1900-
=245 14$aDie j<E8>udischer Weltpest ;$bjudend<E1>ammerung auf dem
Erdball,$cvon Hermann Esser.
=260 0\$aM<E8>unchen,$bZentralverlag der N S D A P., F. Eher ahchf.,$c1939.
=300 \\$a243  p.$c23 cm.
=533 \\$aAlso available as electronic reproduction.$bChicago :$cCenter for
=650 \0$aJewish question.
=700 12$aBierbrauer, Johann Jacob,$d1705-1760?
=710 2\$aCenter for Research Libraries (U.S.)
=856 41$uhttp://dds.crl.edu/CRLdelivery.asp?tid=10538$zOnline version
=998 \\$awww$b08-30-10$cm$dz$e-$fger$ggw $h4$i0
Leader position 9 is set to 'a', so the record should contain
UTF-8-encoded Unicode, but the E8 75 in the 245$a appears to be ANSEL,
where 'E8' denotes the umlaut preceding the lowercase 'u' (0x75).
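The ordering difference is easy to demonstrate: ANSEL places a combining diacritic before its base letter, while Unicode places the combining character after it. The sketch below is not a real MARC-8 converter (the table holds a single mapping, just the 0xE8 umlaut); a real conversion should use pymarc's marc8 machinery or a tool like yaz-marcdump:

```python
import unicodedata

# Illustrative one-entry table: ANSEL 0xE8 is the combining
# umlaut/diaeresis, which precedes its base letter.
ANSEL_COMBINING = {0xE8: "\u0308"}  # U+0308 COMBINING DIAERESIS

def ansel_fragment_to_unicode(raw: bytes) -> str:
    """Reorder ANSEL diacritic-before-base to Unicode base-before-combining,
    for the tiny subset of ANSEL covered by ANSEL_COMBINING."""
    out = []
    pending = None
    for b in raw:
        if b in ANSEL_COMBINING:
            pending = ANSEL_COMBINING[b]  # hold until we see the base letter
        else:
            out.append(chr(b))
            if pending:
                out.append(pending)  # emit combining mark after the base
                pending = None
    return unicodedata.normalize("NFC", "".join(out))

print(ansel_fragment_to_unicode(b"M\xe8unchen"))  # prints München
```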
To me, this record looks misencoded... am I correct here? There are
thousands of such records in the data set I'm dealing with, which was
obtained using the 'Data Exchange' feature of III's Millennium system.
My question is how others, especially pymarc users dealing with III
records, handle this issue, and what other
experiences/hints/practices/kludges exist in this area.
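One kludge worth sketching: ignore what the leader claims and check whether the record bytes actually decode as UTF-8, treating failures as MARC-8. Depending on your pymarc version, MARCReader's to_unicode/force_utf8 flags may then let you re-read such records with conversion applied; the sketch below only shows the detection step, and the helper names and sample bytes are made up for illustration:

```python
def looks_like_utf8(raw: bytes) -> bool:
    """True if the raw record bytes are valid UTF-8,
    regardless of what Leader/09 claims."""
    try:
        raw.decode("utf-8")
        return True
    except UnicodeDecodeError:
        return False

def leader_says_utf8(raw: bytes) -> bool:
    # Leader/09 == 'a' flags the record as Unicode (UTF-8).
    return len(raw) > 9 and raw[9:10] == b"a"

# A record like the one above: the leader claims 'a', but the body
# holds MARC-8/ANSEL bytes such as 0xE8 0x75.
fake_record = b"00916nam a2200241I 4500" + b"...M\xe8unchen..."

if leader_says_utf8(fake_record) and not looks_like_utf8(fake_record):
    print("Leader/09 lies: treat this record as MARC-8")
```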