If anyone out there is really making a case for FRBR based on whether or
not it saves a few characters in a database, well, she should give up the
library business and go make money off her time machine. Maybe -- *maybe* --
15 years ago. But I have to say, I'm sitting on 10m records right now, and
would happily figure out how to deal with double or triple the space
requirements for added utility. Space is always a consideration, but it's
slipped down into about 15th place on my Giant List of Things to Worry
About.
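
If anybody wants to reproduce this kind of measurement on their own
records, here's a rough, untested sketch of the general idea -- average
byte length of the 245 with everything but $a and $b stripped out. It
assumes pymarc and a file of binary MARC records ('records.mrc' is just a
stand-in for whatever you actually have); swap in '240' or other tags to
taste.

    from pymarc import MARCReader

    total_bytes = 0
    field_count = 0

    with open('records.mrc', 'rb') as fh:
        for record in MARCReader(fh):
            if record is None:  # skip records pymarc couldn't parse
                continue
            for field in record.get_fields('245'):
                # keep only $a and $b, dropping $c (statement of
                # responsibility) and any other subfields
                title = ' '.join(field.get_subfields('a', 'b'))
                total_bytes += len(title.encode('utf-8'))
                field_count += 1

    if field_count:
        print('average 245 $a$b: %.1f bytes' % (total_bytes / field_count))
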
On Wed, Oct 16, 2013 at 3:49 PM, Karen Coyle <[log in to unmask]> wrote:
> On 10/16/13 12:33 PM, Kyle Banerjee wrote:
>
>> BTW, I don't think the 240 is a good substitute, as its content is very
>> different from the regular title: that's where you'll find music, laws,
>> selections, and translations, and it's totally littered with subfields.
>> The 70.1 figure from the stripped 245 is probably closer to the mark.
>>
>
> Yes, you are right, especially for the particular purpose I am looking at.
> Thanks.
>
>
>
>> IMO, what you stand to gain in functionality, maintenance, and analysis is
>> much more interesting than potential space gains/losses.
>>
>
> Yes, obviously. But there exists an apology for FRBR that says that it
> will save cataloger time and will be more efficient in a database. I think
> it's worth taking a look at those assumptions. If there is a way to measure
> functionality, maintenance, etc., then we should measure it, for sure.
>
> kc
>
>
>
>> kyle
>>
>>
>>
>>
>> On Wed, Oct 16, 2013 at 12:00 PM, Karen Coyle <[log in to unmask]> wrote:
>>
>>> Thanks, Roy (and others!)
>>>
>>> It looks like the 245 includes the $c - dang! I should have been more
>>> specific. I'm mainly interested in the title, which is $a $b -- I'm
>>> looking at the gains and losses in bytes should one implement FRBR. As a
>>> hedge, could I ask what you've got for the 240? That may be closer to
>>> reality.
>>>
>>> kc
>>>
>>>
>>> On 10/16/13 10:57 AM, Roy Tennant wrote:
>>>
>>>> I don't even have to fire it up. That's a statistic that we generate
>>>> quarterly (albeit via Hadoop). Here you go:
>>>>
>>>> 100 - 30.3
>>>> 245 - 103.1
>>>> 600 - 41
>>>> 610 - 48.8
>>>> 611 - 61.4
>>>> 630 - 40.8
>>>> 648 - 23.8
>>>> 650 - 35.1
>>>> 651 - 39.6
>>>> 653 - 33.3
>>>> 654 - 38.1
>>>> 655 - 22.5
>>>> 656 - 30.6
>>>> 657 - 27.4
>>>> 658 - 30.7
>>>> 662 - 41.7
>>>>
>>>> Roy
>>>>
>>>>
>>>> On Wed, Oct 16, 2013 at 10:38 AM, Sean Hannan <[log in to unmask]> wrote:
>>>>
>>>>> That sounds like a request for Roy to fire up the ole OCLC Hadoop.
>>>>>
>>>>> -Sean
>>>>>
>>>>>
>>>>>
>>>>> On 10/16/13 1:06 PM, "Karen Coyle" <[log in to unmask]> wrote:
>>>>>
>>>>>> Anybody have data for the average length of specific MARC fields in
>>>>>> some reasonably representative database? I mainly need 100, 245, 6xx.
>>>>>>
>>>>>> Thanks,
>>>>>> kc
>>>>>>
>>>>>> --
>>>>>> Karen Coyle
>>>>>> [log in to unmask] http://kcoyle.net
>>>>>> m: 1-510-435-8234
>>>>>> skype: kcoylenet
>>>>>>
>>> --
>>> Karen Coyle
>>> [log in to unmask] http://kcoyle.net
>>> m: 1-510-435-8234
>>> skype: kcoylenet
>>>
>>>
> --
> Karen Coyle
> [log in to unmask] http://kcoyle.net
> m: 1-510-435-8234
> skype: kcoylenet
>
--
Bill Dueber
Library Systems Programmer
University of Michigan Library