I played around a bit with the library cataloging interface. The results
remind me of a paper written by a high school student who didn't do any
homework; the cites have one or two of the key words in them, but aren't
relevant to the actual question. The results also remind me of this
Saturday Night Live sketch in which contestants have to answer like
that kind of student:
https://www.youtube.com/watch?v=e0HGEZXTy8Y
When I asked about the concept of Work in cataloging, I got an answer
about working in libraries.
I know that this is fascinating technology, and it may eventually result
in useful answers, but I think the most interesting study today would be
in HOW it gets things wrong. A lot of that will have to do with the fact
that language is amazingly imprecise (and our brains seem to work around
that).
I'm not trying to rain on anyone's parade, and experimenting with this
is valuable, but as librarians I think we really need to harshly
evaluate its relationship to facts.
kc
On 5/10/24 10:34 AM, Eric Lease Morgan wrote:
> Parthasarathi Mukhopadhyay <[log in to unmask]> wrote:
>
>> ...However, one interesting point to be mentioned here is the effect of prompt
>> engineering on a RAG pipeline. When I ask the same questions as Simon did
>> on the same set of documents in a similar kind of pipeline with prompt
>> engineering, the result shows some differences (see additional system
>> prompt in the snapshot):
>>
>> --
>> Parthasarathi Mukhopadhyay
>
> Yes, when it comes to generative-AI, prompt engineering is a real thing. Prompts are akin to commands given to a large-language model, and different large-language models expect differently structured prompts. The prompt I have been using in my proof-of-concept applications has this form:
>
> Context information is below.
> ---------------------
> {context_str}
> ---------------------
> Given the context information and not prior knowledge, answer the query
> Write the answer in the style of {speaker} and intended for {audience}.
> Query: {query_str}
> Answer:
>
> Where the placeholders (the things in curly braces) are replaced with values from the interface. For example, {context_str} is the content of the documents whose embeddings point in (nearly) the same direction as the query's embedding. The {speaker} placeholder might be "a second grader", "a librarian", "a professor emeritus", etc. The same is true for {audience}. The value of {query_str} is whatever the user (I hate that word) entered in the interface. The prompt is also where one inserts things like the results of the previous interaction, instructions on what to do if there is very little context, etc. Prompt engineering is catch-as-catch-can. Once a prompt is completed, it is given as input to the large-language model for processing -- text generation.
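>
> For illustration only, here is a minimal sketch of how such a template might be filled using plain Python string formatting; the function and variable names are just placeholders, not the exact code from my applications:
>
> # a minimal sketch: fill a RAG prompt template with plain Python
> # send_to_llm() is a stand-in for whatever model API is actually used
> PROMPT_TEMPLATE = """Context information is below.
> ---------------------
> {context_str}
> ---------------------
> Given the context information and not prior knowledge, answer the query
> Write the answer in the style of {speaker} and intended for {audience}.
> Query: {query_str}
> Answer:"""
>
> def build_prompt(context_str, speaker, audience, query_str):
>     # substitute the curly-brace placeholders with values from the interface
>     return PROMPT_TEMPLATE.format(
>         context_str=context_str,
>         speaker=speaker,
>         audience=audience,
>         query_str=query_str,
>     )
>
> prompt = build_prompt(
>     context_str="(text of the documents retrieved from the vector store)",
>     speaker="a librarian",
>     audience="a second grader",
>     query_str="What is a Work in cataloging?",
> )
> # send_to_llm(prompt)  # hand the completed prompt to the large-language model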
>
> Over the past week, I have created five different generative-AI chatbot interfaces, each with its own strengths and weaknesses:
>
> * climate change - https://5c0af9ffadb4b3d2ba.gradio.live
> * library cataloging - https://6a147d360a3fc1d7df.gradio.live
> * Jane Austen - https://e7053a831a40f92a86.gradio.live
> * children's literature - https://a10e1d2687be735f40.gradio.live
> * What's Eric Reading - https://e462cd2ac6d1e35d1c.gradio.live
>
> These interfaces use a thing called Gradio (https://www.gradio.app/) for I/O, and the public links are supposed to last only 72 hours, but all of them still seem to be active. Go figure.
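>
> For what it's worth, wiring a text-in/text-out function to a public Gradio interface takes only a few lines. The following is a generic sketch, not the code behind the chatbots above; the answer() function is a placeholder for the whole RAG pipeline:
>
> import gradio as gr
>
> def answer(query):
>     # a real pipeline would retrieve context, build the prompt, and call
>     # the large-language model; this placeholder just echoes the query
>     return "You asked: " + query
>
> demo = gr.Interface(fn=answer, inputs="text", outputs="text")
> # share=True generates the temporary *.gradio.live URL (nominally good for 72 hours)
> demo.launch(share=True)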
>
> Finally, today I saw an announcement for an AI4LAM Zoom meeting on the topic of RAG where three different investigations will be presented:
>
> * Kristi Mukk and Matteo Cargnelutti (Harvard Library Innovation Lab), Warc-GPT
> * Daniel Hutchinson (Belmont Abbey College), the Nicolay project
> * Antoine de Sacy and Adam Faci (HumaNum Lab), the Isidore project
>
> See the Google Doc for details: https://bit.ly/3QEzQ6f
>
> --
> Eric Morgan
> University of Notre Dame
--
Karen Coyle
[log in to unmask]
http://kcoyle.net