Thanks, Eric, for all the links (I am exploring them, and they are still
accessible), and for the AI4LAM Zoom meeting.
Best regards
On Fri, May 10, 2024 at 11:05 PM Eric Lease Morgan <
[log in to unmask]> wrote:
> Parthasarathi Mukhopadhyay <[log in to unmask]> wrote:
>
> > ...However, one interesting point to be mentioned here is the effect of
> prompt
> > engineering on a RAG pipeline. When I ask the same questions as Simon did
> > on the same set of documents in a similar kind of pipeline with prompt
> > engineering, the result shows some differences (see additional system
> > prompt in the snapshot):
> >
> > --
> > Parthasarathi Mukhopadhyay
>
>
> Yes, when it comes to generative AI, prompt engineering is a real thing.
> Prompts are akin to commands given to a large-language model, and different
> large-language models expect different prompts. The prompt I have been using
> in my proof-of-concept applications has this form:
>
> Context information is below.
> ---------------------
> {context_str}
> ---------------------
> Given the context information and not prior knowledge, answer the query.
> Write the answer in the style of {speaker} and intended for {audience}.
> Query: {query_str}
> Answer:
>
> Where the placeholders (the things in curly braces) are replaced with
> values from the interface. For example, {context_str} is the content of
> documents pointing in the same vectored direction as the query. The
> {speaker} placeholder might be "a second grader", "a librarian", "a
> professor emeritus", etc. The same thing is true for {audience}. The value
> of {query_str} is whatever the user (I hate that word) entered in the
> interface. The prompt is where one inserts things like the results of the
> previous interaction, what to do if there is very little context, etc.
> Prompt engineering is catch-as-catch-can. Once a prompt is completed, it is
> given as input to the large-language model for processing -- text
> generation.
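The placeholder substitution described above can be sketched in a few lines of Python. This is only an illustration, assuming a plain str.format() fill; the message does not say how the template is actually populated, and the example values for speaker, audience, and query are hypothetical:

```python
# Prompt template quoted from the email above; the things in curly
# braces are placeholders to be filled in from the interface.
PROMPT_TEMPLATE = """Context information is below.
---------------------
{context_str}
---------------------
Given the context information and not prior knowledge, answer the query.
Write the answer in the style of {speaker} and intended for {audience}.
Query: {query_str}
Answer:"""

def build_prompt(context_str, speaker, audience, query_str):
    """Fill the placeholders; the completed prompt is what would be
    given to the large-language model for text generation."""
    return PROMPT_TEMPLATE.format(
        context_str=context_str,
        speaker=speaker,
        audience=audience,
        query_str=query_str,
    )

# Hypothetical example values, mirroring those mentioned in the email.
prompt = build_prompt(
    context_str="(retrieved passages pointing in the same vectored direction)",
    speaker="a librarian",
    audience="second graders",
    query_str="What is retrieval-augmented generation?",
)
print(prompt)
```

In a real RAG pipeline, context_str would come from the retrieval step, but the substitution mechanics are the same.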
>
> Over the past week, I have created five different generative-AI chatbot
> interfaces, each with its own strengths and weaknesses:
>
> * climate change - https://5c0af9ffadb4b3d2ba.gradio.live
> * library cataloging - https://6a147d360a3fc1d7df.gradio.live
> * Jane Austen - https://e7053a831a40f92a86.gradio.live
> * children's literature - https://a10e1d2687be735f40.gradio.live
> * What's Eric Reading - https://e462cd2ac6d1e35d1c.gradio.live
>
> These interfaces use a thing called Gradio (https://www.gradio.app/) for
> I/O, and they are supposed to last 72 hours, but all of them still seem to
> be active. Go figure.
>
> Finally, today I saw an announcement for an AI4LAM Zoom meeting on the topic
> of RAG, where three different investigations will be presented:
>
> * Kristi Mukk and Matteo Cargnelutti (Harvard Library Innovation Lab),
> Warc-GPT
> * Daniel Hutchinson (Belmont Abbey College), the Nicolay project
> * Antoine de Sacy and Adam Faci (HumaNum Lab), the Isidore project
>
> See the Google Doc for details: https://bit.ly/3QEzQ6f
>
> --
> Eric Morgan
> University of Notre Dame
>