On Mar 1, 2024, at 4:01 PM, Eric Lease Morgan <[log in to unmask]> wrote:
> RAG (retrieval-augmented generation) -- as one way to implement generative AI -- is something easy for us libraries to get our heads around because the process is very much like the implementation of our discovery systems: 1) create content, 2) index content, 3) query content, 4) return response... For example, I collected about 136 journal articles on the topic of cataloging...
I have temporarily implemented a public interface to a generative-AI chatbot, as outlined above:
https://6a147d360a3fc1d7df.gradio.live
Attached is a screen shot of what it looks like.
In short, I curated a collection of 136 articles on the topic of cataloging. I used a model called "all-minilm" to index/vectorize the collection, and I am using Llama 2 to generate the responses.
One thing that sets this chatbot apart from many others is that it returns references to the articles from which each response was generated.
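For anybody who wants to see the moving parts, below is a minimal Python sketch of the same four-step pattern (create, index, query, return), not the code behind the chatbot itself. It assumes the sentence-transformers package for the all-MiniLM embeddings and uses a hypothetical generate_with_llama2() stub where a real call to Llama 2 would go; the toy documents are made up for illustration.

    # a minimal RAG sketch: create, index, query, return
    import numpy as np
    from sentence_transformers import SentenceTransformer

    # 1) create content: a toy corpus standing in for a curated collection of articles
    documents = {
        "article-01.txt": "An overview of RDA and descriptive cataloging practice...",
        "article-02.txt": "Linked data approaches to authority control...",
    }

    # 2) index content: vectorize each article with an all-MiniLM embedding model
    embedder = SentenceTransformer("all-MiniLM-L6-v2")
    doc_ids = list(documents)
    doc_vectors = embedder.encode(
        [documents[d] for d in doc_ids], normalize_embeddings=True
    )

    def generate_with_llama2(prompt: str) -> str:
        """Hypothetical stand-in for a call to Llama 2 (for example, via a local server)."""
        raise NotImplementedError

    # 3) query content: embed the question and retrieve the closest articles
    def ask(question: str, k: int = 3):
        q_vec = embedder.encode([question], normalize_embeddings=True)[0]
        scores = doc_vectors @ q_vec              # cosine similarity (vectors are normalized)
        top = np.argsort(scores)[::-1][:k]
        sources = [doc_ids[i] for i in top]
        context = "\n\n".join(documents[s] for s in sources)

        # 4) return response: an answer grounded in the retrieved context,
        #    plus the references from which the answer was generated
        prompt = f"Answer using only this context:\n{context}\n\nQuestion: {question}"
        return generate_with_llama2(prompt), sources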
The implementation is generously supported by folks working for Amazon Web Services, and its purpose is to learn and explore how generative AI can be used effectively in Library Land.
--
Eric Morgan
Navari Family Center for Digital Scholarship
University of Notre Dame