RAG: Hosting LLMs in the Enterprise // Bernard Camus // IRL Meetup Bristol #51
In this brief look at the theory and practice of RAG architecture for large language models, Bernard shares insights into how LLMs work and how they are built. The talk lays a solid theoretical foundation before moving on to real-world use cases, their benefits and potential pitfalls, and how RAG can mitigate some of those issues, and it closes with a quick comparison of self-hosting versus API-hosted models. Watch it here
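For readers new to the pattern Bernard covers, here is a minimal sketch of the RAG retrieval step: documents are embedded, the query is embedded the same way, the closest chunks are retrieved, and they are prepended to the prompt sent to the LLM. The toy hashed bag-of-words embedding, the sample documents, and the helper names are illustrative assumptions, not the stack used in the talk.

```python
# Minimal RAG sketch (illustrative only; a real system would use a proper
# embedding model and a vector store rather than this toy setup).
import numpy as np

def embed(text: str, dim: int = 256) -> np.ndarray:
    """Toy hashed bag-of-words embedding, standing in for a real embedding model."""
    vec = np.zeros(dim)
    for token in text.lower().split():
        vec[hash(token) % dim] += 1.0
    norm = np.linalg.norm(vec)
    return vec / norm if norm else vec

# Hypothetical enterprise documents the LLM has never been trained on.
documents = [
    "Our VPN requires multi-factor authentication for all staff.",
    "Expense claims must be submitted within 30 days of purchase.",
    "Production databases are backed up nightly at 02:00 UTC.",
]
doc_vectors = np.stack([embed(d) for d in documents])

def retrieve(query: str, k: int = 2) -> list[str]:
    """Return the k documents most similar to the query (cosine similarity)."""
    scores = doc_vectors @ embed(query)
    return [documents[i] for i in np.argsort(scores)[::-1][:k]]

def build_prompt(query: str) -> str:
    """Ground the LLM prompt in retrieved context instead of relying on its weights."""
    context = "\n".join(retrieve(query))
    return f"Answer using only this context:\n{context}\n\nQuestion: {query}"

print(build_prompt("How often are the databases backed up?"))
```

The point of the pattern is in `build_prompt`: the model answers from retrieved, current, private context rather than from whatever happened to be in its training data, which is how RAG mitigates hallucination and staleness issues mentioned above.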