Semantic Chunking for RAG

James Briggs
Semantic chunking for RAG allows us to build more concise chunks for our RAG pipelines, chatbots, and AI agents. We can pair this with various LLMs and embedding models from OpenAI, Cohere, Anthropic, etc., and libraries like LangChain or CrewAI to build potentially improved Retrieval Augmented Generation (RAG) pipelines.

📌 Code: github.com/pinecone-io/examples/blob/master/learn/…
🚩 Intro to Semantic Chunking: www.aurelio.ai/learn/semantic-chunkers-intro
🌲 Subscribe for Latest Articles and Videos: www.pinecone.io/newsletter-signup/
👋🏼 AI Consulting: aurelio.ai/
👾 Discord: discord.gg/c5QtDB9RAP
Twitter: twitter.com/jamescalam
LinkedIn: www.linkedin.com/in/jamescalam/

00:00 Semantic Chunking for RAG
00:45 What is Semantic Chunking
03:31 Semantic Chunking in Python
12:17 Adding Context to Chunks
13:41 Providing LLMs with More Context
18:11 Indexing our Chunks
20:27 Creating Chunks for the LLM
27:18 Querying for Chunks

#artificialintelligence #ai #nlp #chatbot #openai
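For orientation, the core idea can be sketched in a few lines of Python. This is a minimal illustration only, not the notebook linked above: it swaps in sentence-transformers for the OpenAI embeddings and the semantic-chunkers library used in the video, and the semantic_chunks helper, the all-MiniLM-L6-v2 model, and the 0.5 similarity threshold are arbitrary choices made for this example.

# Minimal semantic-chunking sketch (not the video's code): embed each
# sentence, then start a new chunk wherever the cosine similarity between
# adjacent sentences drops below a threshold.
import numpy as np
from sentence_transformers import SentenceTransformer

def semantic_chunks(sentences: list[str], threshold: float = 0.5) -> list[list[str]]:
    model = SentenceTransformer("all-MiniLM-L6-v2")
    embs = model.encode(sentences, normalize_embeddings=True)  # unit-length vectors
    chunks, current = [], [sentences[0]]
    for i in range(1, len(sentences)):
        sim = float(np.dot(embs[i - 1], embs[i]))  # cosine similarity of neighbours
        if sim < threshold:  # likely topic shift -> close the current chunk
            chunks.append(current)
            current = []
        current.append(sentences[i])
    chunks.append(current)
    return chunks

print(semantic_chunks([
    "Semantic chunking groups sentences by meaning.",
    "It helps RAG pipelines retrieve coherent context.",
    "Pinecone is a vector database.",
]))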
Published 5 months ago, on 1403/02/15.
24,009 views