Superfast RAG with Llama 3 and Groq

James Briggs
6.3K views · 1 month ago
The Groq API provides access to Language Processing Units (LPUs) that enable incredibly fast LLM inference. The service offers several LLMs, including Meta's Llama 3. In this video, we'll implement a RAG pipeline using Llama 3 70B via Groq, an open-source E5 encoder, and the Pinecone vector database.
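The pipeline described above (embed the query with E5, retrieve from Pinecone, generate with Llama 3 70B on Groq) can be sketched roughly as below. This is a hedged outline, not the video's exact code: the index, embedding function, Groq client, `top_k` value, and prompt wording are all illustrative assumptions; only the `llama3-70b-8192` model id reflects Groq's Llama 3 70B naming at the time.

```python
# Sketch of the RAG flow described above. The retrieval and generation
# steps are wrapped in a function taking pre-built clients; only the
# prompt construction is pure Python and runs stand-alone.

def build_rag_prompt(query: str, contexts: list[str]) -> str:
    """Stitch retrieved chunks into a single grounded prompt for the LLM."""
    context_block = "\n---\n".join(contexts)
    return (
        "Answer the question using only the context below.\n\n"
        f"CONTEXT:\n{context_block}\n\n"
        f"QUESTION: {query}\n"
        "ANSWER:"
    )

def rag_answer(query: str, index, embed, groq_client) -> str:
    """Retrieve top-k chunks from Pinecone, then generate with Llama 3 70B.

    `index` is a Pinecone index handle, `embed` maps text -> vector, and
    `groq_client` is a groq.Groq client (all assumed, per the video setup).
    """
    # E5 expects a "query: " prefix on search text (per the model card)
    xq = embed("query: " + query)
    res = index.query(vector=xq, top_k=5, include_metadata=True)
    contexts = [m["metadata"]["text"] for m in res["matches"]]
    chat = groq_client.chat.completions.create(
        model="llama3-70b-8192",  # Groq's Llama 3 70B model id
        messages=[{"role": "user",
                   "content": build_rag_prompt(query, contexts)}],
    )
    return chat.choices[0].message.content

print(build_rag_prompt("What is Groq?", ["Groq builds LPUs for fast inference."]))
```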

📌 Code:
https://github.com/pinecone-io/exampl...

🌲 Subscribe for Latest Articles and Videos:
https://www.pinecone.io/newsletter-si...

👋🏼 AI Consulting:
https://aurelio.ai

👾 Discord:
discord

Twitter: jamescalam
LinkedIn: jamescalam

#artificialintelligence #llama3 #groq

00:00 Groq and Llama 3 for RAG
00:37 Llama 3 in Python
04:25 Initializing e5 for Embeddings
05:56 Using Pinecone for RAG
07:24 Why We Concatenate Title and Content
10:15 Testing RAG Retrieval Performance
11:28 Initializing the Connection to the Groq API
12:24 Generating RAG Answers with Llama 3 70B
14:37 Final Points on Why Groq Matters
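The "Why We Concatenate Title and Content" chapter concerns how chunk text is prepared before embedding. A minimal sketch of that preparation, assuming the standard E5 convention of "passage:" and "query:" prefixes (the field names are illustrative):

```python
# Embedding-side text preparation. E5 models expect "passage: " /
# "query: " prefixes, and concatenating the title with the content gives
# the encoder extra topical signal for each stored chunk.

def format_passage(title: str, content: str) -> str:
    """Text embedded and stored in the vector index for one chunk."""
    return f"passage: {title}\n{content}"

def format_query(question: str) -> str:
    """Text embedded at query time before searching the index."""
    return f"query: {question}"

doc = {"title": "Llama 3", "content": "Meta's open-weights LLM family."}
print(format_passage(doc["title"], doc["content"]))
print(format_query("Which models does Groq serve?"))
```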
Published a month ago, on 1403/04/12.