RAGStack Examples Index
This section contains examples of how to use RAGStack. We’re actively updating this section, so check back often!
Description | Colab | Documentation |
---|---|---|
Perform multi-modal RAG with LangChain, Astra DB Serverless, and a Google Gemini Pro Vision model. | | |
Build a simple RAG pipeline using NVIDIA AI Foundation Models. | | |
Build a hotel search application with RAGStack and Astra DB Serverless. | | Build a Hotel Search Application with RAGStack and Astra DB Serverless |
Vector search with the Maximal Marginal Relevance (MMR) algorithm (see the sketch after this table). | | |
Evaluate a RAG pipeline using LangChain’s QA Evaluator. | | |
Evaluate the response accuracy, token cost, and responsiveness of MultiQueryRAG and ParentDocumentRAG. | | |
Orchestrate the advanced FLARE retrieval technique in a RAG pipeline. | | |
Build a simple RAG pipeline using Unstructured and Astra DB Serverless. | | |
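For orientation, the MMR example above boils down to a pattern like the following minimal sketch. It assumes the `langchain-astradb` and `langchain-openai` packages, OpenAI embeddings, and placeholder environment variables and collection name; the notebook itself may use different models and settings.

```python
# Minimal sketch of MMR retrieval with LangChain and Astra DB Serverless.
# The collection name, environment variables, and embedding model are
# illustrative placeholders, not the exact values used in the example notebook.
# Requires OPENAI_API_KEY for the embeddings.
import os

from langchain_astradb import AstraDBVectorStore
from langchain_openai import OpenAIEmbeddings

vector_store = AstraDBVectorStore(
    embedding=OpenAIEmbeddings(),
    collection_name="ragstack_mmr_demo",  # hypothetical collection name
    api_endpoint=os.environ["ASTRA_DB_API_ENDPOINT"],
    token=os.environ["ASTRA_DB_APPLICATION_TOKEN"],
)

vector_store.add_texts(
    [
        "RAGStack bundles LangChain and Astra DB integrations.",
        "MMR balances relevance with diversity among retrieved documents.",
        "Astra DB Serverless provides the vector store for the pipeline.",
    ]
)

# max_marginal_relevance_search fetches `fetch_k` candidates by similarity,
# then re-ranks them with MMR to return `k` diverse results.
docs = vector_store.max_marginal_relevance_search(
    "How does retrieval stay diverse?", k=2, fetch_k=10
)
for doc in docs:
    print(doc.page_content)
```

The design point of MMR is that the initial `fetch_k` candidates are chosen purely by similarity, and the final `k` results trade relevance against diversity so near-duplicate chunks do not crowd out other useful context.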
Description | Colab | Documentation |
---|---|---|
Build a simple RAG pipeline using LlamaIndex and Astra DB Serverless (see the sketch after this table). | | |
Build a simple RAG pipeline using LlamaParse and Astra DB Serverless. | | |
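The LlamaIndex row above follows a similar shape. The sketch below is a minimal outline under stated assumptions: the `./data` directory, collection name, and 1536-dimension setting (OpenAI's default embedding size) are illustrative, and LlamaIndex's default OpenAI models require an `OPENAI_API_KEY`.

```python
# Minimal sketch of a LlamaIndex RAG pipeline backed by Astra DB Serverless.
# The data path, collection name, and embedding dimension are illustrative
# assumptions; see the example notebook for the exact setup.
import os

from llama_index.core import SimpleDirectoryReader, StorageContext, VectorStoreIndex
from llama_index.vector_stores.astra_db import AstraDBVectorStore

astra_db_store = AstraDBVectorStore(
    token=os.environ["ASTRA_DB_APPLICATION_TOKEN"],
    api_endpoint=os.environ["ASTRA_DB_API_ENDPOINT"],
    collection_name="ragstack_llamaindex_demo",  # hypothetical collection name
    embedding_dimension=1536,                    # matches OpenAI's default embeddings
)

# Load local documents, index them into Astra DB, and ask a question.
documents = SimpleDirectoryReader("./data").load_data()
storage_context = StorageContext.from_defaults(vector_store=astra_db_store)
index = VectorStoreIndex.from_documents(documents, storage_context=storage_context)

query_engine = index.as_query_engine()
print(query_engine.query("What do these documents say about RAGStack?"))
```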
Description | Colab | Documentation |
---|---|---|
Create ColBERT embeddings, index them on Astra, and retrieve them with RAGStack. | | |
Extract and traverse graphs with the ragstack-ai-knowledge-graph library and CassIO. | | |
Implement a generative Q&A over your own documentation with Astra DB Serverless Search, OpenAI, and CassIO. | | Knowledge Base Search on Proprietary Data powered by Astra DB Serverless |
Store external or proprietary data in Astra DB Serverless and query it to provide more up-to-date LLM responses. | | |
Use the self-managed Hyper-Converged Database (HCD) as a vector backend for your RAG application. | | |
Use DataStax Enterprise (DSE) as a vector backend for your RAG application (see the Cassandra-based sketch after this table). | | |
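For the self-managed backends in the last two rows (HCD and DataStax Enterprise), the general pattern is to open a driver session, register it with CassIO, and point LangChain's Cassandra vector store at it. The contact point, keyspace, and table name below are illustrative assumptions, and the keyspace is expected to exist already; the examples themselves may configure the connection differently.

```python
# Minimal sketch of a self-managed, Cassandra-based vector backend (such as
# DSE or HCD) wired up through CassIO and LangChain's Cassandra vector store.
# Contact point, keyspace, and table name are hypothetical placeholders.
import cassio
from cassandra.cluster import Cluster
from langchain_community.vectorstores import Cassandra
from langchain_openai import OpenAIEmbeddings

# Connect to the self-managed cluster and register the session with CassIO.
session = Cluster(["127.0.0.1"]).connect()
cassio.init(session=session, keyspace="ragstack_demo")  # keyspace must already exist

vector_store = Cassandra(
    embedding=OpenAIEmbeddings(),
    table_name="rag_documents",  # hypothetical table name, created on first use
)

vector_store.add_texts(["DSE and HCD can serve as vector backends for RAG."])
print(vector_store.similarity_search("Which databases can back a RAG app?", k=1))
```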