Vector Store RAG

Learn how to build a Retrieval Augmented Generation (RAG) application using Astra DB and Langflow.

In this starter project, you create a document ingestion flow that uses Astra DB as a vector store, and a RAG application flow that uses the documents stored in Astra DB to generate responses to your queries.

This starter project highlights the use of Astra DB in a vector RAG project, but this component is adaptable as a vector database for any Langflow project. For more information, see Astra DB in Langflow.

Open Langflow and start a new project

  1. In the Astra Portal header, switch your active app from Astra DB to Langflow.

  2. Click New Project, and then select the Vector Store RAG project.

This opens a starter project with the necessary components to run a RAG application using Astra DB.

This project consists of two flows: the document ingestion flow and the RAG application flow.

Run the document ingestion flow

The ingestion flow is responsible for ingesting documents into the Astra DB database, and it has the following components:

  • A Files component that uploads a text file to Langflow.

  • A Recursive Character Text Splitter component that splits the text into smaller chunks.

  • An OpenAI Embeddings component that generates embeddings for the text chunks.

  • An Astra DB component that stores the text chunks in the Astra DB database. For more information, see Astra DB in Langflow.
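The splitting step is worth understanding in its own right. As an illustration only (this is a simplified sketch, not Langflow's or LangChain's actual implementation), a recursive character splitter tries larger separators first and falls back to smaller ones until every chunk fits within the size limit:

```python
def recursive_split(text, chunk_size=200, separators=("\n\n", "\n", " ", "")):
    """Simplified sketch of recursive character text splitting (illustrative only)."""
    if len(text) <= chunk_size:
        return [text]
    sep = separators[0]
    rest = separators[1:] if len(separators) > 1 else separators
    if sep == "":
        # No separator left: hard-split at the size limit.
        return [text[i:i + chunk_size] for i in range(0, len(text), chunk_size)]
    parts = text.split(sep)
    chunks, current = [], ""
    for part in parts:
        candidate = part if not current else current + sep + part
        if len(candidate) <= chunk_size:
            current = candidate
        else:
            if current:
                chunks.append(current)
            # A single part may still be too big: recurse with finer separators.
            if len(part) > chunk_size:
                chunks.extend(recursive_split(part, chunk_size, rest))
                current = ""
            else:
                current = part
    if current:
        chunks.append(current)
    return chunks
```

Splitting on paragraph and word boundaries first, and only hard-splitting as a last resort, keeps each chunk semantically coherent, which improves the quality of the embeddings generated from it.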

To create the document ingestion flow:

  1. Add your credentials to the OpenAI components. The fastest way to complete these fields is with Langflow's Global Variables.

    1. In the OpenAI API Key field, click the Globe icon, and then click Add New Variable. Alternatively, click your username in the top right corner, select Settings, click Global Variables, and then click Add New.

    2. Name your variable, and then paste your OpenAI API key (sk-...) in the Value field.

    3. In the Apply To Fields field, select the OpenAI API Key field to apply this variable to all OpenAI Embeddings components.

  2. Add your credentials to the Astra DB component using the same Global Variables process.

    1. In the Token field, click the Globe icon, and then click Add New Variable. Alternatively, click your username in the top right corner, select Settings, click Global Variables, and then click Add New.

    2. Name your variable, and then paste your Astra token (AstraCS:...) in the Value field.

    3. In the Apply To Fields field, select the Astra DB Application Token field to apply this variable to all Astra DB components.

  3. In the Astra DB component, select your Database. If you don't have a database, click Create new database to create one.

  4. In the Astra DB component, select your Collection. If you don't have a collection, click Create Collection to create one.

  5. Click Advanced, and then paste your API Endpoint value into the API Endpoint field. The API Endpoint value is autopopulated if you create a new database with the Langflow component. If you have an existing database, you can find the API Endpoint in the Astra DB console.

  6. In the File component, upload a text file from your local machine containing the data you want to ingest into the Astra DB database.

  7. In the Astra DB component, click Play to start the ingestion flow. Your file passes through the Recursive Character Text Splitter component, which splits the text into smaller chunks. The OpenAI Embeddings component then generates an embedding for each chunk, and the chunks and their embeddings are stored in the Astra DB database.
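Conceptually, the ingestion flow reduces to "embed each chunk, then store the chunk alongside its vector." The sketch below illustrates that shape with a deterministic toy embedder and an in-memory dict standing in for the Astra DB collection; `toy_embed` and `ingest` are hypothetical names for illustration, not Langflow or Astra DB APIs:

```python
import hashlib
import math

def toy_embed(text, dim=8):
    """Stand-in for OpenAI embeddings (illustrative only): a deterministic
    unit vector derived from a hash of the text."""
    digest = hashlib.sha256(text.encode("utf-8")).digest()
    vec = [b / 255.0 for b in digest[:dim]]
    norm = math.sqrt(sum(v * v for v in vec)) or 1.0
    return [v / norm for v in vec]

def ingest(chunks, store):
    """Mimics the ingestion flow: embed each chunk, then store the chunk
    text together with its embedding. `store` stands in for the
    Astra DB collection."""
    for i, chunk in enumerate(chunks):
        store[i] = {"text": chunk, "embedding": toy_embed(chunk)}
    return store

store = ingest(["chunk one", "chunk two"], {})
```

In the real flow, the OpenAI Embeddings component produces the vectors and the Astra DB component writes each chunk and its vector as a document in your collection.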

Run the RAG application flow

The RAG application flow generates responses to your queries from the embedded documents. This application defines all of the steps from getting the user’s input, to generating a response, and finally displaying it in the Playground.

The RAG application flow consists of the following:

  • A Chat Input component that receives the user input from the Playground.

  • An OpenAI Embeddings component that generates embeddings from the user input.

  • An Astra DB component that retrieves the most relevant records from the Astra DB database.

  • A Text Output component that concatenates the retrieved records into a single text string and displays it in the Playground. In the example, this component is renamed Extracted Chunks, which is how it appears in the Playground, but it is a standard Text Output component in the flow.

  • A Prompt component that takes in the user input and the retrieved records as text and builds a prompt for the OpenAI model.

  • An OpenAI component that generates a response to the prompt.

  • A Chat Output component that displays the response in the Playground.
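The retrieval and prompt-building steps in the list above can be sketched in a few lines. This is an illustration of the concept only, with a toy bag-of-words embedding in place of OpenAI embeddings and cosine similarity in place of Astra DB's vector search; `bow_embed`, `retrieve`, and `build_prompt` are hypothetical names, not Langflow APIs:

```python
import math

def bow_embed(text, vocab):
    """Toy bag-of-words embedding (stand-in for OpenAI embeddings)."""
    words = text.lower().split()
    return [words.count(w) for w in vocab]

def cosine(a, b):
    """Cosine similarity between two vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a)) or 1.0
    nb = math.sqrt(sum(y * y for y in b)) or 1.0
    return dot / (na * nb)

def retrieve(query, records, vocab, k=2):
    """What the Astra DB component does conceptually: embed the query,
    then return the k stored chunks most similar to it."""
    q = bow_embed(query, vocab)
    ranked = sorted(records, key=lambda r: cosine(q, bow_embed(r, vocab)),
                    reverse=True)
    return ranked[:k]

# What the Prompt component does: merge retrieved context and the question.
PROMPT = "Answer using only this context:\n{context}\n\nQuestion: {question}"

def build_prompt(question, chunks):
    return PROMPT.format(context="\n".join(chunks), question=question)

records = ["cats are mammals", "planes can fly", "dogs are mammals"]
vocab = sorted({w for r in records for w in r.lower().split()})
```

The resulting prompt, not the raw question, is what the OpenAI component receives, which is why the model's answers stay grounded in your uploaded data.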

If you used Langflow’s Global Variables feature, the RAG application flow components are already configured with the necessary credentials.

To create the RAG application flow:

  1. In the Chat Output component, click Play to start the RAG application flow.

  2. After the flow runs, click Playground to start a chat session. Because this flow has Chat Input and Text Output components, the Playground displays a Chat Input field and an Extracted Chunks output section.

  3. Enter a query, and then verify that the bot responds based on your uploaded data. With each query, the Extracted Chunks section updates to display the retrieved records.

If something goes wrong:

  • To view logs, click Options, and then click Logs.

  • Check the Inputs and Outputs tabs in the Playground to ensure that data is being passed correctly between components.


© 2024 DataStax
