Integrate LlamaIndex.TS with Astra DB Serverless

LlamaIndex.TS can use Astra DB Serverless to store and retrieve vectors for ML applications.

Prerequisites

This guide requires the following:

  • An active Astra DB Serverless database

  • An OpenAI API key, used by LlamaIndex.TS to generate embeddings

  • Node.js and npm, to run the TypeScript scripts

Connect to the database

  1. In the Astra Portal, go to Databases, and select your database.

  2. Make sure the database is in Active status, and then, in the Database Details section, click Generate Token.

  3. In the Application Token dialog, click Copy, and then store the token securely. The token format is AstraCS: followed by a unique token string.

    Application tokens created from Database Details have the Database Administrator role for the associated database.

  4. In Database Details, copy your database’s API endpoint. The endpoint format is https://ASTRA_DB_ID-ASTRA_DB_REGION.apps.astra.datastax.com.

  5. In your terminal, assign your token and API endpoint to environment variables.

    • Linux or macOS:

      export ASTRA_DB_API_ENDPOINT=API_ENDPOINT
      export ASTRA_DB_APPLICATION_TOKEN=TOKEN
      export OPENAI_API_KEY=API_KEY

    • Windows:

      set ASTRA_DB_API_ENDPOINT=API_ENDPOINT
      set ASTRA_DB_APPLICATION_TOKEN=TOKEN
      set OPENAI_API_KEY=API_KEY

    • Google Colab:

      import os
      os.environ["ASTRA_DB_API_ENDPOINT"] = "API_ENDPOINT"
      os.environ["ASTRA_DB_APPLICATION_TOKEN"] = "TOKEN"
      os.environ["OPENAI_API_KEY"] = "API_KEY"
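
    The scripts in this guide fail with opaque errors if a variable is unset. One way to fail fast is a small validation helper like the sketch below. The `requireEnv` function is an illustrative assumption, not part of the LlamaIndex.TS or Astra DB APIs:

    ```typescript
    // Return the value of a required environment variable, or throw a
    // descriptive error if it is missing or empty.
    function requireEnv(name: string): string {
      const value = process.env[name];
      if (!value) {
        throw new Error(`Missing required environment variable: ${name}`);
      }
      return value;
    }

    // Example: report any missing variables before connecting.
    for (const name of [
      "ASTRA_DB_API_ENDPOINT",
      "ASTRA_DB_APPLICATION_TOKEN",
      "OPENAI_API_KEY",
    ]) {
      try {
        requireEnv(name);
      } catch (e) {
        console.error((e as Error).message);
      }
    }
    ```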

Load and split documents

  1. Download the text of Edgar Allan Poe’s "The Cask of Amontillado" to index in the vector store.

    mkdir -p src/sample-data
    curl https://raw.githubusercontent.com/CassioML/cassio-website/main/docs/frameworks/langchain/texts/amontillado.txt \
      --output src/sample-data/amontillado.txt
  2. Create load.ts in a src directory.

  3. Import your dependencies.

    load.ts
    import fs from "node:fs/promises";
    
    import {
      AstraDBVectorStore,
      Document,
      VectorStoreIndex,
      storageContextFromDefaults,
    } from "llamaindex";
    
    // ...
  4. Create a main function.

    This function loads your .txt file into a Document object, creates a vector store, and stores the embeddings in a VectorStoreIndex. Declaring the function as async allows the use of await for non-blocking calls to the database.

    load.ts
    // ...
    
    import fs from "node:fs/promises";
    import { AstraDBVectorStore, Document, VectorStoreIndex, storageContextFromDefaults } from "llamaindex";
    
    const collectionName = "amontillado";
    
    async function main() {
      try {
        // Load the text file
        const path = "./src/sample-data/amontillado.txt";
        const essay = await fs.readFile(path, "utf-8");
    
        // Create a Document object from the text file
        const document = new Document({ text: essay, id_: path });
    
    // Initialize the Astra DB vector store and connect
    const astraVS = new AstraDBVectorStore({
      params: {
        token: process.env.ASTRA_DB_APPLICATION_TOKEN,
        endpoint: process.env.ASTRA_DB_API_ENDPOINT
      }
    });
    await astraVS.create(collectionName, {
      vector: { dimension: 1536, metric: "cosine" }
    });
    await astraVS.connect(collectionName);

    // Create embeddings and store them in the vector store
    const ctx = await storageContextFromDefaults({ vectorStore: astraVS });
    await VectorStoreIndex.fromDocuments([document], {
      storageContext: ctx
    });
      } catch (e) {
        console.error(e);
      }
    }
    
    main();
  5. Compile and run the code you defined earlier.

    npx tsx src/load.ts
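
The collection created above uses metric: "cosine", which ranks stored vectors by the cosine of the angle between them and the query embedding. As a plain-TypeScript illustration of what that metric computes (not part of the LlamaIndex.TS or Astra DB APIs):

```typescript
// Cosine similarity: dot(a, b) / (|a| * |b|). Returns 1 for vectors
// pointing in the same direction, 0 for orthogonal vectors, and -1
// for opposite directions.
function cosineSimilarity(a: number[], b: number[]): number {
  if (a.length !== b.length) {
    throw new Error("Vectors must have the same length");
  }
  let dot = 0, normA = 0, normB = 0;
  for (let i = 0; i < a.length; i++) {
    dot += a[i] * b[i];
    normA += a[i] * a[i];
    normB += b[i] * b[i];
  }
  return dot / (Math.sqrt(normA) * Math.sqrt(normB));
}
```

Because cosine similarity ignores vector magnitude, two chunks of text with similar meaning score close to 1 even if their embeddings differ in length.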

Chat with your documents

  1. Create chat.ts in a src directory.

  2. Import your dependencies.

    chat.ts
    import {
      AstraDBVectorStore,
      serviceContextFromDefaults,
      VectorStoreIndex,
      ContextChatEngine
    } from "llamaindex";
    import { createInterface, Interface } from "node:readline";
    
    // ...
  3. Create a main function. This code is separate from load.ts so that you can tune your query and prompt independently.

  4. Complete the main function. This might look like a lot of code, but most of the logic is for setting up the chat interaction loop.

    The code specific to your LlamaIndex.TS integration is:

    1. A new AstraDBVectorStore instance called astraVS is created and connected to the amontillado collection you populated earlier.

    2. const index creates an index over your vector store with the default service context. For more on LlamaIndex’s service context, see Service Context.

    3. The retriever returns the top 20 results from the index of the vector store.

    4. The chat engine uses the retriever to respond to user input.

      chat.ts
      // ...
      
      import {
        AstraDBVectorStore,
        serviceContextFromDefaults,
        VectorStoreIndex,
        ContextChatEngine
      } from "llamaindex";
      import { createInterface, Interface } from "node:readline";
      
      const collectionName = "amontillado";
      
      // Check whether the input is a quit command
      function isQuit(question: string): boolean {
        return ["q", "quit", "exit"].includes(question.trim().toLowerCase());
      }
      
      // Wrap readline's callback-based question() in a promise
      function getUserInput(rl: Interface): Promise<string> {
        return new Promise(resolve => {
          rl.question("What would you like to know?\n> ", userInput => {
            resolve(userInput);
          });
        });
      }
      
      async function main() {
        const rl = createInterface({
          input: process.stdin,
          output: process.stdout
        });
      
        try {
          // Connect to the Astra DB vector store
          const astraVS = new AstraDBVectorStore({
            params: {
              token: process.env.ASTRA_DB_APPLICATION_TOKEN,
              endpoint: process.env.ASTRA_DB_API_ENDPOINT
            }
          });
          await astraVS.connect(collectionName);
      
          // Build an index over the existing vector store
          const ctx = serviceContextFromDefaults();
          const index = await VectorStoreIndex.fromVectorStore(astraVS, ctx);
      
          // The retriever returns the top 20 results; the chat engine
          // uses them as context for its responses
          const retriever = await index.asRetriever({ similarityTopK: 20 });
          const chatEngine = new ContextChatEngine({ retriever });
      
          // Chat loop
          let question = "";
          while (!isQuit(question)) {
            question = await getUserInput(rl);
      
            if (isQuit(question)) {
              rl.close();
              process.exit(0);
            }
      
            try {
              const answer = await chatEngine.chat({ message: question });
              console.log(answer.response);
            } catch (error) {
              console.error("Error:", error);
            }
          }
        } catch (err) {
          console.error(err);
          console.log("If Astra DB initialization failed, make sure the ASTRA_DB_APPLICATION_TOKEN, ASTRA_DB_API_ENDPOINT, and OPENAI_API_KEY environment variables are set.");
          process.exit(1);
        }
      }
      
      main().catch((err) => {
        console.error(err);
        process.exit(1);
      });
  5. Compile and run the code you defined earlier.

    npx tsx src/chat.ts

    If you get a TOO_MANY_COLLECTIONS error, use the Data API command below or see delete an existing collection to delete a collection and make room.

    curl -sS --location -X POST "ASTRA_DB_API_ENDPOINT/api/json/v1/ASTRA_DB_KEYSPACE" \
    --header "Token: ASTRA_DB_APPLICATION_TOKEN" \
    --header "Content-Type: application/json" \
    --data '{
      "deleteCollection": {
        "name": "COLLECTION_NAME"
      }
    }'
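
    The same Data API call can also be made from Node.js with the built-in fetch. The sketch below mirrors the curl command above; the function names are illustrative, and real values must replace the placeholders before it can run:

    ```typescript
    // Build the Data API request body for deleting a collection.
    function deleteCollectionBody(name: string): string {
      return JSON.stringify({ deleteCollection: { name } });
    }

    // Issue the deleteCollection command against the Data API.
    // Requires a real endpoint, keyspace, and application token.
    async function deleteCollection(
      endpoint: string,
      keyspace: string,
      token: string,
      name: string
    ): Promise<void> {
      const res = await fetch(`${endpoint}/api/json/v1/${keyspace}`, {
        method: "POST",
        headers: {
          "Token": token,
          "Content-Type": "application/json",
        },
        body: deleteCollectionBody(name),
      });
      if (!res.ok) {
        throw new Error(`Data API request failed: ${res.status}`);
      }
    }
    ```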

Complete code examples

load.ts
import fs from "node:fs/promises";
import { AstraDBVectorStore, Document, VectorStoreIndex, storageContextFromDefaults } from "llamaindex";

const collectionName = "amontillado";

async function main() {
  try {
    // Load the text file
    const path = "./src/sample-data/amontillado.txt";
    const essay = await fs.readFile(path, "utf-8");

    // Create a Document object from the text file
    const document = new Document({ text: essay, id_: path });

    // Initialize the Astra DB vector store and connect
    const astraVS = new AstraDBVectorStore({
      params: {
        token: process.env.ASTRA_DB_APPLICATION_TOKEN,
        endpoint: process.env.ASTRA_DB_API_ENDPOINT
      }
    });
    await astraVS.create(collectionName, {
      vector: { dimension: 1536, metric: "cosine" }
    });
    await astraVS.connect(collectionName);

    // Create embeddings and store them in the vector store
    const ctx = await storageContextFromDefaults({ vectorStore: astraVS });
    await VectorStoreIndex.fromDocuments([document], {
      storageContext: ctx
    });
  } catch (e) {
    console.error(e);
  }
}

main();
chat.ts
import {
  AstraDBVectorStore,
  serviceContextFromDefaults,
  VectorStoreIndex,
  ContextChatEngine
} from "llamaindex";
import { createInterface, Interface } from "node:readline";

const collectionName = "amontillado";

// Check whether the input is a quit command
function isQuit(question: string): boolean {
  return ["q", "quit", "exit"].includes(question.trim().toLowerCase());
}

// Wrap readline's callback-based question() in a promise
function getUserInput(rl: Interface): Promise<string> {
  return new Promise(resolve => {
    rl.question("What would you like to know?\n> ", userInput => {
      resolve(userInput);
    });
  });
}

async function main() {
  const rl = createInterface({
    input: process.stdin,
    output: process.stdout
  });

  try {
    // Connect to the Astra DB vector store
    const astraVS = new AstraDBVectorStore({
      params: {
        token: process.env.ASTRA_DB_APPLICATION_TOKEN,
        endpoint: process.env.ASTRA_DB_API_ENDPOINT
      }
    });
    await astraVS.connect(collectionName);

    // Build an index over the existing vector store
    const ctx = serviceContextFromDefaults();
    const index = await VectorStoreIndex.fromVectorStore(astraVS, ctx);

    // The retriever returns the top 20 results; the chat engine
    // uses them as context for its responses
    const retriever = await index.asRetriever({ similarityTopK: 20 });
    const chatEngine = new ContextChatEngine({ retriever });

    // Chat loop
    let question = "";
    while (!isQuit(question)) {
      question = await getUserInput(rl);

      if (isQuit(question)) {
        rl.close();
        process.exit(0);
      }

      try {
        const answer = await chatEngine.chat({ message: question });
        console.log(answer.response);
      } catch (error) {
        console.error("Error:", error);
      }
    }
  } catch (err) {
    console.error(err);
    console.log("If Astra DB initialization failed, make sure the ASTRA_DB_APPLICATION_TOKEN, ASTRA_DB_API_ENDPOINT, and OPENAI_API_KEY environment variables are set.");
    process.exit(1);
  }
}

main().catch((err) => {
  console.error(err);
  process.exit(1);
});

© 2024 DataStax