Build a RAG command line chatbot

Estimated time: 20 minutes

This tutorial demonstrates how to build a command-line chatbot that uses data from your Astra collection for retrieval-augmented generation (RAG) with OpenAI.

This example uses the collection of book summaries from the quickstart, but you can use any collection of documents that have the $vectorize field populated with the text you want to use as context when answering questions.
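For reference, documents in such a collection store their source text in the reserved $vectorize field, and the Data API generates the embedding server-side on insert. The following is a minimal sketch of inserting one such document, assuming a collection created with a vectorize (embedding provider) integration; the title field and sample text are illustrative:

import { DataAPIClient } from "@datastax/astra-db-ts";

const client = new DataAPIClient();
const database = client.db(process.env.API_ENDPOINT!, {
  token: process.env.APPLICATION_TOKEN,
});
const collection = database.collection("quickstart_collection");

// $vectorize holds the text to embed; the embedding is generated
// server-side when the document is inserted.
await collection.insertOne({
  title: "Example Book",
  $vectorize: "A short summary used as RAG context.",
});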

Prerequisites

  • An endpoint and application token for an Astra database, and a collection in that database with documents that have the $vectorize field populated.

    If you don’t already have this, follow the quickstart.

  • An OpenAI API key.

  • Node.js version 18 or later.

  • TypeScript version 5 or later. This is not required if you are using JavaScript instead of TypeScript.

Store your credentials

For this tutorial, store your database endpoint, database application token, and OpenAI API key in environment variables:

Linux or macOS:

export API_ENDPOINT=API_ENDPOINT
export APPLICATION_TOKEN=APPLICATION_TOKEN
export OPENAI_API_KEY=OPENAI_API_KEY

Windows:

set API_ENDPOINT=API_ENDPOINT
set APPLICATION_TOKEN=APPLICATION_TOKEN
set OPENAI_API_KEY=OPENAI_API_KEY

Replace API_ENDPOINT, APPLICATION_TOKEN, and OPENAI_API_KEY with your database API endpoint, application token, and OpenAI API key, respectively.

Install packages

Install the @datastax/astra-db-ts and openai packages.

For example:

npm install @datastax/astra-db-ts openai
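If you plan to run the example with tsx, as shown in Test the code, you can optionally install TypeScript and tsx as development dependencies (otherwise, npx can fetch tsx on demand):

npm install --save-dev typescript tsx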

Add the code

import { DataAPIClient } from "@datastax/astra-db-ts";
import OpenAI from "openai";

import { createInterface } from "node:readline/promises";
import { stdin as input, stdout as output } from "node:process";

export async function main() {
  const {
    API_ENDPOINT: endpoint,
    APPLICATION_TOKEN: applicationToken,
    OPENAI_API_KEY: openaiApiKey,
  } = process.env; (1)
  const keyspace = "default_keyspace"; (2)
  const collectionName = "quickstart_collection"; (3)

  if (!endpoint || !applicationToken || !openaiApiKey) {
    throw new Error(
      "Environment variables API_ENDPOINT, APPLICATION_TOKEN, OPENAI_API_KEY must be defined.",
    );
  }

  // Instantiate a `DataAPIClient` and get a reference to your collection
  const client = new DataAPIClient();
  const database = client.db(endpoint, { token: applicationToken, keyspace });
  const collection = database.collection(collectionName);

  // Instantiate the OpenAI client
  const openai = new OpenAI({
    apiKey: openaiApiKey,
  });

  // This list of messages will be sent to OpenAI with every query.
  // It starts with a single system prompt and grows as the chat progresses.
  const messages: OpenAI.ChatCompletionMessageParam[] = [
    {
      role: "system",
      content:
        "You are an AI assistant that can answer questions based on the the context you are given. Don't mention the context, just use it to inform your answers.",
    },
  ];

  // Use the built-in Node.js readline to implement the CLI
  const cli = createInterface({ input, output });

  // Start the chat by writing a message to the CLI
  let userInput = await cli.question(
    `Greetings! I am an AI assistant that is ready to help you with your questions. You can ask me anything you like.\nIf you want to exit, type ".exit".\n\n> `,
  );

  // Run this loop continuously until the user inputs the exit command
  while (userInput.toLowerCase() !== ".exit") {
    // If the user didn't input text, re-prompt them
    if (userInput.trim() === "") {
      userInput = await cli.question("> ");
      continue;
    }

    try {
      // Perform a vector search in your collection,
      // using the user input as the search string to vectorize.
      // Limit the search to 10 documents.
      // Use a projection to return just the $vectorize field of each document.
      const response = collection.find(
        {},
        {
          sort: { $vectorize: userInput },
          limit: 10,
          projection: { $vectorize: 1 },
        },
      );

      // Join the $vectorize fields of the returned documents into a single string
      const context = (await response.toArray())
        .map((doc) => doc.$vectorize)
        .join("\n");

      // Combine the user question with the context from vector search
      const ragMessage: OpenAI.ChatCompletionUserMessageParam = {
        role: "user",
        content: `${context}\n---\nGiven the above context, answer the following question:\n${userInput}`,
      };

      // Send the list of previous messages, plus the context augmented question
      const stream = await openai.chat.completions.create({
        model: "gpt-4o-mini",
        messages: [...messages, ragMessage],
        stream: true,
      });

      // Write OpenAI's response to the CLI as it comes in,
      // and also record it in a string
      let message = "";
      for await (const chunk of stream) {
        const delta = chunk.choices[0]?.delta?.content ?? "";
        output.write(delta);
        message += delta;
      }

      // Record the user question, without the added context, in the list of messages
      messages.push({ role: "user", content: userInput });

      // Record the OpenAI response in the list of messages
      messages.push({ role: "assistant", content: message });

      // Prompt the user for their next question
      userInput = await cli.question("\n\n> ");
    } catch (error) {
      if (error instanceof Error) {
        console.error(error.message);
      }
      userInput = await cli.question(
        "\nSomething went wrong, try asking again\n\n> ",
      );
    }
  }

  cli.close();
}

main().catch((err) => {
  console.error(err);
  process.exit(1);
});
1 Store your database’s endpoint, application token, and OpenAI key in environment variables named API_ENDPOINT, APPLICATION_TOKEN, and OPENAI_API_KEY, as instructed in Store your credentials.
2 Change the keyspace name if your collection is in a different keyspace.
3 Change the collection name if you are not using the collection created in the quickstart.

Test the code

  1. From your terminal, run the code from the previous section.

    For example, if you are using TypeScript: npx tsx path-to-file.ts

  2. The terminal should show the welcome message and a > prompt.

  3. Enter a question. For example, Can you recommend a book set on another planet?

  4. The terminal should print the answer from OpenAI, and give the > prompt again.

  5. To exit, type .exit.

Next steps

  • If you used the quickstart collection, try creating a collection with other data and using that instead.

  • If the user asks a question unrelated to the collection contents, the vector search still returns 10 documents. However, the similarity scores for these documents will be low, and the documents won’t be relevant to the question.

    In this tutorial, similarity scores are not requested, and low-similarity results are still included in the context that is sent to OpenAI.

    You can use the includeSimilarity option to return a similarity score for each document. Then, you can omit results with a low similarity score from the context, or prompt the user to ask a more relevant question, as in the sketch below.
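    For example, here is a minimal sketch of the tutorial's vector search with includeSimilarity enabled. The 0.9 threshold is an illustrative value, not a recommendation; tune it for your data:

    const response = collection.find(
      {},
      {
        sort: { $vectorize: userInput },
        limit: 10,
        projection: { $vectorize: 1 },
        includeSimilarity: true, // adds a $similarity field to each result
      },
    );

    // Keep only results whose similarity score clears the threshold.
    const SIMILARITY_THRESHOLD = 0.9; // illustrative value
    const context = (await response.toArray())
      .filter((doc) => (doc.$similarity ?? 0) >= SIMILARITY_THRESHOLD)
      .map((doc) => doc.$vectorize)
      .join("\n");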

  • Right now, the message list grows without bound as the chat progresses. You can truncate the message list so that older messages are discarded, as in the sketch below.
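
    For example, here is a sketch that keeps the system prompt plus the most recent messages, reusing the OpenAI types from the tutorial code; MAX_MESSAGES is an illustrative name and value:

    // Keep the system prompt plus the most recent messages.
    const MAX_MESSAGES = 20; // illustrative limit

    function truncateMessages(
      messages: OpenAI.ChatCompletionMessageParam[],
    ): OpenAI.ChatCompletionMessageParam[] {
      if (messages.length <= MAX_MESSAGES) {
        return messages;
      }
      // The first message is the system prompt; always keep it.
      const [systemPrompt, ...rest] = messages;
      return [systemPrompt, ...rest.slice(-(MAX_MESSAGES - 1))];
    }

    // For example, call this after recording each user/assistant pair:
    // messages.splice(0, messages.length, ...truncateMessages(messages));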
