Quickstart
This quickstart demonstrates how to insert data, generate vector embeddings, and perform a vector search to find similar data. Specifically, the example here loads data from a JSON file, uses the Astra-hosted NVIDIA embedding model to generate vector embeddings for the data, and then performs a vector search.
The Next steps section discusses how to insert other types of data, use a different embedding model, insert data with pre-generated vector embeddings, or skip embedding.
To learn more about vector databases and vector search, see Intro to vector databases and What is Vector Search?.
Create a database and store your credentials
-
In the Astra Portal navigation menu, click Databases, and then click Create Database.
-
For this quickstart, select:
-
Serverless (Vector) as the deployment type
-
Amazon Web Services as the provider
-
us-east-2 as the region
-
-
Click Create Database.
Wait for your database to activate. This can take several minutes. The Astra Portal will update once your database is active.
-
Under Database Details, copy your database’s API endpoint.
-
Under Database Details, click Generate Token, then copy the token.
-
For this quickstart, store the endpoint and token in environment variables:
-
Linux or macOS
-
Windows
export ASTRA_DB_API_ENDPOINT=API_ENDPOINT
export ASTRA_DB_APPLICATION_TOKEN=TOKEN
set ASTRA_DB_API_ENDPOINT=API_ENDPOINT
set ASTRA_DB_APPLICATION_TOKEN=TOKEN
-
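Exported variables last only for the current shell session. If you prefer a file-based alternative, the following optional Python sketch assumes you install the python-dotenv package and create a .env file that defines the same two variables; the rest of the quickstart works identically either way.
import os

from dotenv import load_dotenv

# Optional: load the credentials from a local .env file instead of exporting
# them in every shell session. Assumes `pip install python-dotenv` and a .env
# file containing ASTRA_DB_API_ENDPOINT and ASTRA_DB_APPLICATION_TOKEN.
load_dotenv()  # Reads .env from the current directory and populates os.environ.

print("Endpoint set:", bool(os.environ.get("ASTRA_DB_API_ENDPOINT")))
print("Token set:", bool(os.environ.get("ASTRA_DB_APPLICATION_TOKEN")))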
Install a client
Install one of the Astra DB Data API clients to interact with the Data API.
-
Python
-
TypeScript
-
Java
-
Update to Python version 3.8 or later if needed.
-
Update to pip version 23.0 or later if needed.
-
Install the latest stable version of the astrapy package (1.5.2).
pip install astrapy
-
Update to Node version 18 or later if needed.
-
Install the latest stable version of the @datastax/astra-db-ts package (1.5.0).
For example:
npm install @datastax/astra-db-ts
-
Maven
-
Gradle
-
Update to Java version 17 or later if needed.
-
Update to Maven version 3.9 or later if needed.
-
Add a dependency to the latest stable version of the astra-db-java package (1.3.0).
pom.xml
<dependencies>
  <dependency>
    <groupId>com.datastax.astra</groupId>
    <artifactId>astra-db-java</artifactId>
    <version>1.3.0</version>
  </dependency>
</dependencies>
-
Update to Java version 17 or later if needed.
-
Update to Gradle version 11 or later if needed.
-
Add a dependency to the latest stable version of the astra-db-java package (1.3.0).
build.gradle
dependencies {
    implementation 'com.datastax.astra:astra-db-java:1.3.0'
}
Connect to your database
The following function connects to your database.
Copy the code into a file in your project. You don't need to run the function now; the subsequent code examples import and use it.
-
Python
-
TypeScript
-
Java
import os
from astrapy import DataAPIClient, Database
def connect_to_database() -> Database:
"""
Connects to a DataStax Astra database.
This function retrieves the database endpoint and application token from the
environment variables `ASTRA_DB_API_ENDPOINT` and `ASTRA_DB_APPLICATION_TOKEN`.
Returns:
Database: An instance of the connected database.
Raises:
RuntimeError: If the environment variables `ASTRA_DB_API_ENDPOINT` or
`ASTRA_DB_APPLICATION_TOKEN` are not defined.
"""
endpoint = os.environ.get("ASTRA_DB_API_ENDPOINT") (1)
token = os.environ.get("ASTRA_DB_APPLICATION_TOKEN")
if not token or not endpoint:
raise RuntimeError(
"Environment variables ASTRA_DB_API_ENDPOINT and ASTRA_DB_APPLICATION_TOKEN must be defined"
)
# Create an instance of the `DataAPIClient` class with your token.
client = DataAPIClient(token)
# Get the database specified by your endpoint.
database = client.get_database(endpoint)
print(f"Connected to database {database.info().name}")
return database
1 | Store your database’s endpoint and application token in environment variables named ASTRA_DB_API_ENDPOINT and ASTRA_DB_APPLICATION_TOKEN, as instructed in Create a database and store your credentials. |
import { DataAPIClient, Db, VectorizeDoc } from "@datastax/astra-db-ts";
/**
* Connects to a DataStax Astra database.
* This function retrieves the database endpoint and application token from the
* environment variables `ASTRA_DB_API_ENDPOINT` and `ASTRA_DB_APPLICATION_TOKEN`.
*
* @returns An instance of the connected database.
* @throws Will throw an error if the environment variables
* `ASTRA_DB_API_ENDPOINT` or `ASTRA_DB_APPLICATION_TOKEN` are not defined.
*/
export function connectToDatabase(): Db {
const { ASTRA_DB_API_ENDPOINT: endpoint, ASTRA_DB_APPLICATION_TOKEN: token } =
process.env; (1)
if (!token || !endpoint) {
throw new Error(
"Environment variables ASTRA_DB_API_ENDPOINT and ASTRA_DB_APPLICATION_TOKEN must be defined.",
);
}
// Create an instance of the `DataAPIClient` class with your token.
const client = new DataAPIClient(token);
// Get the database specified by your endpoint.
const database = client.db(endpoint);
console.log(`Connected to database ${database.id}`);
return database;
}
// You can define interfaces that describe the shape of your data.
// The VectorizeDoc interface adds a $vectorize key.
export interface Book extends VectorizeDoc {
title: string;
author: string;
numberOfPages: number;
rating: number;
publicationYear: number;
summary: string;
genres: string[];
metadata: {
ISBN: string;
language: string;
edition: string;
};
isCheckedOut: boolean;
borrower: string | null;
dueDate: string | null;
}
1 | Store your database’s endpoint and application token in environment variables named ASTRA_DB_API_ENDPOINT and ASTRA_DB_APPLICATION_TOKEN, as instructed in Create a database and store your credentials. |
package com.quickstart;
import com.datastax.astra.client.DataAPIClient;
import com.datastax.astra.client.Database;
public class QuickstartConnect {
/**
* Connects to a DataStax Astra database. This function retrieves the database endpoint and
* application token from the environment variables `ASTRA_DB_API_ENDPOINT` and
* `ASTRA_DB_APPLICATION_TOKEN`.
*
* @return an instance of the connected database
* @throws IllegalStateException if the environment variables `ASTRA_DB_API_ENDPOINT` or
* `ASTRA_DB_APPLICATION_TOKEN` are not defined
*/
public static Database connectToDatabase() {
String endpoint = System.getenv("ASTRA_DB_API_ENDPOINT"); (1)
String token = System.getenv("ASTRA_DB_APPLICATION_TOKEN");
if (endpoint == null || token == null) {
throw new IllegalStateException(
"Environment variables ASTRA_DB_API_ENDPOINT and ASTRA_DB_APPLICATION_TOKEN must be defined");
}
// Create an instance of `DataAPIClient` with your token.
DataAPIClient client = new DataAPIClient(token);
// Get the database specified by your endpoint.
Database database = client.getDatabase(endpoint);
System.out.println("Connected to database.");
return database;
}
}
1 | Store your database’s endpoint and application token in environment variables named ASTRA_DB_API_ENDPOINT and ASTRA_DB_APPLICATION_TOKEN, as instructed in Create a database and store your credentials. |
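If you want to confirm that the connection works before moving on, you can run a quick optional check. This Python sketch assumes the connect_to_database function above is saved as quickstart_connect.py, the same import path that the later scripts use.
from quickstart_connect import connect_to_database

def main():
    database = connect_to_database()
    # List the collections in the database's default keyspace.
    # A brand-new database typically has none yet.
    print(f"Collections: {database.list_collection_names()}")

if __name__ == "__main__":
    main()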
Insert data to your database
The following script inserts data from a JSON file into your database.
-
Copy the script into your project.
-
Download the quickstart_dataset.json sample dataset (76 kB). This dataset is a JSON array describing library books.
-
Replace PATH_TO_DATA_FILE in the script with the path to the dataset.
-
If needed, update the import path to the "connect to database" function from the previous section.
-
Execute the script.
Once the script completes, you should see a printed message confirming the insertion of 100 items.
-
Python
-
TypeScript
-
Java
from quickstart_connect import connect_to_database (1)
from astrapy import Database, Collection
from astrapy.constants import VectorMetric
from astrapy.info import CollectionVectorServiceOptions
import json
def create_collection(database: Database, collection_name: str) -> Collection:
"""
Creates a collection in the specified database with vectorization enabled.
The collection will use Nvidia's NV-Embed-QA embedding model
to generate vector embeddings for data in the collection. (2)
Args:
database (Database): The instantiated object that represents the database where the collection will be created.
collection_name (str): The name of the collection to create.
Returns:
Collection: The created collection.
"""
collection = database.create_collection(
collection_name,
metric=VectorMetric.COSINE,
service=CollectionVectorServiceOptions(
provider="nvidia",
model_name="NV-Embed-QA",
),
)
print(f"Created collection {collection.full_name}")
return collection
def upload_json_data(
collection: Collection,
data_file_path: str,
embedding_string_creator: callable,
) -> None:
"""
Uploads data from a file containing a JSON array to the specified collection.
For each piece of data, a $vectorize field is added. The $vectorize value is
a string from which vector embeddings will be generated.
Args:
collection (Collection): The instantiated object that represents the collection to upload data to.
data_file_path (str): The path to a JSON file containing a JSON array.
embedding_string_creator (callable): A function to create the string for which vector embeddings will be generated.
"""
# Read the JSON file and parse it into a JSON array.
with open(data_file_path, "r", encoding="utf8") as file:
json_data = json.load(file)
# Add a $vectorize field to each piece of data. (3)
documents = [
{
**data,
"$vectorize": embedding_string_creator(data),
}
for data in json_data
]
# Upload the data.
inserted = collection.insert_many(documents)
print(f"Inserted {len(inserted.inserted_ids)} items.")
def main():
database = connect_to_database()
collection = create_collection(database, "quickstart_collection") (4)
upload_json_data(
collection,
"PATH_TO_DATA_FILE", (5)
lambda data: ( (6)
f"summary: {data['summary']} | "
f"genres: {', '.join(data['genres'])}"
),
)
if __name__ == "__main__":
main()
1 | This is the connect_to_database function from the previous section. Update the import path if necessary.
To use the function, ensure you stored your database’s endpoint and application token in environment variables as instructed in Create a database and store your credentials. |
2 | This collection will use the Astra-hosted NVIDIA embedding model to generate vector embeddings. This is currently only supported in certain regions. Ensure that your database is in the Amazon Web Services us-east-2 region, as instructed in Create a database and store your credentials. |
3 | When you insert data to a collection that can automatically generate embeddings, you can specify a $vectorize value for the data. The $vectorize value will be used to generate vector embeddings. $vectorize can be any string and should include the parts of the data that you want to be considered when you search for similar data with a vector search. |
4 | This script creates a collection named quickstart_collection. If you want to use a different name, change the name before running the script. |
5 | Replace PATH_TO_DATA_FILE with the path to the JSON data file. |
6 | This is a function that processes the summary and genres values from the data into a string. When the data is inserted to the collection, the $vectorize value is set to this string, and vector embeddings are generated for this string. |
import { connectToDatabase, Book } from "./quickstart-connect"; (1)
import { Db, Collection } from "@datastax/astra-db-ts";
import fs from "fs";
/**
* Creates a collection in the specified database with vectorization enabled.
* The collection will use Nvidia's NV-Embed-QA embedding model
* to generate vector embeddings for data in the collection. (2)
*
* @param database - The instantiated object that represents the database where the collection will be created.
* @param collectionName - The name of the collection to create.
* @returns A promise that resolves to the created collection.
*/
async function createCollection(
database: Db,
collectionName: string,
): Promise<Collection<Book>> {
const collection = await database.createCollection<Book>(collectionName, {
vector: {
service: {
provider: "nvidia",
modelName: "NV-Embed-QA",
},
},
});
console.log(
`Created collection ${collection.keyspace}.${collection.collectionName}`,
);
return collection;
}
/**
* Uploads data from a file containing a JSON array to the specified collection.
* For each piece of data, a $vectorize field is added. The $vectorize value is
* a string from which vector embeddings will be generated.
*
* @param collection - The instantiated object that represents the collection to upload the data to.
* @param dataFilePath - The path to a JSON file containing a JSON array.
* @param embeddingStringCreator - A function to create the string for which vector embeddings will be generated.
* @returns {Promise<void>} A promise that resolves when the data has been uploaded.
*/
async function uploadJsonData(
collection: Collection<Book>,
dataFilePath: string,
embeddingStringCreator: (data: Record<string, any>) => string,
): Promise<void> {
// Read the JSON file and parse it into a JSON array.
const rawData = fs.readFileSync(dataFilePath, "utf8");
const jsonData = JSON.parse(rawData);
// Add a $vectorize field to each piece of data. (3)
const documents: Book[] = jsonData.map((data: any) => ({
...data,
$vectorize: embeddingStringCreator(data),
}));
// Upload the data.
const inserted = await collection.insertMany(documents);
console.log(`Inserted ${inserted.insertedCount} items.`);
}
(async function () {
const database = connectToDatabase();
const collection = await createCollection(
database,
"quickstart_collection", (4)
);
await uploadJsonData(
collection,
"PATH_TO_DATA_FILE", (5)
(data) => {
return `summary: ${data["summary"]} | genres: ${data["genres"].join(", ")}`;
}, (6)
);
})();
1 | This is the connectToDatabase function and the Book interface from the previous section. Update the import path if necessary.
To use the function, ensure you stored your database’s endpoint and application token in environment variables as instructed in Create a database and store your credentials. |
2 | This collection will use the Astra-hosted NVIDIA embedding model to generate vector embeddings. This is currently only supported in certain regions. Ensure that your database is in the Amazon Web Services us-east-2 region, as instructed in Create a database and store your credentials. |
3 | When you insert data to a collection that can automatically generate embeddings, you can specify a $vectorize value for the data. The $vectorize value will be used to generate vector embeddings. $vectorize can be any string and should include the parts of the data that you want to be considered when you search for similar data with a vector search. |
4 | This script creates a collection named quickstart_collection. If you want to use a different name, change the name before running the script. |
5 | Replace PATH_TO_DATA_FILE with the path to the JSON data file. |
6 | This is a function that processes the summary and genres values from the data into a string. When the data is inserted to the collection, the $vectorize value is set to this string, and vector embeddings are generated for this string. |
To run:
npx tsx quickstart-upload.ts
package com.quickstart;
import com.datastax.astra.client.Collection;
import com.datastax.astra.client.Database;
import com.datastax.astra.client.model.CollectionOptions;
import com.datastax.astra.client.model.Document;
import com.datastax.astra.client.model.InsertManyResult;
import com.datastax.astra.client.model.SimilarityMetric;
import com.fasterxml.jackson.databind.JsonNode;
import com.fasterxml.jackson.databind.ObjectMapper;
import com.fasterxml.jackson.databind.node.ArrayNode;
import com.fasterxml.jackson.databind.node.ObjectNode;
import java.io.IOException;
import java.nio.charset.StandardCharsets;
import java.nio.file.Files;
import java.nio.file.Paths;
import java.util.ArrayList;
import java.util.List;
import java.util.function.Function;
public class QuickstartUploadDemo {
/**
* Creates a collection in the specified database with vectorization enabled.
* The collection will use Nvidia's NV-Embed-QA embedding model to generate
* vector embeddings for data in the collection. (1)
*
* @param database the instantiated object that represents the database where
* the collection will be created
* @param collectionName the name of the collection to create
* @return the collection instance that was created
*/
public static Collection<Document> createCollection(Database database, String collectionName) {
CollectionOptions.CollectionOptionsBuilder builder =
CollectionOptions.builder()
.vectorSimilarity(SimilarityMetric.COSINE)
.vectorize("nvidia", "NV-Embed-QA");
Collection<Document> collection = database.createCollection(collectionName, builder.build());
System.out.println("Created collection.");
return collection;
}
/**
* Uploads data from a file containing a JSON array to the specified collection. For each piece of
* data, a $vectorize field is added. The $vectorize value is a string from which vector
* embeddings will be generated.
* This function uses the Jackson library to process the JSON data.
*
* @param collection the instantiated object that represents the collection to upload the data to
* @param dataFilePath the path to a JSON file containing a JSON array
* @param embeddingStringCreator a function to create the string for which
* vector embeddings will be generated
* @throws IOException if an I/O error occurs reading from the file
*/
public static void uploadJsonData(
Collection<Document> collection,
String dataFilePath,
Function<JsonNode, String> embeddingStringCreator)
throws IOException {
// Initialize Jackson ObjectMapper
ObjectMapper objectMapper = new ObjectMapper();
// Read the JSON file and parse it into a JSON array (ArrayNode).
String rawData = Files.readString(Paths.get(dataFilePath), StandardCharsets.UTF_8);
ArrayNode jsonData = (ArrayNode) objectMapper.readTree(rawData);
// Convert the data to a list of Documents, and
// add a $vectorize field to each piece of data. (2)
List<Document> documents = new ArrayList<>();
for (JsonNode data : jsonData) {
if (data instanceof ObjectNode) {
((ObjectNode) data).put("$vectorize", embeddingStringCreator.apply(data));
}
documents.add(Document.parse(data.toString()));
}
// Upload the data.
InsertManyResult result = collection.insertMany(documents);
System.out.println("Inserted " + result.getInsertedIds().size() + " items.");
}
public static void main(String[] args) {
Database database = QuickstartConnect.connectToDatabase(); (3)
Collection<Document> collection =
createCollection(
database, "quickstart_collection" (4)
);
try {
uploadJsonData(
collection,
"PATH_TO_DATA_FILE", (5)
data -> { (6)
String summary = data.path("summary").asText("");
List<String> genreList = new ArrayList<>();
data.path("genres").forEach(genre -> genreList.add(genre.asText()));
String genres = String.join(", ", genreList);
return "summary: " + summary + " | genres: " + genres;
});
} catch (IOException e) {
e.printStackTrace();
}
}
}
1 | This collection will use the Astra-hosted NVIDIA embedding model to generate vector embeddings. This is currently only supported in certain regions. Ensure that your database is in the Amazon Web Services us-east-2 region, as instructed in Create a database and store your credentials.
To use the function, ensure you stored your database’s endpoint and application token in environment variables as instructed in Create a database and store your credentials. |
2 | When you insert data to a collection that can automatically generate embeddings, you can specify a $vectorize value for the data. The $vectorize value will be used to generate vector embeddings. $vectorize can be any string and should include the parts of the data that you want to be considered when you search for similar data with a vector search. |
3 | This is the connectToDatabase function from the previous section. |
4 | This script creates a collection named quickstart_collection. If you want to use a different name, change the name before running the script. |
5 | Replace PATH_TO_DATA_FILE with the path to the JSON data file. |
6 | This is a function that processes the summary and genres values from the data into a string. When the data is inserted to the collection, the $vectorize value is set to this string, and vector embeddings are generated for this string. |
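To double-check the upload, you can optionally count the documents in the collection. This Python sketch assumes the same quickstart_connect module and collection name used above; the astrapy count_documents method requires an upper bound on the count.
from quickstart_connect import connect_to_database

def main():
    database = connect_to_database()
    collection = database.get_collection("quickstart_collection")
    # Count the documents in the collection, up to an upper bound of 200.
    count = collection.count_documents({}, upper_bound=200)
    print(f"The collection contains {count} documents.")

if __name__ == "__main__":
    main()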
Find data in your database
After you insert data to your database, you can search the data. In addition to traditional database filtering, you can perform a vector search to find data that is most similar to a search string.
The following script performs three searches on the sample data that you loaded in Insert data to your database.
-
Python
-
TypeScript
-
Java
from quickstart_connect import connect_to_database (1)
def main():
database = connect_to_database()
collection = database.get_collection("quickstart_collection") (2)
# Find documents that match a filter
print("\nFinding books with rating greater than 4.7...")
rating_cursor = collection.find({"rating": {"$gt": 4.7}})
for document in rating_cursor:
print(f"{document['title']} is rated {document['rating']}")
# Perform a vector search to find the closest match to a search string
print("\nUsing vector search to find a single scary novel...")
single_vector_match = collection.find_one(
{}, sort={"$vectorize": "A scary novel"}
)
print(f"{single_vector_match['title']} is a scary novel")
# Combine a filter, vector search, and projection to find the 3 books with
# more than 400 pages that are the closest matches to a search string,
# and just return the title and author
print("\nUsing filters and vector search to find 3 books with more than 400 pages that are set in the arctic, returning just the title and author...")
vector_cursor = collection.find(
{"numberOfPages": {"$gt": 400}},
sort={"$vectorize": "A book set in the arctic"},
limit=3,
projection={"title": True, "author": True}
)
for document in vector_cursor:
print(document)
if __name__ == "__main__":
main()
1 | This is the connect_to_database function from the previous section. Update the import path if necessary. |
2 | If you changed the collection name in the previous script, change it in this script as well. |
import { connectToDatabase, Book } from "./quickstart-connect"; (1)
(async function () {
const database = connectToDatabase();
const collection = database.collection<Book>("quickstart_collection"); (2)
// Find documents that match a filter
console.log("\nFinding books with rating greater than 4.7...");
const ratingCursor = collection.find(
{ rating: { $gt: 4.7 } },
{ limit: 10 },
);
for await (const document of ratingCursor) {
console.log(`${document.title} is rated ${document.rating}`);
}
// Perform a vector search to find the closest match to a search string
console.log("\nUsing vector search to find a single scary novel...");
const singleVectorMatch = await collection.findOne(
{},
{ sort: { $vectorize: "A scary novel" } },
);
console.log(`${singleVectorMatch?.title} is a scary novel`);
// Combine a filter, vector search, and projection to find the 3 books with
// more than 400 pages that are the closest matches to a search string,
// and just return the title and author
console.log("\nUsing filters and vector search to find 3 books with more than 400 pages that are set in the arctic, returning just the title and author...");
const vectorCursor = collection.find(
{ numberOfPages: { $gt: 400 } },
{
sort: { $vectorize: "A book set in the arctic" },
limit: 3,
projection: { title: true, author: true },
},
);
for await (const document of vectorCursor) {
console.log(document);
}
})();
1 | This is the connectToDatabase function and the Book interface from the previous section. Update the import path if necessary. |
2 | If you changed the collection name in the previous script, change it in this script as well. |
package com.quickstart;
import static com.datastax.astra.client.model.Projections.include;
import com.datastax.astra.client.Collection;
import com.datastax.astra.client.Database;
import com.datastax.astra.client.model.Document;
import com.datastax.astra.client.model.Filter;
import com.datastax.astra.client.model.Filters;
import com.datastax.astra.client.model.FindOneOptions;
import com.datastax.astra.client.model.FindOptions;
public class QuickstartFindDemo {
public static void main(String[] args) {
Database database = QuickstartConnect.connectToDatabase(); (1)
Collection<Document> collection = database.getCollection(
"quickstart_collection" (2)
);
// Find documents that match a filter
System.out.println("\nFinding books with rating greater than 4.7...");
Filter filter = Filters.gt("rating", 4.7);
FindOptions options = new FindOptions().limit(10);
collection
.find(filter, options)
.forEach(
document -> {
System.out.println(
document.getString("title") + " is rated " + document.get("rating"));
});
// Perform a vector search to find the closest match to a search string
System.out.println("\nUsing vector search to find a single scary novel...");
FindOneOptions options2 = new FindOneOptions().sort("A scary novel");
collection
.findOne(options2)
.ifPresent(
document -> {
System.out.println(document.getString("title") + " is a scary novel");
});
// Combine a filter, vector search, and projection to find the 3 books with
// more than 400 pages that are the closest matches to a search string,
// and just return the title and author
System.out.println("\nUsing filters and vector search to find 3 books with more than 400 pages that are set in the arctic, returning just the title and author...");
Filter filter3 = Filters.gt("numberOfPages", 400);
FindOptions options3 =
new FindOptions()
.limit(3)
.sort("A book set in the arctic")
.projection(include("title", "author"));
collection
.find(filter3, options3)
.forEach(
document -> {
System.out.println(document);
});
}
}
1 | This is the connectToDatabase function from the previous section. |
2 | If you changed the collection name in the previous script, change it in this script as well. |
Next steps
For more practice, you can continue building with the database that you created here. For example, try inserting more data into the collection, or try different searches. The Data API reference provides code examples for various operations.
Upload data from different sources
This quickstart demonstrated how to insert data from a JSON file, but you can insert data from many sources, including CSV and PDF files.
-
If you can convert your data into JSON, you can use the example from this quickstart (see the sketch after this list).
-
If your data is in unstructured files, such as PDF files, you can use the data loader or write a script that uses the Unstructured.io integration to insert your data.
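For example, if your source data is a CSV file, one option is to convert it to a JSON array and reuse the upload script from this quickstart. The following Python sketch does that with the standard library; the file names and column names are hypothetical, and you would still choose which fields to include in the $vectorize string when you upload.
import csv
import json

# Hypothetical file names; adjust them to match your data.
with open("books.csv", newline="", encoding="utf8") as csv_file:
    rows = list(csv.DictReader(csv_file))

# csv.DictReader returns every value as a string, so convert numeric
# columns as needed. These column names are hypothetical.
for row in rows:
    row["numberOfPages"] = int(row["numberOfPages"])
    row["rating"] = float(row["rating"])

# Write a JSON array that the upload script from this quickstart can read.
with open("books.json", "w", encoding="utf8") as json_file:
    json.dump(rows, json_file)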
Use a different method to generate vector embeddings
This quickstart used the Astra-hosted NVIDIA embedding model to generate vector embeddings. You can also use other embedding models, or you can insert data with pre-generated vector embeddings (or without vector embeddings) and skip embedding.
-
To use a different embedding model, see Auto-generate embeddings with vectorize.
-
To insert pre-embedded data, specify the vector dimensions and similarity metric instead of an embedding provider. See the API documentation for collections and the sketch after this list.
-
To skip embedding, you can create a collection without any vector options. See the API documentation for collections.
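As a rough illustration of both options with the Python client used in this quickstart, the following sketch creates one collection configured for pre-generated embeddings (explicit dimension and similarity metric) and another with no vector options at all. The collection names and the three-dimensional example vector are placeholders; the dimension must match the embedding model you use.
from astrapy.constants import VectorMetric
from quickstart_connect import connect_to_database

database = connect_to_database()

# A collection for pre-generated embeddings: specify the dimension and
# similarity metric instead of an embedding provider.
preembedded = database.create_collection(
    "preembedded_collection",  # placeholder name
    dimension=3,  # must match the output size of your embedding model
    metric=VectorMetric.COSINE,
)

# Supply the embedding yourself in the reserved $vector field.
preembedded.insert_one({"title": "Example Book", "$vector": [0.12, -0.45, 0.88]})

# A collection without vector options: documents are stored, but no
# embeddings are generated and vector search is not available.
plain = database.create_collection("plain_collection")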
Perform more complex searches
This quickstart demonstrated how to find data using filters and vector search. To learn more about the searches you can perform, see Find documents reference and Perform a vector search.
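As a starting point, this Python sketch combines multiple filter conditions with a vector search on the quickstart collection and asks the Data API to include each match's similarity score. It assumes the same quickstart_connect module and collection name used earlier.
from quickstart_connect import connect_to_database

database = connect_to_database()
collection = database.get_collection("quickstart_collection")

# Vector-search for short books that are currently available, and
# include each document's similarity score in the results.
cursor = collection.find(
    {"$and": [{"isCheckedOut": False}, {"numberOfPages": {"$lt": 300}}]},
    sort={"$vectorize": "A lighthearted adventure"},
    limit=5,
    projection={"title": True, "author": True},
    include_similarity=True,
)
for document in cursor:
    print(document)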
Use different database settings
For this quickstart, you need a Serverless (Vector) database in the Amazon Web Services us-east-2 region, which is required for the Astra-hosted NVIDIA embedding model integration.
For production databases, you might use different database settings.
For more information, see Astra DB Serverless database regions and maintenance schedules and Create a database.
For more examples, see the integration guides, code examples, and tutorials.