Astra DB Serverless quickstart for tables
If your data is not fully structured, or if you do not want to use a fixed schema, see the quickstart for collections instead.
This Astra DB Serverless feature is currently in public preview. Development is ongoing, and the features and functionality are subject to change. Astra DB Serverless, and the use of such, is subject to the DataStax Preview Terms. The Data API tables commands are available through HTTP and the clients. If you use a client, tables commands are available only in client versions 2.0-preview or later. For more information, see the Data API client upgrade guide.
This quickstart demonstrates how to create a table schema, insert data to a table, generate vector embeddings, and perform a vector search to find similar data.
The Next steps section discusses how to insert other types of data, use a different embedding model, insert data with pre-generated vector embeddings, or skip embedding.
To learn more about vector databases and vector search, see What are vector databases? and What is Vector Search.
Create a database and store your credentials
- In the Astra Portal navigation menu, click Databases, and then click Create Database.
- For this quickstart, select the following:
  - Serverless (Vector) as the deployment type
  - Amazon Web Services as the provider
  - us-east-2 as the region
- Click Create Database.
  Wait for your database to initialize and reach Active status. This can take several minutes.
- Under Database Details, copy your database’s API endpoint.
- Under Database Details, click Generate Token, and then copy the token.
- For this quickstart, store the endpoint and token in environment variables:
  Linux or macOS:
  export ASTRA_DB_API_ENDPOINT=API_ENDPOINT
  export ASTRA_DB_APPLICATION_TOKEN=TOKEN
  Windows:
  set ASTRA_DB_API_ENDPOINT=API_ENDPOINT
  set ASTRA_DB_APPLICATION_TOKEN=TOKEN
Install a client
Install one of the Astra DB Data API clients to facilitate interactions with the Data API. To use the Data API with tables, you must install client version 2.0-preview or later.
Client version 2.0-preview is a public preview release. Development is ongoing, and the features and functionality are subject to change. Astra DB Serverless, and the use of such, is subject to the DataStax Preview Terms.
- Python
- TypeScript
- Java
- Update to Python version 3.8 or later if needed.
- Update to pip version 23.0 or later if needed.
- Install the latest version of the astrapy package. To upgrade to this preview release, you must pass the --pre flag:
  pip install --upgrade --pre astrapy
- Update to Node version 18 or later if needed.
- Update to TypeScript version 5 or later if needed.
- Install the latest version of the @datastax/astra-db-ts package. To upgrade to this preview release, you must specify @next. For example:
  npm install @datastax/astra-db-ts@next
- Maven
- Gradle
- Update to Java version 17 or later if needed.
- Update to Maven version 3.9 or later if needed.
- Add a dependency to the latest version of the astra-db-java package.
  pom.xml:
  <dependencies>
    <dependency>
      <groupId>com.datastax.astra</groupId>
      <artifactId>astra-db-java</artifactId>
      <version>2.0.0-PREVIEW</version>
    </dependency>
  </dependencies>
- Update to Java version 17 or later if needed.
- Update to Gradle version 11 or later if needed.
- Add a dependency to the latest version of the astra-db-java package.
  build.gradle:
  dependencies {
    implementation 'com.datastax.astra:astra-db-java:2.0.0-PREVIEW'
  }
Connect to your database
The following function connects to your database.
Copy the code into a file in your project. You don’t need to execute the function now; the subsequent code examples import and use this function.
- Python
- TypeScript
- Java
import os
from astrapy import DataAPIClient, Database
def connect_to_database() -> Database:
    """
    Connects to a DataStax Astra database.
    This function retrieves the database endpoint and application token from the
    environment variables `ASTRA_DB_API_ENDPOINT` and `ASTRA_DB_APPLICATION_TOKEN`.

    Returns:
        Database: An instance of the connected database.

    Raises:
        RuntimeError: If the environment variables `ASTRA_DB_API_ENDPOINT` or
            `ASTRA_DB_APPLICATION_TOKEN` are not defined.
    """
    endpoint = os.environ.get("ASTRA_DB_API_ENDPOINT") (1)
    token = os.environ.get("ASTRA_DB_APPLICATION_TOKEN")

    if not token or not endpoint:
        raise RuntimeError(
            "Environment variables ASTRA_DB_API_ENDPOINT and ASTRA_DB_APPLICATION_TOKEN must be defined"
        )

    # Create an instance of the `DataAPIClient` class with your token.
    client = DataAPIClient(token)

    # Get the database specified by your endpoint.
    database = client.get_database(endpoint)

    print(f"Connected to database {database.info().name}")

    return database
(1) Store your database’s endpoint and application token in environment variables named ASTRA_DB_API_ENDPOINT and ASTRA_DB_APPLICATION_TOKEN, as instructed in Create a database and store your credentials.
import { DataAPIClient, Db } from "@datastax/astra-db-ts";
/**
* Connects to a DataStax Astra database.
* This function retrieves the database endpoint and application token from the
* environment variables `ASTRA_DB_API_ENDPOINT` and `ASTRA_DB_APPLICATION_TOKEN`.
*
* @returns An instance of the connected database.
* @throws Will throw an error if the environment variables
* `ASTRA_DB_API_ENDPOINT` or `ASTRA_DB_APPLICATION_TOKEN` are not defined.
*/
export function connectToDatabase(): Db {
const { ASTRA_DB_API_ENDPOINT: endpoint, ASTRA_DB_APPLICATION_TOKEN: token } =
process.env; (1)
if (!token || !endpoint) {
throw new Error(
"Environment variables ASTRA_DB_API_ENDPOINT and ASTRA_DB_APPLICATION_TOKEN must be defined.",
);
}
// Create an instance of the `DataAPIClient` class with your token.
const client = new DataAPIClient(token);
// Get the database specified by your endpoint.
const database = client.db(endpoint);
console.log(`Connected to database ${database.id}`);
return database;
}
(1) Store your database’s endpoint and application token in environment variables named ASTRA_DB_API_ENDPOINT and ASTRA_DB_APPLICATION_TOKEN, as instructed in Create a database and store your credentials.
package com.quickstart;
import com.datastax.astra.client.DataAPIClient;
import com.datastax.astra.client.databases.Database;
public class QuickstartConnect {
/**
* Connects to a DataStax Astra database. This function retrieves the database endpoint and
* application token from the environment variables `ASTRA_DB_API_ENDPOINT` and
* `ASTRA_DB_APPLICATION_TOKEN`.
*
* @return an instance of the connected database
* @throws IllegalStateException if the environment variables `ASTRA_DB_API_ENDPOINT` or
* `ASTRA_DB_APPLICATION_TOKEN` are not defined
*/
public static Database connectToDatabase() {
String endpoint = System.getenv("ASTRA_DB_API_ENDPOINT"); (1)
String token = System.getenv("ASTRA_DB_APPLICATION_TOKEN");
if (endpoint == null || token == null) {
throw new IllegalStateException(
"Environment variables ASTRA_DB_API_ENDPOINT and ASTRA_DB_APPLICATION_TOKEN must be defined");
}
// Create an instance of `DataAPIClient` with your token.
DataAPIClient client = new DataAPIClient(token);
// Get the database specified by your endpoint.
Database database = client.getDatabase(endpoint);
System.out.println("Connected to database.");
return database;
}
}
(1) Store your database’s endpoint and application token in environment variables named ASTRA_DB_API_ENDPOINT and ASTRA_DB_APPLICATION_TOKEN, as instructed in Create a database and store your credentials.
Create a table
The following script will create an empty table in your database. The table created here matches the structure of the data that you will insert to the table. After creating the table, the script will index some columns so that you can find and sort data in those columns.
- Copy the script into your project.
- If needed, update the import path to the "connect to database" function from the previous section.
- Execute the script.
  For information about executing scripts, refer to the documentation for the language that your script uses.
Once the script completes, you should see a printed message confirming the table creation.
- Python
- TypeScript
- Java
from quickstart_connect import connect_to_database (1)
from astrapy.info import (
CreateTableDefinition,
ColumnType,
TableVectorIndexOptions,
VectorServiceOptions,
)
from astrapy.constants import VectorMetric
def main() -> None:
    database = connect_to_database()

    table_definition = (
        CreateTableDefinition.builder()
        # Define all of the columns in the table
        .add_column("title", ColumnType.TEXT)
        .add_column("author", ColumnType.TEXT)
        .add_column("numberOfPages", ColumnType.INT)
        .add_column("rating", ColumnType.FLOAT)
        .add_column("publicationYear", ColumnType.INT)
        .add_column("summary", ColumnType.TEXT)
        .add_set_column(
            "genres",
            ColumnType.TEXT,
        )
        .add_map_column(
            "metadata",
            # This is the key type for the map column
            ColumnType.TEXT,
            # This is the value type for the map column
            ColumnType.TEXT,
        )
        .add_column("isCheckedOut", ColumnType.BOOLEAN)
        .add_column("borrower", ColumnType.TEXT)
        .add_column("dueDate", ColumnType.DATE)
        # This column will store vector embeddings.
        # The column will use an embedding model from NVIDIA to generate the
        # vector embeddings when data is inserted to the column. (2)
        .add_vector_column(
            "summaryGenresVector",
            dimension=1024,
            service=VectorServiceOptions(
                provider="nvidia",
                model_name="NV-Embed-QA",
            ),
        )
        # Define the primary key for the table.
        # In this case, the table uses a composite primary key.
        .add_partition_by(["title", "author"])
        # Finally, build the table definition.
        .build()
    )

    table = database.create_table(
        "quickstart_table", (3)
        definition=table_definition,
    )
    print("Created table")

    # Index any columns that you want to sort and filter on.
    table.create_index(
        "ratingIndex",
        column="rating",
    )
    table.create_index(
        "numberOfPagesIndex",
        column="numberOfPages",
    )
    table.create_vector_index(
        "summaryGenresVectorIndex",
        column="summaryGenresVector",
        options=TableVectorIndexOptions(
            metric=VectorMetric.COSINE,
        ),
    )
    print("Indexed columns")


if __name__ == "__main__":
    main()
(1) This is the connect_to_database function from the previous section. Update the import path if necessary. To use the function, ensure you stored your database’s endpoint and application token in environment variables as instructed in Create a database and store your credentials.
(2) This column will use the Astra-hosted NVIDIA embedding model to generate vector embeddings. This is currently only supported in certain regions. Ensure that your database is in the Amazon Web Services us-east-2 region, as instructed in Create a database and store your credentials.
(3) This script creates a table named quickstart_table. If you want to use a different name, change the name before running the script.
import { connectToDatabase } from "./quickstart-connect"; (1)
import { Table, InferTablePrimaryKey, InferTableSchema } from "@datastax/astra-db-ts";
const database = connectToDatabase();
const tableDefinition = Table.schema({
// Define all of the columns in the table
columns: {
title: "text",
author: "text",
numberOfPages: "int",
rating: "float",
publicationYear: "int",
summary: "text",
genres: { type: "set", valueType: "text" },
metadata: {
type: "map",
keyType: "text",
valueType: "text",
},
isCheckedOut: "boolean",
borrower: "text",
dueDate: "date",
// This column will store vector embeddings.
// The column will use an embedding model from NVIDIA to generate the
// vector embeddings when data is inserted to the column. (2)
summaryGenresVector: {
type: "vector",
dimension: 1024,
service: {
provider: "nvidia",
modelName: "NV-Embed-QA",
},
},
},
// Define the primary key for the table.
// In this case, the table uses a composite primary key.
primaryKey: {
partitionBy: ["title", "author"],
},
});
// Infer the TypeScript-equivalent type of the table's schema and primary key.
// Export the types for later use.
export type TableSchema = InferTableSchema<typeof tableDefinition>;
export type TablePrimaryKey = InferTablePrimaryKey<typeof tableDefinition>;
(async function () {
const table = await database.createTable<TableSchema, TablePrimaryKey>(
"quickstartTable", (3)
{ definition: tableDefinition },
);
console.log("Created table");
// Index any columns that you want to sort and filter on.
await table.createIndex("ratingIndex", "rating");
await table.createIndex("numberOfPagesIndex", "numberOfPages");
await table.createVectorIndex(
"summaryGenresVectorIndex",
"summaryGenresVector",
{
options: {
metric: "cosine",
},
},
);
console.log("Indexed columns");
})();
(1) This is the connectToDatabase function from the previous section. Update the import path if necessary. To use the function, ensure you stored your database’s endpoint and application token in environment variables as instructed in Create a database and store your credentials.
(2) This column will use the Astra-hosted NVIDIA embedding model to generate vector embeddings. This is currently only supported in certain regions. Ensure that your database is in the Amazon Web Services us-east-2 region, as instructed in Create a database and store your credentials.
(3) This script creates a table named quickstartTable. If you want to use a different name, change the name before running the script.
package com.quickstart;
import com.datastax.astra.client.core.vector.SimilarityMetric;
import com.datastax.astra.client.core.vectorize.VectorServiceOptions;
import com.datastax.astra.client.databases.Database;
import com.datastax.astra.client.tables.Table;
import com.datastax.astra.client.tables.definition.TableDefinition;
import com.datastax.astra.client.tables.definition.columns.ColumnDefinitionVector;
import com.datastax.astra.client.tables.definition.columns.ColumnTypes;
import com.datastax.astra.client.tables.definition.indexes.TableVectorIndexDefinition;
import com.datastax.astra.client.tables.definition.rows.Row;
public class QuickstartTableCreateDemo {
public static void main(String[] args) {
Database database = QuickstartConnect.connectToDatabase(); (1)
TableDefinition tableDefinition =
new TableDefinition()
// Define all of the columns in the table
.addColumnText("title")
.addColumnText("author")
.addColumnInt("numberOfPages")
.addColumn("rating", ColumnTypes.FLOAT)
.addColumnInt("publicationYear")
.addColumnText("summary")
.addColumnSet("genres", ColumnTypes.TEXT)
.addColumnMap("metadata", ColumnTypes.TEXT, ColumnTypes.TEXT)
.addColumnBoolean("isCheckedOut")
.addColumnText("borrower")
.addColumn("dueDate", ColumnTypes.DATE)
// This column will store vector embeddings.
// The column will use an embedding model from NVIDIA to generate the
// vector embeddings when data is inserted to the column. (2)
.addColumnVector(
"summaryGenresVector",
new ColumnDefinitionVector()
.dimension(1024)
.metric(SimilarityMetric.COSINE)
.service(
new VectorServiceOptions().provider("nvidia").modelName("NV-Embed-QA")))
// Define the primary key for the table.
// In this case, the table uses a composite primary key.
.addPartitionBy("title")
.addPartitionBy("author");
// Default Table Creation
Table<Row> table =
database.createTable(
"quickstartTable", (3)
tableDefinition);
System.out.println("Created table.");
// Index any columns that you want to sort and filter on.
table.createIndex("ratingIndex", "rating");
table.createIndex("numberOfPagesIndex", "numberOfPages");
TableVectorIndexDefinition definition =
new TableVectorIndexDefinition()
.column("summaryGenresVector")
.metric(SimilarityMetric.COSINE);
table.createVectorIndex("summaryGenresVectorIndex", definition);
System.out.println("Indexed columns.");
}
}
(1) This is the connectToDatabase function from the previous section. Update the import path if necessary. To use the function, ensure you stored your database’s endpoint and application token in environment variables as instructed in Create a database and store your credentials.
(2) This column will use the Astra-hosted NVIDIA embedding model to generate vector embeddings. This is currently only supported in certain regions. Ensure that your database is in the Amazon Web Services us-east-2 region, as instructed in Create a database and store your credentials.
(3) This script creates a table named quickstartTable. If you want to use a different name, change the name before running the script.
Insert data to your table
The following script will insert data from a JSON file into your table.
- Copy the script into your project.
- Download the quickstart_dataset.json sample dataset (76 kB). This dataset is a JSON array describing library books.
- Replace PATH_TO_DATA_FILE in the script with the path to the dataset.
- If needed, update the import path to the "connect to database" function from the previous section.
- Execute the script.
  For information about executing scripts, refer to the documentation for the language that your script uses.
Once the script completes, you should see a printed message confirming the insertion of 100 rows.
- Python
- TypeScript
- Java
from quickstart_connect import connect_to_database (1)
from astrapy.data_types import DataAPIDate
import json
def main() -> None:
    database = connect_to_database()
    table = database.get_table("quickstart_table") (2)

    data_file_path = "PATH_TO_DATA_FILE" (3)

    with open(data_file_path, "r", encoding="utf8") as file:
        json_data = json.load(file)

    rows = [
        {
            **data,
            "dueDate": (
                DataAPIDate.from_string(data["dueDate"])
                if data.get("dueDate")
                else None
            ),
            "summaryGenresVector": (
                f"summary: {data['summary']} | genres: {', '.join(data['genres'])}"
            ),
        }
        for data in json_data
    ]

    insert_result = table.insert_many(rows)
    print(f"Inserted {len(insert_result.inserted_ids)} rows")


if __name__ == "__main__":
    main()
(1) This is the connect_to_database function from the previous section. Update the import path if necessary. To use the function, ensure you stored your database’s endpoint and application token in environment variables as instructed in Create a database and store your credentials.
(2) If you changed the table name in the previous script, change it in this script as well.
(3) Replace PATH_TO_DATA_FILE with the path to the JSON data file.
import { connectToDatabase } from "./quickstart-connect"; (1)
import { TableSchema, TablePrimaryKey } from "./quickstart-create-table"; (2)
import { DataAPIDate } from "@datastax/astra-db-ts";
import fs from "fs";
(async function () {
const database = connectToDatabase();
const table = database.table<TableSchema, TablePrimaryKey>("quickstartTable"); (3)
const dataFilePath = "PATH_TO_DATA_FILE"; (4)
// Read the JSON file and parse it into a JSON array.
const rawData = fs.readFileSync(dataFilePath, "utf8");
const jsonData = JSON.parse(rawData);
const rows = jsonData.map((data: any) => ({
...data,
genres: new Set(data.genres),
metadata: new Map(Object.entries(data.metadata)),
dueDate: data.dueDate ? new DataAPIDate(data.dueDate) : null,
summaryGenresVector: `summary: ${data["summary"]} | genres: ${data["genres"].join(", ")}`,
}));
const insertedResult = await table.insertMany(rows);
console.log(`Inserted ${insertedResult.insertedCount} rows.`);
})();
(1) This is the connectToDatabase function from the previous section. Update the import path if necessary. To use the function, ensure you stored your database’s endpoint and application token in environment variables as instructed in Create a database and store your credentials.
(2) These are the types exported from the previous section. Update the import path if necessary.
(3) If you changed the table name in the previous script, change it in this script as well.
(4) Replace PATH_TO_DATA_FILE with the path to the JSON data file.
package com.quickstart;
import com.datastax.astra.client.databases.Database;
import com.datastax.astra.client.tables.Table;
import com.datastax.astra.client.tables.commands.results.TableInsertManyResult;
import com.datastax.astra.client.tables.definition.rows.Row;
import com.fasterxml.jackson.core.type.TypeReference;
import com.fasterxml.jackson.databind.ObjectMapper;
import java.io.FileInputStream;
import java.text.ParseException;
import java.text.SimpleDateFormat;
import java.util.Date;
import java.util.HashSet;
import java.util.List;
import java.util.Map;
public class QuickstartInsertToTableDemo {
private static Date parseDate(String date) {
if (date == null) return null;
try {
return new SimpleDateFormat("yyyy-MM-dd").parse(date);
} catch (ParseException e) {
throw new RuntimeException(e);
}
}
public static void main(String[] args) throws Exception {
Database database = QuickstartConnect.connectToDatabase(); (1)
Table<Row> table = database.getTable("quickstartTable"); (2)
// Initialize Jackson ObjectMapper
ObjectMapper objectMapper = new ObjectMapper();
try (FileInputStream stream = new FileInputStream("PATH_TO_DATA_FILE")) { (3)
List<Row> rows = objectMapper.readValue(stream, new TypeReference<>() {});
rows.forEach(
row -> {
// Deserialize the "genres" field into a HashSet
row.add("genres", new HashSet<>(row.getList("genres", String.class)));
// Deserialize the "metadata" field into a Map
Map<String, String> metadataMap =
objectMapper.convertValue(
row.get("metadata"), new TypeReference<Map<String, String>>() {});
row.add("metadata", metadataMap);
// Deserialize the "dueDate" field into a Date or null
row.add("dueDate", parseDate(row.getText("dueDate")));
// Add a field of text to vectorize
String summary = row.getText("summary");
String genres = String.join(", ", row.getList("genres", String.class));
String summaryGenresVector =
String.format("summary: %s | genres: %s", summary, genres);
row.add("summaryGenresVector", summaryGenresVector);
});
TableInsertManyResult result = table.insertMany(rows);
System.out.println("Inserted " + result.getInsertedIds().size() + " items.");
}
}
}
(1) This is the connectToDatabase function from the previous section. To use the function, ensure you stored your database’s endpoint and application token in environment variables as instructed in Create a database and store your credentials.
(2) If you changed the table name in the previous script, change it in this script as well.
(3) Replace PATH_TO_DATA_FILE with the path to the JSON data file.
Find data in your table
After you insert data to your table, you can search the data. In addition to traditional database filtering, you can perform a vector search to find data that is most similar to a search string.
The following script performs three searches on the sample data that you loaded in Insert data to your table.
- Python
- TypeScript
- Java
from quickstart_connect import connect_to_database (1)
def main() -> None:
    database = connect_to_database()
    table = database.get_table("quickstart_table") (2)

    # Find rows that match a filter
    print("\nFinding books with rating greater than 4.7...")

    rating_cursor = table.find(
        {"rating": {"$gt": 4.7}},
        projection={"title": True, "rating": True},
    )

    for row in rating_cursor:
        print(f"{row['title']} is rated {row['rating']}")

    # Perform a vector search to find the closest match to a search string
    print("\nUsing vector search to find a single scary novel...")

    single_vector_match = table.find_one(
        {},
        sort={"summaryGenresVector": "A scary novel"},
        projection={"title": True},
    )
    print(f"{single_vector_match['title']} is a scary novel")

    # Combine a filter and vector search to find the 3 books with
    # more than 400 pages that are the closest matches to a search string
    print(
        "\nUsing filters and vector search to find 3 books with more than 400 pages that are set in the arctic, returning just the title and author..."
    )

    vector_cursor = table.find(
        {"numberOfPages": {"$gt": 400}},
        sort={"summaryGenresVector": "A book set in the arctic"},
        limit=3,
        projection={"title": True, "author": True},
    )

    for row in vector_cursor:
        print(row)


if __name__ == "__main__":
    main()
(1) This is the connect_to_database function from the previous section. Update the import path if necessary.
(2) If you changed the table name in the previous script, change it in this script as well.
import { connectToDatabase } from "./quickstart-connect"; (1)
import { TableSchema, TablePrimaryKey } from "./quickstart-create-table"; (2)
(async function () {
const database = connectToDatabase();
const table = database.table<TableSchema, TablePrimaryKey>("quickstartTable"); (3)
// Find rows that match a filter
console.log("\nFinding books with rating greater than 4.7...");
const ratingCursor = table.find(
{ rating: { $gt: 4.7 } },
{
limit: 10,
projection: { title: true, rating: true },
}
);
for await (const row of ratingCursor) {
console.log(`${row.title} is rated ${row.rating}`);
}
// Perform a vector search to find the closest match to a search string
console.log("\nUsing vector search to find a single scary novel...");
const singleVectorMatch = await table.findOne(
{},
{
sort: { summaryGenresVector: "A scary novel" },
projection: { title: true },
},
);
console.log(`${singleVectorMatch?.title} is a scary novel`);
// Combine a filter, vector search, and projection to find the 3 books with
// more than 400 pages that are the closest matches to a search string
console.log(
"\nUsing filters and vector search to find 3 books with more than 400 pages that are set in the arctic, returning just the title and author...",
);
const vectorCursor = table.find(
{ numberOfPages: { $gt: 400 } },
{
sort: { summaryGenresVector: "A book set in the arctic" },
limit: 3,
projection: { title: true, author: true },
},
);
for await (const row of vectorCursor) {
console.log(row);
}
})();
(1) This is the connectToDatabase function from the previous section. Update the import path if necessary.
(2) These are the types exported from the previous section. Update the import path if necessary.
(3) If you changed the table name in the previous script, change it in this script as well.
package com.quickstart;
import static com.datastax.astra.client.core.query.Projection.include;
import com.datastax.astra.client.core.query.Filter;
import com.datastax.astra.client.core.query.Filters;
import com.datastax.astra.client.core.query.Sort;
import com.datastax.astra.client.databases.Database;
import com.datastax.astra.client.tables.Table;
import com.datastax.astra.client.tables.commands.options.TableFindOneOptions;
import com.datastax.astra.client.tables.commands.options.TableFindOptions;
import com.datastax.astra.client.tables.definition.rows.Row;
public class QuickstartFindTableRowsDemo {
public static void main(String[] args) {
Database database = QuickstartConnect.connectToDatabase(); (1)
Table<Row> table = database.getTable("quickstartTable"); (2)
// Find rows that match a filter
System.out.println("\nFinding books with rating greater than 4.7...");
Filter filter = Filters.gt("rating", 4.7);
TableFindOptions options = new TableFindOptions()
.limit(10)
.projection(include("title", "rating"));
table
.find(filter, options)
.forEach(
row -> {
System.out.println(row.get("title") + " is rated " + row.get("rating"));
});
// Perform a vector search to find the closest match to a search string
System.out.println("\nUsing vector search to find a single scary novel...");
TableFindOneOptions options2 =
new TableFindOneOptions()
.sort(Sort.vectorize("summaryGenresVector", "A scary novel"))
.projection(include("title"));
table
.findOne(options2)
.ifPresent(
row -> {
System.out.println(row.get("title") + " is a scary novel");
});
// Combine a filter, vector search, and projection to find the 3 books with
// more than 400 pages that are the closest matches to a search string
System.out.println(
"\nUsing filters and vector search to find 3 books with more than 400 pages that are set in the arctic, returning just the title and author...");
Filter filter3 = Filters.gt("numberOfPages", 400);
TableFindOptions options3 =
new TableFindOptions()
.limit(3)
.sort(Sort.vectorize("summaryGenresVector", "A book set in the arctic"))
.projection(include("title", "author"));
table
.find(filter3, options3)
.forEach(
row -> {
System.out.println(row);
});
}
}
(1) This is the connectToDatabase function from the previous section.
(2) If you changed the table name in the previous script, change it in this script as well.
Next steps
For more practice, you can continue building with the database that you created here. For example, try inserting more data to the table, or try different searches. The Data API reference provides code examples for various operations.
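For example, here is a minimal sketch of inserting one more row into the quickstart table. The values shown are placeholders, not part of the sample dataset:

from quickstart_connect import connect_to_database

database = connect_to_database()
table = database.get_table("quickstart_table")

# Insert a single row. The vectorize-enabled column accepts a string, and the
# embedding is generated on the server when the row is written.
table.insert_one(
    {
        "title": "Example Book",
        "author": "Example Author",
        "numberOfPages": 123,
        "genres": ["example"],
        "summaryGenresVector": "summary: An example summary | genres: example",
    }
)
print("Inserted 1 row")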
Upload data from different sources
This quickstart demonstrated how to insert structured data from a JSON file into a table, but you can insert data from many sources.
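For example, here is a minimal sketch that loads rows from a hypothetical books.csv file whose header row matches the table's column names. Only a few columns are shown, and numeric fields need explicit conversion because CSV values are strings:

import csv

from quickstart_connect import connect_to_database

database = connect_to_database()
table = database.get_table("quickstart_table")

# Read rows from the hypothetical CSV file and convert numeric fields.
rows = []
with open("books.csv", newline="", encoding="utf8") as file:
    for record in csv.DictReader(file):
        rows.append(
            {
                "title": record["title"],
                "author": record["author"],
                "numberOfPages": int(record["numberOfPages"]),
                "rating": float(record["rating"]),
            }
        )

result = table.insert_many(rows)
print(f"Inserted {len(result.inserted_ids)} rows")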
Tables use fixed schemas. If your data is unstructured or if you want a flexible schema, you can use a collection instead of a table. See the quickstart for collections.
Use a different method to generate vector embeddings
This quickstart used the Astra-hosted NVIDIA embedding model to generate vector embeddings. You can also use other embedding models, or you can insert data with pre-generated vector embeddings (or without vector embeddings) and skip embedding.
- To use a different embedding model, see Auto-generate embeddings with vectorize and Work with rows: Vector type.
- To insert pre-embedded data, you need to specify the vector dimensions and similarity metric instead of specifying the embedding provider, as in the sketch after this list. See Work with rows: Vector type.
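As a minimal sketch of that approach, the following reuses the quickstart's connection helper. The table name my_table, the 3-value dimension, and the vector values are placeholders for illustration only; use your embedding model's real dimension:

from quickstart_connect import connect_to_database
from astrapy.constants import VectorMetric
from astrapy.info import ColumnType, CreateTableDefinition, TableVectorIndexOptions

database = connect_to_database()

# Define a vector column with only a dimension. Without a `service` option,
# the Data API does not generate embeddings for you.
definition = (
    CreateTableDefinition.builder()
    .add_column("title", ColumnType.TEXT)
    .add_vector_column("summaryVector", dimension=3)  # placeholder dimension
    .add_partition_by(["title"])
    .build()
)
table = database.create_table("my_table", definition=definition)

# Choose the similarity metric when you index the vector column.
table.create_vector_index(
    "summaryVectorIndex",
    column="summaryVector",
    options=TableVectorIndexOptions(metric=VectorMetric.COSINE),
)

# Insert a row with a pre-generated embedding as a plain list of floats.
table.insert_one({"title": "Example Book", "summaryVector": [0.12, -0.03, 0.88]})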
Perform more complex searches
This quickstart demonstrated how to find data using filters and vector search. To learn more about the searches you can perform, see About tables and Perform a vector search.
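As a starting point, here is a minimal sketch that combines filters on the two indexed non-vector columns from this quickstart with a vector search. The search string and thresholds are arbitrary examples:

from quickstart_connect import connect_to_database

database = connect_to_database()
table = database.get_table("quickstart_table")

# Filter on two indexed columns and sort by vector similarity to a search string.
cursor = table.find(
    {"rating": {"$gte": 4.0}, "numberOfPages": {"$lt": 300}},
    sort={"summaryGenresVector": "A lighthearted mystery"},
    limit=5,
    projection={"title": True, "rating": True, "numberOfPages": True},
)
for row in cursor:
    print(row)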
Use different database settings
For this quickstart, you need a Serverless (Vector) database in the Amazon Web Services us-east-2 region, which is required for the Astra-hosted NVIDIA embedding model integration.
For production databases, you might use different database settings.
For more information, see Astra DB Serverless database regions and maintenance schedules and Create a database.