Insert rows

Inserts multiple rows into a table.

This method can insert rows into an existing CQL table, but the Data API does not support all CQL data types or modifiers. For more information, see Data types in tables.

For general information about working with tables and rows, see About tables with the Data API.

Ready to write code? See the examples for this method to get started. If you are new to the Data API, check out the quickstart.

Result

  • Python

  • TypeScript

  • Java

  • curl

Inserts the specified rows and returns a TableInsertManyResult object that includes the primary keys of the inserted rows, both as dictionaries and as ordered tuples.

If a row with the specified primary key already exists in the table, the row is overwritten with the specified column values. Unspecified columns remain unchanged.

If a row fails to insert and the insertions are sequential (ordered is True), then that row and all subsequent rows are not inserted. If the requested chunk size is greater than 1, and a failure occurs because the table schema was invalidated, then none of the rows in that chunk are inserted. The resulting error message indicates the first row that failed to insert.

If a row fails to insert and the insertions are not sequential (ordered is False), the operation attempts to insert the remaining rows and then raises an exception. The exception indicates which rows were successfully inserted and the problems with the failed rows.

Example response:

TableInsertManyResult(
  inserted_ids=[
    {'match_id': 'fight4', 'round': 1},
    {'match_id': 'fight5', 'round': 1},
    {'match_id': 'fight5', 'round': 2},
    {'match_id': 'fight5', 'round': 3},
    {'match_id': 'challenge6', 'round': 1}
    ... (13 total)
  ],
  inserted_id_tuples=[
    ('fight4', 1), ('fight5', 1), ('fight5', 2),
    ('fight5', 3), ('challenge6', 1) ... (13 total)
  ],
  raw_results=...
)
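
For example, a minimal sketch that iterates over the returned primary keys, assuming result holds the TableInsertManyResult returned by insert_many and using the match_id and round columns from the example above:

# Primary keys as dictionaries keyed by column name
for pk in result.inserted_ids:
    print(pk["match_id"], pk["round"])

# The same primary keys as ordered tuples
for match_id, round_number in result.inserted_id_tuples:
    print(match_id, round_number)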

Inserts the specified rows and returns a promise that resolves to a TableInsertManyResult<PKey> object that includes the primary keys of the inserted rows. The primary key type is inferred from the table's PKey type parameter. If it cannot be inferred from PKey, it is inferred from Partial<Schema> instead.

If a row with the specified primary key already exists in the table, the row is overwritten with the specified column values. Unspecified columns remain unchanged.

If a row fails to insert and the insertions are sequential (options.ordered is true), then that row and all subsequent rows are not inserted. If the requested chunk size is greater than 1, and a failure occurs because the table schema was invalidated, then none of the rows in that chunk are inserted. The resulting error message indicates the first row that failed to insert.

If a row fails to insert and the insertions are not sequential (options.ordered is false), the operation will try to insert the remaining rows and then throw an error. The error indicates which rows were successfully inserted and the problems with the failed rows.

Example resolved response:

{
  insertedIds: [
    { matchId: 'match0', round: 0 },
    { matchId: 'match1', round: 0 },
    { matchId: 'match2', round: 0 },
    // ...
  ],
  insertedCount: 50,
}

Inserts the specified rows and returns a TableInsertManyResult instance that includes the primary keys of the inserted rows and the schema of the primary key.

If a row with the specified primary key already exists in the table, the row is overwritten with the specified column values. Unspecified columns remain unchanged.

If a row fails to insert and the insertions are sequential (the ordered property in TableInsertManyOptions is true), then that row and all subsequent rows are not inserted. If the requested chunk size is greater than 1, and a failure occurs because the table schema was invalidated, then none of the rows in that chunk are inserted. The resulting error message indicates the first row that failed to insert.

If a row fails to insert and the insertions are not sequential (the ordered property in TableInsertManyOptions is false), the operation will try to insert the remaining rows and then throw an error. The error indicates which rows were successfully inserted and the problems with the failed rows.

Example response:

{
  "status": {
    "primaryKeySchema": {
      "match_id": {
        "type": "text"
      },
      "round": {
        "type": "int"
      }
    },
    "insertedIds": [
      ["fight4",1 ],
      ["fight5",1],
      ["fight5",2]
    ]
  }
}

Inserts the specified rows.

If a row with the specified primary key already exists in the table, the row is overwritten with the specified column values. Unspecified columns remain unchanged.

The JSON response includes the following:

  • status.primaryKeySchema: An object that describes the table’s primary key definition, including column names and types.

  • status.insertedIds: A nested array that contains the primary key values for each inserted row. If the primary key has multiple columns, then the order of each array matches the order described by status.primaryKeySchema.

    Omitted if the options.returnDocumentResponses parameter is true.

  • status.documentResponses: An array of objects where each object represents a row. In each object, status describes the outcome of the insertion, and _id is an array that contains the primary key values.

    Included only if the options.returnDocumentResponses parameter is true.

You must check the entire response for errors to verify that all rows inserted successfully.

If a row fails to insert and the insertions are sequential (options.ordered is true), then that row and all subsequent rows are not inserted. If the requested chunk size is greater than 1, and a failure occurs because the table schema was invalidated, then none of the rows in that chunk are inserted. The resulting error message indicates the first row that failed to insert.

If a row fails to insert and the insertions are not sequential (options.ordered is false), the operation will try to insert the remaining rows. The response includes a status object that describes successful insertions and an errors array that describes problems with failed rows.

Example response for a single-column primary key:

{
  "status": {
    "primaryKeySchema": {
      "email": {
        "type": "ascii"
      }
    },
    "insertedIds": [
      [
        "tal@example.com"
      ],
      [
        "sami@example.com"
      ],
      [
        "kirin@example.com"
      ]
    ]
  }
}

Example response for a multi-column primary key:

{
  "status": {
    "primaryKeySchema": {
      "email": {
        "type": "ascii"
      },
      "graduation_year": {
        "type": "int"
      }
    },
    "insertedIds": [
      [
        "tal@example.com",
        2024
      ],
      [
        "sami@example.com",
        2024
      ],
      [
        "kiran@example.com",
        2024
      ]
    ]
  }
}

Example response when options.returnDocumentResponses is true:

{
  "status": {
    "primaryKeySchema": {
      "email": {
        "type": "ascii"
      }
    },
    "documentResponses": [
      {"_id":["tal@example.com"], "status":"OK"},
      {"_id":["sami@example.com"], "status":"OK"},
      {"_id":["kirin@example.com"], "status":"OK"}
    ]
  }
}

Parameters

  • Python

  • TypeScript

  • Java

  • curl

Use the insert_many method, which belongs to the astrapy.Table class.

Method signature
insert_many(
  rows: Iterable[Dict[str, Any]],
  *,
  ordered: bool,
  chunk_size: int,
  concurrency: int,
  general_method_timeout_ms: int,
  request_timeout_ms: int,
  timeout_ms: int,
) -> TableInsertManyResult
Name Type Summary

rows

Iterable[dict]

An iterable of dictionaries, where each dictionary defines a row to insert.

All primary key values are required. Any unspecified columns are set to null.

The table definition determines the columns in the row, the type for each column, and the primary key. To get this information, see List table metadata.

ordered

bool

Whether to insert the rows sequentially.

If false, the rows are inserted in an arbitrary order with possible concurrency. This results in a much higher insert throughput than an equivalent ordered insertion.

Default: false

concurrency

int

The maximum number of concurrent requests to the API at a given time.

For ordered insertions, must be 1 or unspecified.

chunk_size

int

The number of rows to insert in a single API request.

DataStax recommends that you leave this unspecified to use the system default.

general_method_timeout_ms

int

Optional. The maximum time, in milliseconds, that the whole operation, which might involve multiple HTTP requests, can take.

This parameter is aliased as timeout_ms.

Default: The default value for the table. This default is 30 seconds unless you specified a different default when you initialized the Table or DataAPIClient object. For more information, see Timeout options.

request_timeout_ms

int

Optional. The maximum time, in milliseconds, that the client should wait for each underlying HTTP request.

Default: The default value for the table. This default is 30 seconds unless you specified a different default when you initialized the Table or DataAPIClient object. For more information, see Timeout options.

Use the insertMany method, which belongs to the Table class.

Method signature
async insertMany(
  rows: Schema[],
  options?: {
    ordered?: boolean,
    concurrency?: number,
    chunkSize?: number,
    timeout?: number | TimeoutDescriptor,
  }
): Promise<TableInsertManyResult<PKey>>
Name Type Summary

rows

Schema[]

An array of objects, where each object defines a row to insert.

All primary key values are required. Any unspecified columns are set to null.

The table definition determines the columns in the row, the type for each column, and the primary key. To get this information, see List table metadata.

options

TableInsertManyOptions

Optional. The options for this operation. See Properties of options for more details.

Properties of options
Name Type Summary

ordered

boolean

Whether to insert the rows sequentially.

If false, the rows are inserted in an arbitrary order with possible concurrency. This results in a much higher insert throughput than an equivalent ordered insertion.

Default: false

concurrency

number

The maximum number of concurrent requests to the API at a given time.

For ordered insertions, must be 1 or unspecified.

Default: 8 for unordered insertions. 1 for ordered insertions.

chunkSize

number

The number of rows to insert in a single API request.

DataStax recommends that you leave this unspecified to use the system default.

timeout

number | TimeoutDescriptor

The timeout(s) to apply to HTTP request(s) originating from this method.

Use the insertMany method, which belongs to the com.datastax.astra.client.tables.Table class.

Method signature
TableInsertManyResult insertMany(
  List<? extends T> rows,
  TableInsertManyOptions options
)
TableInsertManyResult insertMany(
  List<? extends T> rows
)
Name Type Summary

rows

List<Row>

A list of Row objects, where each Row defines a row to insert.

All primary key values are required. Any unspecified columns are set to null.

The table definition determines the columns in the row, the type for each column, and the primary key. To get this information, see List table metadata.

options

TableInsertManyOptions

Optional. The options for this operation. See Methods of TableInsertManyOptions for more details.

Methods of TableInsertManyOptions
Method Parameters Summary

ordered()

boolean

Whether to insert the rows sequentially.

If false, the rows are inserted in an arbitrary order with possible concurrency. This results in a much higher insert throughput than an equivalent ordered insertion.

Default: false

concurrency()

int

The maximum number of concurrent requests to the API at a given time.

For ordered insertions, must be 1 or unspecified.

chunkSize()

int

The number of rows to insert in a single API request.

DataStax recommends that you leave this unspecified to use the system default.

Use the insertMany command.

Command signature
curl -sS -L -X POST "API_ENDPOINT/api/json/v1/KEYSPACE_NAME/TABLE_NAME" \
--header "Token: APPLICATION_TOKEN" \
--header "Content-Type: application/json" \
--data '{
  "insertMany": {
    "documents": ROWS_JSON_ARRAY,
    "options": {
      "ordered": BOOLEAN,
      "returnDocumentResponses": BOOLEAN
    }
  }
}'
Name Type Summary

documents

array

An array of objects where each object defines a row to insert.

All primary key values are required. Any unspecified columns are set to null.

The table definition determines the columns in the row, the type for each column, and the primary key. To get this information, see List table metadata.

You can insert up to 100 rows per HTTP request. If you want to insert more rows at once, you must make multiple requests or use the Data API clients.

options.ordered

boolean

Whether to insert the rows sequentially.

If false, the rows are inserted in an arbitrary order with possible concurrency. This results in a much higher insert throughput than an equivalent ordered insertion.

Default: false

options.returnDocumentResponses

boolean

Whether the response should include a status for each insertion.

Default: false

Examples

The following examples demonstrate how to insert multiple rows into a table.

Insert rows

When you insert rows, you must specify a non-null value for each primary key column for each row. Non-primary key columns are optional, and any unspecified non-primary key columns are set to null.

  • Python

  • TypeScript

  • Java

  • curl

from astrapy import DataAPIClient
from astrapy.data_types import (
    DataAPISet,
    DataAPIDate,
)

# Get an existing table
client = DataAPIClient("APPLICATION_TOKEN")
database = client.get_database("API_ENDPOINT")
table = database.get_table("TABLE_NAME")

# Insert rows into the table
result = table.insert_many(
    [
        {
            "title": "Computed Wilderness",
            "author": "Ryan Eau",
            "number_of_pages": 432,
            "due_date": DataAPIDate.from_string("2024-12-18"),
            "genres": DataAPISet(["History", "Biography"]),
        },
        {
            "title": "Desert Peace",
            "author": "Walter Dray",
            "number_of_pages": 355,
            "rating": 4.5,
        },
    ]
)
import { DataAPIClient, date } from "@datastax/astra-db-ts";

// Get an existing table
const client = new DataAPIClient("APPLICATION_TOKEN");
const database = client.db("API_ENDPOINT");
const table = database.table("TABLE_NAME");

// Insert rows into the table
(async function () {
  const result = await table.insertMany([
    {
      title: "Computed Wilderness",
      author: "Ryan Eau",
      number_of_pages: 432,
      due_date: date("2024-12-18"),
      genres: new Set(["History", "Biography"]),
    },
    {
      title: "Desert Peace",
      author: "Walter Dray",
      number_of_pages: 355,
      rating: 4.5,
    },
  ]);
})();
import com.datastax.astra.client.DataAPIClient;
import com.datastax.astra.client.tables.Table;
import com.datastax.astra.client.tables.commands.results.TableInsertManyResult;
import com.datastax.astra.client.tables.definition.rows.Row;
import java.util.Calendar;
import java.util.Date;
import java.util.List;
import java.util.Set;

public class Example {

  public static void main(String[] args) {
    // Get an existing table
    Table<Row> table =
        new DataAPIClient("APPLICATION_TOKEN")
            .getDatabase("API_ENDPOINT")
            .getTable("TABLE_NAME");

    // Insert rows into the table
    Calendar calendar = Calendar.getInstance();
    calendar.set(2024, Calendar.DECEMBER, 18);
    Date date = calendar.getTime();
    Row row1 =
        new Row()
            .addText("title", "Computed Wilderness")
            .addText("author", "Ryan Eau")
            .addInt("number_of_pages", 432)
            .addDate("due_date", date)
            .addSet("genres", Set.of("History", "Biography"));
    Row row2 =
        new Row()
            .addText("title", "Desert Peace")
            .addText("author", "Walter Dray")
            .addInt("number_of_pages", 355)
            .addFloat("rating", 4.5f);
    TableInsertManyResult result = table.insertMany(List.of(row1, row2));
    System.out.println(result.getInsertedIds());
  }
}
curl -sS -L -X POST "API_ENDPOINT/api/json/v1/KEYSPACE_NAME/TABLE_NAME" \
  --header "Token: APPLICATION_TOKEN" \
  --header "Content-Type: application/json" \
  --data '{
  "insertMany": {
    "documents": [
      {
        "title": "Computed Wilderness",
        "author" :"Ryan Eau",
        "number_of_pages": 432,
        "due_date": "2024-12-18",
        "genres": ["History", "Biography"]
      },
      {
        "title": "Desert Peace",
        "author" :"Walter Dray",
        "number_of_pages": 355,
        "rating": 3.5
      }
    ]
  }
}'

Insert rows with vector embeddings

You can only insert vector embeddings into vector columns.

To create a table with a vector column, see Create a table. To add a vector column to an existing table, see Alter a table.

All embeddings in the column should use the same provider, model, and dimensions. Mismatched embeddings can cause inaccurate vector searches.

  • Python

  • TypeScript

  • Java

  • curl

You can use the astrapy.data_types.DataAPIVector class to binary-encode your vector embeddings. DataStax recommends that you always use a DataAPIVector object instead of a list of floats to improve performance.

from astrapy import DataAPIClient
from astrapy.data_types import (
    DataAPIVector,
)

# Get an existing table
client = DataAPIClient("APPLICATION_TOKEN")
database = client.get_database("API_ENDPOINT")
table = database.get_table("TABLE_NAME")

# Insert rows into the table
result = table.insert_many(
    [
        {
            "title": "Computed Wilderness",
            "author": "Ryan Eau",
            "summary_genres_vector": DataAPIVector([0.4, -0.6, 0.2]),
        },
        {
            "title": "Desert Peace",
            "author": "Walter Dray",
            "summary_genres_vector": DataAPIVector([0.3, 0.6, 0.5]),
        },
    ]
)

You can use the DataAPIVector class to binary-encode your vector embeddings. DataStax recommends that you always use a DataAPIVector object instead of a list of floats to improve performance.

import { DataAPIClient, DataAPIVector } from "@datastax/astra-db-ts";

// Get an existing table
const client = new DataAPIClient("APPLICATION_TOKEN");
const database = client.db("API_ENDPOINT");
const table = database.table("TABLE_NAME");

// Insert rows into the table
(async function () {
  const result = await table.insertMany([
    {
      title: "Computed Wilderness",
      author: "Ryan Eau",
      summary_genres_vector: new DataAPIVector([0.4, -0.6, 0.2]),
    },
    {
      title: "Desert Peace",
      author: "Walter Dray",
      summary_genres_vector: new DataAPIVector([0.3, 0.6, 0.5]),
    },
  ]);
})();

You can use the DataAPIVector class to binary-encode your vector embeddings. DataStax recommends that you always use a DataAPIVector object instead of a list of floats to improve performance.

import com.datastax.astra.client.DataAPIClient;
import com.datastax.astra.client.core.vector.DataAPIVector;
import com.datastax.astra.client.tables.Table;
import com.datastax.astra.client.tables.commands.results.TableInsertManyResult;
import com.datastax.astra.client.tables.definition.rows.Row;
import java.util.List;

public class Example {

  public static void main(String[] args) {
    // Get an existing table
    Table<Row> table =
        new DataAPIClient("APPLICATION_TOKEN")
            .getDatabase("API_ENDPOINT")
            .getTable("TABLE_NAME");

    // Insert rows into the table
    Row row1 =
        new Row()
            .addText("title", "Computed Wilderness")
            .addText("author", "Ryan Eau")
            .addVector("summary_genres_vector", new DataAPIVector(new float[] {0.4f, -0.6f, 0.2f}));
    Row row2 =
        new Row()
            .addText("title", "Desert Peace")
            .addText("author", "Walter Dray")
            .addVector("summary_genres_vector", new DataAPIVector(new float[] {0.3f, 0.6f, 0.5f}));
    TableInsertManyResult result = table.insertMany(List.of(row1, row2));
    System.out.println(result.getInsertedIds());
  }
}

You can provide the vector embeddings as an array of floats, or you can use $binary to provide the vector embeddings as a Base64-encoded string. $binary can be more performant.

Vector binary encodings specification

A d-dimensional vector is a list of d floating-point numbers that can be binary encoded.

To prepare for encoding, the list must be transformed into a sequence of bytes where each float is represented as four bytes in big-endian format. Then, the byte sequence is Base64-encoded, with = padding, if needed. For example, here are some vectors and their resulting Base64 encoded strings:

[0.1, -0.2, 0.3] = "PczMzb5MzM0+mZma"
[0.1, 0.2] = "PczMzT5MzM0="
[10, 10.5, 100, -91.19] = "QSAAAEEoAABCyAAAwrZhSA=="

Once encoded, you use $binary to pass the Base64 string to the Data API:

{ "$binary": "BASE64_STRING" }

You can use a script to encode your vectors, for example:

import base64
import struct

# Example vector to encode
input_vector = [0.1, -0.2, 0.3]
d = len(input_vector)

# Pack each float as four big-endian bytes, then Base64-encode the byte sequence
pack_format = ">" + "f" * d
binary_encode = base64.b64encode(struct.pack(pack_format, *input_vector)).decode()
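
Continuing the sketch above, you can wrap the encoded string in the $binary form and use it wherever a vector column value is expected in the request payload. The column name below matches the examples on this page:

# Wrap the Base64 string in the $binary form expected by the Data API
vector_value = {"$binary": binary_encode}

# For example, as a column value in a row to insert
row = {"title": "Computed Wilderness", "summary_genres_vector": vector_value}
print(row)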
  • Array of floats

  • $binary

curl -sS -L -X POST "API_ENDPOINT/api/json/v1/KEYSPACE_NAME/TABLE_NAME" \
  --header "Token: APPLICATION_TOKEN" \
  --header "Content-Type: application/json" \
  --data '{
  "insertMany": {
    "documents": [
      {
        "title": "Computed Wilderness",
        "author" :"Ryan Eau",
        "summary_genres_vector": [.12, .52, .32]
      },
      {
        "title": "Desert Peace",
        "author" :"Walter Dray",
        "summary_genres_vector": [0.3, 0.6, 0.5]
      }
    ]
  }
}'
curl -sS -L -X POST "API_ENDPOINT/api/json/v1/KEYSPACE_NAME/TABLE_NAME" \
  --header "Token: APPLICATION_TOKEN" \
  --header "Content-Type: application/json" \
  --data '{
  "insertMany": {
    "documents": [
      {
        "title": "Computed Wilderness",
        "author" :"Ryan Eau",
        "summary_genres_vector": {"$binary": "PfXCjz8FHrg+o9cK"}
      },
      {
        "title": "Desert Peace",
        "author" :"Walter Dray",
        "summary_genres_vector": {"$binary": "PpmZmj8ZmZo/AAAA"}
      }
    ]
  }
}'

Insert rows and generate vector embeddings

To automatically generate vector embeddings, your table must have a vector column with an embedding provider integration. You can configure embedding provider integrations when you create a table, add a vector column to an existing table, or alter an existing vector column.

When you insert a row, you can pass a string to the vector column. Astra DB uses the embedding provider integration to generate vector embeddings from that string.

The strings used to generate the vector embeddings are not stored. If you want to store the original strings, you must store them in a separate column.

In the following examples, summary_genres_vector is a vector column that has an embedding provider integration configured, and summary_genres_original_text is a text column to store the original text.

  • Python

  • TypeScript

  • Java

  • curl

from astrapy import DataAPIClient

# Get an existing table
client = DataAPIClient("APPLICATION_TOKEN")
database = client.get_database("API_ENDPOINT")
table = database.get_table("TABLE_NAME")

# Insert rows into the table
result = table.insert_many(
    [
        {
            "title": "Computed Wilderness",
            "author": "Ryan Eau",
            "summary_genres_vector": "Text to vectorize",
            "summary_genres_original_text": "Text to vectorize",
        },
        {
            "title": "Desert Peace",
            "author": "Walter Dray",
            "summary_genres_vector": "Text to vectorize",
            "summary_genres_original_text": "Text to vectorize",
        },
    ]
)
import { DataAPIClient } from "@datastax/astra-db-ts";

// Get an existing table
const client = new DataAPIClient("APPLICATION_TOKEN");
const database = client.db("API_ENDPOINT");
const table = database.table("TABLE_NAME");

// Insert rows into the table
(async function () {
  const result = await table.insertMany([
    {
      title: "Computed Wilderness",
      author: "Ryan Eau",
      summary_genres_vector: "Text to vectorize",
      summary_genres_original_text: "Text to vectorize",
    },
    {
      title: "Desert Peace",
      author: "Walter Dray",
      summary_genres_vector: "Text to vectorize",
      summary_genres_original_text: "Text to vectorize",
    },
  ]);
})();
import com.datastax.astra.client.DataAPIClient;
import com.datastax.astra.client.tables.Table;
import com.datastax.astra.client.tables.commands.results.TableInsertManyResult;
import com.datastax.astra.client.tables.definition.rows.Row;
import java.util.List;

public class Example {

  public static void main(String[] args) {
    // Get an existing table
    Table<Row> table =
        new DataAPIClient("APPLICATION_TOKEN")
            .getDatabase("API_ENDPOINT")
            .getTable("TABLE_NAME");

    // Insert rows into the table
    Row row1 =
        new Row()
            .addText("title", "Computed Wilderness")
            .addText("author", "Ryan Eau")
            .addVectorize("summary_genres_vector", "Text to vectorize")
            .addText("summary_genres_original_text", "Text to vectorize");
    Row row2 =
        new Row()
            .addText("title", "Desert Peace")
            .addText("author", "Walter Dray")
            .addVectorize("summary_genres_vector", "Text to vectorize")
            .addText("summary_genres_original_text", "Text to vectorize");
    TableInsertManyResult result = table.insertMany(List.of(row1, row2));
    System.out.println(result.getInsertedIds());
  }
}
curl -sS -L -X POST "API_ENDPOINT/api/json/v1/KEYSPACE_NAME/TABLE_NAME" \
  --header "Token: APPLICATION_TOKEN" \
  --header "Content-Type: application/json" \
  --data '{
  "insertMany": {
    "documents": [
      {
        "title": "Computed Wilderness",
        "author" :"Ryan Eau",
        "summary_genres_vector": "Text to vectorize",
        "summary_genres_original_text": "Text to vectorize"
      },
      {
        "title": "Desert Peace",
        "author" :"Walter Dray",
        "summary_genres_vector": "Text to vectorize",
        "summary_genres_original_text": "Text to vectorize"
      }
    ]
  }
}'

Insert rows with a map column that uses non-string keys

  • Python

  • TypeScript

  • Java

  • curl

To insert rows with a map column that includes non-string keys, you must use an array of key-value pairs to represent the map column.

With the Python client, you can also use DataAPIMap to encode maps that use non-string keys.

from astrapy import DataAPIClient
from astrapy.data_types import DataAPIMap

# Get an existing table
client = DataAPIClient("APPLICATION_TOKEN")
database = client.get_database("API_ENDPOINT")
table = database.get_table("TABLE_NAME")

# Insert rows into the table
result = table.insert_many(
    [
        {
            # This map has non-string keys,
            # so the insertion is an array of key-value pairs
            "map_column_1": [[1, "value1"], [2, "value2"]],
            # Alternatively, use DataAPIMap to encode maps with non-string keys
            "map_column_2": DataAPIMap({1: "value1", 2: "value2"}),
            # This map does not have non-string keys,
            # so the insertion does not need to be an array of key-value pairs
            "map_column_3": {"key1": "value1", "key2": "value2"},
        },
    ]
)

To insert rows with a map column that includes non-string keys, you must use an array of key-value pairs to represent the map column.

import { DataAPIClient } from "@datastax/astra-db-ts";

// Get an existing table
const client = new DataAPIClient("APPLICATION_TOKEN");
const database = client.db("API_ENDPOINT");
const table = database.table("TABLE_NAME");

// Insert rows into the table
(async function () {
  const result = await table.insertMany([
    {
      // This map has non-string keys,
      // so the insertion is an array of key-value pairs
      map_column_1: [
        [1, "value1"],
        [2, "value2"],
      ],
      // This map does not have non-string keys,
      // so the insertion does not need to be an array of key-value pairs
      map_column_2: {
        key1: "value1",
        key2: "value2",
      },
      title: "Once in a Living Memory",
      author: "Kayla McMaster",
    },
  ]);
})();

The Java client supports map columns with non-string keys directly. You don't need to represent the map column as an array of key-value pairs.

import com.datastax.astra.client.DataAPIClient;
import com.datastax.astra.client.tables.Table;
import com.datastax.astra.client.tables.commands.results.TableInsertManyResult;
import com.datastax.astra.client.tables.definition.rows.Row;
import java.util.List;
import java.util.Map;

public class Example {

  public static void main(String[] args) {
    // Get an existing table
    Table<Row> table =
        new DataAPIClient("APPLICATION_TOKEN")
            .getDatabase("API_ENDPOINT")
            .getTable("TABLE_NAME");

    // This map has non-string keys,
    // but the insertion can still be represented as a map
    // instead of an array of key-value pairs
    Map<Integer, String> mapColumn1 = Map.of(1, "value1", 2, "value2");

    // This map does not have non-string keys
    Map<String, String> mapColumn2 = Map.of("key1", "value1", "key2", "value2");

    Row row =
        new Row()
            .addMap("map_column_1", mapColumn1)
            .addMap("map_column_2", mapColumn2)
            .addText("title", "Once in a Living Memory")
            .addText("author", "Kayla McMaster");

    TableInsertManyResult result = table.insertMany(List.of(row));
  }
}

To insert rows with a map column that includes non-string keys, you must use an array of key-value pairs to represent the map column. In the following example, map_column_1 has non-string keys, so it is passed as an array of key-value pairs, while map_column_2 has string keys, so it is passed as a regular JSON object.

curl -sS -L -X POST "API_ENDPOINT/api/json/v1/KEYSPACE_NAME/TABLE_NAME" \
  --header "Token: APPLICATION_TOKEN" \
  --header "Content-Type: application/json" \
  --data '{
  "insertMany": {
    "documents": [
      {
        "map_column_1": [
            [1, "value1"],
            [2, "value2"]
        ],
        "map_column_2": {
          "key1": "value1",
          "key2": "value2"
        },
        "title": "Once in a Living Memory",
        "author": "Kayla McMaster"
      }
    ]
  }
}'

Insert rows and specify insertion behavior

The following examples use options such as ordered insertion, chunk size, and concurrency to control how the rows are inserted.

  • Python

  • TypeScript

  • Java

  • curl

from astrapy import DataAPIClient
from astrapy.data_types import (
    DataAPISet,
    DataAPIDate,
)

# Get an existing table
client = DataAPIClient("APPLICATION_TOKEN")
database = client.get_database("API_ENDPOINT")
table = database.get_table("TABLE_NAME")

# Insert rows into the table
result = table.insert_many(
    [
        {
            "title": "Computed Wilderness",
            "author": "Ryan Eau",
            "number_of_pages": 432,
            "due_date": DataAPIDate.from_string("2024-12-18"),
            "genres": DataAPISet(["History", "Biography"]),
        },
        {
            "title": "Desert Peace",
            "author": "Walter Dray",
            "number_of_pages": 355,
            "rating": 4.5,
        },
    ],
    chunk_size=2,
    concurrency=2,
    ordered=False,
)
import { DataAPIClient, date } from "@datastax/astra-db-ts";

// Get an existing table
const client = new DataAPIClient("APPLICATION_TOKEN");
const database = client.db("API_ENDPOINT");
const table = database.table("TABLE_NAME");

// Insert rows into the table
(async function () {
  const result = await table.insertMany(
    [
      {
        title: "Computed Wilderness",
        author: "Ryan Eau",
        number_of_pages: 432,
        due_date: date("2024-12-18"),
        genres: new Set(["History", "Biography"]),
      },
      {
        title: "Desert Peace",
        author: "Walter Dray",
        number_of_pages: 355,
        rating: 4.5,
      },
    ],
    {
      chunkSize: 2,
      concurrency: 2,
      ordered: false,
    },
  );
})();
import com.datastax.astra.client.DataAPIClient;
import com.datastax.astra.client.tables.Table;
import com.datastax.astra.client.tables.commands.options.TableInsertManyOptions;
import com.datastax.astra.client.tables.commands.results.TableInsertManyResult;
import com.datastax.astra.client.tables.definition.rows.Row;
import java.util.Calendar;
import java.util.Date;
import java.util.List;
import java.util.Set;

public class Example {

  public static void main(String[] args) {
    // Get an existing table
    Table<Row> table =
        new DataAPIClient("APPLICATION_TOKEN")
            .getDatabase("API_ENDPOINT")
            .getTable("TABLE_NAME");

    // Define the insertion options
    TableInsertManyOptions options =
        new TableInsertManyOptions().chunkSize(20).concurrency(3).ordered(false);

    // Insert rows into the table
    Calendar calendar = Calendar.getInstance();
    calendar.set(2024, Calendar.DECEMBER, 18);
    Date date = calendar.getTime();
    Row row1 =
        new Row()
            .addText("title", "Computed Wilderness")
            .addText("author", "Ryan Eau")
            .addInt("number_of_pages", 432)
            .addDate("due_date", date)
            .addSet("genres", Set.of("History", "Biography"));
    Row row2 =
        new Row()
            .addText("title", "Desert Peace")
            .addText("author", "Walter Dray")
            .addInt("number_of_pages", 355)
            .addFloat("rating", 4.5f);
    TableInsertManyResult result = table.insertMany(List.of(row1, row2), options);
    System.out.println(result.getInsertedIds());
  }
}
curl -sS -L -X POST "API_ENDPOINT/api/json/v1/KEYSPACE_NAME/TABLE_NAME" \
  --header "Token: APPLICATION_TOKEN" \
  --header "Content-Type: application/json" \
  --data '{
  "insertMany": {
    "documents": [
      {
        "title": "Computed Wilderness",
        "author" :"Ryan Eau",
        "number_of_pages": 432,
        "due_date": "2024-12-18",
        "genres": ["History", "Biography"]
      },
      {
        "title": "Desert Peace",
        "author" :"Walter Dray",
        "number_of_pages": 355,
        "rating": 3.5
      }
    ],
    "options": {
      "ordered": false,
      "returnDocumentResponses": true
    }
  }
}'

Client reference

  • Python

  • TypeScript

  • Java

  • curl

For more information, see the client reference.

For more information, see the client reference.

For more information, see the client reference.

Client reference documentation is not applicable for HTTP.
