TypeScript client usage

This page provides language-specific guidance for using the Data API TypeScript client.

For information about installing and getting started with the TypeScript client, see Get started with the Data API.

Client hierarchy

When you create apps using the Data API clients, you must instantiate a DataAPIClient object.

The DataAPIClient object serves as the entry point to the client hierarchy, which includes objects for working with databases (Db), tables (Table), and collections (Collection).

Adjacent to these concepts are the administration classes for database administration. The specific administration classes you use, and how you instantiate them, depends on your client language and database type (Astra DB, HCD, or DSE).

You directly instantiate the DataAPIClient object only. Then, through the DataAPIClient object, you can instantiate and access other classes and concepts. Where necessary, instructions for instantiating other classes are provided in the command reference relevant to each class.

For instructions for instantiating the DataAPIClient object, see Instantiate a client object.
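For illustration, the following minimal sketch shows the hierarchy in the TypeScript client; the environment variable names are placeholders:

import { DataAPIClient } from '@datastax/astra-db-ts';

// The DataAPIClient is the only object you instantiate directly
const client = new DataAPIClient(process.env.ASTRA_DB_TOKEN!);

// Db objects, and the Table and Collection objects they spawn,
// are accessed through the client rather than constructed directly
const db = client.db(process.env.ASTRA_DB_ENDPOINT!);
const table = db.table('my_table');
const collection = db.collection('my_collection');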

Options hierarchy

These concepts are the same for both the 1.x and 2.x versions of the TypeScript client, but specific names can change between major versions. For example, *SpawnOptions changed to *Options.

Like the client hierarchy, the options for each class also exist in a hierarchy: options defined in parent classes are deeply merged with those in their child classes.

The root of the hierarchy is the DataAPIClientOptions, which branches into AdminOptions and DbOptions, and DbOptions further branches into TableOptions and CollectionOptions.

At each level, options are inherited and merged from parent to child.
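For example, here is a minimal sketch of the merging behavior, using the timeoutDefaults option described later on this page; the specific values are arbitrary:

import { DataAPIClient } from '@datastax/astra-db-ts';

// Options set on the client apply to everything spawned from it
const client = new DataAPIClient(process.env.ASTRA_DB_TOKEN!, {
  timeoutDefaults: { requestTimeoutMs: 15000 },
});

const db = client.db(process.env.ASTRA_DB_ENDPOINT!);

// CollectionOptions are deeply merged with the inherited options, so this
// collection keeps requestTimeoutMs: 15000 and only overrides the
// general method timeout for its own operations
const collection = db.collection('my_collection', {
  timeoutDefaults: { generalMethodTimeoutMs: 60000 },
});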

For the full list of available options at each level, see the TypeScript client reference for each interface.

Datatypes

For information about the datatypes used in the TypeScript client, see the datatypes overview in the astra-db-ts repository.

TimeoutDescriptor

TimeoutDescriptor was introduced in TypeScript client version 2.0. For more information about TimeoutDescriptor and migrating from 1.x timeouts to 2.x timeouts, see Client upgrade guide: Replacement of client timeout settings.

Timeout defaults

The timeoutDefaults option, of type Partial<TimeoutDescriptor>, is present throughout the options hierarchy. It represents the group of timeouts that cover any operation the client can perform, including individual and multi-call methods, as well as the time required to make the actual request call.

All timeout values are expressed in milliseconds. A timeout of 0 disables that timeout, but operations associated with that setting are still subject to any other concurrent timeouts that limit their duration.

Most operations are subject to two different, concurrent timeouts. For example, the total duration of multi-call methods is limited by the method-specific timeout, such as generalMethodTimeoutMs or keyspaceAdminTimeoutMs. Meanwhile, each underlying HTTP request is independently subject to requestTimeoutMs.

Per-method timeouts

For all methods that issue HTTP requests, you can override the method’s timeouts through the timeout option field.

The timeout field is always available in the method's options parameter, and there are two ways that you can set it:

  • Set it to a plain number, which resolves to the most appropriate timeout for the operation.

  • Set it to a subset of TimeoutDescriptor, which provides slightly more fine-grained control over the timeouts, as shown in the sketch below.

You can check method signatures/autocomplete to find the timeouts that are available for each method.
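For example, a minimal sketch of both forms; collection and documents are assumed to be in scope, and the values are arbitrary:

// Plain number: resolved to the most appropriate timeout(s) for the operation
await collection.insertMany(documents, { timeout: 60000 });

// Subset of TimeoutDescriptor: slightly more fine-grained control
await collection.insertMany(documents, {
  timeout: { generalMethodTimeoutMs: 60000, requestTimeoutMs: 15000 },
});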

Timeout errors

If a timeout occurs, a DataAPITimeoutError or DevOpsAPITimeoutError is thrown, depending on the operation being performed at the time and the database type (DevOpsAPITimeoutError occurs only for Astra DB databases).

To help you debug, the error and its message include the timeout fields that reached their configured limits.

Timeout fields

requestTimeoutMs

Default: 10s

The timeout imposed on a single HTTP request.

This applies to HTTP requests to both the Data API and DevOps API, with the exception of API calls that inherently take a long time to resolve, such as createCollection.

generalMethodTimeoutMs

Default: 30s

The timeout imposed on the overall duration of a method invocation. It applies to all DML methods that aren't concerned with schema or admin operations.

For single-call methods, such as findOne, the client uses the lesser of generalMethodTimeoutMs and requestTimeoutMs to limit the total duration of the operation.

For methods that can make multiple HTTP calls, generalMethodTimeoutMs limits the overall duration of the method invocation, while requestTimeoutMs is separately applied to each individual HTTP request.

collectionAdminTimeoutMs

Default: 60s

A timeout imposed on all collection-related schema/admin operations, such as createCollection, listCollections, and dropCollection.

Except for createCollection, each individual request issued as part of these operations must obey requestTimeoutMs in addition to collectionAdminTimeoutMs.

tableAdminTimeoutMs

Default: 30s

A timeout imposed on all table-related schema/admin operations, such as createTable, listTables, dropTable, alter, createIndex, createVectorIndex, listIndexes, and dropTableIndex.

Each individual request issued as part of these operations must obey requestTimeoutMs in addition to tableAdminTimeoutMs.

databaseAdminTimeoutMs

Default: 10m

A timeout imposed on all database-related schema/admin operations, such as createDatabase, listDatabases, and dropDatabase, as well as findEmbeddingProviders.

The longest-running operations in this class are createDatabase and dropDatabase. If these operations are called with blocking: true, they can last several minutes.

Each individual request issued as part of these operations must obey requestTimeoutMs in addition to databaseAdminTimeoutMs.

keyspaceAdminTimeoutMs

Default: 30s

A timeout imposed on all keyspace-related operations, such as createKeyspace, listKeyspaces, and dropKeyspace.

Each individual request issued as part of these operations must obey requestTimeoutMs in addition to keyspaceAdminTimeoutMs.

Cursors

The following information applies to TypeScript client version 2.0 or later.

Earlier TypeScript client versions had a single FindCursor class with similar behavior.

The primary differences in FindCursor between 1.x and 2.x are as follows:

  • 2.x introduces the TableFindCursor and CollectionFindCursor subclasses, and an AbstractCursor superclass.

  • In 1.x, builder functions, such as skip and project, mutated the cursors in-place; in 2.x, builder functions immutably return a new cursor (see the sketch below).

  • 2.x introduces a second, optional, TRaw type parameter on the cursor.

For more information about moving from 1.x to 2.x, see Data API client upgrade guide.
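To illustrate the builder immutability change, a minimal 2.x sketch, assuming a table in scope:

// In 2.x, builder methods return a new, reconfigured cursor
const base = table.find({ matchId: 'challenge6' });
const limited = base.limit(3); // `base` itself is not mutated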

When you call a find command on a Table or a Collection, a FindCursor object is returned. FindCursor objects also have the subclassed forms TableFindCursor and CollectionFindCursor.

Similarly, the findAndRerank command returns a FindAndRerankCursor object, which shares the same AbstractCursor interface as FindCursor, but with slightly different options.

Cursor objects represent a lazy stream of results that you can use to progressively retrieve new results (pagination). The basic usage pattern is to consume the cursor item by item. For example:

// Using find to create a fully-configured TableFindCursor object
const cursor = table.find<{ winner: number }>({ matchId: 'challenge6' }, {
  projection: { winner: 1 },
  limit: 3,
});

// Lazily consuming the cursor as an async iterator
for await (const item of cursor) {
  console.log(item);
}

// Or if you prefer to build the cursor using a fluent interface
// Also, you can use the toArray method to get all the results at once
const rows = await table.find({ matchId: 'challenge6' })
  .project<{ winner: number }>({ winner: 1 })
  .limit(3)
  .toArray();

console.log(rows);

Cursor properties

Cursors have the following properties that you can inspect at any time:

state

Type: CursorState

The current state of the cursor:

  • 'idle': Item consumption not started.

  • 'active': Item consumption in progress.

  • 'closed': All items consumed or the cursor was closed before exhaustion.

consumed

Type: number

The number of items already read by the code consuming the cursor. This does not include buffered, but not yet consumed, items.

buffered

Type: number

The number of items currently stored in the client-side buffer of this cursor.

dataSource

Type: Table | Collection

The Table or Collection instance that spawned this cursor.

This type can be more specific if a subclass of FindCursor is used.
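A brief sketch of inspecting these properties; a table is assumed to be in scope, and next() consumes a single item:

const cursor = table.find({});

console.log(cursor.state);    // 'idle'

await cursor.hasNext();       // fetches the first page into the buffer
console.log(cursor.buffered); // number of items now buffered client-side

const item = await cursor.next();
console.log(cursor.consumed); // 1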

Methods that directly alter the cursor

The following methods, in addition to iterating over the cursor, alter the cursor’s internal state:

close

Returns: void

Closes the cursor regardless of its state.

You can close a cursor at any time. Closing a cursor discards any items that haven't been consumed.

You can't use a closed cursor again unless you call cursor.rewind().

rewind

Returns: void

Rewinds the cursor to its original, unconsumed state while retaining all cursor settings, such as filter, mapping, and projection.

consumeBuffer

Returns: TRaw[]

Consumes (returns) up to the requested number of raw buffered items (rows or documents).

Cursor mapping is not applied to these documents.

The returned items are consumed, meaning that the cursor can't return those items again.

forEach

Returns: void

Consumes the remaining rows in the cursor, invoking a provided callback function on each of them.

If the callback returns exactly false, then the method stops iteration and exits early. If the method returns early, the cursor remains in the 'active' state. Otherwise, the cursor is closed.

Calling this method on a closed cursor results in an error.

toArray

Returns: T[]

Converts all remaining, unconsumed rows from the cursor into a list.

If the cursor is 'idle', the result is the entire set of rows returned by the find operation. Otherwise, the rows already consumed by the cursor aren't in the resulting list.

DataStax doesn't recommend calling this method if you expect a large list of results because constructing the full list requires many data exchanges with the Data API and potentially massive memory usage. If you expect many results, DataStax recommends following a lazy pattern of iterating and consuming the rows.

Calling this method on a closed cursor results in an error.

hasNext

Returns: boolean

Returns a Boolean indicating whether the cursor has more documents to return.

hasNext can be called on any cursor, with the following possible outcomes:

  • If called on a closed cursor, it returns false.

  • If the current buffer is empty, it triggers the operation to fetch a new page.

  • If called on an 'idle' cursor, it fetches the first page, but the cursor stays in the 'idle' state until actual consumption starts.

getSortVector

Returns: DataAPIVector | null

Returns the query vector used in the vector search that originated this cursor, if applicable. If the query wasn't a vector search or it was invoked without the includeSortVector flag, then it returns null.

If called on an 'idle' cursor, this method fetches the first page, but the cursor stays in the 'idle' state until actual consumption starts. If called on a closed cursor, this method returns either null or the sort vector used in the search.
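For example, a minimal sketch of the forEach early-exit behavior described above; cursor is assumed to be in scope, and shouldStop is a placeholder predicate:

await cursor.forEach((item) => {
  console.log(item);

  if (shouldStop(item)) {
    return false; // exactly false stops iteration; the cursor stays 'active'
  }
});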

Builder methods

The following methods don’t directly alter the cursor’s internal state. Instead, these methods produce a copy of the cursor with altered attributes. You can use these to fluently build up the cursor.

Except for the clone method, these methods require that the cursor is in the 'idle' state.

Note that the following list gives the return type of each method as FindCursor<T, TRaw>, but the actual type of the returned cursor is a subclass of FindCursor (either TableFindCursor or CollectionFindCursor).

FindAndRerankCursor also has a very similar set of methods, but with slightly different options available.

clone

Returns: FindCursor<TRaw, TRaw>

Creates a new 'idle' copy of the cursor with the same configuration, except for the mapping. If the cursor had a mapping, it is removed.

filter

Returns: FindCursor<T, TRaw>

Returns a copy of the cursor with a new filter setting.

project

Returns: FindCursor<RRaw, RRaw>

Returns a copy of the cursor with a new projection setting.

To prevent typing misalignment, you can't set a projection if a mapping is already set.

sort

Returns: FindCursor<T, TRaw>

Returns a copy of the cursor with a new sort setting.

limit

Returns: FindCursor<T, TRaw>

Returns a copy of the cursor with a new limit setting.

includeSimilarity

Returns: FindCursor<WithSim<TRaw>, WithSim<TRaw>>

Returns a copy of the cursor with a new includeSimilarity setting.

To prevent typing misalignment, you can't set includeSimilarity if a mapping is already set.

includeSortVector

Returns: FindCursor<T, TRaw>

Returns a copy of the cursor with a new includeSortVector setting.

skip

Returns: FindCursor<T, TRaw>

Returns a copy of the cursor with a new skip setting.

map

Returns: FindCursor<R, TRaw>

Returns a copy of the cursor with a mapping function to transform the returned items.

Calling this method on a cursor that already has a mapping causes the mapping functions to be composed.
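For example, a minimal sketch of fluently building a cursor, assuming a table with a text winner column; the two mappings are composed:

const names = await table.find({ matchId: 'challenge6' })
  .project<{ winner: string }>({ winner: 1 })
  .map((row) => row.winner)
  .map((winner) => winner.toUpperCase())
  .toArray();

console.log(names);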

Typing Collections and Tables

This information applies to TypeScript client version 2.0 or later. For more information, see Data API client upgrade guide.

The actual type signatures of Collection and Table aren’t standard Collection<Schema>-like types that you may be used to. Instead, they’re typed as follows:

class Collection<
  WSchema extends SomeDoc = SomeDoc,
  RSchema extends WithId<SomeDoc> = FoundDoc<WSchema>,
> {}

class Table<
  WSchema extends SomeRow,
  PKey extends SomePKey = Partial<FoundRow<WSchema>>,
  RSchema extends SomeRow = FoundRow<WSchema>,
> {}

FoundDoc and FoundRow are generic types that represent the type of the document or row as it’s returned from the Data API in read operations.

For most use cases, you can treat Collection and Table as if they were Collection<Schema> and Table<Schema, PKey?>, respectively.

Because all type parameters have default values, the usage and behavior can be the same as if they were typed simply. This means that the following is valid:

const coll = db.collection<MySchema>('my_collection');

function useCollection<T extends SomeDoc>(coll: Collection<T>) {
  // ...
}

The difference is typically negligible. If necessary, such as for advanced use cases with custom serialization/deserialization logic, you can specify dual types:

type MyPKey = Pick<MyWSchema, 'ptKey' | 'clKey'>;

const coll = await db.createCollection<MyWSchema, MyRSchema>('my_collection');
const table = db.table<MyWSchema, MyPKey, MyRSchema>('my_table');

function useCollection<W extends SomeDoc, R extends SomeDoc>(coll: Collection<W, R>) {
  // ...
}

function useTable<W extends SomeRow, P extends SomePKey, R extends SomeRow>(table: Table<W, P, R>) {
  // ...
}

You can also create a type alias for specific types of Collection or Table instances, if the exact type is getting long:

export type GameCollection = Collection<GameSchema, GameReadSchema>;
export type GameTable = Table<GameSchema, GamePKey, GameReadSchema>;

Logging

Logging is a public preview feature available in TypeScript client version 2.0.

See the 1.x astra-db-ts README for information about monitoring/logging in previous versions.

The TypeScript client supports logging and monitoring of events that occur during client operation.

You can enable logging when you instantiate the DataAPIClient.

// Sets defaults for logging
const client = new DataAPIClient({ logging: 'all' });

Event defaults

When any events are enabled without explicitly specified outputs, the following defaults apply:

[{
  events: ['adminCommandStarted', 'adminCommandPolling', 'adminCommandSucceeded', 'adminCommandFailed', 'adminCommandWarnings', 'commandFailed', 'commandWarnings'],
  emits: ['event', 'stderr'],
}, {
  events: ['commandStarted', 'commandSucceeded'],
  emits: ['event'],
}];

Output types

You may consume events in a few different ways:

  • Log directly to stdout/stderr:

    const client = new DataAPIClient({
      logging: [
        { events: 'commandStarted', emits: 'stdout' },
        { events: 'commandFailed', emits: 'stderr' },
      ],
    });
    Example output
    2025-02-12 13:18:23 CDT [32a4fe9b] [CommandStarted]: test_coll::insertMany {records=1,ordered=false}
    2025-02-12 13:18:24 CDT [32a4fe9b] [CommandFailed]: test_coll::insertMany {records=1,ordered=false} (865ms) ERROR: 'Document field name invalid: field name ('$invalid') contains invalid character(s), can contain only letters (a-z/A-Z), numbers (0-9), underscores (_), and hyphens (-)'
    Custom formatting

    You may override the default formatter for events using BaseClientEvent.setDefaultFormatter().

    A custom EventFormatter takes in two parameters:

    1. The event being formatted

    2. The full event message

      • This is equal to event.getMessagePrefix() + event.getMessage()

    // Define a custom formatter
    const customFormatter: EventFormatter = (event, _fullMessage) => {
      return `[${event.requestId.slice(0, 8)}] (${event.name}) - ${event.getMessage()}`;
    };
    
    // Set the custom formatter as the default
    BaseClientEvent.setDefaultFormatter(customFormatter);
    
    // Now all events will use the custom formatter
    const coll = db.collection('COLLECTION_NAME', {
      logging: [{ events: 'all', emits: 'stdout' }],
    });
    
    // Logs:
    // - [e31bc40e] (CommandStarted) - basic_logging_example_table::findOne
    // - [e31bc40e] (CommandFailed) - basic_logging_example_table::findOne (249ms) ERROR: 'Invalid filter expression: filter clause path ('$invalid') contains character(s) not allowed'
    coll.findOne({ $invalid: 1 });
  • Log directly to stdout/stderr using a verbose JSON format:

    const client = new DataAPIClient({
      logging: [
        { events: 'commandStarted', emits: 'stdout:verbose' },
        { events: 'commandFailed', emits: 'stderr:verbose' },
      ],
    });
    Example output
    {
      "name": "CommandStarted",
      "timestamp": "2025-02-12 16:38:28 IST",
      "requestId": "6d34edd4-ff64-4837-a9cc-04f91fe70b31",
      "extra": {
        "records": 1,
        "ordered": false
      },
      "command": {
        "insertMany": {
          "documents": [
            {
              "$invalid": "abc"
            }
          ],
          "options": {
            "returnDocumentResponses": true,
            "ordered": false
          }
        }
      },
      "url": "https://480f95a5-fd3f-40a1-aac6-2ca765b09be0-us-east-2.apps.astra.datastax.com/api/json/v1/default_keyspace/test_coll",
      "timeout": {
        "requestTimeoutMs": 10000,
        "generalMethodTimeoutMs": 30000
      }
    }
    {
      "name": "CommandFailed",
      "timestamp": "2025-02-12 16:38:29 IST",
      "requestId": "6d34edd4-ff64-4837-a9cc-04f91fe70b31",
      "extra": {
        "records": 1,
        "ordered": false
      },
      "command": {
        "insertMany": {
          "documents": [
            {
              "$invalid": "abc"
            }
          ],
          "options": {
            "returnDocumentResponses": true,
            "ordered": false
          }
        }
      },
      "url": "https://480f95a5-fd3f-40a1-aac6-2ca765b09be0-us-east-2.apps.astra.datastax.com/api/json/v1/default_keyspace/test_coll",
      "duration": 1730.055833,
      "error": {
        "name": "DataAPIResponseError",
        "message": "DataAPIResponseError fields not separately serialized; all context already in the CommandFailed event"
      },
      "resp": {
        "status": {
          "documentResponses": [
            {
              "id": "634e8a91-cc25-4461-8e8a-91cc25f461db",
              "status": "ERROR",
              "errorsIdx": 0
            }
          ]
        },
        "errors": [
          {
            "message": "Document field name invalid: field name ('$invalid') contains invalid character(s), can contain only letters (a-z/A-Z), numbers (0-9), underscores (), and hyphens (-)",
            "errorCode": "SHRED_DOC_KEY_NAME_VIOLATION",
            "id": "f920fcea-8955-4ce3-a7ea-ffc73aeeb7c7",
            "family": "REQUEST",
            "title": "Document field name invalid",
            "scope": "DOCUMENT"
          }
        ]
      }
    }
  • Use DataAPIClient as an EventEmitter-like object:

    const client = new DataAPIClient({
      logging: [
        { events: ['commandStarted', 'commandFailed'], emits: 'event' },
      ],
    });
    
    client.on('commandStarted', (event) => {
      console.log('FYI,', event.commandName, 'started.');
    });
    
    client.once('commandFailed', (event) => {
      console.error('Oh no!!', event.format());
    });
    Example output
    FYI, insertMany started.
    Oh no!! 2025-02-12 13:18:24 CDT [32a4fe9b] [CommandFailed]: test_coll::insertMany {records=1,ordered=false} (865ms) ERROR: 'Document field name invalid: field name ('$invalid') contains invalid character(s), can contain only letters (a-z/A-Z), numbers (0-9), underscores (_), and hyphens (-)'
When to use which?

  • Basic logs for a small application: log to stdout/stderr.

  • Quick, temporary debug logging: log to stdout:verbose/stderr:verbose.

  • Structured, customized logging: use the EventEmitter-like behavior.

  • Using your own logging framework: use the EventEmitter-like behavior, as in the sketch below.
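For example, to route events into your own logging framework, a minimal sketch where logger is a hypothetical stand-in for your framework's API:

const client = new DataAPIClient({
  logging: [{ events: 'all', emits: 'event' }],
});

// `logger` is a placeholder for your logging framework
client.on('commandSucceeded', (event) => logger.info(event.format()));
client.on('commandFailed', (event) => logger.error(event.format()));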

Event types

There are two main categories of logged events:

  • command events: operations from Db, Table, and Collection instances

  • adminCommand events: operations from AstraAdmin and DbAdmin instances

The following events are emitted:

commandStarted

Emitted when a command is started, before the initial HTTP request is made.

Default behavior: emits the event; does not log to the console.

commandSucceeded

Emitted when a command has succeeded (status code is 200, and no errors are returned).

Default behavior: emits the event; does not log to the console.

commandFailed

Emitted when a command has failed (status code is not 200, or errors are returned).

Default behavior: emits the event; logs to stderr.

commandWarnings

Emitted when a command has warnings (the status.warnings field is present in the result).

Default behavior: emits the event; logs to stderr.

adminCommandStarted

Emitted when an admin command is started, before the initial HTTP request is made.

Default behavior: emits the event; logs to stderr.

adminCommandPolling

Emitted while a long-running admin operation is polling (e.g. createDatabase).

Default behavior: emits the event; logs to stderr.

adminCommandSucceeded

Emitted when an admin command has succeeded (HTTP 200 is returned).

Default behavior: emits the event; logs to stderr.

adminCommandFailed

Emitted when an admin command has failed (HTTP 4xx/5xx is returned, even while polling).

Default behavior: emits the event; logs to stderr.

adminCommandWarnings

Emitted when an admin command has warnings (the status.warnings field is present).

Default behavior: emits the event; logs to stderr.

In addition to these events, you may provide custom regex patterns to match multiple events at once, or use 'all' to match all events, as a semantic equivalent to the regex /.*/.

Config format

Logging is configured through a list of hierarchical rules, determining which events to emit where. Each rule layer overrides previous rules for the same events.

There are multiple ways to configure logging, and the rules may be combined in various ways:

  • logging: 'all': This will enable all events with default behaviors.

  • logging: [{ events: 'all', emits: ['<outputs>'] }]: This will allow you to define the specific outputs for all events

  • logging: '<event>' | ['<event>']: This will enable only the specified event(s) with default behaviors

  • logging: /<regex>/ | [/<regex>/]: This will enable only the matching event(s) with default behaviors

  • logging: [{ events: ['<events>'], emits: ['<outputs>'] }]: This will allow you to define the specific outputs for specific events

  • logging: ['all', { events: ['<events>'], emits: [] }]: This demonstrates how 'all' may be used as a base configuration, to be overridden by specific rules. An empty emits array effectively disables outputs for the specified events

Examples
// Set defaults for logging
const client = new DataAPIClient({
  logging: 'all',
});

// Just emit all events as events
const client = new DataAPIClient({
  logging: [{ events: 'all', emits: 'event' }],
});

// Define specific outputs for specific events
const client = new DataAPIClient({
  logging: [
    { events: ['commandSucceeded', 'adminCommandStarted', 'adminCommandSucceeded'], emits: ['stdout', 'event'] },
    { events: ['commandFailed', 'adminCommandFailed'], emits: ['stderr', 'event'] },
  ],
});

// Use 'all' as a base configuration, and override specific events
const client = new DataAPIClient({
  logging: ['all', { events: /.*(Started|Succeeded)/, emits: [] }],
});

// Enable specific events with default behaviors
const client = new DataAPIClient({
  logging: ['commandSucceeded', 'adminCommandStarted'],
});

Further usage and examples

For more information about logging, and to view actual code examples, see the examples/logging directory in the astra-db-ts repository.

Custom ser/des

Ser/des configuration is a public preview feature available in TypeScript client version 2.0. Features and functionality are subject to change during the preview period.

Some of the options enable advanced features, which should be used with caution, as they can lead to unexpected behavior if used improperly.

For information about changes in 2.0 and migrating from 1.x to 2.x, see Data API client upgrade guide.

You can use custom serialization and deserialization logic to customize client usage at a lower level, enabling features such as object mapping, custom data types, validation, and more.

mutateInPlace

Enables a minor optimization to allow any serialized object to be mutated in-place.

This means you can serialize objects like rows, documents, filters, and sorts without creating a copy of every nested object or array.

Only use this option if you don’t need to reuse an object after passing it to any operation. Otherwise, unexpected mutations and behaviors may occur.
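For example, a minimal sketch, assuming mutateInPlace is set through the serdes options on the spawning class:

// Assumption: mutateInPlace is configured through the serdes options
const collection = db.collection('COLLECTION_NAME', {
  serdes: { mutateInPlace: true },
});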

codecs (alpha)

You can use codecs to customize the way that objects are serialized and deserialized, allowing you to change data types, add validation logic, add object mapping, and more.

This feature is still in alpha, and may be subject to changes in future versions.

enableBigNumbers

A collection-specific option which enables serialization & deserialization of bigint and BigNumber.

Big numbers are not enabled by default for two primary reasons:

  1. Performance: Enabling big numbers necessitates usage of a specialized JSON library which is capable of serializing/deserializing these numbers without loss of precision, which is much slower than the native JSON library.

    Realistically, however, the difference is likely negligible for most cases.

  2. Ambiguity in Deserialization: There is an inherent ambiguity in deciding how to deserialize big numbers, as certain numbers may be representable in various different numerical formats, and not in an easily predictable way.

    For example, 9007199254740992 is equally representable as either a number, bigint, a BigNumber, or even a string.

    For serialization, there is no such ambiguity: any number is just a series of digits in JSON.

Configuring this option

Deserialization behavior must be configured to enable big numbers on a collection-by-collection basis.

This option can be configured in two ways:

  • As a function, which takes in the path of the field being deserialized and returns a coercion type. See CollNumCoercionFn for more details.

    const collection = db.collection('COLLECTION_NAME', {
      serdes: {
        enableBigNumbers: (_path, pathMatches) => (pathMatches(['info', '*', 'price']))
          ? 'bignumber'
          : 'number',
      },
    });
  • As a configuration object, which allows you to specify the coercion type for any path. See CollNumCoercionCfg for more details.

    const collection = db.collection('COLLECTION_NAME', {
      serdes: {
        enableBigNumbers: {
          'items.*.price': 'bignumber',
          '*': 'number',
        },
      },
    });

Numerical coercions

There are a variety of built-in numerical coercions (see CollNumCoercion) that may be used to suit your needs.

astra-db-ts internally uses json-bigint once enableBigNumbers is enabled, which parses the JSON numbers as either native JS numbers, or BigNumbers from BigNumber.js.

The following list shows how each numerical coercion handles a value parsed as a number versus a BigNumber:

number

  • Parsed as a number: uses the number directly.

  • Parsed as a BigNumber: coerces the BigNumber into a number, potentially losing precision.

strict_number

  • Parsed as a number: uses the number directly.

  • Parsed as a BigNumber: coerces the BigNumber into a number, throwing an error if any precision is lost.

bigint

  • Parsed as a number: coerces the number into a bigint, throwing an error if it was not already an integer.

  • Parsed as a BigNumber: coerces the BigNumber into a bigint, throwing an error if it was not already an integer.

bignumber

  • Parsed as a number: coerces the number into a BigNumber.

  • Parsed as a BigNumber: uses the BigNumber directly.

string

  • Parsed as a number: coerces the number into a string.

  • Parsed as a BigNumber: coerces the BigNumber into a string.

number_or_string

  • Parsed as a number: uses the number directly.

  • Parsed as a BigNumber: coerces the BigNumber into a string.

Alternatively, if none of the above coercions work for you, your own coercion function may be used instead:

const collection = db.collection('test_coll', {
  serdes: {
    enableBigNumbers: {
      'info.*.price': (val: number | BigNumber, path: PathSegment[]): unknown => { /* ... */ },
      '*': 'number',
    },
  },
});

sparseData

A table-specific option which enables sparse returned rows.

This means that any fields not present in the row are omitted from the returned row rather than being returned as null.
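For example, a minimal sketch, assuming sparseData is set through the table's serdes options:

// Assumption: sparseData is configured through the serdes options
const table = db.table('TABLE_NAME', {
  serdes: { sparseData: true },
});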

HTTP Options

This section pertains to astra-db-ts version 2.0.0 and later.

Previous versions of astra-db-ts attempted to use fetch-h2 by default; you only needed to manually import and provide fetch-h2 in minified or otherwise unsupported environments.

See the v1.x astra-db-ts README for more information on this topic.

astra-db-ts offers heavily extensible HTTP options, allowing you to customize the client’s HTTP behavior to suit your needs.

There are a couple of default Fetcher implementations (FetchNative and FetchH2), but a custom implementation is simple to write and provide as well.

See the examples/using-http2 and examples/customize-http directories in the astra-db-ts repository for actual examples of using the TypeScript client with various custom HTTP configurations.

Using HTTP/2

astra-db-ts uses the native fetch API by default, but it can also work with HTTP/2 using the fetch-h2 module.

However, due to compatibility reasons, fetch-h2 is no longer dynamically imported by default, and must be provided to the client manually.

Fortunately, it takes only a couple of easy steps to get HTTP/2 working in your project:

  1. Install fetch-h2 by running npm i fetch-h2.

  2. Provide fetch-h2 to the client.

Depending on your environment, module system, and preferences, you may pass in the fetch-h2 module in a few different ways:

With normal imports:
import { DataAPIClient } from '@datastax/astra-db-ts';
import * as fetchH2 from 'fetch-h2';

const client = new DataAPIClient(process.env.ASTRA_DB_TOKEN!, {
  httpOptions: { client: 'fetch-h2', fetchH2: fetchH2 },
});
With import and top-level await:
import { DataAPIClient } from '@datastax/astra-db-ts';

const client = new DataAPIClient(process.env.ASTRA_DB_TOKEN!, {
  httpOptions: { client: 'fetch-h2', fetchH2: await import('fetch-h2') },
});
With require:
const { DataAPIClient } = require('@datastax/astra-db-ts');
const fetchH2 = require('fetch-h2');

const client = new DataAPIClient(process.env.ASTRA_DB_TOKEN!, {
  httpOptions: { client: 'fetch-h2', fetchH2: fetchH2 },
});

// or just pass it in inline
const client = new DataAPIClient(process.env.ASTRA_DB_TOKEN!, {
  httpOptions: { client: 'fetch-h2', fetchH2: require('fetch-h2') },
});

See the examples/using-http2 directory in the astra-db-ts repository for an actual example of using the TypeScript client with HTTP/2.

Extending and writing fetchers

For the most advanced use cases, you can also use a custom fetch implementation:

import { DataAPIClient, Fetcher, FetcherRequestInfo, FetcherResponseInfo } from '@datastax/astra-db-ts';

class CustomFetcher implements Fetcher {
  async fetch(info: FetcherRequestInfo): Promise<FetcherResponseInfo> {
    // Custom fetch implementation: send the request described by `info`,
    // then return the response details as a FetcherResponseInfo
    throw new Error('Not implemented');
  }
}

const client = new DataAPIClient(process.env.ASTRA_DB_TOKEN!, {
  httpOptions: { client: 'custom', fetcher: new CustomFetcher() },
});

See the CustomHttpClientOptions and Fetcher types for more information about this feature.

See also the examples/customize-http directory in the astra-db-ts repository for more examples.

Browser support

For most use cases, DataStax strongly advises against using the Data API TypeScript client in the browser.

Doing so may expose your database credentials to client-side code, allowing attackers to easily extract the token through browser developer tools, XSS, or other means.

Prior to version 2.0 of the TypeScript client, an events polyfill was also required for astra-db-ts to work in the browser.

astra-db-ts is designed foremost to work in server-side environments, but it also works fully in the browser.

However, you may need to set up a CORS proxy (such as CORS Anywhere) to forward your requests to the Data API.
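For example, a hedged sketch, assuming a CORS Anywhere-style proxy that forwards requests to whatever URL follows its own; the proxy address is a placeholder:

// Assumption: the proxy forwards requests to the URL appended after its own
const db = client.db('https://your-cors-proxy.example.com/https://<db-endpoint>');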

See the examples/browser directory in the astra-db-ts repository for an actual example of using the TypeScript client in the browser.

See also

For command specifications, see the command references, such as About collections with the Data API, and About tables with the Data API.

For major changes between versions, see Data API client upgrade guide.

For the complete client reference, see TypeScript client reference.
