TypeScript client usage
This page provides language-specific guidance for using the Data API TypeScript client.
For information about installing and getting started with the TypeScript client, see Get started with the Data API.
Client hierarchy
When you create apps using the Data API clients, you must instantiate a DataAPIClient object.
The DataAPIClient object serves as the entry point to the client hierarchy.
Adjacent to this hierarchy are the administration classes for database administration. The specific administration classes you use, and how you instantiate them, depend on your client language and database type (Astra DB, HCD, or DSE):
- AstraDbAdmin or DataAPIDbAdmin
You directly instantiate the DataAPIClient object only. Then, through the DataAPIClient object, you can instantiate and access other classes and concepts.
Where necessary, instructions for instantiating other classes are provided in the command reference relevant to each class.
For instructions on instantiating the DataAPIClient object, see Instantiate a client object.
Options hierarchy
These concepts are the same for both the 1.x and 2.x versions of the TypeScript client, but specific names can change between major versions.
Like the client hierarchy, the options for each class also exist in a hierarchy: options defined in parent classes are deeply merged with those in their child classes.
The root of the hierarchy is the DataAPIClientOptions, which branches into AdminOptions and DbOptions, and DbOptions further branches into TableOptions and CollectionOptions.
At each level, options are inherited and merged from parent to child.
For the full list of available options at each level, see the TypeScript client reference for each interface.
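To make the merge behavior concrete, here is a simplified, hypothetical sketch of deep option merging (the client's actual merge logic is internal to the library and may differ in detail):

```typescript
// Illustrative sketch of deep option merging (not the client's actual implementation).
type Options = { [key: string]: unknown };

function deepMerge(parent: Options, child: Options): Options {
  const merged: Options = { ...parent };
  for (const [key, value] of Object.entries(child)) {
    const existing = merged[key];
    // Recurse into plain nested objects; otherwise the child's value wins.
    if (
      value && typeof value === 'object' && !Array.isArray(value) &&
      existing && typeof existing === 'object' && !Array.isArray(existing)
    ) {
      merged[key] = deepMerge(existing as Options, value as Options);
    } else {
      merged[key] = value;
    }
  }
  return merged;
}

// Example: timeout defaults set on the client, partially overridden on a db.
const clientOpts = { timeoutDefaults: { requestTimeoutMs: 10000, generalMethodTimeoutMs: 30000 } };
const dbOpts = { timeoutDefaults: { requestTimeoutMs: 5000 } };

const effective = deepMerge(clientOpts, dbOpts);
// effective.timeoutDefaults → { requestTimeoutMs: 5000, generalMethodTimeoutMs: 30000 }
```

The child's override replaces only the keys it specifies; unspecified keys continue to be inherited from the parent.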
Datatypes
For information about the datatypes used in the TypeScript client, see the datatypes overview in the astra-db-ts repository.
Timeout defaults
The timeoutDefaults option, of type Partial<TimeoutDescriptor>, is present throughout the options hierarchy.
It represents the group of timeouts that cover any operation the client can perform, including individual and multi-call methods, as well as the time required to make the actual request call.
All timeout values are expressed in milliseconds.
A timeout of 0 disables that timeout, but operations associated with that timeout setting can still be limited by other concurrent timeouts.
Most operations are subject to two different, concurrent timeouts.
For example, the total duration of a multi-call method is limited by the method-specific timeout, such as generalMethodTimeoutMs or keyspaceAdminTimeoutMs. Meanwhile, each underlying HTTP request is independently subject to the requestTimeoutMs.
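As a conceptual sketch (not the client's internal code), the effective cap on any single request in a multi-call method can be thought of as the smaller of the request timeout and the remaining method budget:

```typescript
// Conceptual sketch of how the two concurrent timeouts interact.
// This is an illustration, not the client's actual implementation.
function perRequestCapMs(
  requestTimeoutMs: number,       // cap on each individual HTTP request
  generalMethodTimeoutMs: number, // cap on the whole method invocation
  elapsedMs: number,              // time already spent in this method
): number {
  const remainingBudget = generalMethodTimeoutMs - elapsedMs;
  // A timeout of 0 disables that particular timeout.
  if (requestTimeoutMs === 0) return remainingBudget;
  if (generalMethodTimeoutMs === 0) return requestTimeoutMs;
  return Math.min(requestTimeoutMs, remainingBudget);
}

perRequestCapMs(10000, 30000, 0);     // 10000: the request timeout is the binding cap
perRequestCapMs(10000, 30000, 25000); // 5000: only 5s of the method budget remains
```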
Per-method timeouts
For all methods that issue HTTP requests, you can override the method's timeouts through the timeout option field.
The timeout field is always present in the method's options parameter, and there are two ways that you can set it:
- Set it to a plain number, which resolves to the most appropriate timeout for the operation.
- Set it to a subset of TimeoutDescriptor, which provides slightly more fine-grained control over the timeouts.
You can check method signatures/autocomplete to find the timeouts that are available for each method.
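The two shapes can be illustrated with a small, hypothetical helper (the field names mirror TimeoutDescriptor, but the resolution rule here is a simplification; the client resolves a plain number to the timeouts most appropriate for the specific operation):

```typescript
// Hypothetical helper illustrating the two accepted shapes of the `timeout` option.
// The field names mirror TimeoutDescriptor, but this is not library code.
interface TimeoutSubset {
  requestTimeoutMs?: number;
  generalMethodTimeoutMs?: number;
}

function resolveTimeout(timeout: number | TimeoutSubset): TimeoutSubset {
  if (typeof timeout === 'number') {
    // A plain number resolves to the most appropriate timeout(s) for the
    // operation; here it is applied to both fields, purely for illustration.
    return { requestTimeoutMs: timeout, generalMethodTimeoutMs: timeout };
  }
  return timeout; // A partial descriptor is used as-is.
}

resolveTimeout(20000);                            // both fields capped at 20000
resolveTimeout({ generalMethodTimeoutMs: 60000 }); // only the method budget is overridden
```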
Timeout errors
If a timeout occurs, a DataAPITimeoutError or DevOpsAPITimeoutError is thrown, depending on the operation being performed at the time and the database type (DevOpsAPITimeoutError only occurs for Astra DB databases).
To help you debug the error, the error and error message include the timeout fields that hit their defined timeout limit.
Timeout fields
Name | Default | Summary |
---|---|---|
requestTimeoutMs | 10000 | The timeout imposed on a single HTTP request. This applies to HTTP requests to both the Data API and DevOps API, with the exception of API calls that inherently take a long time to resolve. |
generalMethodTimeoutMs | 30000 | The timeout imposed on the overall duration of a method invocation. It is valid for all DML methods that aren't concerned with schema or admin operations. For methods that can make multiple HTTP calls, it limits the total duration of all of those calls, while each individual request remains subject to requestTimeoutMs. |
collectionAdminTimeoutMs | 60000 | A timeout imposed on all collection-related schema/admin operations, such as creating and dropping collections. Each individual request issued as part of these operations must also obey requestTimeoutMs. |
tableAdminTimeoutMs | 30000 | A timeout for all table-related schema/admin operations, such as creating and dropping tables. Each individual request issued as part of these operations must also obey requestTimeoutMs. |
databaseAdminTimeoutMs | 600000 | A timeout for all database-related schema/admin operations, such as creating and dropping databases, which are the longest-running operations in this class. Each individual request issued as part of these operations must also obey requestTimeoutMs. |
keyspaceAdminTimeoutMs | 30000 | A timeout for all keyspace-related operations, such as creating and dropping keyspaces. Each individual request issued as part of these operations must also obey requestTimeoutMs. |
Cursors
The following information applies to TypeScript client version 2.0 or later. Earlier TypeScript client versions had a single cursor class.
For more information about moving from 1.x to 2.x, see Data API client upgrade guide.
When you call a find command on a Table or a Collection, a FindCursor object is returned.
FindCursor objects also have the subclassed forms TableFindCursor and CollectionFindCursor.
Similarly, the findAndRerank command returns a FindAndRerankCursor object, which shares the same AbstractCursor interface as FindCursor, but with slightly different options.
Cursor objects represent a lazy stream of results that you can use to progressively retrieve new results (pagination). The basic usage pattern is to consume the cursor item by item. For example:
// Using find to create a fully-configured TableFindCursor object
const cursor = table.find<{ winner: number }>({ matchId: 'challenge6' }, {
projection: { winner: 1 },
limit: 3,
});
// Lazily consuming the cursor as an async iterator
for await (const item of cursor) {
console.log(item);
}
// Or, if you prefer, build the cursor using a fluent interface.
// You can also use the toArray method to get all the results at once.
const rows = await table.find({ matchId: 'challenge6' })
.project<{ winner: number }>({ winner: 1 })
.limit(3)
.toArray();
console.log(rows);
Cursor properties
Cursors have the following properties that you can inspect at any time:
Name | Return type | Summary |
---|---|---|
state | CursorState | The current state of the cursor: 'idle', 'started', or 'closed'. |
consumed | number | The number of items already read by the code consuming the cursor. This does not include buffered, but not yet consumed, items. |
buffered | number | The number of items currently stored in the client-side buffer of this cursor. |
dataSource | Table or Collection | The Table or Collection that spawned this cursor. This type can be more specific if a subclass of FindCursor, such as TableFindCursor, is used. |
Methods that directly alter the cursor
The following methods, in addition to iterating over the cursor, alter the cursor’s internal state:
Name | Return type | Summary |
---|---|---|
close() | void | Closes the cursor regardless of its state. You can close a cursor at any time. Closing a cursor discards any items that haven't been consumed. You can use a closed cursor again only after you call rewind(). |
rewind() | void | Rewinds the cursor to its original, unconsumed state while retaining all cursor settings, such as any configured filter, projection, or mapping. |
consumeBuffer(max?) | TRaw[] | Consumes (returns) up to the requested number of raw buffered items (rows or documents). Cursor mapping is not applied to these documents. The returned items are consumed, meaning that the cursor can't return those items again. |
forEach(callback) | Promise<void> | Consumes the remaining rows in the cursor, invoking the provided callback for each one. If the callback returns exactly false, iteration stops early. Calling this method on a closed cursor results in an error. |
toArray() | Promise<T[]> | Converts all remaining, unconsumed rows from the cursor into a list, applying the cursor's mapping, if any. DataStax doesn't recommend calling this method if you expect a large list of results because constructing the full list requires many data exchanges with the Data API and potentially massive memory usage. If you expect many results, DataStax recommends following a lazy pattern of iterating and consuming the rows. Calling this method on a closed cursor results in an error. |
hasNext() | Promise<boolean> | Returns a Boolean indicating whether the cursor has more documents to return. |
getSortVector() | Promise<DataAPIVector | null> | Returns the query vector used in the vector search that originated this cursor, if applicable. If the query wasn't a vector search, or it was invoked without the includeSortVector option, this resolves to null. |
Builder methods
The following methods don’t directly alter the cursor’s internal state. Instead, these methods produce a copy of the cursor with altered attributes. You can use these to fluently build up the cursor.
Except for the clone method, these methods require that the cursor is in the 'idle' state.
Note that the following table lists the return type of each method as FindCursor<T, TRaw>, but the actual type of the returned cursor is a subclass of FindCursor (either TableFindCursor or CollectionFindCursor).
FindAndRerankCursor also has a very similar set of methods, but with slightly different options available.
Name | Return type | Summary |
---|---|---|
clone() | FindCursor<T, TRaw> | Creates a new cursor with the same settings, regardless of the current cursor's state. |
filter(filter) | FindCursor<T, TRaw> | Returns a copy of the cursor with a new filter. |
project(projection) | FindCursor<T, TRaw> | Returns a copy of the cursor with a new projection. To prevent typing misalignment, you can't set a projection after providing a mapping function. |
sort(sort) | FindCursor<T, TRaw> | Returns a copy of the cursor with a new sort. |
limit(limit) | FindCursor<T, TRaw> | Returns a copy of the cursor with a new limit on the number of returned items. |
includeSimilarity(boolean) | FindCursor<T, TRaw> | Returns a copy of the cursor with the similarity score included in, or excluded from, the results. To prevent typing misalignment, you can't set this after providing a mapping function. |
skip(skip) | FindCursor<T, TRaw> | Returns a copy of the cursor with a new number of items to skip. |
includeSortVector(boolean) | FindCursor<T, TRaw> | Returns a copy of the cursor that requests the query vector to be made available through getSortVector(). |
map(fn) | FindCursor<R, TRaw> | Returns a copy of the cursor with a mapping function to transform the returned items. Calling this method on a cursor that already has a mapping causes the mapping functions to be composed. |
Typing Collections and Tables
This information applies to TypeScript client version 2.0 or later. For more information, see Data API client upgrade guide.
The actual type signatures of Collection and Table aren't the standard Collection<Schema>-like types that you may be used to.
Instead, they're typed as follows:
class Collection<
WSchema extends SomeDoc = SomeDoc,
RSchema extends WithId<SomeDoc> = FoundDoc<WSchema>,
> {}
class Table<
WSchema extends SomeRow,
PKey extends SomePKey = Partial<FoundRow<WSchema>>,
RSchema extends SomeRow = FoundRow<WSchema>,
> {}
FoundDoc and FoundRow are generic types that represent the type of the document or row as it's returned from the Data API in read operations.
For most use cases, you can treat these classes as if they took a single schema type parameter. Because all type parameters have default values, the usage and behavior can be the same as if they were typed simply.
The difference is typically negligible, except for advanced use cases like custom serialization/deserialization.
If necessary, such as for advanced use cases with custom serialization/deserialization logic, you can specify dual types:
type MyPKey = Pick<MyWSchema, 'ptKey' | 'clKey'>;
const coll = await db.createCollection<MyWSchema, MyRSchema>('my_collection');
const table = await db.table<MyWSchema, MyPKey, MyRSchema>('my_table');
function useCollection<W extends SomeDoc, R extends SomeDoc>(coll: Collection<W, R>) {
// ...
}
function useTable<W extends SomeRow, P extends SomePKey, R extends SomeRow>(table: Table<W, P, R>) {
// ...
}
You can also create a type alias for specific types of Collection and Table.
Logging
Logging is a public preview feature available in TypeScript client version 2.0. See the 1.x astra-db-ts README for information about monitoring/logging in previous versions.
The TypeScript client supports logging and monitoring of events that occur during client operation.
You can enable logging when you instantiate the DataAPIClient.
// Sets defaults for logging
const client = new DataAPIClient({ logging: 'all' });
Event defaults
When any event(s) are enabled with any specified output, the following defaults apply:
[{
events: ['adminCommandStarted', 'adminCommandPolling', 'adminCommandSucceeded', 'adminCommandFailed', 'adminCommandWarnings', 'commandFailed', 'commandWarnings'],
emits: ['event', 'stderr'],
}, {
events: ['commandStarted', 'commandSucceeded'],
emits: ['event'],
}];
Output types
You may consume events in a few different ways:
- Log directly to stdout/stderr:

const client = new DataAPIClient({
  logging: [
    { events: 'commandStarted', emits: 'stdout' },
    { events: 'commandFailed', emits: 'stderr' },
  ],
});
Example output
2025-02-12 13:18:23 CDT [32a4fe9b] [CommandStarted]: test_coll::insertMany {records=1,ordered=false}
2025-02-12 13:18:24 CDT [32a4fe9b] [CommandFailed]: test_coll::insertMany {records=1,ordered=false} (865ms) ERROR: 'Document field name invalid: field name ('$invalid') contains invalid character(s), can contain only letters (a-z/A-Z), numbers (0-9), underscores (_), and hyphens (-)'
Custom formatting
You may override the default formatter for events using BaseClientEvent.setDefaultFormatter().
A custom EventFormatter takes in two parameters:
- The event being formatted
- The full event message, which is equal to event.getMessagePrefix() + event.getMessage()

// Define a custom formatter
const customFormatter: EventFormatter = (event, _fullMessage) => {
  return `[${event.requestId.slice(0, 8)}] (${event.name}) - ${event.getMessage()}`;
}

// Set the custom formatter as the default
BaseClientEvent.setDefaultFormatter(customFormatter);

// Now all events will use the custom formatter
const coll = db.collection('*COLLECTION_NAME*', {
  logging: [{ events: 'all', emits: 'stdout' }],
});

// Logs:
// - [e31bc40e] (CommandStarted) - basic_logging_example_table::findOne
// - [e31bc40e] (CommandFailed) - basic_logging_example_table::findOne (249ms) ERROR: 'Invalid filter expression: filter clause path ('$invalid') contains character(s) not allowed'
coll.findOne({ $invalid: 1 });
- Log directly to stdout/stderr using a verbose JSON format:

const client = new DataAPIClient({
  logging: [
    { events: 'commandStarted', emits: 'stdout:verbose' },
    { events: 'commandFailed', emits: 'stderr:verbose' },
  ],
});
Example output
{ "name": "CommandStarted", "timestamp": "2025-02-12 16:38:28 IST", "requestId": "6d34edd4-ff64-4837-a9cc-04f91fe70b31", "extra": { "records": 1, "ordered": false }, "command": { "insertMany": { "documents": [ { "$invalid": "abc" } ], "options": { "returnDocumentResponses": true, "ordered": false } } }, "url": "https://480f95a5-fd3f-40a1-aac6-2ca765b09be0-us-east-2.apps.astra.datastax.com/api/json/v1/default_keyspace/test_coll", "timeout": { "requestTimeoutMs": 10000, "generalMethodTimeoutMs": 30000 } } { "name": "CommandFailed", "timestamp": "2025-02-12 16:38:29 IST", "requestId": "6d34edd4-ff64-4837-a9cc-04f91fe70b31", "extra": { "records": 1, "ordered": false }, "command": { "insertMany": { "documents": [ { "$invalid": "abc" } ], "options": { "returnDocumentResponses": true, "ordered": false } } }, "url": "https://480f95a5-fd3f-40a1-aac6-2ca765b09be0-us-east-2.apps.astra.datastax.com/api/json/v1/default_keyspace/test_coll", "duration": 1730.055833, "error": { "name": "DataAPIResponseError", "message": "DataAPIResponseError fields not separately serialized; all context already in the CommandFailed event" }, "resp": { "status": { "documentResponses": [ { "id": "634e8a91-cc25-4461-8e8a-91cc25f461db", "status": "ERROR", "errorsIdx": 0 } ] }, "errors": [ { "message": "Document field name invalid: field name ('$invalid') contains invalid character(s), can contain only letters (a-z/A-Z), numbers (0-9), underscores (), and hyphens (-)", "errorCode": "SHRED_DOC_KEY_NAME_VIOLATION", "id": "f920fcea-8955-4ce3-a7ea-ffc73aeeb7c7", "family": "REQUEST", "title": "Document field name invalid", "scope": "DOCUMENT" } ] } }
- Use DataAPIClient as an EventEmitter-like object:

const client = new DataAPIClient({
  logging: [
    { events: ['commandStarted', 'commandFailed'], emits: 'event' },
  ],
});

client.on('commandStarted', (event) => {
  console.log('FYI,', event.commandName, 'started.');
});

client.once('commandFailed', (event) => {
  console.error('Oh no!!', event.format());
});
Example output
FYI, insertMany started.
Oh no!! 2025-02-12 13:18:24 CDT [32a4fe9b] [CommandFailed]: test_coll::insertMany {records=1,ordered=false} (865ms) ERROR: 'Document field name invalid: field name ('$invalid') contains invalid character(s), can contain only letters (a-z/A-Z), numbers (0-9), underscores (_), and hyphens (-)'
Use case | Recommendation |
---|---|
Basic logs for a small application | Log to stdout/stderr. |
Quick, temporary, debug logging | Log to stdout:verbose/stderr:verbose. |
Structured, customized logging | Use event emits with a custom formatter. |
Using your own logging framework | Use event emits and forward the events to your framework. |
Event types
There are two main categories of logged events:
- command events: operations from Db, Table, and Collection instances
- adminCommand events: operations from AstraAdmin and DbAdmin instances
The following events are emitted:
Event | Description | Default Behavior |
---|---|---|
commandStarted | Emitted when a command is started, before the initial HTTP request is made. | Emits the event; does not log to the console. |
commandSucceeded | Emitted when a command has succeeded (status code is 200, and no errors are returned). | Emits the event; does not log to the console. |
commandFailed | Emitted when a command has errored (status code is not 200, or errors are returned). | Emits the event; logs to stderr. |
commandWarnings | Emitted when a command has warnings returned in the response. | Emits the event; logs to stderr. |
adminCommandStarted | Emitted when an admin command is started, before the initial HTTP request is made. | Emits the event; logs to stderr. |
adminCommandPolling | Emitted when an admin command is polling in a long-running operation, such as database creation. | Emits the event; logs to stderr. |
adminCommandSucceeded | Emitted when an admin command has succeeded (HTTP 200 is returned). | Emits the event; logs to stderr. |
adminCommandFailed | Emitted when an admin command has failed (HTTP 4xx/5xx is returned, even if while polling). | Emits the event; logs to stderr. |
adminCommandWarnings | Emitted when an admin command has warnings returned in the response. | Emits the event; logs to stderr. |
In addition to these events, you may provide custom regex patterns to match multiple events at once, or use 'all' to match all events, as a semantic equivalent to the regex /.*/.
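The selector semantics can be sketched as follows (illustrative only; the event names are the ones from the table above, and the matching helper is hypothetical):

```typescript
// Illustrative sketch of matching configured event selectors against event names.
// This is not library code; it only demonstrates the matching semantics.
type EventSelector = 'all' | string | RegExp;

const ALL_EVENTS = [
  'commandStarted', 'commandSucceeded', 'commandFailed', 'commandWarnings',
  'adminCommandStarted', 'adminCommandPolling', 'adminCommandSucceeded',
  'adminCommandFailed', 'adminCommandWarnings',
];

function matchEvents(selector: EventSelector): string[] {
  if (selector === 'all') return [...ALL_EVENTS];             // semantic equivalent of /.*/
  if (selector instanceof RegExp) return ALL_EVENTS.filter((e) => selector.test(e));
  return ALL_EVENTS.filter((e) => e === selector);            // a single event name
}

matchEvents(/.*(Started|Succeeded)/); // all *Started and *Succeeded events
matchEvents('commandFailed');         // ['commandFailed']
```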
Config format
Logging is configured through a list of hierarchical rules, determining which events to emit where. Each rule layer overrides previous rules for the same events.
There are multiple ways to configure logging, and the rules may be combined in various ways:
- logging: 'all': Enables all events with default behaviors.
- logging: [{ events: 'all', emits: ['<outputs>'] }]: Defines the specific outputs for all events.
- logging: '<event>' | ['<event>']: Enables only the specified event(s) with default behaviors.
- logging: /<regex>/ | [/<regex>/]: Enables only the matching event(s) with default behaviors.
- logging: [{ events: ['<events>'], emits: ['<outputs>'] }]: Defines the specific outputs for specific events.
- logging: ['all', { events: ['<events>'], emits: [] }]: Demonstrates how 'all' may be used as a base configuration, to be overridden by specific rules. An empty emits array effectively disables outputs for the specified events.
Examples
// Set defaults for logging
const client = new DataAPIClient({
logging: 'all',
});
// Just emit all events as events
const client = new DataAPIClient({
logging: [{ events: 'all', emits: 'event' }],
});
// Define specific outputs for specific events
const client = new DataAPIClient({
logging: [
{ events: ['commandSucceeded', 'adminCommandStarted', 'adminCommandSucceeded'], emits: ['stdout', 'event'] },
{ events: ['commandFailed', 'adminCommandFailed'], emits: ['stderr', 'event'] },
],
});
// Use 'all' as a base configuration, and override specific events
const client = new DataAPIClient({
logging: ['all', { events: /.*(Started|Succeeded)/, emits: [] }],
});
// Enable specific events with default behaviors
const client = new DataAPIClient({
logging: ['commandSucceeded', 'adminCommandStarted'],
});
Further usage and examples
For more information about logging, and to view actual code examples, see the examples/logging directory in the astra-db-ts repository.
Custom ser/des
Ser/des configuration is a public preview feature available in TypeScript client version 2.0. Features and functionality are subject to change during the preview period. Some of the options enable advanced features, which should be used with caution, as they can lead to unexpected behavior if used improperly. For information about changes in 2.0 and migrating from 1.x to 2.x, see Data API client upgrade guide.
You can use custom serialization and deserialization logic to customize client usage at a lower level, enabling features such as object mapping, custom data types, validation, and more.
mutateInPlace
Enables a minor optimization to allow any serialized object to be mutated in-place.
This means you can serialize objects like rows, documents, filters, and sorts without creating a copy of every nested object or array.
Only use this option if you don’t need to reuse an object after passing it to any operation. Otherwise, unexpected mutations and behaviors may occur.
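A minimal sketch of the trade-off (not the client's actual serializer; "serialization" here is reduced to converting Date values to strings):

```typescript
// Simplified sketch of the mutateInPlace trade-off. Not the client's serializer:
// here, "serializing" just converts top-level Date values to ISO strings.
function serialize(doc: Record<string, unknown>, mutateInPlace: boolean): Record<string, unknown> {
  const target = mutateInPlace ? doc : { ...doc }; // the copy is skipped when mutating in place
  for (const [key, value] of Object.entries(target)) {
    if (value instanceof Date) target[key] = value.toISOString();
  }
  return target;
}

const doc = { name: 'a', createdAt: new Date() };

serialize(doc, false); // with a copy, doc.createdAt is still a Date afterward
serialize(doc, true);  // in place: doc.createdAt is now an ISO string
// Reusing `doc` after an in-place pass would send the already-serialized form.
```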
codecs (alpha)
You can use codecs to customize the way that objects are serialized and deserialized, allowing you to change data types, add validation logic, add object mapping, and more.
This feature is still in alpha, and may be subject to changes in future versions.
enableBigNumbers
A collection-specific option that enables serialization and deserialization of bigint and BigNumber values.
Big numbers are not enabled by default for two primary reasons:
- Performance: Enabling big numbers necessitates the use of a specialized JSON library that can serialize and deserialize these numbers without loss of precision, which is much slower than the native JSON library. Realistically, however, the difference is likely negligible for most cases.
- Ambiguity in deserialization: There is an inherent ambiguity in deciding how to deserialize big numbers, as certain numbers may be representable in various different numerical formats, and not in an easily predictable way. For example, 9007199254740992 is equally representable as a number, a bigint, a BigNumber, or even a string. For serialization, there is no such ambiguity: any number is just a series of digits in JSON.
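The precision side of this is easy to demonstrate with plain JavaScript (a standalone illustration, unrelated to the client itself):

```typescript
// Why big-number support requires a specialized JSON library: native JS numbers
// lose precision above Number.MAX_SAFE_INTEGER (9007199254740991).
const asNumber = Number('9007199254740993'); // silently rounds to 9007199254740992
const asBigInt = BigInt('9007199254740993'); // exact

console.log(asNumber === 9007199254740992);            // true: precision was lost
console.log(asBigInt === BigInt('9007199254740993'));  // true: bigint round-trips exactly

// JSON.parse has the same limitation, which is why a json-bigint-style parser
// is needed to deserialize such values losslessly.
console.log(JSON.parse('9007199254740993') === 9007199254740992); // true
```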
Configuring this option
Deserialization behavior must be configured to enable big numbers on a collection-by-collection basis.
This option can be configured in two ways:
-
As a function, which takes in the path of the field being deserialized and returns a coercion type. See
CollNumCoercionFn
for more details.const collection = db.collection('COLLECTION_NAME', { serdes: { enableBigNumbers: (_path, pathMatches) => (pathMatches(['info', '*', 'price'])) ? 'bignumber' : 'number', }, });
- As a configuration object, which allows you to specify the coercion type for any path. See CollNumCoercionCfg for more details.

const collection = db.collection('COLLECTION_NAME', {
  serdes: {
    enableBigNumbers: {
      'items.*.price': 'bignumber',
      '*': 'number',
    },
  },
});
Numerical coercions
There are a variety of built-in numerical coercions (see CollNumCoercion) that may be used to suit your needs.
astra-db-ts internally uses json-bigint once enableBigNumbers is enabled, which parses JSON numbers as either native JS numbers or BigNumbers from BigNumber.js.
The following table shows how each numerical coercion coerces each aforementioned type:
Coercion | number coercion behavior | BigNumber coercion behavior |
---|---|---|
'number' | Uses the number as-is. | Coerces the BigNumber to a number, potentially losing precision. |
'strict_number' | Uses the number as-is. | Coerces the BigNumber to a number, erroring if precision would be lost. |
'bigint' | Coerces the number to a bigint. | Coerces the BigNumber to a bigint. |
'bignumber' | Coerces the number to a BigNumber. | Uses the BigNumber as-is. |
'string' | Coerces the number to a string. | Coerces the BigNumber to a string. |
'number_or_string' | Uses the number as-is. | Coerces the BigNumber to a string if it can't be represented exactly as a number. |
Alternatively, if none of the above coercions work for you, your own coercion function may be used instead:
const collection = db.collection('test_coll', {
serdes: {
enableBigNumbers: {
'info.*.price': (val: number | BigNumber, path: PathSegment[]): unknown => { /* ... */ },
'*': 'number',
},
},
});
sparseData
A table-specific option which enables sparse returned rows.
This means that any fields not present in the row are omitted from the returned row, rather than being returned as null.
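For example (an illustrative sketch of the two returned shapes, using a hypothetical table with id, name, and score columns):

```typescript
// The same stored row, where `score` was never written:
const withoutSparseData = { id: 1, name: 'alice', score: null }; // absent fields returned as null
const withSparseData = { id: 1, name: 'alice' };                 // absent fields omitted entirely

console.log('score' in withoutSparseData); // true (the key exists, with a null value)
console.log('score' in withSparseData);    // false
```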
HTTP Options
This section pertains to TypeScript client version 2.0 or later. Previous versions of the client handled HTTP options differently. See the v1.x astra-db-ts README for more information on this topic.
astra-db-ts offers heavily extensible HTTP options, allowing you to customize the client's HTTP behavior to suit your needs.
There are a couple of default Fetcher implementations (FetchNative and FetchH2), but a custom implementation is simple to write and provide as well.
See the examples/using-http2 and examples/customize-http directories in the astra-db-ts repository for actual examples of using the TypeScript client with various custom HTTP configurations.
Using HTTP/2
astra-db-ts uses the native fetch API by default, but it can also work with HTTP/2 using the fetch-h2 module.
However, due to compatibility reasons, fetch-h2 is no longer dynamically imported by default, and must be provided to the client manually.
Fortunately, it takes only a couple of easy steps to get HTTP/2 working in your project:
- Install fetch-h2 by running npm i fetch-h2.
- Provide fetch-h2 to the client.

Depending on your environment, module system, and preferences, you may pass in the fetch-h2 module in a few different ways:
With normal imports:
import { DataAPIClient } from '@datastax/astra-db-ts';
import * as fetchH2 from 'fetch-h2';
const client = new DataAPIClient(process.env.ASTRA_DB_TOKEN!, {
httpOptions: { client: 'fetch-h2', fetchH2: fetchH2 },
});
With import and top-level await:
import { DataAPIClient } from '@datastax/astra-db-ts';
const client = new DataAPIClient(process.env.ASTRA_DB_TOKEN!, {
httpOptions: { client: 'fetch-h2', fetchH2: await import('fetch-h2') },
});
With require:
const { DataAPIClient } = require('@datastax/astra-db-ts');
const fetchH2 = require('fetch-h2');
const client = new DataAPIClient(process.env.ASTRA_DB_TOKEN!, {
httpOptions: { client: 'fetch-h2', fetchH2: fetchH2 },
});
// or just pass it in inline
const client = new DataAPIClient(process.env.ASTRA_DB_TOKEN!, {
httpOptions: { client: 'fetch-h2', fetchH2: require('fetch-h2') },
});
See the examples/using-http2 directory in the astra-db-ts repository for an actual example of using the TypeScript client with HTTP/2.
Extending and writing fetchers
In the most advanced use cases, you can also provide a custom fetch implementation.
import { DataAPIClient, Fetcher, FetcherRequestInfo, FetcherResponseInfo } from '@datastax/astra-db-ts';
class CustomFetcher implements Fetcher {
  async fetch(info: FetcherRequestInfo): Promise<FetcherResponseInfo> {
    // Custom fetch implementation goes here
    throw new Error('Not implemented');
  }
}
const client = new DataAPIClient(process.env.ASTRA_DB_TOKEN!, {
httpOptions: { client: 'custom', fetcher: new CustomFetcher() },
});
See the CustomHttpClientOptions and Fetcher types for more information about this feature.
See also the examples/customize-http directory in the astra-db-ts repository for actual examples.
Browser support
For most use cases, DataStax strongly advises against using the Data API TypeScript client in the browser. Doing so may expose your database credentials to client-side code, allowing attackers to easily extract the token through browser developer tools, XSS, or other means.
astra-db-ts is designed foremost to work in server-side environments, but it works fully in the browser as well.
However, you may need to set up a CORS proxy (such as CORS Anywhere) to forward your requests to the Data API.
See the examples/browser directory in the astra-db-ts repository for an actual example of using the TypeScript client in the browser.
See also
For command specifications, see the command references, such as About collections with the Data API, and About tables with the Data API.
For major changes between versions, see Data API client upgrade guide.
For the complete client reference, see TypeScript client reference.