gRPC Node.js Client

This client is designed for use with a Node.js application or backend. It is not intended for use with JavaScript running in a browser environment.

Node.js setup

Install the package

Install @stargate-oss/stargate-grpc-node-client using either npm or Yarn:

  • npm command

  • Yarn command

npm i @stargate-oss/stargate-grpc-node-client
yarn add @stargate-oss/stargate-grpc-node-client

This setup makes all of the Stargate gRPC functionality available.

The next sections explain the parts of a script that uses the Stargate functionality. A full working script is included at the end of this page.
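
For reference, the connection and query snippets that follow assume imports along these lines. This list is inferred from the code shown on this page rather than exhaustive; the full sample script below shows a working set:

import * as grpc from "@grpc/grpc-js";
import {
  StargateClient,
  StargateTableBasedToken,
  StargateBearerToken,
  Query,
  Value,
  Values,
  QueryParameters,
  Batch,
  BatchQuery,
  promisifyStargateClient,
  toUUIDString,
  toCQLTime,
} from "@stargate-oss/stargate-grpc-node-client";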

Node.js connecting

Authentication

This client supports both table-based and JWT-based authentication to Stargate.

For table-based auth, use the StargateTableBasedToken class:

// Stargate OSS configuration for locally hosted docker image
const auth_endpoint = "http://localhost:8081/v1/auth";
const username = "cassandra";
const password = "cassandra";
const stargate_uri = "localhost:8090";

// Set up the authentication
// For Stargate OSS: Create a table based auth token Stargate/Cassandra
// authentication using the default C* username and password
const credentials = new StargateTableBasedToken(
  {authEndpoint: auth_endpoint,
    username: username,
    password: password
  }
);

// Uncomment if you need to check the credentials
// console.log(credentials);

For JWT-based auth, use the StargateBearerToken class and pass your token in directly:

// Astra DB configuration
const astra_uri = "{astra-base-url}-{astra-region}.apps.astra.datastax.com:443";
const bearer_token = "AstraCS:xxxxxxx";

// Set up the authentication
// For Astra DB: Enter a bearer token for Astra, downloaded from the Astra DB dashboard
const bearerToken = new StargateBearerToken(bearer_token);
const credentials = grpc.credentials.combineChannelCredentials(grpc.credentials.createSsl(), bearerToken);

// Uncomment if you need to check the credentials
// console.log(credentials);

Creating a client

If you’re connecting to Stargate through insecure gRPC credentials, you must manually generate metadata for each call. For a local Stargate instance, for example, the following client code fetches an auth token with a REST call:

// Create the gRPC client
// For Stargate OSS: passing it the address of the gRPC endpoint
const stargateClient = new StargateClient(stargate_uri, grpc.credentials.createInsecure());

console.log("made client");

// Create a promisified version of the client, so we don't need to use callbacks
const promisifiedClient = promisifyStargateClient(stargateClient);

console.log("promisified client")

// For Stargate OSS: generate authentication metadata that is passed in the executeQuery and executeBatch statements
const authenticationMetadata = await credentials.generateMetadata({service_url: auth_endpoint});

This is because the Node gRPC implementation does not allow composing insecure credentials. However, if you’re using secure gRPC credentials, you can include the token metadata generator when constructing the client. For a connection to a remote Stargate instance such as Astra DB, the metadata is then generated automatically on every call to the client:

const bearerToken = new StargateBearerToken('my-token');
const credentials = grpc.credentials.combineChannelCredentials(
  grpc.credentials.createSsl(), bearerToken
);

// grpcEndpoint is the gRPC address, for example the astra_uri defined earlier
const stargateClient = new StargateClient(grpcEndpoint, credentials);
const promisifiedClient = promisifyStargateClient(stargateClient);

// No need to pass metadata;
// the credentials passed to the client constructor will do that for us
await promisifiedClient.executeQuery(query);

Node.js querying

A simple query can be performed by passing a CQL statement to the client's executeQuery() function:

// For Stargate OSS: SELECT the data to read from the table
const query = new Query();
const queryString = 'SELECT firstname, lastname FROM test.users;'
// Set the CQL statement using the string defined in the last line
query.setCql(queryString);

// For Stargate OSS: execute the query statement
const response = await promisifiedClient.executeQuery(
  query,
  authenticationMetadata
);

console.log("select executed")

Data definition (DDL) queries are supported in the same manner:

// For Stargate OSS: Create a new keyspace
const createKeyspaceStatement = new Query();
// Set the CQL statement
createKeyspaceStatement.setCql("CREATE KEYSPACE IF NOT EXISTS test WITH REPLICATION = {'class' : 'SimpleStrategy', 'replication_factor' : 1};");

await promisifiedClient.executeQuery(createKeyspaceStatement, authenticationMetadata);

console.log("created keyspace");

// For Stargate OSS: Create a new table
const createTableStatement = new Query();
// Set the CQL statement
createTableStatement.setCql("CREATE TABLE IF NOT EXISTS test.users (firstname text PRIMARY KEY, lastname text);");

await promisifiedClient.executeQuery(
  createTableStatement,
  authenticationMetadata
);

console.log("created table");

Parameterized queries are also supported:

const query = new Query();
query.setCql("select * from system_schema.keyspaces where keyspace_name = ?");

// Bind a value for the ? placeholder
const keyspaceNameValue = new Value();
keyspaceNameValue.setString("system");

const queryValues = new Values();
queryValues.setValuesList([keyspaceNameValue]);

query.setValues(queryValues);

// Optional query parameters, such as tracing and metadata skipping
const queryParameters = new QueryParameters();
queryParameters.setTracing(false);
queryParameters.setSkipMetadata(false);

query.setParameters(queryParameters);

const response = await promisifiedClient.executeQuery(query, authenticationMetadata);

To run multiple statements together, the client also provides an executeBatch() function:

// For Stargate OSS: INSERT two rows/records
// Create two queries that will be run in a batch statement
const insertOne = new BatchQuery();
const insertTwo = new BatchQuery();

// Set the CQL statement
insertOne.setCql(`INSERT INTO test.users (firstname, lastname) VALUES('Jane', 'Doe')`);
insertTwo.setCql(`INSERT INTO test.users (firstname, lastname) VALUES('Serge', 'Provencio')`);

// Define the new batch to include the 2 insertions
const batch = new Batch();
batch.setQueriesList([insertOne, insertTwo]);

// For Stargate OSS: execute the batch statement
const batchResult = await promisifiedClient.executeBatch(
  batch,
  authenticationMetadata
);
console.log("inserted data");

Promise support

The Node gRPC implementation uses callbacks by default. If you’d prefer promises, this library provides a utility function to create a promisified version of the Stargate gRPC client. The promise will reject if an error occurs:

import {
  StargateClient,
  promisifyStargateClient,
} from "@stargate-oss/stargate-grpc-node-client";

const stargateClient = new StargateClient(
  "localhost:8090",
  grpc.credentials.createInsecure()
);

const promisifiedClient = promisifyStargateClient(stargateClient);
try {
  const queryResult = await promisifiedClient.executeQuery(
    query,
    metadata,
    callOptions
  );
  const batchResult = await promisifiedClient.executeBatch(
    query,
    metadata,
    callOptions
  );
} catch (e) {
  // something went wrong
}

The metadata and callOptions arguments are both optional.

Node.js processing result set

After executing a query, a response is returned containing rows for a SELECT statement; otherwise, the returned payload is unset. Call getResultSet() on the response to obtain a ResultSet that's easier to work with.

// Get the results from the execute query statement
// and separate into an array to print out the results

const resultSet = response.getResultSet();

if (resultSet) {
  const rows = resultSet.getRowsList();

  // This for loop gets 2 results
  for (let i = 0; i < 2; i++) {
    let valueToPrint = "";
    for (let j = 0; j < 2; j++) {
      const value = rows[i].getValuesList()[j].getString();
      valueToPrint += value;
      valueToPrint += " ";
    }
    console.log(valueToPrint);
  }
}

Reading primitive values

Individual values from queries will be returned as a Value object. These objects have boolean hasX() methods, where X is the possible type of a value.

There are corresponding getX() methods on the Value type that return the value, if present. If the value does not represent type X, calling getX() will not throw an error; instead, you'll get undefined or another falsy value appropriate to that data type.

// Assume we know this is a string
const firstValueInRow = row.getValuesList()[0];

// This should resolve to true
const isString = firstValueInRow.hasString();
// This should resolve to the string value
const stringValue = firstValueInRow.getString();

// This should resolve to false
const isInt = firstValueInRow.hasInt();
// This should resolve to 0 - zero value for this data type
const intValue = firstValueInRow.getInt();

Reading CQL data types

The built-in getX() methods for Values representing more complicated types, like UUIDs, can be hard to work with. This library exposes helper functions to translate a Value into a more easily used type:

  • toUUIDString

  • toCQLTime

Unlike the built-in getX() methods, these helper functions will throw an error if the conversion fails.

Here’s an example of processing a UUID:

const insert = new Query();
insert.setCql("INSERT INTO ks1.tbl2 (id) VALUES (f066f76d-5e96-4b52-8d8a-0f51387df76b);");
await promisifiedClient.executeQuery(insert, authenticationMetadata);

// Read the data back out
const read = new Query();
read.setCql("SELECT id FROM ks1.tbl2");
const result = await promisifiedClient.executeQuery(read, authenticationMetadata);

const resultSet = result.getResultSet();

if (resultSet) {
  const firstRow = resultSet.getRowsList()[0];
  const idValue = firstRow.getValuesList()[0];
  try {
    const uuidAsString = toUUIDString(idValue);
    console.log(`UUID: ${uuidAsString}`);
  } catch (e) {
    console.error(`Conversion of Value to UUID string failed: ${e}`);
  }
}
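
toCQLTime works the same way. Here is a minimal sketch, assuming a hypothetical table ks1.tbl3 with a single time column t that already contains data:

// Read a CQL time value (ks1.tbl3 and its column t are assumptions for this example)
const readTime = new Query();
readTime.setCql("SELECT t FROM ks1.tbl3");
const timeResult = await promisifiedClient.executeQuery(readTime, authenticationMetadata);

const timeResultSet = timeResult.getResultSet();

if (timeResultSet) {
  const timeValue = timeResultSet.getRowsList()[0].getValuesList()[0];
  try {
    // Like toUUIDString, toCQLTime throws if the Value cannot be converted
    const cqlTime = toCQLTime(timeValue);
    console.log(`Time: ${cqlTime}`);
  } catch (e) {
    console.error(`Conversion of Value to CQL time failed: ${e}`);
  }
}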

Node.js full sample script

To put it all together, here is a sample script that combines the pieces shown above:

  • Sample script

  • Result

#!/usr/bin/env zx

// This script uses zx. If you wish to rename it to index.js and use it in a
// Node application, remove the line above.

import * as grpc from "@grpc/grpc-js";
import { StargateClient, StargateTableBasedToken, Query, Batch, BatchQuery, Response, promisifyStargateClient } from "@stargate-oss/stargate-grpc-node-client";

try {
    // Stargate OSS configuration for locally hosted docker image
    const auth_endpoint = "http://localhost:8081/v1/auth";
    const username = "cassandra";
    const password = "cassandra";
    const stargate_uri = "localhost:8090";

    // Set up the authentication
    // For Stargate OSS: Create a table based auth token Stargate/Cassandra authentication using the default C* username and password
    const credentials = new StargateTableBasedToken({authEndpoint: auth_endpoint, username: username, password: password});

    // Uncomment if you need to check the credentials
    // console.log(credentials);

    // Create the gRPC client
    // For Stargate OSS: passing it the address of the gRPC endpoint
    const stargateClient = new StargateClient(stargate_uri, grpc.credentials.createInsecure());

    console.log("made client");

    // Create a promisified version of the client, so we don't need to use callbacks
    const promisifiedClient = promisifyStargateClient(stargateClient);

    console.log("promisified client")

    // For Stargate OSS: generate authentication metadata that is passed in the executeQuery and executeBatch statements
    const authenticationMetadata = await credentials.generateMetadata({service_url: auth_endpoint});

    // For Stargate OSS: Create a new keyspace
    const createKeyspaceStatement = new Query();
    // Set the CQL statement
    createKeyspaceStatement.setCql("CREATE KEYSPACE IF NOT EXISTS test WITH REPLICATION = {'class' : 'SimpleStrategy', 'replication_factor' : 1};");

    await promisifiedClient.executeQuery(createKeyspaceStatement, authenticationMetadata);

    console.log("created keyspace");

    // For Stargate OSS: Create a new table
    const createTableStatement = new Query();
    // Set the CQL statement
    createTableStatement.setCql("CREATE TABLE IF NOT EXISTS test.users (firstname text PRIMARY KEY, lastname text);");

    await promisifiedClient.executeQuery(
      createTableStatement,
      authenticationMetadata
    );

    console.log("created table");

    // For Stargate OSS: INSERT two rows/records
    // Create two queries that will be run in a batch statement
    const insertOne = new BatchQuery();
    const insertTwo = new BatchQuery();

    // Set the CQL statement
    insertOne.setCql(`INSERT INTO test.users (firstname, lastname) VALUES('Lorina', 'Poland')`);
    insertTwo.setCql(`INSERT INTO test.users (firstname, lastname) VALUES('Doug', 'Wettlaufer')`);

    // Define the new batch to include the 2 insertions
    const batch = new Batch();
    batch.setQueriesList([insertOne, insertTwo]);

    // For Stargate OSS: execute the batch statement
    const batchResult = await promisifiedClient.executeBatch(
      batch,
      authenticationMetadata
    );
    console.log("inserted data");

    // For Stargate OSS: SELECT the data to read from the table
    const query = new Query();
    const queryString = 'SELECT firstname, lastname FROM test.users;'
    // Set the CQL statement using the string defined in the last line
    query.setCql(queryString);

    // For Stargate OSS and Astra DB: execute the query statement
    const response = await promisifiedClient.executeQuery(
      query,
      authenticationMetadata
    );

    console.log("select executed")

    // Get the results from the execute query statement
    // and separate into an array to print out the results
    const resultSet = response.getResultSet();

    if (resultSet) {
      const rows = resultSet.getRowsList();

      // This for loop gets 2 results
      for (let i = 0; i < 2; i++) {
        let valueToPrint = "";
        for (let j = 0; j < 2; j++) {
          const value = rows[i].getValuesList()[j].getString();
          valueToPrint += value;
          valueToPrint += " ";
        }
        console.log(valueToPrint);
      }
    }

    console.log("everything worked!")
} catch (e) {
  // Print out any errors that occur while running this script
  console.log(e);
}

[ ~/CLONES/grpc-node ] (main ✏️ 1) $ ./connect-sgoss.mjs
StargateTableBasedToken {
  metadataGenerators: [ [Function: bound getStargateAuthMetadata] ],
  httpClient: [Function: wrap] {
    request: [Function: wrap],
    getUri: [Function: wrap],
    delete: [Function: wrap],
    get: [Function: wrap],
    head: [Function: wrap],
    options: [Function: wrap],
    post: [Function: wrap],
    put: [Function: wrap],
    patch: [Function: wrap],
    defaults: {
      headers: [Object],
      transformRequest: [Array],
      transformResponse: [Array],
      timeout: 5000,
      adapter: [Function: httpAdapter],
      xsrfCookieName: 'XSRF-TOKEN',
      xsrfHeaderName: 'X-XSRF-TOKEN',
      maxContentLength: -1,
      maxBodyLength: -1,
      validateStatus: [Function: validateStatus],
      transitional: [Object]
    },
    interceptors: { request: [InterceptorManager], response: [InterceptorManager] }
  }
}
made client
promisified client
created keyspace
created table
inserted data
select executed
Doug Wettlaufer
Lorina Poland
everything worked!

Node.js developing

Getting started

Clone the repo, then install dependencies:
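
To clone it, assuming the repository URL listed at the end of this page:

git clone https://github.com/stargate/stargate-grpc-node-client.git
cd stargate-grpc-node-client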

npm i

Testing

Running the tests requires Docker to be installed on your machine; see the Docker documentation for installation details for your platform.

Then run the tests:

npm test

These tests include an integration suite that uses the Testcontainers library to spin up a Docker container running Stargate, test a gRPC connection, and issue queries.
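
As a rough illustration of that pattern (not the project's actual test setup; the image tag and any required container environment are assumptions), a Testcontainers-based test could start Stargate like this:

import { GenericContainer } from "testcontainers";

// Start a throwaway Stargate container for the duration of a test run.
// The image tag is an assumption; real tests may also need container
// environment variables (cluster name, developer mode, and so on).
const container = await new GenericContainer("stargateio/stargate-3_11:v1.0.57")
  .withExposedPorts(8081, 8090) // auth REST port and gRPC port
  .start();

// Build endpoints from the dynamically mapped ports
const authEndpoint = `http://${container.getHost()}:${container.getMappedPort(8081)}/v1/auth`;
const grpcEndpoint = `${container.getHost()}:${container.getMappedPort(8090)}`;

// ...connect and run queries against grpcEndpoint as shown earlier...

await container.stop();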

See the integration tests at src/client/client.test.ts for more example uses of this client.

Generating gRPC code stubs

Should the Stargate protobuf files change and you need to generate new gRPC code, this project has an NPM script you can use:

npm run gen

After running it, you will find the newly generated *grpc_pb.d.ts and *pb.d.ts files in stargate-grpc-node-client/src/proto/.

TypeScript compilation

This client is written in TypeScript but must be compiled to JavaScript for use in vanilla JavaScript environments. If you change source code in this client, run the npm run compile command afterward so your changes are reflected in the lib folder packaged for npm consumers.
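
For example:

npm run compile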

Testing changes locally

You can use the yalc library to publish changes to this client locally and test with a consuming application. Assuming you have yalc installed and you’ve already made your changes, do the following:

(In your local copy of this repo)
npm run compile
yalc publish @stargate-oss/stargate-grpc-node-client

(In your local consuming application)
yalc add @stargate-oss/stargate-grpc-node-client

You may need to rm -rf node_modules and do a fresh npm i in the consuming application for the changes to take effect.

Once you have this dependency established, you can update this client locally with new changes at any time by running yalc push from the stargate-grpc-node-client directory.

Coding style

This project uses eslint and prettier to lint and format code. These standards are enforced automatically with a husky pre-commit hook and in the CI pipeline.

The Stargate gRPC Node.js Client repository is located at https://github.com/stargate/stargate-grpc-node-client.
