
Node.js querying

A simple query can be performed by passing a CQL statement to the client's executeQuery() function:

// For Stargate OSS: SELECT the data to read from the table
const query = new Query();
const queryString = 'SELECT firstname, lastname FROM test.users;';
// Set the CQL statement using the string defined on the previous line
query.setCql(queryString);

// For Stargate OSS: execute the query statement
const response = await promisifiedClient.executeQuery(
  query,
  authenticationMetadata
);

console.log("select executed")

Data definition language (DDL) statements are supported in the same manner:

// For Stargate OSS: Create a new keyspace
const createKeyspaceStatement = new Query();
// Set the CQL statement
createKeyspaceStatement.setCql("CREATE KEYSPACE IF NOT EXISTS test WITH REPLICATION = {'class' : 'SimpleStrategy', 'replication_factor' : 1};");

await promisifiedClient.executeQuery(createKeyspaceStatement, authenticationMetadata);

console.log("created keyspace");

// For Stargate OSS: Create a new table
const createTableStatement = new Query();
// Set the CQL statement
createTableStatement.setCql("CREATE TABLE IF NOT EXISTS test.users (firstname text PRIMARY KEY, lastname text);");

await promisifiedClient.executeQuery(
  createTableStatement,
  authenticationMetadata
);

console.log("created table");

Parameterized queries are also supported:

const query = new Query();
query.setCql("select * from system_schema.keyspaces where keyspace_name = ?");

const keyspaceNameValue = new Value();
keyspaceNameValue.setString("system");

const queryValues = new Values();
queryValues.setValuesList([keyspaceNameValue]);

query.setValues(queryValues);

const queryParameters = new QueryParameters();
queryParameters.setTracing(false);
queryParameters.setSkipMetadata(false);

query.setParameters(queryParameters);

const response = await promisifiedClient.executeQuery(query, authenticationMetadata);
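
Bound values are positional, so the list passed to setValuesList() must match the order of the ? markers. As a sketch, a parameterized INSERT with two bind markers, reusing the Query, Value, and Values classes shown above (the sample names are illustrative):

const insert = new Query();
insert.setCql("INSERT INTO test.users (firstname, lastname) VALUES (?, ?)");

const firstnameValue = new Value();
firstnameValue.setString("Ada");
const lastnameValue = new Value();
lastnameValue.setString("Lovelace");

// Values bind positionally, in the same order as the ? markers appear.
const insertValues = new Values();
insertValues.setValuesList([firstnameValue, lastnameValue]);
insert.setValues(insertValues);

await promisifiedClient.executeQuery(insert, authenticationMetadata);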

To run several statements as a batch, the client also provides an executeBatch() function:

// For Stargate OSS: INSERT two rows/records
// Create two queries that will be run in a batch statement
const insertOne = new BatchQuery();
const insertTwo = new BatchQuery();

// Set the CQL statement
insertOne.setCql(`INSERT INTO test.users (firstname, lastname) VALUES('Jane', 'Doe')`);
insertTwo.setCql(`INSERT INTO test.users (firstname, lastname) VALUES('Serge', 'Provencio')`);

// Define the new batch to include the two insertions
const batch = new Batch();
batch.setQueriesList([insertOne, insertTwo]);

// For Stargate OSS: execute the batch statement
const batchResult = await promisifiedClient.executeBatch(
  batch,
  authenticationMetadata
);
console.log("inserted data");

Promise support

The Node.js gRPC implementation uses callbacks by default. If you would prefer promises, this library provides a utility function that creates a promisified version of the Stargate gRPC client. The promise rejects if an error occurs:

import * as grpc from "@grpc/grpc-js";
import {
  StargateClient,
  promisifyStargateClient,
} from "@stargate-oss/stargate-grpc-node-client";

const stargateClient = new StargateClient(
  "localhost:8090",
  grpc.credentials.createInsecure()
);

const promisifiedClient = promisifyStargateClient(stargateClient);
try {
  const queryResult = await promisifiedClient.executeQuery(
    query,
    metadata,
    callOptions
  );
  const batchResult = await promisifiedClient.executeBatch(
    batch,
    metadata,
    callOptions
  );
} catch (e) {
  // something went wrong
}

The metadata and callOptions arguments are both optional.
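
The authenticationMetadata used throughout these examples is ordinary gRPC call metadata that carries a Stargate auth token. A minimal sketch, assuming the x-cassandra-token metadata key expected by Stargate's gRPC API and the Metadata class from @grpc/grpc-js:

import * as grpc from "@grpc/grpc-js";

// Sketch: attach a Stargate auth token as gRPC call metadata.
// The token comes from the Stargate auth API (or is an Astra DB application
// token); "x-cassandra-token" is the metadata key assumed here.
const authenticationMetadata = new grpc.Metadata();
authenticationMetadata.set("x-cassandra-token", "<your-auth-token>");

const result = await promisifiedClient.executeQuery(query, authenticationMetadata);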
