Node.js processing result set

After executing a query, the response contains rows when the statement is a SELECT; otherwise the returned payload is unset. The response's getResultSet() method returns a ResultSet that is easier to work with.
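The snippet below assumes a query has already been executed and its response stored in a variable named response. Here is a minimal sketch of that step, using the promisified client and authentication metadata shown in the UUID example later on this page; the keyspace, table, and column names are placeholders:

// Build a SELECT query and execute it with the promisified client.
// promisifiedClient and authenticationMetadata are assumed to have been
// created as described in the client connection documentation.
const query = new Query();
query.setCql("SELECT firstname, lastname FROM ks1.tbl1");

// For a SELECT statement, the returned response carries the matching rows.
const response = await promisifiedClient.executeQuery(query, authenticationMetadata);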

// Get the results from the executed query
// and print out the first two columns of the first two rows

const resultSet = response.getResultSet();

if (resultSet) {
  const rows = resultSet.getRowsList();

  // This for loop reads the first 2 rows
  for (let i = 0; i < 2; i++) {
    let valueToPrint = "";
    for (let j = 0; j < 2; j++) {
      const value = rows[i].getValuesList()[j].getString();
      valueToPrint += value;
      valueToPrint += " ";
    }
    console.log(valueToPrint);
  }
}

Reading primitive values

Individual values from queries are returned as Value objects. Each Value has boolean hasX() methods, where X is a possible type of the value.

There are corresponding getX() methods on the Value type that return the value, if present. If the value does not represent type X, calling getX() does not raise an error; instead, it returns undefined or another falsy value appropriate to the expected data type.

// Assume we know this is a string
const firstValueInRow = row.getValuesList()[0];

// This should resolve to true
const isString = firstValueInRow.hasString();
// This should resolve to the string value
const stringValue = firstValueInRow.getString();

// This should resolve to false
const isInt = firstValueInRow.hasInt();
// This should resolve to 0 - zero value for this data type
const intValue = firstValueInRow.getInt();
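Because getX() never throws for a mismatched type, it can help to guard each read with the matching hasX() check. Below is a minimal sketch of that pattern; the readStringOrDefault helper is hypothetical and assumes the firstValueInRow object from the snippet above:

// Return the string value if present, or a fallback when the Value
// holds a different type (getString() would otherwise return a falsy value).
function readStringOrDefault(value, fallback) {
  return value.hasString() ? value.getString() : fallback;
}

const name = readStringOrDefault(firstValueInRow, "<missing>");
console.log(name);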

Reading CQL data types

The built-in getX() methods for Values that represent more complex types, such as UUIDs, can be hard to work with. This library exposes helper functions that translate a Value into a more easily used type:

  • toUUIDString

  • toCQLTime

Unlike the built-in getX() methods, these helper functions throw an error if the conversion fails.

Here’s an example of processing a UUID:

const insert = new Query();
insert.setCql("INSERT INTO ks1.tbl2 (id) VALUES (f066f76d-5e96-4b52-8d8a-0f51387df76b);");
await promisifiedClient.executeQuery(insert, authenticationMetadata);

// Read the data back out
const read = new Query();
read.setCql("SELECT id FROM ks1.tbl2");
const result = await promisifiedClient.executeQuery(read, authenticationMetadata);

const resultSet = result.getResultSet();

if (resultSet) {
  const firstRow = resultSet.getRowsList()[0];
  const idValue = firstRow.getValuesList()[0];
  try {
    const uuidAsString = toUUIDString(idValue);
    console.log(`UUID: ${uuidAsString}`);
  } catch (e) {
    console.error(`Conversion of Value to UUID string failed: ${e}`);
  }
}
