
Insert data

Any of the created APIs can be used to write or read data.

First, let's navigate to your new library keyspace inside the playground. Change the location to http://localhost:8080/graphql/library and add a couple of books to the book table:

GraphQL command:

# insert 2 books in one mutation
mutation insert2Books {
  moby: insertbook(value: {title:"Moby Dick", author:"Herman Melville"}) {
    value {
      title
    }
  }
  catch22: insertbook(value: {title:"Catch-22", author:"Joseph Heller"}) {
    value {
      title
    }
  }
}
Result:

{
  "data": {
    "moby": {
      "value": {
        "title": "Moby Dick"
      }
    },
    "catch22": {
      "value": {
        "title": "Catch-22"
      }
    }
  }
}

Note that the keyword value is used twice in the mutation. The first use defines the values that the record is set to: for instance, the title to Moby Dick and the author to Herman Melville. The second use defines the values that are returned once the mutation succeeds, so that the insertion can be verified. The same method is valid for updates and read queries.
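
The same pattern carries over to updates. A minimal sketch, assuming the generated updatebook mutation for the same book table:

# update the author of an existing book; value again sets the new data and selects the returned fields
mutation updateOneBook {
  moby: updatebook(value: {title:"Moby Dick", author:"H. Melville"}) {
    value {
      title
      author
    }
  }
}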

Insertion options

Three options can be configured when inserting or updating data:

  • consistency level

  • serial consistency level

  • time-to-live (TTL)

An example insertion that sets the consistency level and TTL:

GraphQL command:

# insert a book and set the options for consistency level and TTL
mutation insertBookWithOption {
  nativeson: insertbook(value: {title:"Native Son", author:"Richard Wright"}, options: {consistency: LOCAL_QUORUM, ttl:86400}) {
    value {
      title
    }
  }
}
Result:

{
  "data": {
    "nativeson": {
      "value": {
        "title": "Native Son"
      }
    }
  }
}

The serial consistency can also be set with serialConsistency in the options, if needed.
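
A minimal sketch of that option, assuming the same book table (the serial consistency levels in CQL are SERIAL and LOCAL_SERIAL):

# insert a book and set the option for serial consistency level
mutation insertBookWithSerialConsistency {
  outsiders: insertbook(value: {title:"The Outsiders", author:"S. E. Hinton"}, options: {serialConsistency: SERIAL}) {
    value {
      title
    }
  }
}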

Insert collections (set, list, map)

Inserting a collection is simple. An example of inserting a list:

GraphQL command:

# insert an article USING A LIST (authors)
mutation insertArticle {
  magarticle: insertarticle(value: {title:"How to use GraphQL", authors: ["First author", "Second author"], mtitle:"Database Magazine"}) {
    value {
      title
      mtitle
      authors
    }
  }
}
Result:

{
  "data": {
    "magarticle": {
      "value": {
        "title": "How to use GraphQL",
        "mtitle": "Database Magazine",
        "authors": [
          "First author",
          "Second author"
        ]
      }
    }
  }
}
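
A set uses the same square-bracket syntax as a list. A minimal sketch, assuming a hypothetical set<text> column named genres on the book table:

# insert a book USING A SET (genres); assumes book has a set<text> column named genres
mutation insertBookWithSet {
  fahrenheit: insertbook(value: {title:"Fahrenheit 451", author:"Ray Bradbury", genres: ["Dystopian", "Science fiction"]}) {
    value {
      title
      genres
    }
  }
}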

A map is slightly more complex:

GraphQL command:

# insert a badge record that uses a MAP (earned)
mutation insertOneBadge {
  gold: insertBadges(value: { badge_type:"Gold", badge_id: 100, earned: [{ key: "Writer", value: "2020-11-20" }] } ) {
    value {
      badge_type
      badge_id
      earned {
        key
        value
      }
    }
  }
}
Result:

{
  "data": {
    "gold": {
      "value": {
        "badge_type": "Gold",
        "badge_id": 100,
        "earned": [
          {
            "key": "Writer",
            "value": "2020-11-20"
          }
        ]
      }
    }
  }
}

Insert a tuple

Inserting a tuple involves inserting an object; note the use of item0, item1, and so on, to insert the parts of the tuple:

GraphQL command:

# insert a reader record that uses a TUPLE
mutation insertJaneWithTuple {
  jane: insertreader(
    value: {
      user_id: "b5b5666b-2a37-4d0b-a5eb-053e54fc242b"
      name: "Jane Doe"
      birthdate: "2000-01-01"
      email: ["janedoe@gmail.com", "janedoe@yahoo.com"]
      reviews: { item0: "Moby Dick", item1: 5, item2: "2020-12-01" }
    }
  ) {
    value {
      user_id
      name
      birthdate
      reviews {
        item0
        item1
        item2
      }
    }
  }
}
Result:

{
  "data": {
    "jane": {
      "value": {
        "user_id": "b5b5666b-2a37-4d0b-a5eb-053e54fc242b",
        "name": "Jane Doe",
        "birthdate": "2000-01-01",
        "reviews": {
          "item0": "Moby Dick",
          "item1": 5,
          "item2": "2020-12-01"
        }
      }
    }
  }
}

Insert a user-defined type (UDT)

Inserting a UDT requires taking careful note of the brackets used:

GraphQL command:

# insert a reader record that uses a UDT
mutation insertReaderWithUDT{
  ag: insertreader(
    value: {
      user_id: "e0ed81c3-0826-473e-be05-7de4b4592f64"
      name: "Allen Ginsberg"
      birthdate: "1926-06-03"
      addresses: [{ street: "Haight St", city: "San Francisco", zip: "94016" }]
    }
  ) {
    value {
      user_id
      name
      birthdate
      addresses {
        street
        city
        zip
      }
    }
  }
}

Result:

{
  "data": {
    "ag": {
      "value": {
        "user_id": "e0ed81c3-0826-473e-be05-7de4b4592f64",
        "name": "Allen Ginsberg",
        "birthdate": "1926-06-03",
        "addresses": [
          {
            "street": "Haight St",
            "city": "San Francisco",
            "zip": "94016"
          }
        ]
      }
    }
  }
}
