
Using per-node configurations

DataStax Mission Control is currently in Private Preview. It is subject to the beta agreement executed between you and DataStax. DataStax Mission Control is not intended for production use, has not been certified for production workloads, and might contain bugs and other functional issues. There is no guarantee that DataStax Mission Control will ever become generally available. DataStax Mission Control is provided on an “AS IS” basis, without warranty or indemnity of any kind.

If you are interested in trying out DataStax Mission Control, please contact your DataStax account team.

In general, all nodes in a datacenter are created with exactly the same configuration: you define one node configuration, and that specification is shared by every node in the datacenter (DC). This is enforced by how clusters are created. When you create a DSECluster object, each datacenter specification contains a set of configuration properties, and these properties are passed down to all nodes in that datacenter without distinction.

Example DSECluster specification:

apiVersion: missioncontrol.datastax.com/v1alpha1
kind: DSECluster
metadata:
  name: dse-cluster1
spec:
  serverVersion: 6.8.19
  datacenters:
    - metadata:
        name: dc1
      k8sContext: context-0
      size: 3
      config:
        cassandraYaml:
          allocate_tokens_for_local_replication_factor: 3

This specification defines one datacenter named dc1. Its size field indicates that it runs 3 nodes in three different pods. The cassandraYaml option allocate_tokens_for_local_replication_factor is set to a value of 3. The three pods in dc1 share this value, and it is set in their respective cassandra.yaml files.

It is not possible to create distinct configurations for pods in the same datacenter using the DSECluster specification.

However, it is possible to customize the base configuration and override some of its contents per specific nodes, using more advanced techniques described in this topic.

Per-node customizations are useful in cases such as the following:

  • Assigning a node’s server ID in dse.yaml.

  • Setting a specific initial_token property in cassandra.yaml.

  • Configuring distinct Transport Layer Security (TLS) certificates for each node.

Using per-node configurations is considered an advanced feature and should only be performed by trained DataStax Enterprise database administrators. Per-node configuration bypasses some of the checks that DataStax Mission Control usually performs to determine whether the configuration is viable. Applying an invalid or otherwise inappropriate per-node configuration may cause the target node to fail to start, permanently damage its data, or both.

Avoid per-node configuration overrides when they are not required. For example, when creating single-token clusters, you do not need to specify the initial_token property through per-node configurations; let DataStax Mission Control compute the initial tokens automatically. See Creating Single-Token Clusters.

The following procedure provides detailed steps for the advanced feature of setting up per-node configurations.

Procedure

  1. Determine which options to override. Creating a table simplifies this task. For example:

    DC    Node                     Configuration file    Configuration overrides
    dc1   first rack, first node   cassandra.yaml        num_tokens: N1
    dc1   first rack, first node   dse.yaml              server_id: X1
    …     …                        …                     …
    dcN   last rack, last node     cassandra.yaml        num_tokens: NN
    dcN   last rack, last node     dse.yaml              server_id: XN

  2. Provide the per-node configurations. First create a ConfigMap that contains the per-node configurations for some or all nodes in each datacenter.

    The ConfigMaps should be created per datacenter. For example, if you have 3 DCs and all of them require per-node configuration, then you should create 3 distinct ConfigMaps.

    The ConfigMaps must be local to the datacenter that they reference. In other words, create them in the same Kubernetes context and in the same namespace as the CassandraDatacenter resource (CR) describing the physical datacenter.

    1. Write the ConfigMap, following these instructions:

      Each entry in the ConfigMap targets a specific node and one of its configuration files.

      Each key in the ConfigMap must use this form:

      <POD_NAME>_<CONFIG_FILE>

      where:

      <POD_NAME>

      is the name of the target pod running the Cassandra node; its name can be determined using the following template:

      <CLUSTER>-<DC>-<RACK>-sts-<INDEX>, where:

        1. CLUSTER is the Cassandra cluster name (this may differ from the DSECluster object’s name).

        2. DC is the CassandraDatacenter object’s name.

        3. RACK is the rack name; use default if no racks are specified.

        4. INDEX is the zero-based index of the node within the rack.

      <CONFIG_FILE>

      must be either: cassandra.yaml or dse.yaml.
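
      For example, using purely illustrative names, a Cassandra cluster named cluster1 with a datacenter dc1, a rack rack1, and the first node in that rack yields the pod name cluster1-dc1-rack1-sts-0, so the key that targets that node’s dse.yaml is:

      cluster1-dc1-rack1-sts-0_dse.yaml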

      Both POD_NAME and CONFIG_FILE must resolve to an existing pod and an existing configuration file, respectively. If that is not the case, the entry is simply ignored; no errors or warnings are raised.

      Only YAML per-node configuration files (cassandra.yaml and dse.yaml) are supported. Specifying non-YAML files results in the pod failing to start.

      If the ConfigMap contains no entry for a particular pod name, a particular configuration file, or both, then that pod’s configuration is not altered. No errors or warnings are raised.

      Each entry must contain a valid YAML snippet. Each snippet is applied on top of the base configuration file, overlaying and superseding the base file content. Here is an example entry:

      cluster1-dc1-rack1-sts-0_cassandra.yaml: |
          num_tokens: 1
          initial_token: 3074457345618258600

      This example entry only overrides values in the cassandra.yaml base configuration file for pod cluster1-dc1-rack1-sts-0. Specifically, the num_tokens value is overlaid with the value 1 and the initial_token value is overlaid with the value 3074457345618258600.

      Note that the snippet must be a multi-line string, and must be introduced by the pipe “|” indicator as shown in the example.

      The YAML snippet must be valid YAML. If it is not, the pod fails to start.

      If the resulting configuration file becomes invalid after the per-node overlay is applied, the pod fails to start; a typical cause is injecting a setting that the server does not support. Ensure that your per-node configuration is valid for the server version in use before applying it.
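
      Because an invalid snippet only surfaces as a pod startup failure, it can be worth validating each snippet locally before adding it to the ConfigMap. A minimal sketch, assuming python3 with the PyYAML package is available on your workstation and the snippet is saved in a file named snippet.yaml (the tool and file name are assumptions, not part of DataStax Mission Control):

      python3 -c 'import sys, yaml; yaml.safe_load(sys.stdin)' < snippet.yaml && echo "snippet parses as YAML"

      This check validates YAML syntax only; it does not confirm that the options themselves are supported by your DSE version.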

      Here is an example of a valid per-node ConfigMap that customizes both cassandra.yaml and dse.yaml configuration files for 3 nodes in dc1:

      apiVersion: v1
      kind: ConfigMap
      metadata:
        name: dc1-per-node-configs
      data:
        cluster1-dc1-rack1-sts-0_cassandra.yaml: |
          initial_token: -9223372036854775808
        cluster1-dc1-rack1-sts-0_dse.yaml: |
          server_id: node1
        cluster1-dc1-rack2-sts-0_cassandra.yaml: |
          initial_token: -3074457345618258604
        cluster1-dc1-rack2-sts-0_dse.yaml: |
          server_id: node2
        cluster1-dc1-rack3-sts-0_cassandra.yaml: |
          initial_token: 3074457345618258600
        cluster1-dc1-rack3-sts-0_dse.yaml: |
          server_id: node3
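
      As noted earlier, each per-node ConfigMap must be created in the same Kubernetes context and namespace as the CassandraDatacenter it targets. A minimal sketch of creating and verifying the example ConfigMap above, assuming it is saved as dc1-per-node-configs.yaml and that the datacenter lives in a namespace named mission-control (the file name and namespace are illustrative):

      kubectl --context context-0 --namespace mission-control apply -f dc1-per-node-configs.yaml
      kubectl --context context-0 --namespace mission-control get configmap dc1-per-node-configs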
  3. Attach the per-node ConfigMap to the DSECluster object.

    After you create each ConfigMap with the appropriate per-node configurations, attach them to the correct datacenter definition in the DSECluster object.

    For example, if your DSECluster object defines 3 datacenters, and you need to attach two ConfigMaps for datacenters dc2 and dc3, modify the DSECluster definition as follows:

    apiVersion: missioncontrol.datastax.com/v1alpha1
    kind: DSECluster
    metadata:
      name: dse-cluster1
    spec:
      size: 3
      datacenters:
        - metadata:
            name: dc1
          k8sContext: context-0
        - metadata:
            name: dc2
          k8sContext: context-1
          perNodeConfigMapRef:
            name: dc2-per-node-configs
        - metadata:
            name: dc3
          k8sContext: context-2
          perNodeConfigMapRef:
            name: dc3-per-node-configs

    Given this DSECluster definition, when dc2 and dc3 are created, the per-node configurations are injected into the appropriate pods.
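
    One way to confirm that an override reached a given node is to inspect the rendered configuration file inside the pod after it starts. A sketch, assuming the DSE container in the pod is named cassandra, the pod and namespace names shown are illustrative, and the configuration file sits at the default path used by DSE images (adjust all of these for your deployment):

    kubectl --context context-1 --namespace mission-control exec cluster1-dc2-rack1-sts-0 -c cassandra -- \
      grep initial_token /opt/dse/resources/cassandra/conf/cassandra.yaml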

You MUST create all of the per-node ConfigMaps prior to creating the DSECluster object. Failing to do so results in the DSECluster object being caught in an error loop until the ConfigMaps are eventually created.

Attaching or detaching a per-node ConfigMap (by adding, removing, or modifying the perNodeConfigMapRef field in the DSECluster object for a running datacenter) causes a rolling restart of ALL the pods in the datacenter. This includes pods that are not affected by per-node configuration overrides!

Some configuration options, such as initial_token, are ignored when the node is already bootstrapped. To manually assign initial tokens using per-node configurations, you MUST do so when first creating the cluster, not after the nodes have been bootstrapped.
