Use per-node configurations
DataStax Mission Control is currently in Public Preview. DataStax Mission Control is not intended for production use, has not been certified for production workloads, and might contain bugs and other functional issues. There is no guarantee that DataStax Mission Control will ever become generally available. DataStax Mission Control is provided on an “AS IS” basis, without warranty or indemnity of any kind.
If you are interested in trying out DataStax Mission Control, please join the Public Preview.
In general, all nodes in a datacenter are created with exactly the same configuration.
Set up one node configuration and share that specification across all nodes in a datacenter (DC). When you create a `MissionControlCluster` object, each datacenter specification contains a set of configuration properties. These properties are passed down to all nodes in the datacenter without distinction.
```yaml
apiVersion: missioncontrol.datastax.com/v1beta1
kind: MissionControlCluster
metadata:
  name: demo
spec:
  k8ssandra:
    cassandra:
      serverVersion: 6.8.19
      datacenters:
        - metadata:
            name: dc1
          k8sContext: context-0
          size: 3
          config:
            cassandraYaml:
              allocate_tokens_for_local_replication_factor: 3
```
This specification defines one datacenter named `dc1`. The `size: 3` setting indicates that it runs three (3) nodes in three different pods. The `cassandraYaml` option `allocate_tokens_for_local_replication_factor` is set to a value of `3`. The three pods in `dc1` share this value, and it is set in their respective `cassandra.yaml` configuration files.
It is not possible to create distinct configurations for pods in the same datacenter using the `MissionControlCluster` specification alone. However, it is possible to customize the base configuration and override some of its contents for specific nodes, using the more advanced techniques described in this topic.
Per-node customizations are useful in cases such as the following:

- Assigning a node's server ID in `dse.yaml`.
- Specifying the `initial_token` property in `cassandra.yaml`.
- Providing distinct Transport Layer Security (TLS) certificates for each node.
Using per-node configurations is an advanced feature and should only be performed by trained DataStax Enterprise database administrators. Per-node configuration bypasses some of the checks that DataStax Mission Control usually performs to determine whether a configuration is viable. Applying an invalid or otherwise inappropriate per-node configuration may cause the target node to fail to start, permanently damage its data, or both.
You may wish to avoid setting up advanced base configuration overrides. For example, when creating single-token clusters, it is not required to specify the `initial_token` property using per-node configurations; let DataStax Mission Control compute the initial tokens automatically for you. See Creating Single-Token Clusters.
The following procedure provides detailed steps for the advanced feature of setting up per-node configurations.
Determine which options to override. Creating a table simplifies this task. For example:
| DC  | Node                   | Configuration file | Configuration overrides |
|-----|------------------------|--------------------|-------------------------|
| dc1 | first rack, first node | cassandra.yaml     | initial_token           |
| dc1 | first rack, first node | dse.yaml           | server_id               |
| dc1 | last rack, last node   | cassandra.yaml     | initial_token           |
| dc1 | last rack, last node   | dse.yaml           | server_id               |
Provide the per-node configurations. First, create a ConfigMap that contains the per-node configurations for some or all nodes in each datacenter.

ConfigMaps should be created per datacenter. For example, if you have three (3) DCs and all of them require per-node configuration, then you should create three (3) distinct ConfigMaps.

ConfigMaps must be local to the datacenter that they reference. In other words, create them in the same Kubernetes context and in the same namespace as the `CassandraDatacenter` custom resource (CR) that describes the physical datacenter.

Create each ConfigMap following these instructions:
Each entry in the ConfigMap targets a specific node and one of its configuration files.

Each key in the ConfigMap must use this form: `POD_NAME_CONFIG_FILE`.

`POD_NAME` is the name of the target pod running the Cassandra node; its name can be determined using the following template: `CLUSTER-DC-RACK-sts-INDEX`, where:

- CLUSTER is the Cassandra cluster name (this may be different from the `MissionControlCluster` name);
- DC is the datacenter object's name (this name may be overridden, and it is not the `CassandraDatacenter` name, and therefore may not be unique);
- RACK is the rack name; use `default` if no racks are specified;
- INDEX is the zero-based index of the node within the rack.

`CONFIG_FILE` must be either `cassandra.yaml` or `dse.yaml`.
Both POD_NAME and CONFIG_FILE must resolve to an existing pod and an existing configuration file. If that is not the case, that entry is simply ignored. No errors or warnings are raised.
Only YAML per-node configuration files are supported: these include `cassandra.yaml` and `dse.yaml`. Specifying non-YAML files results in the pod failing to start.
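The key-naming rules above can be sketched as a small helper. This is an illustration only; the function name is hypothetical and not part of Mission Control:

```python
def per_node_config_key(cluster: str, dc: str, rack: str, index: int,
                        config_file: str) -> str:
    """Build a per-node ConfigMap key of the form POD_NAME_CONFIG_FILE,
    where POD_NAME follows the CLUSTER-DC-RACK-sts-INDEX template."""
    if config_file not in ("cassandra.yaml", "dse.yaml"):
        raise ValueError("only cassandra.yaml and dse.yaml are supported")
    pod_name = f"{cluster}-{dc}-{rack}-sts-{index}"
    return f"{pod_name}_{config_file}"

# First node of rack1 in dc1, for the Cassandra cluster named "cluster1":
print(per_node_config_key("cluster1", "dc1", "rack1", 0, "cassandra.yaml"))
# → cluster1-dc1-rack1-sts-0_cassandra.yaml
```

The result matches the keys used in the example ConfigMap shown later in this topic.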
If no entry in the ConfigMap exists for a particular pod name or configuration file, or both, then that pod's configuration is not altered. No errors or warnings are raised.
Each entry must contain a valid YAML snippet. Each snippet is applied on top of the base configuration file, overlaying and superseding the base file content. Here is an example entry:
```yaml
cluster1-dc1-rack1-sts-0_cassandra.yaml: |
  num_tokens: 1
  initial_token: 3074457345618258600
```
This example entry only overrides values in the `cassandra.yaml` base configuration file for pod `cluster1-dc1-rack1-sts-0`. Specifically, the `num_tokens` value is overlaid with `1`, and the `initial_token` value is overlaid with `3074457345618258600`.
Note that the snippet must be a multi-line string and must be introduced by the pipe (`|`) indicator, as shown in the example.
The YAML snippet must be valid YAML. If it is not, the pod fails to start.
If, for whatever reason, after applying the per-node configuration overlay, the resulting configuration file becomes invalid, the pod fails to start. A typical reason could be if you injected a setting that is not supported by the server. Ensure that your per-node configuration is valid for the server version in use before applying it.
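The overlay behavior described above can be sketched as a recursive merge of parsed YAML mappings. This is a simplified illustration of the overlay idea, not Mission Control's actual merge code:

```python
def overlay(base: dict, snippet: dict) -> dict:
    """Apply a per-node snippet on top of a base configuration mapping.

    Scalar values in the snippet replace the corresponding base values;
    nested mappings are merged key by key. Keys absent from the snippet
    keep their base values.
    """
    merged = dict(base)
    for key, value in snippet.items():
        if isinstance(value, dict) and isinstance(merged.get(key), dict):
            merged[key] = overlay(merged[key], value)
        else:
            merged[key] = value
    return merged

# Hypothetical base cassandra.yaml values (parsed YAML as a Python dict),
# plus the example snippet overriding num_tokens and initial_token:
base = {"num_tokens": 16, "allocate_tokens_for_local_replication_factor": 3}
snippet = {"num_tokens": 1, "initial_token": 3074457345618258600}
print(overlay(base, snippet))
```

The untouched `allocate_tokens_for_local_replication_factor` value survives the overlay, while `num_tokens` is replaced and `initial_token` is added.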
Here is an example of a valid per-node ConfigMap that customizes both the `cassandra.yaml` and `dse.yaml` configuration files for three (3) nodes in `dc1`:
```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: dc1-per-node-configs
data:
  cluster1-dc1-rack1-sts-0_cassandra.yaml: |
    initial_token: -9223372036854775808
  cluster1-dc1-rack1-sts-0_dse.yaml: |
    server_id: node1
  cluster1-dc1-rack2-sts-0_cassandra.yaml: |
    initial_token: -3074457345618258604
  cluster1-dc1-rack2-sts-0_dse.yaml: |
    server_id: node2
  cluster1-dc1-rack3-sts-0_cassandra.yaml: |
    initial_token: 3074457345618258600
  cluster1-dc1-rack3-sts-0_dse.yaml: |
    server_id: node3
```
Attach the per-node ConfigMap to the `MissionControlCluster` object.

After you create each ConfigMap with the appropriate per-node configurations, attach them to the correct datacenter definition in the `MissionControlCluster` object.

For example, if your `MissionControlCluster` object defines three (3) datacenters, and you need to attach two ConfigMaps, one to `dc2` and one to `dc3`, modify the `MissionControlCluster` definition as follows:
```yaml
apiVersion: missioncontrol.datastax.com/v1beta1
kind: MissionControlCluster
metadata:
  name: demo
spec:
  k8ssandra:
    cassandra:
      size: 3
      datacenters:
        - metadata:
            name: dc1
          k8sContext: context-0
        - metadata:
            name: dc2
          k8sContext: context-1
          perNodeConfigMapRef:
            name: dc2-per-node-configs
        - metadata:
            name: dc3
          k8sContext: context-2
          perNodeConfigMapRef:
            name: dc3-per-node-configs
```
When the pods in `dc2` and `dc3` are created, the per-node configurations are injected into the appropriate pods.
You MUST create all of the per-node ConfigMaps prior to creating the `MissionControlCluster` object.
Attaching or detaching a per-node ConfigMap, or modifying its contents, after the datacenter has been created triggers a restart of the affected pods.
Also note that some configuration options are ignored when the node is already bootstrapped; for example, `initial_token`.