Use per-node configurations
In general, all nodes in a datacenter (DC) are created with exactly the same configuration: you set up one node configuration and share that specification across all nodes in the datacenter. When you create a MissionControlCluster object, each datacenter specification contains a set of configuration properties, and these properties are passed down to all nodes in that datacenter without distinction.
Example MissionControlCluster specification:
apiVersion: missioncontrol.datastax.com/v1beta2
kind: MissionControlCluster
metadata:
  name: demo
spec:
  k8ssandra:
    cassandra:
      serverVersion: 6.8.19
      serverType: dse
      datacenters:
        - metadata:
            name: dc1
          k8sContext: context-0
          size: 3
          config:
            cassandraYaml:
              allocate_tokens_for_local_replication_factor: 3
This specification defines one datacenter named dc1. A size: of 3 runs three nodes in three different pods. The cassandraYaml: option allocate_tokens_for_local_replication_factor is set to 3. The three pods in dc1 share this value, which is written to the `cassandra.yaml` file in each pod.
You cannot create distinct configurations for pods in the same datacenter using the MissionControlCluster specification alone. However, you can customize the base configuration and override some of its contents for specific nodes, using the more advanced techniques described in this topic.
Per-node customizations are useful in cases such as the following:
- Assigning a node's server ID in the dse.yaml file.
- Setting a specific initial_token property in the cassandra.yaml file.
- Providing distinct Transport Layer Security (TLS) certificates for each node.
Use of per-node configurations is an advanced feature. DataStax recommends that only trained database administrators use this feature. Per-node configuration bypasses some of the checks that Mission Control usually performs to determine if the configuration is viable. If you apply an invalid or otherwise inappropriate per-node configuration, the target node might fail to start, the data might be damaged permanently, or both.
You can often avoid advanced base configuration overrides. For example, when creating single-token clusters, you do not need to specify the initial_token property using per-node configurations; let Mission Control compute the initial tokens automatically. For more information, see Create single-token clusters.
The following procedure provides detailed steps for the advanced feature of setting up per-node configurations.
Set up per-node configurations
. Determine which options to override. Creating a table simplifies this task. For example:
| DC | Node | Configuration file | Configuration overrides |
|---|---|---|---|
| dc1 | first rack, first node | cassandra.yaml | num_tokens: N1 |
| dc1 | first rack, first node | dse.yaml | server_id: X1 |
| … | … | … | … |
| dcN | last rack, last node | cassandra.yaml | num_tokens: NN |
| dcN | last rack, last node | dse.yaml | server_id: XN |
. Provide the per-node configurations. First, create a ConfigMap that contains the per-node configurations for some or all nodes in each datacenter.
+
.. Write the ConfigMap, following these instructions:
+
Each entry in the ConfigMap targets a specific node and one of its configuration files. Each key in the ConfigMap must use this form:
+
<POD_NAME>_<CONFIG_FILE>
+
where:
+
- <POD_NAME> is the name of the target pod running the Cassandra node. The pod name follows the template <CLUSTER>-<DC>-<RACK>-sts-<INDEX>, where:
  - CLUSTER is the Cassandra cluster name, which may differ from the MissionControlCluster object's name;
  - DC is the datacenter object's name (this name may be overridden; it is not the CassandraDatacenter name and therefore may not be unique);
  - RACK is the rack name; use default if no racks are specified;
  - INDEX is the zero-based index of the node within the rack.
- <CONFIG_FILE> must be either cassandra.yaml or dse.yaml.
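The naming rule above can be sketched as a small helper. This is an illustration only; the cluster, datacenter, and rack names are example values, and the helper itself is not part of Mission Control.

```python
# Sketch of the <CLUSTER>-<DC>-<RACK>-sts-<INDEX> pod-name template and the
# <POD_NAME>_<CONFIG_FILE> key format described above. The concrete names
# used here are illustrative examples, not values from your cluster.

def pod_name(cluster: str, dc: str, rack: str, index: int) -> str:
    # Use rack="default" when the datacenter defines no racks.
    return f"{cluster}-{dc}-{rack}-sts-{index}"

def config_map_key(cluster: str, dc: str, rack: str,
                   index: int, config_file: str) -> str:
    # Only these two configuration files are supported for per-node overrides.
    assert config_file in ("cassandra.yaml", "dse.yaml")
    return f"{pod_name(cluster, dc, rack, index)}_{config_file}"

# First node of rack1 in dc1 of the Cassandra cluster "cluster1":
key = config_map_key("cluster1", "dc1", "rack1", 0, "cassandra.yaml")
# key == "cluster1-dc1-rack1-sts-0_cassandra.yaml"
```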
+
Both POD_NAME and CONFIG_FILE must resolve to an existing pod and an existing configuration file. If that is not the case, the entry is ignored; no errors or warnings are raised. Only YAML per-node configuration files are supported.
+ Each entry must contain a valid YAML snippet. Each snippet is applied on top of the base configuration file, overlaying and superseding the base file content. Here is an example entry:
+
cluster1-dc1-rack1-sts-0_cassandra.yaml: |
  num_tokens: 1
  initial_token: 3074457345618258600
This example entry overrides values only in the cassandra.yaml base configuration file for pod cluster1-dc1-rack1-sts-0. Specifically, the num_tokens value is overlaid with 1 and the initial_token value is overlaid with 3074457345618258600.
+
Note that the snippet must be a multi-line string introduced by the pipe ("|") indicator, as shown in the example. The snippet must be valid YAML; if it is not, the pod fails to start. Likewise, if applying the per-node configuration overlay produces an invalid configuration file, for example because you injected a setting that the server does not support, the pod fails to start. Ensure that your per-node configuration is valid for the server version in use before applying it.
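The overlay semantics described above can be sketched as a simple recursive merge over parsed YAML. This mimics the documented behavior for illustration; it is not Mission Control's actual implementation.

```python
# Illustrative sketch of how a per-node snippet overlays the base
# configuration file: keys in the snippet replace or extend keys in the
# base, and untouched base keys are preserved.

def overlay(base: dict, snippet: dict) -> dict:
    merged = dict(base)
    for key, value in snippet.items():
        if isinstance(value, dict) and isinstance(merged.get(key), dict):
            merged[key] = overlay(merged[key], value)  # merge nested sections
        else:
            merged[key] = value  # snippet value supersedes the base value
    return merged

# Example base values (illustrative, already parsed from cassandra.yaml):
base_cassandra_yaml = {"num_tokens": 16, "cluster_name": "cluster1"}
# Parsed form of the per-node snippet shown above:
snippet = {"num_tokens": 1, "initial_token": 3074457345618258600}

result = overlay(base_cassandra_yaml, snippet)
# num_tokens is overlaid with 1, initial_token is added,
# and cluster_name is kept from the base file.
```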
+
Here is an example of a valid per-node ConfigMap that customizes both the cassandra.yaml and dse.yaml configuration files for three nodes in dc1:
+
apiVersion: v1
kind: ConfigMap
metadata:
  name: dc1-per-node-configs
data:
  cluster1-dc1-rack1-sts-0_cassandra.yaml: |
    initial_token: -9223372036854775808
  cluster1-dc1-rack1-sts-0_dse.yaml: |
    server_id: node1
  cluster1-dc1-rack2-sts-0_cassandra.yaml: |
    initial_token: -3074457345618258604
  cluster1-dc1-rack2-sts-0_dse.yaml: |
    server_id: node2
  cluster1-dc1-rack3-sts-0_cassandra.yaml: |
    initial_token: 3074457345618258600
  cluster1-dc1-rack3-sts-0_dse.yaml: |
    server_id: node3
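Because a malformed key is silently ignored rather than reported, a quick sanity check on the key format can catch typos before you apply such a ConfigMap. The following validator is a hypothetical helper, not part of Mission Control; the key pattern it checks is the one described earlier in this topic.

```python
import re

# Hypothetical pre-flight check: every key in the ConfigMap "data" section
# should look like <CLUSTER>-<DC>-<RACK>-sts-<INDEX>_<cassandra.yaml|dse.yaml>.
KEY_PATTERN = re.compile(r"^[a-z0-9-]+-sts-\d+_(cassandra|dse)\.yaml$")

def invalid_keys(data: dict) -> list:
    # Return every key that would be silently ignored due to a bad format.
    return [key for key in data if not KEY_PATTERN.match(key)]

data = {
    "cluster1-dc1-rack1-sts-0_cassandra.yaml": "initial_token: -9223372036854775808\n",
    "cluster1-dc1-rack1-sts-0_dse.yaml": "server_id: node1\n",
    "cluster1-dc1-rack1-sts-0_cassandra.yml": "num_tokens: 1\n",  # bad suffix
}

bad = invalid_keys(data)
# bad == ["cluster1-dc1-rack1-sts-0_cassandra.yml"]
```

Note that this only validates the key syntax; it cannot tell you whether the pod name actually resolves to an existing pod in your cluster.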
+
. Attach the per-node ConfigMap to the MissionControlCluster object.
+
After you create each ConfigMap with the appropriate per-node configurations, attach it to the correct datacenter definition in the MissionControlCluster object.
+
For example, if your MissionControlCluster object defines three datacenters and you need to attach two ConfigMaps, for datacenters dc2 and dc3, modify the MissionControlCluster definition as follows:
+
apiVersion: missioncontrol.datastax.com/v1beta2
kind: MissionControlCluster
metadata:
  name: demo
spec:
  k8ssandra:
    cassandra:
      size: 3
      datacenters:
        - metadata:
            name: dc1
          k8sContext: context-0
        - metadata:
            name: dc2
          k8sContext: context-1
          perNodeConfigMapRef:
            name: dc2-per-node-configs
        - metadata:
            name: dc3
          k8sContext: context-2
          perNodeConfigMapRef:
            name: dc3-per-node-configs
+
Given this MissionControlCluster definition, when dc2 and dc3 are created, the per-node configurations are injected into the appropriate pods.
You must create all of the per-node ConfigMaps prior to creating the MissionControlCluster object. Also note that some configuration options are ignored when the node is already bootstrapped.
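Because the ConfigMaps must exist before the cluster is created, a pre-flight check like the following sketch can help. The spec structure mirrors the example above, but the helper itself is hypothetical and not part of Mission Control.

```python
# Hypothetical pre-flight check: collect every perNodeConfigMapRef name in a
# MissionControlCluster spec (represented here as a parsed dict) and report
# any ConfigMap that has not been created yet.

def missing_config_maps(spec: dict, existing: set) -> list:
    refs = []
    for dc in spec["k8ssandra"]["cassandra"]["datacenters"]:
        ref = dc.get("perNodeConfigMapRef")
        if ref:
            refs.append(ref["name"])
    return [name for name in refs if name not in existing]

spec = {
    "k8ssandra": {"cassandra": {"datacenters": [
        {"metadata": {"name": "dc1"}},
        {"metadata": {"name": "dc2"},
         "perNodeConfigMapRef": {"name": "dc2-per-node-configs"}},
        {"metadata": {"name": "dc3"},
         "perNodeConfigMapRef": {"name": "dc3-per-node-configs"}},
    ]}}
}

# Only dc2's ConfigMap has been created so far:
missing = missing_config_maps(spec, existing={"dc2-per-node-configs"})
# missing == ["dc3-per-node-configs"]
```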