Use per-node configurations
In general, all nodes in a datacenter are created with exactly the same configuration: you set up one node configuration and share that specification across all nodes in a datacenter (DC).
When you create a MissionControlCluster object, each datacenter specification contains a set of configuration properties. These configuration properties are passed down to all nodes in each datacenter without distinction.

Example MissionControlCluster specification:
```yaml
apiVersion: missioncontrol.datastax.com/v1beta2
kind: MissionControlCluster
metadata:
  name: demo
spec:
  k8ssandra:
    cassandra:
      serverVersion: 6.8.19
      serverType: dse
      datacenters:
        - metadata:
            name: dc1
          k8sContext: context-0
          size: 3
          config:
            cassandraYaml:
              allocate_tokens_for_local_replication_factor: 3
```
This specification defines one datacenter named dc1. Its size: 3 setting indicates that it runs three (3) nodes in three different pods. The cassandraYaml option allocate_tokens_for_local_replication_factor is set to 3. The three pods in dc1 share this value, and it is set in their respective cassandra.yaml files.
It is not possible to create distinct configurations for pods in the same datacenter using the MissionControlCluster specification alone.
However, you can customize the base configuration and override some of its contents for specific nodes, using the more advanced techniques described in this topic.
Per-node customizations are useful in cases such as the following:
- Assign a node's server ID in dse.yaml.
- Set a specific initial_token property in cassandra.yaml.
- Provide distinct Transport Layer Security (TLS) certificates for each node.
Using per-node configurations is considered an advanced feature and should only be performed by trained DataStax Enterprise (DSE) database administrators. Per-node configuration bypasses some of the checks that Mission Control usually performs to determine whether a configuration is viable. Applying an invalid or otherwise inappropriate per-node configuration may cause the target node to fail to start, permanently damage its data, or both.
Where possible, avoid advanced base-configuration overrides. For example, when creating single-token clusters, you do not need to specify the initial_token property using per-node configurations; let Mission Control compute the initial tokens automatically. See Creating Single-Token Clusters.
The following procedure provides detailed steps for the advanced feature of setting up per-node configurations.
Set up per-node configurations
- Determine which options to override. Creating a table simplifies this task. For example:

| DC | Node | Configuration file | Configuration overrides |
|---|---|---|---|
| dc1 | first rack, first node | cassandra.yaml | num_tokens: N1 |
| dc1 | first rack, first node | dse.yaml | server_id: X1 |
| … | … | … | … |
| dcN | last rack, last node | cassandra.yaml | num_tokens: NN |
| dcN | last rack, last node | dse.yaml | server_id: XN |
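Each row of such a table corresponds to one future ConfigMap entry. As an illustrative sketch only (the pod name and the override values below are placeholders standing in for N1 and X1, and the key format is described in the next step), the first two rows for dc1 could translate to:

```yaml
# Sketch only: "cluster1" and the override values are placeholders.
cluster1-dc1-rack1-sts-0_cassandra.yaml: |
  num_tokens: 16        # N1 from the planning table
cluster1-dc1-rack1-sts-0_dse.yaml: |
  server_id: node1      # X1 from the planning table
```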
- Provide the per-node configurations. First create a ConfigMap that contains the per-node configurations for some or all nodes in each datacenter.

The ConfigMaps should be created per datacenter. For example, if you have three (3) DCs and all of them require per-node configuration, then you should create three (3) distinct ConfigMaps.

The ConfigMaps must be local to the datacenter that they reference. In other words, create them in the same Kubernetes context and in the same namespace as the CassandraDatacenter custom resource (CR) describing the physical datacenter.

- Write the ConfigMap, following these instructions:

Each entry in the ConfigMap targets a specific node and one of its configuration files. Each key in the ConfigMap must use this form:

<POD_NAME>_<CONFIG_FILE>

where:

- <POD_NAME> is the name of the target pod running the Cassandra node. Its name can be determined using the template <CLUSTER>-<DC>-<RACK>-sts-<INDEX>, where:
  - CLUSTER is the Cassandra cluster name (this may be different from the MissionControlCluster object's name);
  - DC is the datacenter object's name (this name may be overridden, and it is not the CassandraDatacenter name and therefore may not be unique);
  - RACK is the rack name; use default if no racks are specified;
  - INDEX is the zero-based index of the node within the rack.
- <CONFIG_FILE> must be either cassandra.yaml or dse.yaml.

Both POD_NAME and CONFIG_FILE must resolve to an existing pod and an existing configuration file. If that is not the case, the entry is simply ignored. No errors or warnings are raised.
Only YAML per-node configuration files are supported: these include cassandra.yaml and dse.yaml. Specifying non-YAML files results in the pod failing to start.

If no entry in the ConfigMap exists for a particular pod name, a configuration file, or both, then that pod's configuration is not altered. No errors or warnings are raised.

Each entry must contain a valid YAML snippet. Each snippet is applied on top of the base configuration file, overlaying and superseding the base file content. Here is an example entry:
```yaml
cluster1-dc1-rack1-sts-0_cassandra.yaml: |
  num_tokens: 1
  initial_token: 3074457345618258600
```
This example entry only overrides values in the cassandra.yaml base configuration file for pod cluster1-dc1-rack1-sts-0. Specifically, the num_tokens value is overlaid with the value 1 and the initial_token value is overlaid with the value 3074457345618258600.

Note that the snippet must be a multi-line string, introduced by the pipe (|) indicator as shown in the example.
The YAML snippet must be valid YAML; if it is not, the pod fails to start.

If, after applying the per-node configuration overlay, the resulting configuration file becomes invalid for any reason, the pod fails to start. A typical cause is injecting a setting that is not supported by the server. Ensure that your per-node configuration is valid for the server version in use before applying it.
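As an illustration, a snippet like the following (the option name is deliberately fictitious) would overlay cleanly but produce a configuration file that the server rejects, so the pod would fail to start:

```yaml
# Valid YAML, but "not_a_real_option" is not a supported server
# setting, so the resulting cassandra.yaml is invalid and the pod
# fails to start.
cluster1-dc1-rack1-sts-0_cassandra.yaml: |
  not_a_real_option: true
```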
Here is an example of a valid per-node ConfigMap that customizes both the cassandra.yaml and dse.yaml configuration files for three (3) nodes in dc1:
```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: dc1-per-node-configs
data:
  cluster1-dc1-rack1-sts-0_cassandra.yaml: |
    initial_token: -9223372036854775808
  cluster1-dc1-rack1-sts-0_dse.yaml: |
    server_id: node1
  cluster1-dc1-rack2-sts-0_cassandra.yaml: |
    initial_token: -3074457345618258604
  cluster1-dc1-rack2-sts-0_dse.yaml: |
    server_id: node2
  cluster1-dc1-rack3-sts-0_cassandra.yaml: |
    initial_token: 3074457345618258600
  cluster1-dc1-rack3-sts-0_dse.yaml: |
    server_id: node3
```
- Attach the per-node ConfigMap to the MissionControlCluster object.

After you create each ConfigMap with the appropriate per-node configurations, attach them to the correct datacenter definition in the MissionControlCluster object.

For example, if your MissionControlCluster object defines three (3) datacenters, and you need to attach two ConfigMaps for datacenters dc2 and dc3, modify the MissionControlCluster definition as follows:

```yaml
apiVersion: missioncontrol.datastax.com/v1beta2
kind: MissionControlCluster
metadata:
  name: demo
spec:
  k8ssandra:
    cassandra:
      size: 3
      datacenters:
        - metadata:
            name: dc1
          k8sContext: context-0
        - metadata:
            name: dc2
          k8sContext: context-1
          perNodeConfigMapRef:
            name: dc2-per-node-configs
        - metadata:
            name: dc3
          k8sContext: context-2
          perNodeConfigMapRef:
            name: dc3-per-node-configs
```
Given this MissionControlCluster definition, when dc2 and dc3 are created, the per-node configurations are injected into the appropriate pods.
You MUST create all of the per-node ConfigMaps prior to creating the MissionControlCluster object. Attaching or detaching a per-node ConfigMap on an existing datacenter triggers a rolling restart of that datacenter. Some configuration options are ignored when the node is already bootstrapped; for example, initial_token has no effect on a node that has already joined the cluster.