Add Cassandra, Solr, Spark, or Hadoop nodes to a local DataStax Enterprise cluster.
Specify the datacenter in which the nodes should reside.
Note: If you intend to encrypt sensitive configuration values, enable configuration encryption and copy the key to the agents before adding a node to a cluster. OpsCenter automatically encrypts sensitive fields such as passwords and writes the encrypted values to the configuration files. Do not enter manually encrypted values in the password fields if configuration encryption is active.
Procedure
- Click the cluster name in the left navigation pane.
- Click . The Add Nodes to Cluster dialog appears. Most of the fields are prepopulated and read-only based on the existing cluster.
- Enter your credentials.
- Click Add Datacenter. The Add Local Datacenter dialog appears.
- Entering your Amazon EC2 credentials on the prior dialog auto-populates the associated EC2 fields, such as Region, VPC ID, Availability Zone, Subnet, Size, and AMI, with the default values associated with your AWS account. Adjusting these values is not usually required or recommended unless your environment has specific requirements. Consult the following table for assistance with completing the fields:
Datacenter fields

| Field | Description |
| --- | --- |
| Node Type | Each datacenter can have only one type of node. As with Cassandra, the node type must be homogeneous within each datacenter. Available node types: Cassandra, Hadoop, Solr, Spark (DataStax Enterprise 4.5 and higher). |
| Node Properties | The node hostname or IP address and, if applicable, the token. |
| Rack | The rack location for the node. Required when using the GossipingPropertyFileSnitch (GPFS). |
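When the cluster uses the GossipingPropertyFileSnitch, each Cassandra node advertises its datacenter and rack from its `cassandra-rackdc.properties` file, so the Rack value entered here should match what that file reports. A minimal sketch, with illustrative placeholder names:

```properties
# cassandra-rackdc.properties (illustrative values -- dc and rack
# names are placeholders; use the names defined for your topology)
dc=DC1
rack=RAC1
```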
- Click Add. The node is queued to be added to the cluster.
- Repeat as necessary to queue additional nodes for the datacenter. All nodes added to a datacenter must be the same node type.
- When you are done adding nodes, click Add Datacenter.
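After the queued nodes finish joining, one way to confirm their placement is `nodetool status`, run on any node in the cluster; it lists nodes grouped by datacenter and shows each node's state, address, and rack (output details vary by cluster):

```shell
# Lists cluster nodes grouped by datacenter; a healthy joined node
# shows state UN (Up/Normal) along with its address and rack.
nodetool status
```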