Enabling SSL/TLS for OpsCenter and Agent communication - Tarball Installations

About this task

To enable SSL for tarball installations, edit the configuration file and run a script to generate the keys used by OpsCenter and the DataStax Agents.

Prerequisites

OpsCenter requires the .der format for SSL certificate files. If the certificate referenced by the ssl_certfile option in the [agents] section of opscenterd.conf is in .pem format, run the following command to convert it:

openssl x509 -outform der -in /install_location/ssl/opscenter.pem -out /install_location/ssl/opscenter.der

For more information about SSL cert file formats, see converting SSL certificates.
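The conversion can be sanity-checked with openssl before restarting anything. The sketch below uses a throwaway self-signed certificate under hypothetical /tmp paths (not the real OpsCenter files) and round-trips it through the same PEM-to-DER conversion:

```shell
# Generate a throwaway demo cert; the real files live under install_location/ssl.
openssl req -x509 -newkey rsa:2048 -nodes -days 1 \
  -subj "/CN=opscenter-demo" \
  -keyout /tmp/demo.key -out /tmp/demo.pem 2>/dev/null

# Same conversion as in the prerequisite above, applied to the demo certificate.
openssl x509 -outform der -in /tmp/demo.pem -out /tmp/demo.der

# If the DER file parses cleanly, the conversion succeeded.
openssl x509 -inform der -in /tmp/demo.der -noout -subject
```

If the last command prints the certificate subject, the .der file is well formed; an error here means the input was not valid PEM.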

Procedure

  1. If SSL files already exist in the install_location/ssl directory, the setup script does not recreate them. To generate a fresh set of keys and certificates, remove the old SSL files from that directory before running setup.py.

  2. Run the OpsCenter setup.py script:

    sudo install_location/bin/jython/setup.py

    The script generates the SSL keys and certificates used by the OpsCenter daemon and the DataStax Agents to communicate with one another in the following directory: install_location/ssl

  3. Locate the opscenterd.conf file. The location of this file depends on the type of installation:

    • Package installations: /etc/opscenter/opscenterd.conf

    • Tarball installations: install_location/conf/opscenterd.conf

  4. Open opscenterd.conf in an editor and add an [agents] section with the use_ssl option set to true.

    sudo vi install_location/conf/opscenterd.conf
    [agents]
    use_ssl = true
    ssl_keyfile =  install_location/ssl/opscenter.key
    ssl_certfile = install_location/ssl/opscenter.der
    agent_keyfile = install_location/ssl/agentKeyStore
    agent_keyfile_raw = install_location/ssl/agentKeyStore.key
    agent_certfile = install_location/ssl/agentKeyStore.der

    The agent_keyfile_raw file is used only in high availability (HA) configurations.

  5. Restart the OpsCenter daemon.

  6. If you need to connect to a cluster in which DataStax Agents have already been deployed, log in to each of the nodes and reconfigure the address.yaml file.

    If you do not want to manually edit all of the node configuration files, follow the procedure to install DataStax Agents automatically.

    1. On each node in the cluster, copy install_location/ssl/agentKeyStore from the OpsCenter machine to agent_install_location/ssl/agentKeyStore.

      scp /opt/opscenter/ssl/agentKeyStore user@node:agent_install_location/ssl/agentKeyStore

      Where user is the user ID on the node, and node is either the host name of the node or its IP address.

    2. Log in to each node in the cluster using ssh.

      ssh user@node

      Where user is the user ID on the node, and node is either the host name of the node or its IP address.

    3. Locate the address.yaml file. The location of this file depends on the type of installation.

      • Package installations: /var/lib/datastax-agent/conf/address.yaml

      • Tarball installations: install_location/conf/address.yaml

    4. Edit the address.yaml file, changing the value of use_ssl to 1.

      sudo vi /var/lib/datastax-agent/conf/address.yaml
      use_ssl: 1

      If your keystore and truststore files reside in a different location from the default, define the following parameters to indicate the location of the keystore and truststore, plus the password for each:

      opscenter_ssl_truststore: /etc/datastax-agent/key/dse-truststore.jks
      opscenter_ssl_truststore_password: truststore_password
      opscenter_ssl_keystore: /etc/datastax-agent/key/keystore.jks
      opscenter_ssl_keystore_password: keystore_password
    5. Restart the DataStax Agent.

      sudo install_location/bin/datastax-agent
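The per-node steps above can be sketched as a small loop. Everything in the commented loop is an assumption about your environment (host names, user ID, install paths); the in-place edit of address.yaml is demonstrated on a local temp copy so it can be tried safely:

```shell
# Hypothetical per-node automation; adjust user, host list, and paths to your cluster.
# for node in node1 node2 node3; do
#   scp /opt/opscenter/ssl/agentKeyStore "user@$node:agent_install_location/ssl/agentKeyStore"
#   ssh "user@$node" "sed -i 's/^use_ssl:.*/use_ssl: 1/' install_location/conf/address.yaml \
#     && sudo install_location/bin/datastax-agent"
# done

# The sed edit itself, demonstrated on a local temp copy of address.yaml.
# Note: sed only rewrites an existing use_ssl line; if address.yaml has no
# use_ssl line yet, append "use_ssl: 1" instead.
cat > /tmp/address.yaml <<'EOF'
stomp_interface: 203.0.113.10
use_ssl: 0
EOF
sed -i 's/^use_ssl:.*/use_ssl: 1/' /tmp/address.yaml
grep '^use_ssl' /tmp/address.yaml
```

Alternatively, as noted above, reinstalling the agents automatically avoids editing each node's configuration by hand.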
  7. After opscenterd and all DataStax Agents have been configured and restarted, verify proper connection through the Agent Status tab.


© 2024 DataStax | Privacy policy | Terms of use
