Upgrading DataStax Enterprise 5.0 to 6.7 or 6.8

The upgrade process for DataStax Enterprise (DSE) provides minimal downtime (ideally zero). During this process, upgrade and restart one node at a time while other nodes continue to operate online. With a few exceptions, the cluster continues to work as though it were on the earlier version of DSE until all of the nodes in the cluster are upgraded.

Read and understand these instructions before upgrading. Carefully reviewing the planning and upgrade instructions can prevent errors and data loss. In addition, review the DSE 5.1, DSE 6.7, and DSE 6.8 release notes for all changes before upgrading.

DSE 6.8 introduces a metadata_directory property that holds information about the local node and all peers; it stores the same information that system.local and system.peers previously held.

Upgrade Order

Upgrade nodes in this order:

  1. In multiple datacenter clusters, upgrade every node in one datacenter before upgrading another datacenter.

  2. Upgrade the seed nodes within a datacenter first.

  3. DSE Analytics datacenters

    1. For DSE Analytics nodes using DSE Hadoop, upgrade the Job Tracker node first. Then upgrade Hadoop nodes, followed by Spark nodes.

  4. Transactional/DSE Graph datacenters

  5. DSE Search nodes or datacenters

Direct upgrades from DSE 5.0 to 6.8 are not supported. To upgrade from DSE 5.0 to 6.7 or 6.8, first upgrade to DSE 5.1 and then follow the instructions in this section. See Upgrading DataStax Enterprise 5.0 to 5.1.

Due to a serious bug affecting DSE 6.8.7 and DSE 6.8.8, DataStax recommends against upgrading to those versions at this time. If you have already upgraded to one of these versions, then before adding new nodes, running repair, or restoring from backups, either set zerocopy_streaming_enabled to false in cassandra.yaml and perform a rolling restart, or run upgradesstables on all nodes in your cluster (or do both). This bug is fixed in DSE 6.8.9.
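A one-off workaround like the zerocopy_streaming_enabled change can be scripted. The sketch below runs against a throwaway sample file standing in for /etc/dse/cassandra/cassandra.yaml; it is illustrative only, and the rolling restart still has to be performed node by node afterward.

```shell
# Disable zero-copy streaming in a cassandra.yaml-style file.
# CONF is a temporary stand-in for the real /etc/dse/cassandra/cassandra.yaml.
CONF=$(mktemp)
cat > "$CONF" <<'EOF'
cluster_name: 'Test Cluster'
zerocopy_streaming_enabled: true
EOF

# Flip the flag in place, keeping a .bak copy of the original.
sed -i.bak 's/^zerocopy_streaming_enabled:.*/zerocopy_streaming_enabled: false/' "$CONF"
grep zerocopy_streaming_enabled "$CONF"
```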

Back Up your Existing Installation

DataStax recommends backing up your data prior to any version upgrade.

A backup provides the ability to revert and restore all the data used in the previous version if necessary. For manual backup instructions, see Backing up a tarball installation or Backing up a package installation.

OpsCenter provides a Backup Service that manages enterprise-wide backup and restore operations for DataStax Enterprise clusters and is highly recommended over any manual backup procedure. Ensure you use a compatible version of OpsCenter for your DSE version.
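If you script a manual backup, the general shape is a timestamped tarball of the data and configuration directories. This sketch uses mock directories in place of the real /var/lib/cassandra/data and /etc/dse paths and only illustrates the archiving step; for consistent SSTables, take a nodetool snapshot first, and prefer the OpsCenter Backup Service where available.

```shell
# Mock stand-ins for /var/lib/cassandra/data and /etc/dse.
DATA=$(mktemp -d); CONF=$(mktemp -d); OUT=$(mktemp -d)
mkdir -p "$DATA/ks1/tbl1"
touch "$DATA/ks1/tbl1/mc-1-big-Data.db" "$CONF/cassandra.yaml" "$CONF/dse.yaml"

# One timestamped archive containing both data and configuration.
ARCHIVE="$OUT/dse-backup-$(date +%Y%m%d).tar.gz"
tar -czf "$ARCHIVE" -C "$DATA" . -C "$CONF" .
tar -tzf "$ARCHIVE" | sort
```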

Upgrade SSTables

Be certain to upgrade SSTables on your nodes both before and after upgrading. Failure to upgrade SSTables will result in severe performance penalties and possible data loss.

Version-Specific Notes

DSE Search changes: As of DSE 6.7.7 and later, the Solr timeAllowed parameter is enabled by default to prevent long-running shard queries, such as complex facet and Boolean queries, from consuming system resources after they have timed out on the DSE Search coordinator. For details, see Limiting queries by time.

DSE Search changes: As of DSE 6.8.0, unbounded facet searches are no longer allowed using facet.limit=-1. The maximum facet limit value is 20,000 as set by solr.max.facet.limit.size. While you can override the facet limit size using -Dsolr.max.facet.limit.size in jvm8-server.options or jvm11-server.options (depending on your JVM version), doing so is not recommended.

Upgrade Restrictions and Limitations

Restrictions and limitations apply while a cluster is in a partially upgraded state. The cluster continues to work as though it were on the earlier version of DataStax Enterprise until all of the nodes in the cluster are upgraded.

General Restrictions

  • Do not enable new features.

  • Ensure OpsCenter compatibility.

    Compatibility

    OpsCenter version    DSE version
    6.8                  6.8, 6.7, 6.0, 5.1
    6.7                  6.7, 6.0, 5.1
    6.5                  6.0, 5.1, 5.0 (EOL)
    6.1                  5.1, 5.0 (EOL)
    6.0                  5.0 (EOL), 4.8 (EOSL), 4.7 (EOSL)

  • Do not run nodetool repair.

  • Stop the OpsCenter Repair Service if enabled: 6.5 | 6.7 | 6.8.

  • During the upgrade, do not bootstrap new nodes or decommission existing nodes.

  • Do not issue TRUNCATE or DDL related queries during the upgrade process.

  • Do not alter schemas for any workloads.

  • Complete the cluster-wide upgrade before the expiration of gc_grace_seconds (default 10 days) to ensure any repairs complete successfully.

  • If the DSE Performance Service was disabled before the upgrade, do not enable it during the upgrade. See DSE Performance Service: 6.8 | 6.7 | 5.1 | 5.0 | OpsCenter 6.8 | OpsCenter 6.7 | OpsCenter 6.5.

Nodes on different versions might show a schema disagreement during an upgrade.
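The gc_grace_seconds deadline above is worth making concrete. With the default value of 864000 seconds, the entire cluster-wide upgrade must finish within 10 days:

```shell
# Default gc_grace_seconds is 864000 seconds; convert it to days.
GC_GRACE_SECONDS=864000
DAYS=$((GC_GRACE_SECONDS / 86400))
echo "cluster-wide upgrade must finish within $DAYS days"
```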

Restrictions for Nodes Using Security

  • Do not change security credentials or permissions until the upgrade is complete on all nodes.

  • If you are not already using Kerberos, do not set up Kerberos authentication before upgrading. First upgrade the cluster, and then set up Kerberos.

Restrictions for DSE Analytics Nodes

Spark versions change between major DSE versions. DSE release notes [5.x | 6.8.x] indicate which version of Spark is used.

When upgrading to a major version of DSE, all nodes in a DSE datacenter that run Spark must be on the same version of Spark and the Spark jobs must be compiled for that version. Each datacenter acting as a Spark cluster must be on the same upgraded DSE version before reinitiating Spark jobs.

If Spark jobs run against DSE Graph keyspaces, upgrade all of the nodes in the cluster before rerunning those jobs; otherwise, the jobs fail.

Restrictions for DSE Advanced Replication Nodes

Upgrades are supported only for DSE Advanced Replication V2.

Restrictions for DSE Search Nodes

  • Do not update DSE Search configurations or schemas.

  • Do not reindex DSE Search nodes during upgrade.

  • DSE 5.1 and 6.x use a different Lucene codec than DSE 5.0 for new search cores. Segments written with this new codec cannot be read by earlier versions of DSE.

Driver Version Impacts

Be sure to check driver compatibility. Depending on the driver version, you might need to recompile your client application code.

DataStax drivers come in two types:

  • DataStax drivers for DataStax Enterprise (DSE) — for use by DSE 4.8 and later

  • DataStax drivers for Apache Cassandra® — for use by Apache Cassandra and DSE 4.7 and earlier

    While the DataStax drivers for Apache Cassandra can connect to DSE 5.0 and later clusters, DataStax strongly recommends upgrading to the DSE drivers. The DSE drivers provide functionality for all DataStax Enterprise (DSE) features.

During upgrades, you might experience driver-specific impact when clusters have mixed versions of drivers. If your cluster has mixed versions, the protocol version is negotiated with the first host to which the driver connects, although certain drivers, such as the Java driver 4.x and 2.x, automatically select a protocol version that works across nodes. To avoid driver version incompatibility during upgrades, use one of these workarounds:

  • Protocol version: Set the protocol version explicitly in your application at startup. Switch the driver to the new protocol version only after the upgrade is complete on all nodes in the cluster.

  • Initial contact points: Ensure that the list of initial contact points contains only hosts with the oldest DSE version or protocol version. For example, ensure the initial contact points use only protocol version 2.

For details on protocol version negotiation, see the protocol versions with mixed clusters topic for the Java driver version you're using.

Starting January 2020, you can use the same DataStax driver for both Apache Cassandra® (OSS) and DataStax Enterprise. DataStax unified the drivers to avoid user confusion and to bring some of the features of the DSE drivers to the OSS drivers. For more information, see the Better Drivers for Cassandra blog.

DataStax Enterprise and Apache Cassandra Configuration Files

DataStax Enterprise (DSE) configuration files

Configuration file    Installer-Services and package installations          Installer-No Services and tarball installations
dse                   /etc/default/dse (systemd) or /etc/init.d/ (SystemV)  N/A; node type is set via command-line flags
dse-env.sh            /etc/dse/dse-env.sh                                   <installation_location>/bin/dse-env.sh
byoh-env.sh           /etc/dse/byoh-env.sh                                  <installation_location>/bin/byoh-env.sh
dse.yaml              /etc/dse/dse.yaml                                     <installation_location>/resources/dse/conf/dse.yaml
logback.xml           /etc/dse/cassandra/logback.xml                        <installation_location>/resources/logback.xml
spark-env.sh          /etc/dse/spark/spark-env.sh                           <installation_location>/resources/spark/conf/spark-env.sh
spark-defaults.conf   /etc/dse/spark/spark-defaults.conf                    <installation_location>/resources/spark/conf/spark-defaults.conf

Cassandra configuration files

Configuration file             Installer-Services and package installations      Installer-No Services and tarball installations
cassandra.yaml                 /etc/dse/cassandra/cassandra.yaml                 <installation_location>/conf/cassandra.yaml
cassandra.in.sh                /usr/share/cassandra/cassandra.in.sh              <installation_location>/bin/cassandra.in.sh
cassandra-env.sh               /etc/dse/cassandra/cassandra-env.sh               <installation_location>/conf/cassandra-env.sh
cassandra-rackdc.properties    /etc/dse/cassandra/cassandra-rackdc.properties    <installation_location>/conf/cassandra-rackdc.properties
cassandra-topology.properties  /etc/dse/cassandra/cassandra-topology.properties  <installation_location>/conf/cassandra-topology.properties
jmxremote.password             /etc/cassandra/jmxremote.password                 <installation_location>/conf/jmxremote.password

Tomcat server configuration file

Configuration file    Installer-Services and package installations    Installer-No Services and tarball installations
server.xml            /etc/dse/resources/tomcat/conf/server.xml       <installation_location>/resources/tomcat/conf/server.xml

The location of the jvm.options file depends on the type of installation:

Package installations

/etc/dse/cassandra/jvm.options

Tarball installations

<installation_location>/resources/cassandra/conf/jvm.options

Advanced Preparation for Upgrading DSE Search and SearchAnalytics Nodes

Before continuing, complete all the advanced preparation steps on DSE Search and SearchAnalytics nodes while DSE 5.0 is still running.

Changes to DSE Search and DSE SearchAnalytics between version 5.0 and both versions 5.1 and 6.x are extensive. Plan sufficient time to implement and test the required changes before the upgrade. Contact the DataStax Support team with any questions or for help with upgrading.

Schema changes may require a full reindex, and configuration changes require reloading the core.

Make the following changes as required:

  1. Change HTTP API queries to CQL queries:

    • Delete-by-id is removed; use CQL DELETE by primary key instead.

    • Delete-by-query no longer supports wildcards; use CQL TRUNCATE instead.

  2. If any Solr core was created on DSE 4.6 or earlier and never reindexed after being upgraded to DSE 4.7 or later, you must reindex on DSE 5.0 before upgrading to DSE 6.x:

    dsetool reload_core keyspace_name.table_name schema=filepath solrconfig=filepath reindex=true deleteAll=true distributed=false

    You must reindex all nodes before beginning the upgrade.

  3. If you are using Apache Solr SolrJ, the minimum required Solr version is 6.0.0. To find the current Solr version:

    installation_directory/bin/solr status

    For information on upgrading Apache Solr, see Upgrading Solr.

  4. For SpatialRecursivePrefixTreeFieldType (RPT) in search schemas, you must adjust your queries for these changes:

    • IsDisjointTo is no longer supported in queries on SpatialRecursivePrefixTreeFieldType.

      Replace IsDisjointTo with a NOT Intersects query.

      For example:

      foo:[0,0 TO 1000,1000] AND -"Intersects(POLYGON((338 211, 338 305, 404 305, 404 211, 338 211)))"
    • The ENVELOPE syntax is now required for WKT-style queries against SpatialRecursivePrefixTreeFieldType fields. You must specify ENVELOPE(10, 15, 15, 10), where queries on earlier releases could specify 10 10 15 15. See Spatial Search for details on using distanceUnits in spatial queries.

  5. The Circle syntax is no longer a part of Well-Known-Text (WKT); therefore, Spatial Search queries such as:

    Intersects(Circle(10 10 d=2))

    must be rewritten as:

    Intersects(BUFFER(POINT(10 10), 2))
  6. Edit the solrconfig.xml file and make these changes, as needed:

    • Remove these unsupported Solr requestHandlers:

      • XmlUpdateRequestHandler

      • BinaryUpdateRequestHandler

      • CSVRequestHandler

      • JsonUpdateRequestHandler

      • DataImportHandler

        For example:

        <requestHandler name="/dataimport" class="solr.DataImportHandler"/>

        or

        <requestHandler name="/update" class="solr.XmlUpdateRequestHandler"/>
    • Change the directoryFactory from:

      <directoryFactory name="DirectoryFactory" class="${solr.directoryFactory:solr.StandardDirectoryFactory}"/>

      to

      <directoryFactory name="DirectoryFactory" class="solr.StandardDirectoryFactory"/>
    • <unlockOnStartup> is unsupported.

    • Change the updateLog from:

      <updateLog class="solr.FSUpdateLog" force="false">

      to

      <updateLog force="false">

      For more information, see solrconfig.xml.

  7. Upgrading DSE search nodes requires replacing unsupported Solr types with supported types.

    Special handling is also required for BCDStrField, addressed in this step.

    Sorting limitations apply to mixed version clusters. Some of the removed Solr types, due to the way they marshal sort values during distributed queries (combined with the way the suggested new types unmarshal sort values), cannot be sorted on during rolling upgrades when some nodes use an unsupported type and other nodes use the suggested new type. The following type transitions are problematic:

    Removed Solr field type    Supported Solr field type
    ByteField                  TrieIntField
    DateField                  TrieDateField
    BCDIntField                TrieIntField
    BCDLongField               TrieLongField

    Two options are available:

    1. Avoid sorting on removed Solr field types until the upgrade is complete for all nodes in the datacenter being queried.

      When using two search datacenters, isolate queries to a single datacenter and then change the schema and reindex the other datacenter. Then isolate queries to the newly reindexed datacenter while you change the schema and upgrade the first datacenter.

    2. If you are using BCDIntField or BCDLongField, update the schema to replace BCDIntField and BCDLongField with types that are sort-compatible with the supported Solr types TrieIntField and TrieLongField:

      Removed Solr field type    Interim sort-compatible supported Solr field type
      BCDIntField                SortableIntField
      BCDLongField               SortableLongField

      Change the schema in a distributed fashion, and do not reindex. After the schema is updated on all nodes, then continue with tuning the schema.

  8. Update the schema and configuration for the Solr field types that are removed from Solr 5.5 and later.

    1. Update the schema to replace unsupported Solr field types with supported Solr field types:

      Removed Solr field type                       Supported Solr field type
      ByteField                                     TrieIntField
      DateField                                     TrieDateField
      DoubleField                                   TrieDoubleField
      FloatField                                    TrieFloatField
      IntField                                      TrieIntField
      LongField                                     TrieLongField
      ShortField                                    TrieIntField
      SortableDoubleField                           TrieDoubleField
      SortableFloatField                            TrieFloatField
      SortableIntField                              TrieIntField
      SortableLongField                             TrieLongField
      BCDIntField                                   TrieIntField
      BCDLongField                                  TrieLongField
      BCDStrField (see upgrade data type, if used)  TrieIntField

    2. If you are using type mapping version 0, or you do not specify a type mapper, verify or update the solrconfig.xml to use dseTypeMappingVersion 1:

      <dseTypeMappingVersion>1</dseTypeMappingVersion>

      If the Solr core is backed by a CQL table and the type mapping is unspecified, use type mapping version 2.

      For more information, see solrconfig.xml.

    3. Reload the core:

      dsetool reload_core keyspace_name.table_name schema=filepath solrconfig=filepath

      If you were using the unsupported data types, do a full reindex node-by-node:

      dsetool reload_core keyspace_name.table_name schema=filepath solrconfig=filepath reindex=true deleteAll=true distributed=false

      In DSE 5.1 and later, auto-generated schemas use data type mapper 2.

  9. If using BCDStrField:

    In DSE 5.0 and earlier, DSE mapped Cassandra text columns to BCDStrField. The deprecated BCDStrField type is now removed.

    The recommended strategy is to upgrade the data type to TrieIntField. However, DSE cannot map text directly to TrieIntField. If you are using BCDStrField, you must complete one of these options before the upgrade:

    1. If BCDStrField is no longer used, remove the BCDStrField field from the Solr schema. Reindexing is not required.

    2. If you want to index the field as a TrieIntField and a full reindex is acceptable, change the underlying database column to use the type int.

    3. If you want to keep the database column as text and still want to do simple matching queries on the indexed field, switch from BCDStrField to StrField in the schema. Reindexing should not be required, but the field is no longer appropriate for numeric range queries or sorting because StrField uses a lexicographic order rather than a numeric one.

    4. Not recommended: If you want to keep the database column as text, still want to perform numeric range queries and sorts on the former BCDStrField, and would rather change your application than perform a full reindex:

      1. Change the field to StrField in the Solr schema with indexed=false.

      2. Add a new copy field with the type TrieIntField that has its values supplied by the original BCDStrField. This solution still requires a reindex, because the copy-field target must be populated. This non-recommended option is supplied only to support a sub-optimal data model; for example, a text column whose values would fit only into an int.

        After you make these schema changes, do a rolling, node-by-node reload_core:

        dsetool reload_core keyspace_name.table_name schema=filepath solrconfig=filepath reindex=true deleteAll=true distributed=false

        If you have two datacenters and upgrade them one at a time, reload the core with distributed=true and deleteAll=true.

  10. Tune the schema before you upgrade. After the upgrade, all field definitions in the schema are validated and must be DSE Search compatible, even fields that are not indexed, that have docValues applied, or that are used only as a copy-field source. By default, automatic resource generation includes all columns. To improve performance, prevent unneeded fields from being loaded from the database: include only the required fields in the schema by removing or commenting out unused fields.

Advanced Preparation for Upgrading DSE Graph Nodes with Search Indexes

These steps apply to graph nodes that have search indexes. Before continuing, complete these advanced preparation steps while DSE 5.0 is still running.

Upgrading DSE Graph nodes with search indexes requires these edits to the solrconfig file. Configuration changes require reloading the core. Plan sufficient time to implement and test changes that are required before the upgrade.

Edit solrconfig.xml and make these changes, as needed:

  1. Remove these unsupported Solr requestHandlers:

    • XmlUpdateRequestHandler

    • BinaryUpdateRequestHandler

    • CSVRequestHandler

    • JsonUpdateRequestHandler

    • DataImportHandler

      For example:

      <requestHandler name="/dataimport" class="solr.DataImportHandler"/>

      or

      <requestHandler name="/update" class="solr.XmlUpdateRequestHandler"/>
  2. Change the directoryFactory from:

    <directoryFactory name="DirectoryFactory" class="${solr.directoryFactory:solr.StandardDirectoryFactory}"/>

    to

    <directoryFactory name="DirectoryFactory" class="solr.StandardDirectoryFactory"/>
  3. Remove <unlockOnStartup>.

  4. Reload the core:

    dsetool reload_core keyspace_name.table_name reindex=false

Advanced Preparation for Upgrading DSE Analytics Nodes

Before upgrading DSE Analytics nodes:

  1. DSE versions earlier than 5.1 use Spark 1.6, and applications written for that version may not be compatible with Spark 2.2. You must recompile all DSE 5.0 Scala Spark applications against Scala 2.11 and use only Scala 2.11 third-party libraries.

    Changing the dse-spark-dependencies in your build files is not sufficient to change the compilation target. See the example projects for how to set up your build files.

  2. Cassandra File System (CFS) is removed. Remove the cfs and cfs_archive keyspaces before upgrading. See the From CFS to DSEFS blog post and the Copying data from CFS to DSEFS documentation for more information.

    DROP KEYSPACE cfs;
    DROP KEYSPACE cfs_archive;
  3. If DSEFS is enabled, copy the CFS hivemetastore directory to DSEFS:

    DSE_HOME/bin/dse hadoop fs -cp cfs://node_ip_address/user/spark/warehouse/ dsefs://node_ip_address/user/spark/warehouse/
  4. Spark applications should use dse:// URLs instead of spark://spark_master_IP:Spark_RPC_port_number URLs, as described in Specifying Spark URLs (5.1) | Specifying Spark URLs (6.7) | Specifying Spark URLs (6.8).

  5. Modify calls to setMaster and setAppName.

    For example, the following code works in DSE 5.0 but does not work in DSE 5.1 or later.

    val conf = new SparkConf(true)
    .setMaster("spark://192.168.123.10:7077")
    .setAppName("cassandra-demo")
    .set("cassandra.connection.host" , "192.168.123.10") // initial contact
    .set("cassandra.username", "cassandra")
    .set("cassandra.password", "cassandra")
    val sc = new SparkContext(conf)

    To connect, modify the calls to setAppName and setMaster:

    val conf = new SparkConf(true)
    .setAppName("cassandra-demo")
    .setMaster("dse://192.168.123.10:7077")
    .set("cassandra.connection.host" , "192.168.123.10") // initial contact
    .set("cassandra.username", "cassandra")
    .set("cassandra.password", "cassandra")
    val sc = new SparkContext(conf)

Preparing to Upgrade

Follow these steps to prepare each node for the upgrade:

Direct upgrades from DSE 5.0 to 6.8 are not supported. To upgrade from DSE 5.0 to 6.8, first upgrade to DSE 5.1 and then follow the instructions in this section. See Upgrading DataStax Enterprise 5.0 to 5.1.

The DataStax Installer is not supported for DSE 6.0 and later. To upgrade from DSE 5.x that was installed with the DataStax Installer, you must first change from a standalone installer installation to a tarball or package installation for the same DSE version. See Upgrading to DSE 6.0 or DSE 6.7 from DataStax Installer installations.

These steps are performed in your current version and use DSE 5.0 documentation.

  1. If you are upgrading from DSE 5.0 to DSE 6.8, upgrade to DSE 5.1 first and then follow the instructions in this section. See Upgrading DataStax Enterprise 5.0 to 5.1.

  2. Upgrade to the latest patch release on your current version. Fixes included in the latest patch release can simplify the upgrade process.

    Get the current DSE version:

    bin/dse -v
    current_dse_version
  3. Reminder: direct upgrades from DSE 5.0 are not supported; you must be on DSE 5.1 before continuing with these steps.

  4. Familiarize yourself with the changes and features in the new release.

  5. Before upgrading, be sure that each node has adequate free disk space.

    Determine current DSE data disk space usage:

    sudo du -sh /var/lib/cassandra/data/
    3.9G	/var/lib/cassandra/data/

    Determine available disk space:

    sudo df -hT /
    Filesystem     Type  Size  Used Avail Use% Mounted on
    /dev/sda1      ext4   59G   16G   41G  28% /

    The required space depends on the compaction strategy. See Disk space.
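The two disk checks above can be combined into a rough free-space guard. This sketch runs against a throwaway directory; on a real node, point DATA_DIR at /var/lib/cassandra/data, and remember that the headroom actually required depends on your compaction strategy (the 1:1 used-versus-free comparison here is only a conservative starting point).

```shell
# Throwaway directory standing in for /var/lib/cassandra/data.
DATA_DIR=$(mktemp -d)
dd if=/dev/zero of="$DATA_DIR/sample.db" bs=1024 count=10 2>/dev/null

# Compare space used by the data directory with space available on its filesystem.
used_kb=$(du -sk "$DATA_DIR" | awk '{print $1}')
avail_kb=$(df -Pk "$DATA_DIR" | awk 'NR==2 {print $4}')
if [ "$avail_kb" -ge "$used_kb" ]; then
  echo "OK: ${avail_kb} KB free >= ${used_kb} KB used"
else
  echo "LOW SPACE: ${avail_kb} KB free < ${used_kb} KB used"
fi
```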

  6. Replace ITrigger and other custom interface implementations.

    All custom implementations, including implementations of the following interfaces, must be replaced with supported implementations when upgrading to DSE 6.x:

    1. The org.apache.cassandra.triggers.ITrigger interface was modified from augment to augmentNonBlocking for non-blocking internal architecture. Updated trigger implementations must be provided on upgraded nodes. If unsure, drop all existing triggers before upgrading. To check for existing triggers:

      SELECT * FROM system_schema.triggers;
      DROP TRIGGER trigger_name ON keyspace_name.table_name;
    2. The org.apache.cassandra.index.Index interface was modified to comply with the core storage engine changes. Updated implementations are required. If unsure, drop all existing custom secondary indexes before upgrading, except DSE Search indexes, which do not need to be replaced. To check for existing indexes:

      SELECT * FROM system_schema.indexes;
      DROP INDEX index_name;
    3. The org.apache.cassandra.cql3.QueryHandler, org.apache.cassandra.db.commitlog.CommitLogReadHandler, and other extension points have been changed. See QueryHandlers.

      For help contact the DataStax Support team.

  7. Support for Thrift-compatible tables (COMPACT STORAGE) is dropped. Before upgrading, migrate all non-system tables that have COMPACT STORAGE to CQL table format:

    cqlsh -e 'DESCRIBE FULL SCHEMA;' > schema_file
    cat schema_file | while read -d $';\n' line ; do
      if echo "$line"|grep 'COMPACT STORAGE' 2>&1 > /dev/null ; then
        TBL="`echo $line|sed -e 's|^CREATE TABLE \([^ ]*\) .*$|\1|'`"
        if echo "$TBL"|egrep -v '^system' 2>&1 > /dev/null; then
          echo "ALTER TABLE $TBL DROP COMPACT STORAGE;" >> schema-drop-list
        fi
      fi
    done
    cqlsh -f schema-drop-list

    The script above dumps the complete DSE schema to schema_file, uses grep to find lines containing COMPACT STORAGE, and then writes only those table names to schema-drop-list along with the required ALTER TABLE commands. The schema-drop-list file is then read by cqlsh which runs the ALTER TABLE commands contained therein.

    DSE does not start if tables using COMPACT STORAGE are present.

  8. If audit logging is configured to use CassandraAuditWriter (5.1) | CassandraAuditWriter (6.8), run these CQL commands as superuser on DSE 5.0 nodes:

    ALTER TABLE dse_audit.audit_log ADD authenticated text;
    ALTER TABLE dse_audit.audit_log ADD consistency text;

    Ensure that the entire cluster has schema agreement:

    nodetool describecluster
    Cluster Information:
    	Name: Test Cluster
    	Snitch: com.datastax.bdp.snitch.DynamicEndpointSnitch
    	DynamicEndPointSnitch: enabled
    	Partitioner: org.apache.cassandra.dht.Murmur3Partitioner
    	Schema versions:
    		0fffd971-b7a4-33ae-859d-8ca792cd2852: [10.116.138.23]

    If there are any schema discrepancies, restart the nodes in question and rerun nodetool describecluster until there is only one schema version in the output.
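The schema-agreement check can be scripted. This sketch parses canned describecluster output; on a live node, pipe the real `nodetool describecluster` into the function instead. A healthy cluster reports exactly one schema version.

```shell
# Count the distinct schema versions listed after "Schema versions:".
count_schema_versions() {
  awk '/Schema versions:/ {inblock=1; next} inblock && /:/ {n++} END {print n+0}'
}

# Canned sample output; replace with: nodetool describecluster | count_schema_versions
SAMPLE='Cluster Information:
  Name: Test Cluster
  Schema versions:
    0fffd971-b7a4-33ae-859d-8ca792cd2852: [10.116.138.23, 10.116.138.24]'
n=$(printf '%s\n' "$SAMPLE" | count_schema_versions)
echo "distinct schema versions: $n"
```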

  9. Upgrade the SSTables on each node to ensure that all SSTables are on the current version:

    nodetool upgradesstables

    Failure to upgrade SSTables when required results in a significant performance impact and increased disk usage.

    Use the --jobs option to set the number of SSTables that upgrade simultaneously. The default setting is 2, which minimizes impact on the cluster. Set to 0 to use all available compaction threads. DataStax recommends running the nodetool upgradesstables command on one node at a time or, when using racks, one rack at a time.

    If the SSTables are already on the current version, the command returns immediately and no action is taken.

  10. Ensure that keyspace replication factors are correct for your environment:

    cqlsh --execute "DESCRIBE KEYSPACE keyspace_name;" | grep "replication"
    CREATE KEYSPACE keyspace_name WITH replication = {'class': 'NetworkTopologyStrategy', 'replication_factor': '3'} AND durable_writes = true;
  11. Verify the Java runtime version and upgrade to the recommended version.

    java -version
    openjdk version "1.8.0_222"
    OpenJDK Runtime Environment (build 1.8.0_222-8u222-b10-1ubuntu1~18.04.1-b10)
    OpenJDK 64-Bit Server VM (build 25.222-b10, mixed mode)
  12. Run nodetool repair (5.1) | nodetool repair (6.8) to ensure that data on each replica is consistent with data on other nodes:

    nodetool repair -pr
  13. Install the libaio package for optimal performance.

    RHEL platforms:

    sudo yum install libaio

    Debian:

    sudo apt-get install libaio1
  14. Back up any customized configuration files since they may be overwritten with default values during installation of the new version.

    If you backed up your installation using the instructions in Backing up a tarball installation or Backing up a package installation, your original configuration files are included in the archive.

Upgrade Steps

Follow these steps on each node in the recommended order. The upgrade process requires upgrading and restarting one node at a time.

These steps are performed in your upgraded version and use DSE 6.7 or DSE 6.8 documentation depending upon your target version.

Direct upgrades from DSE 5.0 to 6.8 are not supported.

  1. If you are upgrading from DSE 5.0 to DSE 6.8, upgrade to DSE 5.1 first and then follow the instructions in this section. See Upgrading DataStax Enterprise 5.0 to 5.1.

  2. Flush the commit log of the current installation:

    nodetool drain
  3. DSE Analytics nodes only:

    Kill all Spark worker processes:

    for pid in $(jps | grep Worker | awk '{print $1}'); do kill -9 $pid; done
  4. Stop the node:

    • Package installations:

      sudo service dse stop
    • Tarball installations:

      installation_dir/bin/dse cassandra-stop
  5. Use the appropriate method to install the new product version on a supported platform:

  6. To configure the new version:

    1. The upgrade installs a new server.xml for Tomcat 8. If your existing server.xml has custom connectors, migrate those connectors to the new server.xml before starting the upgraded nodes.

    2. After the upgrade but before restarting, compare the new configuration files with your backup configuration files: remove deprecated settings and update any new settings as required.

      You must use the new configuration files that are generated from the upgrade installation. Copy any parameters needed from your old configuration files into these new files.

      Do not replace the newly-generated configuration files with the old files.

      Use the DSE yaml_diff tool (5.1) | yaml_diff tool (6.7) | yaml_diff tool (6.8) to compare backup YAML files with the upgraded YAML files:

      cd /usr/share/dse/tools/yamls
      ./yaml_diff path/to/yaml-file-old path/to/yaml-file-new
      ...
       CHANGES
      =========
      authenticator:
      - AllowAllAuthenticator
      + com.datastax.bdp.cassandra.auth.DseAuthenticator
      
      authorizer:
      - AllowAllAuthorizer
      + com.datastax.bdp.cassandra.auth.DseAuthorizer
      
      roles_validity_in_ms:
      - 2000
      + 120000
      ...

      cassandra.yaml changes

      Deprecated cassandra.yaml settings:

       rpc_address
       rpc_broadcast_address

      Replacement settings:

      native_transport_address
      native_transport_broadcast_address

      Deprecated cassandra.yaml settings:

      memtable_heap_space_in_mb
      memtable_offheap_space_in_mb

      Replacement settings:

      memtable_space_in_mb
      memtable_allocation_type: offheap_objects

      Deprecated cassandra.yaml settings:

      user_defined_function_warn_timeout
      user_defined_function_fail_timeout

      Replacement settings:

      user_defined_function_warn_micros: 500
      user_defined_function_fail_micros: 10000
      user_defined_function_warn_heap_mb: 200
      user_defined_function_fail_heap_mb: 500
      user_function_timeout_policy: die

      The timeout settings are in microseconds. The new timeouts are not equivalent to the deprecated settings.

      Internode encryption settings

      Deprecated cassandra.yaml setting:

      server_encryption_options:
          store_type: JKS

      Replacement settings:

      server_encryption_options:
          keystore_type: JKS
          truststore_type: JKS

      Valid type options are JKS, JCEKS, PKCS11, or PKCS12 for keystore_type, and JKS, JCEKS, or PKCS12 for truststore_type.

      Deprecated cassandra.yaml setting:

      client_encryption_options:
          store_type: JKS

      Replacement settings:

      client_encryption_options:
          keystore_type: JKS
          truststore_type: JKS

      Valid type options are JKS, JCEKS, PKCS11, or PKCS12 for keystore_type, and JKS, JCEKS, or PKCS12 for truststore_type.

      dse.yaml changes

      Changed dse.yaml settings:

      shard_transport_options:
          netty_client_request_timeout: 60000

      netty_client_request_timeout is now the only supported option; remove any other options under shard_transport_options.

      Deprecated dse.yaml settings:

      Remove these options:

      cql_solr_query_executor_threads
      enable_back_pressure_adaptive_nrt_commit
      max_solr_concurrency_per_core
      solr_indexing_error_log_options

      DSE 6.x does not start with those options present.

      Changed dse.yaml settings:

      The dsefs_options: settings are commented out. To enable DSEFS, uncomment all of the dsefs_options: settings.
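As a pre-restart sanity check, you can scan a configuration file for the settings listed above that DSE 6.x rejects. This is a hedged sketch, not an official tool; the config path shown in the usage comment is the package-install default and may differ on your system.

```shell
# Flag deprecated/removed settings still present in a config file.
# Returns non-zero if any of the named keys are found uncommented.
check_deprecated() {
  conf="$1"
  shift
  status=0
  for key in "$@"; do
    if grep -q "^[[:space:]]*${key}:" "$conf"; then
      echo "still present in $conf: $key"
      status=1
    fi
  done
  return $status
}

# Example (default package path is an assumption):
#   check_deprecated /etc/dse/cassandra/cassandra.yaml \
#     rpc_address rpc_broadcast_address \
#     memtable_heap_space_in_mb memtable_offheap_space_in_mb \
#     user_defined_function_warn_timeout user_defined_function_fail_timeout
```

Run the same check against dse.yaml with the removed Solr options before starting the node.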

  7. When upgrading to DSE 5.1.16, 6.0.8, 6.7.4 or later from an earlier version, if any tables are using DSE Tiered Storage, remove all txn_compaction log files from second-level tiers and lower. For example, given the following dse.yaml configuration, remove txn_compaction log files from the /mnt2 and /mnt3 directories:

    tiered_storage_options:
        strategy1:
            tiers:
                - paths:
                    - /mnt1
                - paths:
                    - /mnt2
                - paths:
                    - /mnt3

    The following example removes the files using the find command:

    find /mnt2 -name "*_txn_compaction_*.log" -type f -delete &&
    find /mnt3 -name "*_txn_compaction_*.log" -type f -delete

    Failure to complete this step may result in data loss.
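The per-directory find commands above can be generalized into a loop that skips the first (top-level) tier and cleans every tier below it. This is a sketch; the tier paths are passed in by hand in the same order they appear in dse.yaml, since parsing them out of the YAML automatically is omitted for brevity.

```shell
# Delete txn_compaction log files from every tier except the first.
# Pass tier paths in dse.yaml order: top tier first, lower tiers after.
clean_lower_tiers() {
  shift   # keep the first (top-level) tier untouched
  for tier in "$@"; do
    find "$tier" -name "*_txn_compaction_*.log" -type f -delete
  done
}

# Example matching the dse.yaml above:
#   clean_lower_tiers /mnt1 /mnt2 /mnt3
```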

  8. Remove any previously installed JTS JAR files from the CLASSPATHS in your DSE installation. JTS (Java Topology Suite) is distributed with DSE 6.7.

  9. DSE Analytics nodes only: If your DSE 5.0 cluster had any datacenters running in Analytics Hadoop mode and used the DseSimpleSnitch, you must use one of these options for starting nodes in your cluster. Select the option that works best for your environment:

  10. Start the node.

  11. Verify that the upgraded datacenter names match the datacenter names in the keyspace schema definition:

    • Get the node’s datacenter name:

      nodetool status | grep "Datacenter"
      Datacenter: datacenter-name
    • Verify that the node’s datacenter name matches the datacenter name for a keyspace:

      cqlsh --execute "DESCRIBE KEYSPACE keyspace-name;" | grep "replication"
      CREATE KEYSPACE keyspace-name WITH replication = {'class': 'NetworkTopologyStrategy', 'datacenter-name': '3'};
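The two checks above can be scripted with small parsing helpers. This sketch keeps the parsers pure (they read from stdin) so they can be exercised on captured output; the wiring to live commands is shown in the trailing comment, and the keyspace name there is an assumption.

```shell
# Extract the first datacenter name from `nodetool status` output.
dc_from_status() {
  grep '^Datacenter:' | awk '{print $2}' | head -n 1
}

# Succeed if the given datacenter name appears quoted in the replication map.
dc_in_replication() {
  grep -q "'$1'"
}

# On a live node (keyspace name is a placeholder):
#   dc=$(nodetool status | dc_from_status)
#   if cqlsh --execute "DESCRIBE KEYSPACE my_ks;" | dc_in_replication "$dc"; then
#     echo "datacenter $dc found in replication settings"
#   else
#     echo "MISMATCH: $dc not in replication settings"
#   fi
```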
  12. Review the logs for warnings, errors, and exceptions:

    grep -w 'WARNING\|ERROR\|exception' /var/log/cassandra/*.log

    Warnings, errors, and exceptions are frequently found in the logs when starting an upgraded node. Some of these log entries are informational to help you execute specific upgrade-related steps. If you find unexpected warnings, errors, or exceptions, contact DataStax Support.

    Non-standard log locations are configured in dse-env.sh.
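When working through post-restart warnings, a single total per run makes it easy to confirm that fixes are reducing the noise. This sketch counts matching lines across all logs; the path in the usage comment is the default noted above and may differ if dse-env.sh was customized.

```shell
# Count WARN/ERROR/exception lines across the given log files and print
# a single total (0 if no files match).
count_log_problems() {
  grep -hEc 'WARN|ERROR|[Ee]xception' "$@" 2>/dev/null | awk '{s += $1} END {print s + 0}'
}

# Example (default log path is an assumption):
#   count_log_problems /var/log/cassandra/*.log
```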

  13. Repeat the upgrade process on each node in the cluster following the recommended order.

  14. After the entire cluster upgrade is complete: upgrade the SSTables on one node at a time or, when using racks, one rack at a time.

    Failure to upgrade SSTables when required results in a significant performance impact, increased disk usage, and possible data loss. Upgrading is not complete until the SSTables are upgraded.

    nodetool upgradesstables

    Use the --jobs option to set the number of SSTables that upgrade simultaneously. The default setting is 2, which minimizes impact on the cluster. Set to 0 to use all available compaction threads. DataStax recommends running the upgradesstables command on one node at a time or, when using racks, one rack at a time.

    You can run the upgradesstables command before all the nodes are upgraded as long as you run the command on only one node at a time or, when using racks, one rack at a time. Running upgradesstables on too many nodes at once degrades performance.
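The one-node-at-a-time rule above can be enforced with a serial loop that stops on the first failure. This is a sketch, not an official procedure: the host names and passwordless SSH access are assumptions, and the RUN variable can be overridden to preview the commands without touching a cluster.

```shell
# Run `nodetool upgradesstables` on one host at a time over SSH, aborting
# on the first failure so only one node rewrites SSTables at any moment.
RUN="${RUN:-ssh}"   # override (e.g. RUN=echo) for a dry run
upgrade_sstables_serially() {
  for host in "$@"; do
    echo "upgrading SSTables on $host"
    $RUN "$host" nodetool upgradesstables --jobs 2 || {
      echo "upgradesstables failed on $host; stopping"
      return 1
    }
  done
}

# Example (host names are placeholders):
#   upgrade_sstables_serially node1 node2 node3
```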

General Post-Upgrade Steps

After all nodes are upgraded:

  1. If you use the OpsCenter Repair Service, turn on the Repair Service (6.7) | turn on the Repair Service (6.8).

  2. If you encounter serialization-header errors, stop the node and repair the affected SSTables using the sstablescrub -e option:

    sstablescrub -e fix-only keyspace table

    For more details on serialization-header errors and repairs, see DSE 5.0 SSTables with UDTs corrupted after upgrading to DSE 5.1, 6.0, or 6.7

  3. Drop the following legacy tables, if they exist: system_auth.users, system_auth.credentials, and system_auth.permissions:

    DROP TABLE IF EXISTS system_auth.users;
    DROP TABLE IF EXISTS system_auth.credentials;
    DROP TABLE IF EXISTS system_auth.permissions;
  4. Review your security configuration. To use security, enable and configure DSE Unified Authentication 5.1 | DSE Unified Authentication 6.7 | DSE Unified Authentication 6.8.

    In cassandra.yaml, the default authenticator is DseAuthenticator and the default authorizer is DseAuthorizer. Other authenticators and authorizers are no longer supported. Security is disabled in dse.yaml by default.

  5. TimeWindowCompactionStrategy (TWCS) (5.1) | (6.7) | (6.8) is set only on new dse_perf and dse_audit_log tables. Manually change dse_perf and dse_audit_log tables that were created in earlier releases to use TWCS. For example:

    ALTER TABLE dse_perf.read_latency_histograms WITH COMPACTION={'class':'TimeWindowCompactionStrategy'};
    ALTER TABLE dse_audit_log.audit_log WITH COMPACTION={'class':'TimeWindowCompactionStrategy'};
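If several pre-existing tables need the same change, the ALTER statements can be generated in one pass and piped to cqlsh on any one node. This is a sketch; the table list in the comment is illustrative, not exhaustive, and secured clusters may need credentials flags on cqlsh.

```shell
# Emit one TWCS ALTER statement per table name given.
twcs_statements() {
  for t in "$@"; do
    printf "ALTER TABLE %s WITH COMPACTION={'class':'TimeWindowCompactionStrategy'};\n" "$t"
  done
}

# Example:
#   twcs_statements dse_perf.read_latency_histograms dse_audit_log.audit_log | cqlsh
```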
  6. DSE 6.7 introduces, and enables by default, the DSE Metrics Collector, a diagnostics information aggregator used to help facilitate DSE problem resolution. For more information on the DSE Metrics Collector, or to disable metrics collection, see DataStax Enterprise Metrics Collector (6.7) | DataStax Enterprise Metrics Collector (6.8).

Post-Upgrade Steps for DSE Analytics Nodes

For DSE Analytics nodes:

  1. Spark Jobserver uses DSE custom version 0.8.0.45. Ensure that applications use the compatible Spark Jobserver API from the DataStax repository.

  2. If you are using Spark SQL tables, migrate them to the new Hive metastore format:

    dse client-tool spark metastore migrate --from 5.0.0 --to 6.7.0

Post-Upgrade Steps for DSEFS-Enabled Nodes

A new schema is available for DSEFS.

The new DSEFS schema is required only if DSEFS is configured for multiple datacenters. If you do not have a multi-datacenter setup using DSEFS, no action is required, and DSEFS continues to work using the DSE 5.0 schema.

A multi-datacenter setup for DSEFS is not a supported feature.

If you have no data in DSEFS or if you are using DSEFS only for temporary data, follow these steps to use the new schema:

  1. Stop the node:

    • Package installations:

      sudo service dse stop
    • Tarball installations:

      installation_dir/bin/dse cassandra-stop
  2. Clear the DSEFS data directories on each node.

    For example, if the dsefs_options section of dse.yaml has data_directories configured as:

    dsefs_options:
         ...
         data_directories:
             - dir: /var/lib/dsefs/data

    this command removes the directories:

    rm -r /var/lib/dsefs/data/*
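When more than one data directory is configured, the removal can be wrapped in a loop over all of the configured paths. This sketch takes the paths from the data_directories entries in dse.yaml by hand; the guard on the variable is a safety measure against an empty argument expanding to /*.

```shell
# Clear the contents of each DSEFS data directory, keeping the directories
# themselves in place.
clear_dsefs_dirs() {
  for d in "$@"; do
    rm -rf "${d:?}"/*   # ${d:?} aborts if the variable is empty, guarding /
  done
}

# Example matching the dse.yaml above:
#   clear_dsefs_dirs /var/lib/dsefs/data
```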
  3. In the dsefs_options section of dse.yaml, change the keyspace_name parameter to a different name:

    ##########################
    # DSE File System options
    dsefs_options:
        ...
        keyspace_name: new_keyspace_name
  4. Start the node.

  5. If you backed up existing DSEFS data before the upgrade, copy the data back into DSEFS from local storage.

    dse hadoop fs -cp /local_backup_location/* /dsefs_data_directory/
  6. OPTIONAL: Drop the old DSEFS keyspace:

    DROP KEYSPACE dsefs;

Post-Upgrade Steps for DSE Search Nodes

For DSE Search nodes:

  1. The appender SolrValidationErrorAppender and the logger SolrValidationErrorLogger are no longer used and may safely be removed from logback.xml.

  2. In contrast to earlier versions, DataStax recommends accepting the new default value of 1024 for back_pressure_threshold_per_core (6.7) | back_pressure_threshold_per_core (6.8) in dse.yaml. See Configuring and tuning indexing performance (6.7) | Configuring and tuning indexing performance (6.8).

  3. If SpatialRecursivePrefixTreeFieldType (RPT) is used in the search schema, replace the units field type with a suitable (degrees, kilometers, or miles) distanceUnits, and then verify that spatial queries behave as expected.

  4. If you are using HTTP API writes with JSON documents (deprecated), a known issue may cause the auto-generated solrconfig.xml to contain an invalid requestHandler for JSON core creation. If necessary, change the auto-generated solrconfig.xml from:

    <requestHandler name="/update/json" class="solr.UpdateUpdateRequestHandler" startup="lazy"/>

    to

    <requestHandler name="/update/json" class="solr.UpdateRequestHandler" startup="lazy"/>

    For more information, see solrconfig.xml.

  5. Do a full reindex of all encrypted search indexes on each node in your cluster:

    dsetool reload_core keyspace_name.table_name distributed=false reindex=true deleteAll=true

    Plan sufficient time after the upgrade is complete to reindex with deleteAll=true on all nodes.
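Reindexing several encrypted cores on a node can be scripted as a serial loop so each reindex finishes before the next begins. This is a sketch: the core names in the comment are placeholders, and the RUN variable can be set to preview the dsetool commands without running them.

```shell
# Reindex each named core on the local node, one after another, stopping
# on the first failure.
RUN="${RUN:-}"   # set RUN=echo to preview the commands
reindex_cores() {
  for core in "$@"; do
    echo "reindexing $core"
    $RUN dsetool reload_core "$core" distributed=false reindex=true deleteAll=true || return 1
  done
}

# Example (core names are placeholders):
#   reindex_cores ks1.table1 ks2.table2
```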

Warning Messages During and after Upgrade

You can ignore some log messages that occur during and after an upgrade:

  • When upgrading nodes with DSE Advanced Replication, there might be some WriteTimeoutExceptions during a rolling upgrade while mixed versions of nodes exist. Some write consistency limitations apply while mixed versions of nodes exist. The WriteTimeout issue is resolved after all nodes are upgraded.

  • Some gremlin_server properties in earlier versions of DSE are no longer required. If properties exist in the dse.yaml file after upgrading, logs display warnings similar to:

    WARN  [main] 2017-08-31 12:25:30,523 GREMLIN DseWebSocketChannelizer.java:149 - Configuration for the org.apache.tinkerpop.gremlin.driver.ser.GraphSONMessageSerializerGremlinV1d0 serializer in dse.yaml overrides the DSE default - typically it is best to allow DSE to configure these.

    You can ignore these warnings or modify dse.yaml so that only the required gremlin server properties are present.

Locking DSE Package Versions

If you have upgraded a DSE package installation, you can prevent future unintended upgrades.

RHEL yum installations

To hold a package at the current version:

  1. Install yum-versionlock (one-time operation):

    sudo yum install yum-versionlock
  2. Lock the current DSE version:

    sudo yum versionlock dse-*

To clear the version lock and enable upgrades:

sudo yum versionlock clear

For details, see the versionlock command.

Debian apt-get installations

To hold a package at the current version:

sudo apt-mark hold dse-*

To remove the version hold:

sudo apt-mark unhold dse-*

For details, see the apt-mark command.


© 2024 DataStax | Privacy policy | Terms of use
