Schedule management

With these methods, you can schedule periodic operations to run in your cluster. The types of operations are currently limited to backups and best practice rules.

Managing Job Schedules

Methods

List jobs scheduled to run in OpsCenter.

[GET /{cluster_id}/job-schedules]

Get the description of a scheduled job.

[GET /{cluster_id}/job-schedules/{schedule_id}]

Schedule a job.

[POST /{cluster_id}/job-schedules]

Modify a scheduled job.

[PUT /{cluster_id}/job-schedules/{schedule_id}]

Delete a scheduled job.

[DELETE /{cluster_id}/job-schedules/{schedule_id}]


Job Schedule

A job schedule describes an action that OpsCenter runs periodically. There are two possible schedule types: backups and best practice rules.

A job schedule has the following form:

        {
            "first_run_date": FIRST_RUN_DATE,
            "first_run_time": FIRST_RUN_TIME,
            "timezone": TIMEZONE,
            "interval": INTERVAL,
            "interval_unit": INTERVAL_UNIT
            "job_params": JOB_PARAMS,
            "id": ID,
            "next_run": NEXT_RUN,
            "last_run": LAST_RUN
        }

Data:

  • FIRST_RUN_DATE (string): A date in YYYY-MM-DD format specifying the date to begin running the job.

  • FIRST_RUN_TIME (string): A time in hh:mm:ss format specifying the time to begin running the job.

  • TIMEZONE (string): The time zone for the job schedule, as listed by the OpsCenter /meta/timezones resource. For example, GMT, US/Central, US/Pacific, and US/Eastern are valid time zones.

  • INTERVAL (integer): In conjunction with interval_unit, this controls how often the job is executed. For example, an interval of 2 and an interval_unit of weeks results in a job that runs every two weeks.

  • INTERVAL_UNIT (string): The unit of time for interval. Values are minutes, hours, days, or weeks.

  • JOB_PARAMS (dict): A dictionary that describes the job. Only the type field is required, and its value is currently limited to backup or best-practice. The remaining fields are specific to the type: for the backup type, see Backup Job Params; for the best-practice type, see Best Practice Job Params.

  • ID (string): A unique ID that references a job schedule. This field is only present when retrieving a job schedule; omit it when creating or updating one.

  • NEXT_RUN (string): The date and time of the next scheduled run.

  • LAST_RUN (string): The date and time of the last successful run.
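
For example, a populated job schedule that first runs on 2012-05-01 at 02:00:00 GMT and then repeats every 12 hours might look like the following (the values shown are illustrative):

        {
            "first_run_date": "2012-05-01",
            "first_run_time": "02:00:00",
            "timezone": "GMT",
            "interval": 12,
            "interval_unit": "hours",
            "job_params": {
                "keyspaces": [],
                "type": "backup"
            },
            "id": "4f0d1a2b-3c4d-4e5f-8a9b-0c1d2e3f4a5b",
            "next_run": "2012-05-01 14:00:00 GMT",
            "last_run": "2012-05-01 02:00:00 GMT"
        }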

Backup Job Params

A JSON dictionary that describes a job schedule of type backup. Use it as the job_params field in the Job Schedule.

        {
            "type": "backup",
            "keyspaces": KEYSPACES,
            "cleanup_age": CLEANUP_AGE,
            "cleanup_age_unit": CLEANUP_AGE_UNIT,
            "pre_snapshot_script": PRE_SNAPSHOT_SCRIPT,
            "post_snapshot_script": POST_SNAPSHOT_SCRIPT,
            "datacenters": DATACENTERS,
            "destinations": DESTINATIONS,
            "alert_on_failure": ALERT_ON_FAILURE,
            "cleanup_dests": CLEANUP_DESTS,
            "retries": RETRIES
        }

Data:

  • KEYSPACES (list): A JSON list of keyspace names that should be included in scheduled backups. An empty list or null results in all keyspaces being included in the backups.

  • CLEANUP_AGE (int): (Optional) In combination with CLEANUP_AGE_UNIT, this specifies the age at which old backups are deleted. A value of 0, the default, disables automatic backup cleanup.

  • CLEANUP_AGE_UNIT (string): (Optional) The unit of time for CLEANUP_AGE. Valid values include "minutes", "hours", "days" (the default), and "weeks".

  • PRE_SNAPSHOT_SCRIPT (string): (Optional) The file name of a custom script to be automatically run prior to triggering each backup. This file must exist within the bin/backup-scripts/ directory where the OpsCenter agent is installed. Only letters, numbers, underscores and hyphens are permitted in the name.

  • POST_SNAPSHOT_SCRIPT (string): (Optional) The file name of a custom script to be automatically run after each backup is taken. The name of each file included in the backup is passed to the script through stdin. This file must exist within the bin/backup-scripts/ directory where the OpsCenter agent is installed. Only letters, numbers, underscores and hyphens are permitted in the name.

  • DATACENTERS (list): (Optional) A JSON list of datacenter names in which the backup should be performed. An empty list or null performs the backup in all datacenters.

  • DESTINATIONS (object): (Optional) A JSON object representing the destinations for the backup. Each key is a destination ID, obtained from the /{cluster_id}/backups/destinations API, and each value is a JSON object with additional parameters specific to that destination. For example, you can specify a retention time (with the cleanup_age and cleanup_age_unit parameters) and/or compression (with the boolean compressed parameter).

  • ALERT_ON_FAILURE (boolean): (Optional, default is false) A boolean flag that determines whether an alert is triggered when a backup fails.

  • RETRIES (int): (Optional, default is 0) The number of times to retry a backup before reporting it as failed.
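
For example, the following job_params describe a backup of all keyspaces, cleaned up locally after 30 days and also copied, compressed, to one configured destination. The destination key shown is a placeholder; obtain real IDs from the /{cluster_id}/backups/destinations API:

        {
            "type": "backup",
            "keyspaces": [],
            "cleanup_age": 30,
            "cleanup_age_unit": "days",
            "destinations": {
                "DESTINATION_ID": {
                    "cleanup_age": 4,
                    "cleanup_age_unit": "weeks",
                    "compressed": true
                }
            },
            "alert_on_failure": true,
            "retries": 2
        }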

Best Practice Job Params

A JSON dictionary that describes a job schedule of type best-practice. Use it as the job_params field in the Job Schedule.

        {
            "type": "best-practice",
            "rules": RULE_LIST
        }

Data:

  • RULE_LIST (list): A JSON list of rules to run on the given schedule. At least one rule must be specified.
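
For example, the following job_params run two rules on the schedule; the rule names shown are placeholders for the identifiers of the best practice rules configured in OpsCenter:

        {
            "type": "best-practice",
            "rules": ["RULE_NAME_1", "RULE_NAME_2"]
        }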

GET /{cluster_id}/job-schedules

Retrieve a list of jobs scheduled to run in OpsCenter. Currently the only job types are scheduled backups and best practice rules.

Path arguments:

  • cluster_id: The ID of a cluster returned from GET /cluster-configs.

Returns a list of job-schedule objects.

Example:

 curl http://127.0.0.1:8888/Test_Cluster/job-schedules

Output:

        [
          {
            "first_run_date": "2012-04-19",
            "first_run_time": "18:00:00",
            "id": "19119720-115a-4f2c-862f-e10e1fb90eed",
            "interval": 1,
            "interval_unit": "days",
            "job_params": {
              "cleanup_age": 30,
              "cleanup_age_unit": "days",
              "keyspaces": [],
              "type": "backup"
            },
            "last_run": "2012-04-20 18:00:00 GMT",
            "next_run": "2012-04-21 18:00:00 GMT",
            "timezone": "GMT"
          },
          ...
        ]

GET /{cluster_id}/job-schedules/{schedule_id}

Get the description of a scheduled job.

Path arguments:

  • cluster_id: The ID of a cluster returned from GET /cluster-configs.

  • schedule_id: A unique ID of the scheduled job that matches the id of a job-schedule object.

Returns a Job Schedule object.
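
Example, using the schedule ID from the listing above:

 curl http://127.0.0.1:8888/Test_Cluster/job-schedules/19119720-115a-4f2c-862f-e10e1fb90eed

The output is the single job-schedule object for that ID, in the same format as each entry returned by the list method above.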

POST /{cluster_id}/job-schedules

Create a new scheduled job. You can create a scheduled job to run one time in the future by specifying an interval of -1 and interval_unit of null.

Path arguments:

  • cluster_id: The ID of a cluster returned from GET /cluster-configs.

Body: A dictionary in the format of a Job Schedule describing the scheduled job to create. The id, last_run, and next_run fields should be omitted.

Responses:

  • 201: Job schedule created successfully.

Returns the ID of the newly created job.

Example:

 curl -X POST \
   http://127.0.0.1:8888/Test_Cluster/job-schedules/ \
   -d \
   '{
     "first_run_date": "2012-05-03",
     "first_run_time": "18:00:00",
     "interval": 1,
     "interval_unit": "days",
     "job_params": {
         "cleanup_age": 30,
         "cleanup_age_unit": "days",
         "keyspaces": [],
         "type": "backup"
     },
     "timezone": "GMT"
   }'

Output:

 "905391b7-1920-486d-a633-282f22dce604"

PUT /{cluster_id}/job-schedules/{schedule_id}

Update a scheduled job.

Path arguments:

  • cluster_id: The ID of a cluster returned from GET /cluster-configs.

  • schedule_id: A unique ID identifying the scheduled job to update.

Body: A dictionary containing the Job Schedule fields that you would like to update.

Responses:

  • 200: Schedule updated successfully.

Returns null.

Example:

 curl -X PUT \
   http://127.0.0.1:8888/Test_Cluster/job-schedules/905391b7-1920-486d-a633-282f22dce604 \
   -d \
   '{
     "interval": 12,
     "interval_unit": "hours"
   }'

DELETE /{cluster_id}/job-schedules/{schedule_id}

Delete a scheduled job.

Path arguments:

  • cluster_id: The ID of a cluster returned from GET /cluster-configs.

  • schedule_id: A unique ID identifying the scheduled job to delete.

Responses:

  • 200: Schedule deleted successfully.

Returns null.

Example:

 curl -X DELETE \
   http://127.0.0.1:8888/Test_Cluster/job-schedules/905391b7-1920-486d-a633-282f22dce604
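
To confirm the deletion, list the schedules again; the deleted ID no longer appears in the output:

 curl http://127.0.0.1:8888/Test_Cluster/job-schedules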