Export metrics and logs to third-party observability tools

Mission Control allows you to send metrics and logs from managed services to external observability tools such as Prometheus, Elasticsearch, New Relic, and Datadog. You can enhance your data with custom transformations before forwarding it to these external systems.

Vector sinks and transforms

Vector is an observability pipeline framework from Datadog that collects metrics and logs from various Mission Control services, transforms them, and routes them to destinations such as Mimir (metrics) and Loki (logs). The mission-control-aggregator ConfigMap in the mission-control namespace stores configuration data for Vector, including the vector.yaml file, which defines Vector's behavior, such as its configured transforms and sinks.

In the default Mission Control configuration, Vector uses a custom transform to enrich observability data with deployment metadata, such as managed cluster, datacenter, and instance information. After this enrichment, the streams are pushed to the downstream sinks Loki and Mimir.
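
For illustration only, an enrichment step like this can be written as a Vector remap transform. The following is a minimal sketch, not the exact transform that ships with Mission Control; the transform ID and the metadata values are hypothetical placeholders:

enrich_with_metadata:
  type: remap
  inputs:
    - vector
  source: |
    # Attach deployment metadata so downstream sinks can label the stream.
    # Placeholder values; Mission Control derives these from the deployment.
    .cluster = "example-cluster"
    .datacenter = "dc-1"
    .instance = get_env_var("HOSTNAME") ?? "unknown"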

You can extend this default pipeline with additional transforms and sinks, allowing broader integration with your enterprise observability platform.

View default sinks

By default, Mission Control includes Vector sinks for Mimir (metrics) and Loki (logs).

To view default sink settings, run the following command:

kubectl -n mission-control get configmap mission-control-aggregator -o yaml
Results
...
    sinks:
      mimir:
        type: "prometheus_remote_write"
        inputs: ["vector", "internal_metrics", "kube_state_metrics", "cass_operator_metrics", "mimir-self-monitoring", "prometheus_push", "statsd"]
        endpoint: "http://mission-control-mimir-nginx/api/v1/push"
      loki:
        type: loki
        inputs: [vector_with_defaults, syslog]
        endpoint: "http://loki-gateway"
        tenant_id: anonymous
        out_of_order_action: accept
        labels:
          source_type: '{{ source_type }}'
          namespace: '{{ namespace }}'
          pod_name: '{{ pod_name }}'
          file: '{{ file }}'
          cluster: '{{ cluster }}'
          datacenter: '{{ datacenter }}'
          rack: '{{ rack }}'
          container_name: '{{ container_name }}'
        encoding:
          codec: json
...
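
To print only the vector.yaml file instead of the entire ConfigMap, you can use a JSONPath expression. This assumes the configuration is stored under the vector.yaml data key, as described above:

kubectl -n mission-control get configmap mission-control-aggregator -o jsonpath='{.data.vector\.yaml}'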

Configure custom sinks

Custom sinks are set globally.

Before you configure a custom sink, identify your observability tool and obtain the necessary credentials.

The following steps configure Vector to forward the observability stream to Prometheus.

  1. Copy the YAML for your sink configuration from the Vector Sinks reference and paste it into your IDE.

  2. Define the inputs for your custom sink YAML configuration. The default sources available are vector, internal_metrics, kube_state_metrics, and cass_operator_metrics for metrics, and vector_with_defaults for logs.

  3. Provide required settings, such as your endpoint address or authentication credentials.

  4. Sign in to the KOTS admin console.

  5. Click Config.

  6. In the left nav, click Observability - Custom pipelines.

  7. Paste your modified YAML into Custom Vector Aggregator Sinks. For example, the following sink forwards the observability stream to Prometheus:

    my_prometheus:
      type: prometheus_remote_write
      inputs:
        - vector
        - internal_metrics
      endpoint: https://PROM_ENDPOINT_URL/api/v1/write
      request:
        retry_attempts: 0

    Replace PROM_ENDPOINT_URL with the host URL of your Prometheus server.

  8. Optional: Add more observability tools or instances. This example configures Vector to forward observability streams to two Prometheus instances and one New Relic account.

    my_prometheus:
      type: prometheus_remote_write
      inputs:
        - vector
        - internal_metrics
      endpoint: https://PROM1_ENDPOINT_URL:8087/api/v1/write
      request:
        retry_attempts: 0
    my_other_prometheus:
      type: prometheus_remote_write
      inputs:
        - vector
        - internal_metrics
      endpoint: https://PROM2_ENDPOINT_URL:8087/api/v1/write
      request:
        retry_attempts: 0
    my_new_relic:
      type: new_relic
      inputs:
        - vector
        - internal_metrics
      account_id: ACCOUNT_ID
      api: events
      license_key: LICENSE_KEY
      request:
        retry_attempts: 0

    Replace the following:

    • PROM1_ENDPOINT_URL: The host URL of your first Prometheus server

    • PROM2_ENDPOINT_URL: The host URL of your second Prometheus server

    • ACCOUNT_ID: The New Relic account ID

    • LICENSE_KEY: The New Relic license key

  9. Click Save.

  10. Click Deploy.
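
After the deployment completes, you can confirm that the sink reached the Vector configuration. Assuming Mission Control merges custom sinks into the mission-control-aggregator ConfigMap shown earlier, search the rendered configuration for your sink ID:

kubectl -n mission-control get configmap mission-control-aggregator -o yaml | grep my_prometheus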

Example sinks

The following examples show custom sinks that you can use to forward Mission Control metrics and log data to external observability tools.

Prometheus

The Prometheus sink requires the following configuration to forward metrics data to a Prometheus server:

  • type: prometheus_remote_write

  • inputs: A list of input sources to be forwarded to Prometheus

  • endpoint: The URL of the Prometheus server

my_prometheus:
  type: prometheus_remote_write
  inputs:
    - vector
    - internal_metrics
  endpoint: https://ENDPOINT_URL:8087/api/v1/write
  request:
    retry_attempts: 0

Replace ENDPOINT_URL with the URL of your Prometheus server.
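
If your Prometheus server requires authentication, Vector's prometheus_remote_write sink also accepts an auth block. The following sketch assumes basic authentication; PROM_USER and PROM_PASSWORD are placeholders for your credentials:

my_prometheus:
  type: prometheus_remote_write
  inputs:
    - vector
    - internal_metrics
  endpoint: https://ENDPOINT_URL:8087/api/v1/write
  auth:
    strategy: basic
    user: PROM_USER
    password: PROM_PASSWORD
  request:
    retry_attempts: 0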

New Relic

The New Relic sink requires the following configuration to forward metrics data:

  • type: new_relic

  • account_id: Your New Relic account ID

  • license_key: Your New Relic license key

  • api: The New Relic API to write to, such as events

  • inputs: The input sources to be forwarded to New Relic

my_new_relic:
  type: new_relic
  inputs:
    - vector
    - internal_metrics
  account_id: ACCOUNT_ID
  api: events
  license_key: LICENSE_KEY
  request:
    retry_attempts: 0

Replace the following:

  • ACCOUNT_ID: The New Relic account ID

  • LICENSE_KEY: The New Relic license key
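
The api setting selects which New Relic API Vector writes to. In addition to events, Vector's new_relic sink supports the metrics and logs APIs. For example, the following sketch sends the stream to the New Relic Metric API instead; only the api value changes:

my_new_relic:
  type: new_relic
  inputs:
    - vector
    - internal_metrics
  account_id: ACCOUNT_ID
  api: metrics
  license_key: LICENSE_KEY
  request:
    retry_attempts: 0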

Elasticsearch

The Elasticsearch sink requires the following configuration to forward log streams:

  • type: elasticsearch

  • inputs: The input sources to be forwarded to Elasticsearch

  • endpoint: The URL of the Elasticsearch server

my_elasticsearch:
  type: elasticsearch
  inputs:
    - vector
    - internal_metrics
  endpoint: http://ENDPOINT_URL:9000
  request:
    retry_attempts: 0

Replace ENDPOINT_URL with the URL of your Elasticsearch server.

Datadog

The Datadog sink requires the following configuration to forward metrics data:

  • type: datadog_metrics

  • default_api_key: Your Datadog API key

  • inputs: The input sources to be forwarded to Datadog

  • endpoint: The URL of the Datadog server

my_datadog:
  type: datadog_metrics
  inputs:
    - vector
    - internal_metrics
  default_api_key: API_KEY
  endpoint: http://ENDPOINT_URL:8080
  request:
    retry_attempts: 0

Replace the following:

  • API_KEY: Your Datadog API key

  • ENDPOINT_URL: The URL of the Datadog server

Configure custom transforms

Custom transforms are set globally.

To configure a custom transform, do the following:

  1. Identify your observability tool and obtain the necessary credentials.

  2. Copy the YAML for your transform configuration from the Vector Transforms reference and paste it into your IDE.

  3. Define the inputs for your custom transform YAML configuration. The default sources available are vector, internal_metrics, kube_state_metrics, and cass_operator_metrics for metrics, and vector_with_defaults for logs.

  4. Sign in to the KOTS admin console.

  5. Click Config.

  6. In the left nav, click Observability - Custom pipelines.

  7. Paste your modified YAML into Custom Vector Aggregator Transforms.

    transforms:
      my_transform_id:
        type: filter
        inputs:
          - SOURCE_OR_TRANSFORM_ID
        condition: '.cluster_id == "CLUSTER_ID"'

    Replace the following:

    • SOURCE_OR_TRANSFORM_ID: The ID of the source or transform that supplies events to this transform

    • CLUSTER_ID: The ID of the cluster whose events you want to keep

  8. Click Save.

  9. Click Deploy.
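
A custom transform is typically paired with a custom sink: the transform ID that you define here can be referenced in the inputs list of a sink under Custom Vector Aggregator Sinks. As a sketch, assuming the my_transform_id filter from the example above, a Prometheus sink could forward only the filtered stream:

my_prometheus:
  type: prometheus_remote_write
  inputs:
    - my_transform_id
  endpoint: https://PROM_ENDPOINT_URL/api/v1/write
  request:
    retry_attempts: 0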
