Export metrics and logs to third-party observability tools
Mission Control lets you send metrics and logs from managed services to external observability tools such as Prometheus, Elasticsearch, New Relic, and Datadog. You can enhance your data with custom transformations before forwarding it to these external systems.
Vector sinks and transforms
Vector is an observability pipeline framework from Datadog that collects metrics and logs from various Mission Control services, transforms the data, and sends it to destinations such as Loki and Mimir.
The mission-control-aggregator ConfigMap in the mission-control namespace stores configuration data for Vector, including the vector.yaml file.
The vector.yaml file defines Vector’s behavior, such as configured transforms and sinks.
In the default Mission Control configuration, Vector uses a custom transform to enrich observability data with deployment metadata, such as managed cluster, datacenter, and instance information. After this enrichment, the streams are pushed to the downstream sinks, Loki and Mimir.
You can extend this default pipeline with additional transforms and sinks, allowing for broader integration with your enterprise observability platform.
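To illustrate how the pieces fit together, a minimal Vector configuration wires sources to transforms to sinks by component ID. This is a generic sketch, not Mission Control's actual configuration, which lives in the mission-control-aggregator ConfigMap; the component names here (demo_logs, add_labels, console_out) are illustrative.

```yaml
# Illustrative Vector pipeline: source -> transform -> sink.
sources:
  demo_logs:
    type: demo_logs
    format: syslog
transforms:
  add_labels:
    type: remap
    inputs:
      - demo_logs
    # VRL program that enriches each event with a static field.
    source: |
      .cluster = "example-cluster"
sinks:
  console_out:
    type: console
    inputs:
      - add_labels
    encoding:
      codec: json
```

Each sink's inputs list names the sources or transforms it consumes, which is the same wiring mechanism used by the custom sinks described below.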
View default sinks
By default, Mission Control includes Vector sink settings for the Loki and Mimir observability backends.
To view default sink settings, run the following command:
kubectl -n mission-control get configmap mission-control-aggregator -o yaml
Result:

...
sinks:
  mimir:
    type: "prometheus_remote_write"
    inputs: ["vector", "internal_metrics", "kube_state_metrics", "cass_operator_metrics", "mimir-self-monitoring", "prometheus_push", "statsd"]
    endpoint: "http://mission-control-mimir-nginx/api/v1/push"
  loki:
    type: loki
    inputs: [vector_with_defaults, syslog]
    endpoint: "http://loki-gateway"
    tenant_id: anonymous
    out_of_order_action: accept
    labels:
      source_type: '{{ "{{" }} source_type {{ "}}" }}'
      namespace: '{{ "{{" }} namespace {{ "}}" }}'
      pod_name: '{{ "{{" }} pod_name {{ "}}" }}'
      file: '{{ "{{" }} file {{ "}}" }}'
      cluster: '{{ "{{" }} cluster {{ "}}" }}'
      datacenter: '{{ "{{" }} datacenter {{ "}}" }}'
      rack: '{{ "{{" }} rack {{ "}}" }}'
      container_name: '{{ "{{" }} container_name {{ "}}" }}'
    encoding:
      codec: json
...
Configure custom sinks
Note: Custom sinks are set globally.
To configure a custom sink, you must identify your observability tool and obtain the necessary credentials.
KOTS installation
This example configures Vector to forward the observability stream to Prometheus.
-
Copy the YAML for your sink configuration from the Vector Sinks reference and paste it into your IDE.
-
Define the inputs for your custom sink YAML configuration. The default sources available are vector, internal_metrics, kube_state_metrics, cass_operator_metrics, and vector_with_defaults for logs.
-
Provide configuration requirements, such as your ENDPOINT address or authentication credentials.
-
Sign in to the KOTS admin console.
-
Click Config.
-
In the navigation menu, click Observability - Custom pipelines.
-
Paste your modified YAML into Custom Vector Aggregator Sinks. This example configures Vector to forward the observability stream to Prometheus.
my_prometheus:
  type: prometheus_remote_write
  inputs:
    - vector
    - internal_metrics
  endpoint: https://PROM_ENDPOINT_URL/api/v1/write
  request:
    retry_attempts: 0

Replace PROM_ENDPOINT_URL with your host URL.
-
Optional: Add additional observability tools or instances. This example configures Vector to forward the observability streams to two Prometheus instances and one New Relic account.

my_prometheus:
  type: prometheus_remote_write
  inputs:
    - vector
    - internal_metrics
  endpoint: https://PROM1_ENDPOINT_URL:8087/api/v1/write
  request:
    retry_attempts: 0
my_other_prometheus:
  type: prometheus_remote_write
  inputs:
    - vector
    - internal_metrics
  endpoint: https://PROM2_ENDPOINT_URL:8087/api/v1/write
  request:
    retry_attempts: 0
my_new_relic:
  type: new_relic
  inputs:
    - vector
    - internal_metrics
  account_id: ACCOUNT_ID
  api: events
  license_key: LICENSE_KEY
  request:
    retry_attempts: 0

Replace the following:
-
PROM1_ENDPOINT_URL: The URL of your first Prometheus host
-
PROM2_ENDPOINT_URL: The URL of your second Prometheus host
-
ACCOUNT_ID: Your New Relic account ID
-
LICENSE_KEY: Your New Relic license key
-
Click Save.
-
Click Deploy.
Helm installation
For Helm-based installations, you configure custom sinks by adding them to your values.yaml file under the aggregator.customConfig.sinks section.
This example configures Vector to forward the observability stream to Prometheus.
-
Copy the YAML for your sink configuration from the Vector Sinks reference and paste it into your IDE.
-
Define the inputs for your custom sink YAML configuration. The default sources available are vector, internal_metrics, kube_state_metrics, cass_operator_metrics, and vector_with_defaults for logs.
-
Provide configuration requirements, such as your ENDPOINT address or authentication credentials.
-
Add your custom sink configuration to your values.yaml file under the aggregator.customConfig.sinks section:

aggregator:
  customConfig:
    sinks:
      loki:
        encoding:
          codec: json
        endpoint: http://loki-gateway
        inputs:
          - vector_with_defaults
        labels:
          cluster: '{{ "{{" }} cluster {{ "}}" }}'
          container_name: '{{ "{{" }} container_name {{ "}}" }}'
          datacenter: '{{ "{{" }} datacenter {{ "}}" }}'
          file: '{{ "{{" }} file {{ "}}" }}'
          namespace: '{{ "{{" }} namespace {{ "}}" }}'
          pod_name: '{{ "{{" }} pod_name {{ "}}" }}'
          rack: '{{ "{{" }} rack {{ "}}" }}'
          source_type: '{{ "{{" }} source_type {{ "}}" }}'
        out_of_order_action: accept
        tenant_id: anonymous
        type: loki
      mimir:
        endpoint: http://mission-control-mimir-nginx/api/v1/push
        inputs:
          - vector
          - internal_metrics
          - kube_state_metrics
          - cass_operator_metrics
          - mimir-self-monitoring
          - prometheus_push
          - statsd
        type: prometheus_remote_write
      my_prometheus:
        type: prometheus_remote_write
        inputs:
          - vector
          - internal_metrics
        endpoint: https://PROM_ENDPOINT_URL/api/v1/write
        request:
          retry_attempts: 0

Replace PROM_ENDPOINT_URL with your host URL.
-
Optional: Add additional observability tools or instances. This example configures Vector to forward the observability streams to two Prometheus instances and one New Relic account:

aggregator:
  customConfig:
    sinks:
      # ... existing sinks ...
      my_prometheus:
        type: prometheus_remote_write
        inputs:
          - vector
          - internal_metrics
        endpoint: https://PROM1_ENDPOINT_URL:8087/api/v1/write
        request:
          retry_attempts: 0
      my_other_prometheus:
        type: prometheus_remote_write
        inputs:
          - vector
          - internal_metrics
        endpoint: https://PROM2_ENDPOINT_URL:8087/api/v1/write
        request:
          retry_attempts: 0
      my_new_relic:
        type: new_relic
        inputs:
          - vector
          - internal_metrics
        account_id: ACCOUNT_ID
        api: events
        license_key: LICENSE_KEY
        request:
          retry_attempts: 0

Replace the following:
-
PROM1_ENDPOINT_URL: The URL of your first Prometheus host
-
PROM2_ENDPOINT_URL: The URL of your second Prometheus host
-
ACCOUNT_ID: Your New Relic account ID
-
LICENSE_KEY: Your New Relic license key
-
Apply the updated configuration:
helm upgrade mission-control oci://registry.replicated.com/mission-control/mission-control --namespace mission-control -f values.yaml

Replace mission-control with your release name if different.
Example sinks
Here are some examples of custom sinks that you can use to forward Mission Control metrics data to external observability tools.
Prometheus
The Prometheus sink requires the following configuration to forward metrics data to a Prometheus server:
-
type: prometheus_remote_write
-
inputs: A list of input sources to be forwarded to Prometheus
-
endpoint: The URL of the Prometheus server
KOTS configuration:

my_prometheus:
  type: prometheus_remote_write
  inputs:
    - vector
    - internal_metrics
  endpoint: https://ENDPOINT_URL:8087/api/v1/write
  request:
    retry_attempts: 0

Helm configuration:

aggregator:
  customConfig:
    sinks:
      my_prometheus:
        type: prometheus_remote_write
        inputs:
          - vector
          - internal_metrics
        endpoint: https://ENDPOINT_URL:8087/api/v1/write
        request:
          retry_attempts: 0
Replace ENDPOINT_URL with the URL of your Prometheus server.
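If your Prometheus endpoint requires authentication, Vector's prometheus_remote_write sink also accepts an auth block. The sketch below assumes basic authentication and placeholder credentials (PROM_USER, PROM_PASSWORD); check the Vector Sinks reference for the options supported by your Vector version.

```yaml
my_prometheus:
  type: prometheus_remote_write
  inputs:
    - vector
    - internal_metrics
  endpoint: https://ENDPOINT_URL:8087/api/v1/write
  auth:
    # Basic auth is one supported strategy; bearer tokens are another.
    strategy: basic
    user: PROM_USER
    password: PROM_PASSWORD
  request:
    retry_attempts: 0
```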
New Relic
The New Relic sink requires the following configuration to forward metrics data:
-
type: new_relic
-
account_id: Your New Relic account ID
-
license_key: Your New Relic license key
-
inputs: The input sources to be forwarded to New Relic
KOTS configuration:

my_new_relic:
  type: new_relic
  inputs:
    - vector
    - internal_metrics
  account_id: ACCOUNT_ID
  api: events
  license_key: LICENSE_KEY
  request:
    retry_attempts: 0

Helm configuration:

aggregator:
  customConfig:
    sinks:
      my_new_relic:
        type: new_relic
        inputs:
          - vector
          - internal_metrics
        account_id: ACCOUNT_ID
        api: events
        license_key: LICENSE_KEY
        request:
          retry_attempts: 0
Replace the following:
-
ACCOUNT_ID: The New Relic account ID -
LICENSE_KEY: The New Relic license key
Elasticsearch
The Elasticsearch sink requires the following configuration to forward log streams:
-
type: elasticsearch
-
inputs: The input sources to be forwarded to Elasticsearch
-
endpoint: The URL of the Elasticsearch server
KOTS configuration:

my_elasticsearch:
  type: elasticsearch
  inputs:
    - vector
    - internal_metrics
  endpoint: http://ENDPOINT_URL:9000
  request:
    retry_attempts: 0

Helm configuration:

aggregator:
  customConfig:
    sinks:
      my_elasticsearch:
        type: elasticsearch
        inputs:
          - vector
          - internal_metrics
        endpoint: http://ENDPOINT_URL:9000
        request:
          retry_attempts: 0
Replace ENDPOINT_URL with the URL of your Elasticsearch server.
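If your Elasticsearch cluster requires credentials or a specific target index, the sink accepts additional options. The values below (ELASTIC_USER, ELASTIC_PASSWORD, the index pattern) are placeholders, and the exact fields may vary by Vector version; consult the Vector Sinks reference for your version.

```yaml
my_elasticsearch:
  type: elasticsearch
  inputs:
    - vector
    - internal_metrics
  endpoint: http://ENDPOINT_URL:9000
  auth:
    strategy: basic
    user: ELASTIC_USER
    password: ELASTIC_PASSWORD
  bulk:
    # Write events to a dated index, for example mission-control-2024-01-01.
    index: mission-control-%Y-%m-%d
  request:
    retry_attempts: 0
```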
Datadog
The Datadog sink requires the following configuration to forward metrics data:
-
type: datadog_metrics
-
inputs: The input sources to be forwarded to Datadog
-
endpoint: The URL of the Datadog server
KOTS configuration:

my_datadog:
  type: datadog_metrics
  inputs:
    - vector
    - internal_metrics
  endpoint: http://ENDPOINT_URL:8080
  request:
    retry_attempts: 0

Helm configuration:

aggregator:
  customConfig:
    sinks:
      my_datadog:
        type: datadog_metrics
        inputs:
          - vector
          - internal_metrics
        endpoint: http://ENDPOINT_URL:8080
        request:
          retry_attempts: 0
Replace ENDPOINT_URL with the URL of the Datadog server.
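Depending on your Vector version, the datadog_metrics sink typically also requires a Datadog API key, and supports targeting a Datadog site instead of a custom endpoint. This is a hedged sketch with a placeholder key (DATADOG_API_KEY); verify the field names against the Vector Sinks reference for your version.

```yaml
my_datadog:
  type: datadog_metrics
  inputs:
    - vector
    - internal_metrics
  # Most Vector versions require an API key for this sink.
  default_api_key: DATADOG_API_KEY
  # Optionally target a specific Datadog site rather than a custom endpoint.
  site: datadoghq.com
  request:
    retry_attempts: 0
```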
Configure custom transforms
Note: Custom transforms are set globally.
KOTS installation
To configure a custom transform, do the following:
-
Identify your observability tool and obtain the necessary credentials.
-
Copy the YAML for your transform configuration from the Vector Transforms reference and paste it into your IDE.
-
Define the inputs for your custom transform YAML configuration. The default sources available are vector, internal_metrics, kube_state_metrics, cass_operator_metrics, and vector_with_defaults for logs.
-
Sign in to the KOTS admin console.
-
Click Config.
-
In the navigation menu, click Observability - Custom pipelines.
-
Paste your modified YAML into Custom Vector Aggregator Transforms.
transforms:
  my_transform_id:
    type: filter
    inputs:
      - SOURCE_OR_TRANSFORM_ID
    condition: '.cluster_id == "CLUSTER_ID"'

Replace the following:
-
SOURCE_OR_TRANSFORM_ID: The ID of the source or transform to read events from
-
CLUSTER_ID: The ID of the cluster you want to filter
-
Click Save.
-
Click Deploy.
Helm installation
For Helm-based installations, you configure custom transforms by adding them to your values.yaml file under the aggregator.customConfig.transforms section.
To configure a custom transform, do the following:
-
Identify your observability tool and obtain the necessary credentials.
-
Copy the YAML for your transform configuration from the Vector Transforms reference and paste it into your IDE.
-
Define the inputs for your custom transform YAML configuration. The default sources available are vector, internal_metrics, kube_state_metrics, cass_operator_metrics, and vector_with_defaults for logs.
-
Add your custom transform configuration to your values.yaml file under the aggregator.customConfig.transforms section:

aggregator:
  customConfig:
    transforms:
      my_transform_id:
        type: filter
        inputs:
          - SOURCE_OR_TRANSFORM_ID
        condition: '.cluster_id == "CLUSTER_ID"'

Replace the following:
-
SOURCE_OR_TRANSFORM_ID: The ID of the source or transform to read events from
-
CLUSTER_ID: The ID of the cluster you want to filter
-
Apply the updated configuration:
helm upgrade mission-control oci://registry.replicated.com/mission-control/mission-control --namespace mission-control -f values.yaml

Replace mission-control with your release name if different.
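Beyond filter, Vector's remap transform (which runs a VRL program against each event) is a common way to reshape or enrich events before they reach a sink. This sketch uses a hypothetical transform ID (add_environment) and a placeholder input; adapt the VRL source to your needs.

```yaml
aggregator:
  customConfig:
    transforms:
      add_environment:
        type: remap
        inputs:
          - SOURCE_OR_TRANSFORM_ID
        # VRL program that tags every event with a static label.
        source: |
          .environment = "production"
```

A custom sink can then list add_environment in its inputs to receive the enriched events.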