Ingress in Mission Control
The term Ingress refers to accessing your databases, your Mission Control installation, and the Kubernetes cluster itself from outside the cluster.
As of version 1.18, Mission Control supports multiple Ingress implementations and does not require a specific solution. Instead, it integrates with existing Ingress controllers to provide flexible connectivity options.
No Ingress by default
The default charts/values.yaml file does not enable any Ingress functionality. The Values.ingress configuration does not yet govern database ingress or ingress for auxiliary services. Configure database ingress as described in the cluster lifecycle documentation.
Ingress controller
The Ingress controller is a deployment in a Kubernetes cluster that receives external connections and routes them to other resources in the cluster. A service associated with this controller has an external IP assigned. This must be a public IP that is routable from other networks.
Configure a DNS record that maps the chosen domain or subdomain to this IP address, along with a wildcard record for its subdomains.
Mission Control supports multiple Ingress controller options depending on your environment. You can use the bundled HAProxy controller, leverage platform-specific controllers (GKE, OpenShift), or configure your own existing Ingress solution. The following sections describe the available options.
Mission Control’s default Ingress controller
Mission Control bundles the HAProxy Ingress controller in its Helm chart.
Configure HAProxy under the kubernetes-ingress key in the charts/values.yaml file.
To enable the HAProxy controller, add the following to your value overrides file:
kubernetes-ingress:
  enabled: true
By default, this creates a deployment of HAProxy with one replica that accepts only HTTPS connections on port 443.
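After upgrading your Helm release with this override, you can confirm that the HAProxy controller is running and that its service received an external IP. This is a sketch: the mission-control namespace is an assumption, and the exact service name depends on your Helm release name.

```shell
# Check that the HAProxy deployment is up and its LoadBalancer
# service has an EXTERNAL-IP assigned (namespace is an assumption)
kubectl get deploy,svc -n mission-control | grep kubernetes-ingress
```

If the EXTERNAL-IP column shows `<pending>`, your environment has no load-balancer provider to assign one.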
GKE Ingress controller
In GKE, enable the default Ingress controller.
An external IP address is assigned to a LoadBalancer service with a name ending in *-kubernetes-ingress, where * depends on the release name for your Helm installation.
This IP is accessible from the internet.
GKE does not perform any DNS setup by default.
OpenShift Ingress controller
OpenShift includes its own ingress solution.
Do not enable the HAProxy controller when you deploy to OpenShift.
Most OpenShift installations are assigned a domain that Routes are bound to, which does not require additional DNS setup.
Local Ingress controller
When you run Mission Control locally for demo or development purposes, you can use a kind cluster to deploy Mission Control.
In this case, enable the HAProxy controller.
You also need the cloud-provider-kind utility to ensure the Ingress service gets an external IP.
Edit /etc/hosts to map the domains that you want to use to this IP.
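For example, if cloud-provider-kind assigned 172.18.0.100 to the Ingress service (this address is hypothetical; use the one reported for your cluster), the /etc/hosts entries might look like:

```
172.18.0.100  mc.example.com
172.18.0.100  grafana.mc.example.com
172.18.0.100  vector.mc.example.com
```

Note that /etc/hosts does not support wildcard entries, so list each subdomain you intend to use.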
Platform ingress
The platform ingress in Mission Control provides access to Mission Control resources that are not databases. This feature is disabled by default.
Enable platform ingress by adding the following to your Helm overrides file:
ingress:
  enabled: true
  regionDomain: mc.example.com
  wildcardDomain: "*.mc.example.com"
The regionDomain is the base domain Mission Control uses to expose its components, such as mc.example.com.
The Mission Control UI is available at the regionDomain directly.
If enabled, Grafana is available at grafana.regionDomain.
The Mission Control Control Plane’s Vector aggregator is available at vector.regionDomain.
The platform ingress supports TLS and the Server Name Indication (SNI) TLS extension.
Mission Control uses the regionDomain in the Certificates it generates for TLS purposes.
Requests to Mission Control must include the correct Host header.
Web browsers set this automatically, but programmatic or scripted interaction (for example, with curl) requires explicit configuration.
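For example, before DNS is configured, you can still reach the UI with curl by pinning name resolution to the Ingress IP; curl then sends the matching Host header and SNI automatically. The IP address below is a placeholder:

```shell
# --resolve maps the hostname to the Ingress IP without DNS,
# so curl presents the correct Host header and SNI for routing
curl --resolve mc.example.com:443:203.0.113.10 https://mc.example.com/
```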
Mission Control creates multiple Ingress objects that expose the various services. In OpenShift, Mission Control also creates Ingress objects, but OpenShift translates these into Routes automatically.
DNS configuration
After enabling platform ingress and deploying Mission Control, configure DNS records in your DNS provider to route traffic to the Ingress controller.
First, get the external IP address of your Ingress service:
kubectl get svc -n <namespace> | grep ingress
Then create the following DNS records in your DNS provider (such as Route53, CloudDNS, or your domain registrar):
A    mc.example.com      INGRESS_SERVICE_EXTERNAL_IP
A    *.mc.example.com    INGRESS_SERVICE_EXTERNAL_IP
Replace INGRESS_SERVICE_EXTERNAL_IP with the external IP from the previous command.
These DNS records must be created outside of Kubernetes in your DNS management system.
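Once the records propagate, you can verify them from any machine with dig; both queries should return the Ingress service's external IP. The domain below is the example domain used above:

```shell
# The base domain and any wildcard subdomain should both resolve
# to the Ingress service's external IP
dig +short mc.example.com
dig +short grafana.mc.example.com
```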
Platform ingress in OpenShift configuration
In OpenShift, base the regionDomain on the cluster domain.
Find the cluster domain by running:
oc get ingress.config.openshift.io cluster -o jsonpath='{.spec.domain}'
This returns something like:
apps.userName.subSubDomain.subDomain.domain.com
The regionDomain needs one more level of subdomain, for example:
mc.apps.userName.subSubDomain.subDomain.domain.com
The UI is available on this FQDN directly.
Mission Control then adds subdomain prefixes to this domain, such as grafana. and vector., for the other services.
Note: Due to the automatic translation of Ingress to Route, Mission Control doesn't leverage OpenShift's wildcard domain functionality.
Ingress across Mission Control planes
Ingress can be configured only on the control plane. Data planes do not expose a UI or Grafana.
Vector aggregator connectivity
Data planes push observability data to the control plane’s Vector aggregator.
Configure the data plane to connect to the control plane’s Vector aggregator using the vector.regionDomain endpoint.
As of version 1.18, Mission Control uses Ingress as the default method for exposing the Vector aggregator service.
In earlier versions, the Vector aggregator used NodePort on port 30600, and data planes connected directly to control plane nodes using CP_WORKER_NODE_IP:30600.
Configure the data plane to use Ingress (recommended)

Update your data plane's helm-overrides.yaml file to connect to the control plane's Vector aggregator through Ingress:

aggregator:
  customConfig:
    sinks:
      control_plane_aggregator:
        type: vector
        address: https://vector.REGION_DOMAIN
        tls:
          enabled: true
          crt_file: /tmp/certs/aggregator-cp/tls.crt
          key_file: /tmp/certs/aggregator-cp/tls.key
          ca_file: /tmp/certs/aggregator-cp/ca.crt
          verify_certificate: true

Replace REGION_DOMAIN with your control plane's region domain.

The Vector-to-Vector connection uses mTLS. Configure the data plane's Vector with the control plane's Vector client certificate. For detailed certificate configuration steps, see Enable Vector aggregator TLS in data plane.
Use NodePort (legacy)

If you disable Ingress in the control plane, the aggregator remains accessible through NodePort 30600. To explicitly maintain this legacy behavior, add the following to your control plane's helm-overrides.yaml file:

aggregator:
  service:
    type: NodePort
    ports:
      - name: vector
        protocol: TCP
        port: 6000
        targetPort: 6000
        nodePort: 30600

The NodePort method is deprecated and will be removed in a future release. Migrate to the Ingress-based method.