| title | Deploy OBI in Kubernetes with Helm |
|---|---|
| linkTitle | Helm chart |
| description | Learn how to deploy OBI as a Helm chart in Kubernetes. |
| weight | 2 |
Note
For more details about the various Helm configuration options, check out the OBI Helm chart documentation or browse the chart on Artifact Hub. For detailed configuration parameters, see the `values.yaml` file.
Contents:
- Deploying OBI from Helm
- Configuring OBI
- Configuring OBI metadata
- Providing secrets to the Helm configuration
First, you need to add the OpenTelemetry helm repository to Helm:
```sh
helm repo add open-telemetry https://open-telemetry.github.io/opentelemetry-helm-charts
```

The following command deploys an OBI DaemonSet with the default configuration in
the `obi` namespace:

```sh
helm install obi -n obi --create-namespace open-telemetry/opentelemetry-ebpf-instrumentation
```

The default OBI configuration:
- exports the metrics as Prometheus metrics on the Pod HTTP port `9090`, path `/metrics`
- tries to instrument all the applications in your cluster
- only provides application-level metrics and excludes network-level metrics by default
- decorates the metrics with Kubernetes metadata labels, for example `k8s.namespace.name` or `k8s.pod.name`
You might want to override the default configuration of OBI, for example to export the metrics and/or spans via OpenTelemetry instead of Prometheus, or to restrict which services are instrumented.
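As a sketch of the first case, a minimal override could set the OTLP endpoint through the chart's `env` section, which OBI's exporter picks up via the standard `OTEL_EXPORTER_OTLP_ENDPOINT` variable (the collector URL below is a hypothetical placeholder, not a value from this guide):

```yaml
env:
  # hypothetical address; replace with your own OTLP endpoint
  OTEL_EXPORTER_OTLP_ENDPOINT: 'http://my-collector:4318'
```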
You can override the default OBI configuration options with your own values.
For example, create a `helm-obi.yml` file with a custom configuration:

```yaml
config:
  data:
    # Contents of the actual OBI configuration file
    discovery:
      instrument:
        - k8s_namespace: demo
        - k8s_namespace: blog
    routes:
      unmatched: heuristic
```

The `config.data` section contains an OBI configuration file, documented in the
OBI configuration options documentation.
Then pass the overridden configuration to the `helm` command with the `-f` flag.
For example:
```sh
helm install obi open-telemetry/opentelemetry-ebpf-instrumentation -f helm-obi.yml
```

or, if the OBI chart was previously deployed:

```sh
helm upgrade obi open-telemetry/opentelemetry-ebpf-instrumentation -f helm-obi.yml
```

If OBI exports the data using the Prometheus exporter, you might need to
override the OBI Pod annotations to make it discoverable by your Prometheus
scraper. You can add the following section to the example `helm-obi.yml` file:
```yaml
podAnnotations:
  prometheus.io/scrape: 'true'
  prometheus.io/path: '/metrics'
  prometheus.io/port: '9090'
```

Analogously, the Helm chart lets you override names, labels, and annotations for multiple resources involved in the deployment of OBI, such as service accounts, cluster roles, and security contexts. The OBI Helm chart documentation describes the available configuration options.
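If your Prometheus server does not already honor these annotations, a typical pod-discovery scrape configuration looks roughly like the following sketch. The relabel rules for `prometheus.io/*` annotations are a common community convention, not part of the OBI chart:

```yaml
scrape_configs:
  - job_name: 'kubernetes-pods'
    kubernetes_sd_configs:
      - role: pod
    relabel_configs:
      # keep only pods annotated with prometheus.io/scrape: "true"
      - source_labels: [__meta_kubernetes_pod_annotation_prometheus_io_scrape]
        action: keep
        regex: true
      # honor a custom metrics path from prometheus.io/path
      - source_labels: [__meta_kubernetes_pod_annotation_prometheus_io_path]
        action: replace
        target_label: __metrics_path__
        regex: (.+)
      # honor a custom port from prometheus.io/port
      - source_labels: [__address__, __meta_kubernetes_pod_annotation_prometheus_io_port]
        action: replace
        regex: ([^:]+)(?::\d+)?;(\d+)
        replacement: $1:$2
        target_label: __address__
```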
If you submit the metrics and traces directly to your observability backend via
the OpenTelemetry endpoint, you might need to provide credentials via the
`OTEL_EXPORTER_OTLP_HEADERS` environment variable.
The recommended way is to store the value in a Kubernetes Secret and then reference it from the Helm configuration through an environment variable.
For example, deploy the following secret:
```yaml
apiVersion: v1
kind: Secret
metadata:
  name: obi-secret
type: Opaque
stringData:
  otlp-headers: 'Authorization=Basic ....'
```

Then refer to it from the `helm-obi.yml` file via the `envValueFrom` section:
```yaml
env:
  OTEL_EXPORTER_OTLP_ENDPOINT: '<...your OTLP endpoint URL...>'
envValueFrom:
  OTEL_EXPORTER_OTLP_HEADERS:
    secretKeyRef:
      key: otlp-headers
      name: obi-secret
```