| title | OBI as OpenTelemetry Collector receiver |
|---|---|
| linkTitle | Collector receiver |
| weight | 75 |
| description | Learn how to use OBI as a receiver component in the OpenTelemetry Collector for centralized telemetry processing. |
| cSpell:ignore | bpftool PERFMON |
Starting with version v0.5.0, OBI can run as a receiver component within the OpenTelemetry Collector. This integration enables you to leverage the Collector's powerful processing pipeline while benefiting from OBI's zero-code eBPF instrumentation.
Running OBI as a Collector receiver combines the strengths of both tools:
From OBI:
- Zero-code instrumentation using eBPF
- Automatic service discovery
- Low overhead observability
From OpenTelemetry Collector:
- Unified telemetry pipeline
- Rich processors (sampling, filtering, transformation)
- Multiple exporters (backends, formats)
- Centralized configuration
Use OBI as a Collector receiver when:

- Centralized processing: You want all telemetry to flow through a unified pipeline
- Complex processing: Need advanced sampling, filtering, or enrichment that the Collector provides
- Multiple backends: Sending data to multiple observability platforms
- Compliance requirements: Need telemetry processing for data redaction or PII removal
- Simplified deployment: Single binary instead of separate OBI + Collector processes
Use standalone OBI instead when:

- Simple deployments: Direct export to a single backend is sufficient
- Edge environments: Limited resources where running the full Collector is too heavy
- Testing/development: Quick setup without Collector configuration
**Standalone OBI:**

```mermaid
graph TD
    App[Application]
    OBI[OBI<br/>eBPF instrumentation]
    Backend[Backend]

    App --> OBI
    OBI -->|OTLP| Backend
```
**OBI as a Collector receiver:**

```mermaid
graph TD
    App[Application]

    subgraph Collector[OpenTelemetry Collector]
        OBI[OBI Receiver<br/>eBPF instrumentation]
        Processors[Processors<br/>sampling, filtering, enrichment]
        Exporters[Exporters<br/>multiple backends]
    end

    Backend1[Backend 1]
    Backend2[Backend 2]
    Backend3[Backend 3]

    App --> OBI
    OBI --> Processors
    Processors --> Exporters
    Exporters --> Backend1
    Exporters --> Backend2
    Exporters --> Backend3
```
To use OBI as a Collector receiver, you need to build a custom Collector binary that includes the OBI receiver component. This is done using the OpenTelemetry Collector Builder (OCB), a tool that generates a custom Collector binary with your specified components. If you don't have OCB installed, see the installation instructions.
Requirements:
- Go 1.25 or later
- OCB installed and available on your PATH
- A local checkout of the OpenTelemetry eBPF Instrumentation repository at v0.6.0 or later
- Docker (for generating eBPF files) or a C compiler, clang, and eBPF headers
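Before building, it can help to confirm the required tools are actually available. A minimal preflight sketch that checks the PATH for the tool names listed above (`go`, `ocb`, `docker`):

```shell
# Preflight: check that each required build tool is on the PATH.
for tool in go ocb docker; do
  if command -v "$tool" >/dev/null 2>&1; then
    echo "$tool: found"
  else
    echo "$tool: MISSING"
  fi
done
```

Any line reporting `MISSING` must be resolved before continuing.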
Build Steps:
1. Generate eBPF files in your local OBI source directory:

    ```shell
    cd /path/to/obi
    make docker-generate
    # or if you have build tools installed locally:
    # make generate
    ```

    This step must be completed before building with `ocb`. It generates the necessary eBPF type bindings that the OBI receiver requires.
2. Create a `builder-config.yaml`:

    ```yaml
    dist:
      name: otelcol-obi
      description: OpenTelemetry Collector with OBI receiver
      output_path: ./dist

    exporters:
      - gomod: go.opentelemetry.io/collector/exporter/debugexporter v0.142.0
      - gomod: go.opentelemetry.io/collector/exporter/otlpexporter v0.142.0

    processors:
      - gomod: go.opentelemetry.io/collector/processor/batchprocessor v0.142.0

    receivers:
      - gomod: go.opentelemetry.io/obi v0.6.0
        import: go.opentelemetry.io/obi/collector

    providers:
      - gomod: go.opentelemetry.io/collector/confmap/provider/envprovider v1.18.0
      - gomod: go.opentelemetry.io/collector/confmap/provider/fileprovider v1.18.0
      - gomod: go.opentelemetry.io/collector/confmap/provider/httpprovider v1.18.0
      - gomod: go.opentelemetry.io/collector/confmap/provider/httpsprovider v1.18.0
      - gomod: go.opentelemetry.io/collector/confmap/provider/yamlprovider v1.18.0

    replaces:
      - go.opentelemetry.io/obi => /path/to/obi
    ```

    Replace `/path/to/obi` with the actual path to your OBI source directory. The `replaces:` section tells `ocb` to use your local OBI source instead of fetching from the public module repository, which is necessary because the published OBI module does not include the generated BPF code.

    **Version selection:** You must specify versions for each component. The example above uses versions that are known to be compatible with OBI v0.6.0. If you're using a different OBI version or want to use newer component versions, check your OBI repository's `go.mod` file to see which collector component versions it depends on, then update the versions in your builder config accordingly.
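The version lookup can be scripted. This sketch greps a `go.mod` for collector module pins; the file written here is a small illustrative sample, not OBI's actual `go.mod`:

```shell
# Write a small sample go.mod (illustrative content, not OBI's real go.mod).
cat > /tmp/sample-go.mod <<'EOF'
module example.com/custom-collector

require (
	go.opentelemetry.io/collector/confmap v1.18.0
	go.opentelemetry.io/collector/exporter/otlpexporter v0.142.0
)
EOF

# List every pinned collector module so the versions can be copied
# into the matching entries of builder-config.yaml.
grep "go.opentelemetry.io/collector" /tmp/sample-go.mod
```

Run the same `grep` against your real OBI checkout's `go.mod` to find the versions to pin.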
3. Build the custom Collector:

    ```shell
    ocb --config builder-config.yaml
    ```

    The compiled binary will be in `./dist/otelcol-obi`.
Create an OpenTelemetry Collector configuration that includes the OBI receiver:
```yaml
# collector-config.yaml
receivers:
  # OBI receiver for eBPF instrumentation
  obi:
    # Listen on port 9999 for HTTP traffic to instrument
    open_port: '9999'
    # Enable metrics collection for network and application features
    meter_provider:
      features: [network, application]
    # Optional: Service discovery configuration
    # discovery:
    #   poll_interval: 30s

processors:
  # Batch telemetry for efficiency
  batch:
    timeout: 1s
    send_batch_size: 1024

exporters:
  # Export traces locally for debugging
  debug:
    verbosity: detailed
  # Export to generic OTLP backend
  otlp:
    endpoint: https://backend.example.com:4317
    headers:
      api-key: ${env:OTLP_API_KEY}

service:
  pipelines:
    # Traces pipeline with OBI instrumentation
    traces:
      receivers: [obi]
      processors: [batch]
      exporters: [debug, otlp]
    # Metrics pipeline
    metrics:
      receivers: [obi]
      processors: [batch]
      exporters: [debug, otlp]
```

Run the Collector:

```shell
sudo ./otelcol-obi --config collector-config.yaml
```

OBI requires elevated privileges to instrument processes using eBPF. The Collector must run with sudo or have the appropriate Linux capabilities (`CAP_SYS_ADMIN`, `CAP_DAC_READ_SEARCH`, `CAP_NET_RAW`, `CAP_SYS_PTRACE`, `CAP_PERFMON`, `CAP_BPF`) to:
- Attach eBPF probes to running processes
- Access process memory and system information
- Set memory locks for eBPF programs
- Capture network and application telemetry
Without these permissions, OBI cannot instrument processes and will fail to start.
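When running the binary directly, you can sanity-check what the current process is allowed to do before starting the Collector. A Linux-only sketch that reads the effective capability mask:

```shell
# Print the effective capability bitmask of the current process (Linux only).
# A root shell typically shows a mask with most bits set; an unprivileged
# shell shows far fewer, which predicts an OBI startup failure.
grep CapEff /proc/self/status
```

Decode the mask with `capsh --decode=<mask>` if that utility is installed.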
| Feature | Standalone OBI | OBI as Receiver |
|---|---|---|
| eBPF instrumentation | ✅ Yes | ✅ Yes |
| Service discovery | ✅ Yes | ✅ Yes |
| Traces collection | ✅ Yes | ✅ Yes |
| Metrics collection | ✅ Yes | ✅ Yes |
| JSON log enrichment | ✅ Yes | ✅ Yes |
| Direct OTLP export | ✅ Yes | ❌ No (via Collector) |
| Collector processors | ❌ No | ✅ Yes |
| Multiple exporters | ⚠️ Limited | ✅ Full support |
| Tail sampling for traces | ❌ No | ✅ Yes |
| Data transformation | ⚠️ Basic | ✅ Advanced |
| Resource overhead | Lower | Moderate |
| Configuration complexity | Simple | More complex |
| Single binary deployment | ✅ Yes | ✅ Yes |
To deploy a Collector with OBI receiver on each node, you first need to package the custom collector binary into a container image:
1. Create a `Dockerfile`:

    ```dockerfile
    FROM alpine:latest

    # Install required tools
    RUN apk --no-cache add ca-certificates

    # Copy the custom collector binary built with OCB
    COPY dist/otelcol-obi /otelcol-obi

    # Make it executable
    RUN chmod +x /otelcol-obi

    ENTRYPOINT ["/otelcol-obi"]
    ```
2. Build and push the image:

    ```shell
    docker build -t my-registry/otelcol-obi:v0.6.0 .
    docker push my-registry/otelcol-obi:v0.6.0
    ```
3. Deploy the DaemonSet:

    ```yaml
    # otel-collector-daemonset.yaml
    apiVersion: apps/v1
    kind: DaemonSet
    metadata:
      name: otel-collector-obi
      namespace: monitoring
    spec:
      selector:
        matchLabels:
          app: otel-collector-obi
      template:
        metadata:
          labels:
            app: otel-collector-obi
        spec:
          hostNetwork: true
          hostPID: true
          containers:
            - name: otel-collector
              image: my-registry/otelcol-obi:v0.6.0
              args:
                - --config=/conf/collector-config.yaml
              securityContext:
                privileged: true
                capabilities:
                  add:
                    - SYS_ADMIN
                    - SYS_PTRACE
                    - NET_RAW
                    - DAC_READ_SEARCH
                    - PERFMON
                    - BPF
                    - CHECKPOINT_RESTORE
              volumeMounts:
                - name: config
                  mountPath: /conf
                - name: sys
                  mountPath: /sys
                  readOnly: true
                - name: proc
                  mountPath: /host/proc
                  readOnly: true
              resources:
                limits:
                  memory: 1Gi
                  cpu: '1'
                requests:
                  memory: 512Mi
                  cpu: 500m
          volumes:
            - name: config
              configMap:
                name: otel-collector-config
            - name: sys
              hostPath:
                path: /sys
            - name: proc
              hostPath:
                path: /proc
    ```
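The DaemonSet above mounts a ConfigMap named `otel-collector-config` that is not shown here. A minimal sketch of what it could contain, assuming a debug-only pipeline (adapt the embedded collector configuration to your own setup):

```yaml
# Hypothetical ConfigMap holding the collector configuration mounted at /conf.
apiVersion: v1
kind: ConfigMap
metadata:
  name: otel-collector-config
  namespace: monitoring
data:
  collector-config.yaml: |
    receivers:
      obi:
        open_port: '9999'
    processors:
      batch:
    exporters:
      debug:
        verbosity: detailed
    service:
      pipelines:
        traces:
          receivers: [obi]
          processors: [batch]
          exporters: [debug]
```

Apply the ConfigMap before the DaemonSet so the volume mount resolves.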
To use this configuration, you must add the `attributes` and `filter` processors to your `builder-config.yaml`:

```yaml
processors:
  - gomod: github.com/open-telemetry/opentelemetry-collector-contrib/processor/attributesprocessor v0.142.0
  - gomod: github.com/open-telemetry/opentelemetry-collector-contrib/processor/filterprocessor v0.142.0
```

Then use Collector processors to redact PII before export:
```yaml
receivers:
  obi:
    discovery:
      poll_interval: 30s

processors:
  batch:
    timeout: 1s
    send_batch_size: 1024
  # Redact sensitive attributes
  attributes:
    actions:
      - key: http.url
        action: delete
      - key: user.email
        action: delete
      - key: credit_card
        pattern: \d{4}[- ]?\d{4}[- ]?\d{4}[- ]?\d{4}
        action: hash
  # Remove spans with sensitive operations
  filter:
    traces:
      span:
        - attributes["operation"] == "process_payment"
        - attributes["internal"] == true

exporters:
  debug:
    verbosity: detailed
  # Export to OTLP backend
  otlp:
    endpoint: backend.example.com:4317

service:
  pipelines:
    traces:
      receivers: [obi]
      processors: [attributes, filter, batch]
      exporters: [debug, otlp]
```

Implement intelligent sampling using the Collector. This example requires the `tail_sampling` processor from contrib. Add it to your `builder-config.yaml`:
```yaml
processors:
  - gomod: github.com/open-telemetry/opentelemetry-collector-contrib/processor/tailsamplingprocessor v0.142.0
```

Configuration example:
```yaml
receivers:
  obi:
    open_port: '9999'

processors:
  batch:
    timeout: 1s
    send_batch_size: 1024
  # Tail-based sampler keeps:
  # - All traces with errors
  # - Slow traces (> 1s)
  # - 5% of successful fast traces
  tail_sampling:
    policies:
      - name: errors
        type: status_code
        status_code:
          status_codes: [ERROR]
      - name: slow_traces
        type: latency
        latency:
          threshold_ms: 1000
      - name: sample_success
        type: probabilistic
        probabilistic:
          sampling_percentage: 5

exporters:
  debug:
    verbosity: detailed
  otlp:
    endpoint: backend.example.com:4317

service:
  pipelines:
    traces:
      receivers: [obi]
      processors: [tail_sampling, batch]
      exporters: [debug, otlp]
```

Resource usage for OBI as a Collector receiver varies significantly based on:
- Telemetry volume: Number of instrumented services and request rates
- Pipeline complexity: Number and type of processors configured
- Exporter configuration: Batch sizes, queue depths, and number of backends
- Service discovery scope: Number of processes being monitored
Like standalone OBI, the eBPF instrumentation provides minimal overhead. The Collector pipeline adds additional resource requirements that depend on your configuration.
Recommendations:
- Start with the resource limits shown in the Kubernetes deployment example and adjust based on observed usage
- Enable Collector self-monitoring to track actual resource consumption
- Use the performance tuning options to optimize OBI's eBPF component
- Monitor memory and CPU usage in production and adjust resource requests/limits accordingly
- Use the batch processor: Always include the batch processor to reduce export overhead
- Limit pipeline processors: Each processor adds latency and CPU usage
- Configure buffering: Adjust queue sizes for high-volume environments:

  ```yaml
  exporters:
    otlp:
      sending_queue:
        enabled: true
        num_consumers: 10
        queue_size: 5000
  ```

- Monitor Collector metrics: Enable Collector self-monitoring:

  ```yaml
  service:
    telemetry:
      metrics:
        address: :8888
  ```
- Single node only: OBI receiver instruments only local processes (same node as Collector)
- Privileged access required: Collector must run with eBPF capabilities
- Linux only: eBPF is Linux-specific; Windows and macOS not supported
- Collector restart: Changes to OBI configuration require Collector restart
If you encounter API incompatibility errors or "unknown revision" errors during build:
1. Ensure your OBI source directory is up to date:

    ```shell
    cd /path/to/obi
    git pull origin main # or your branch
    ```

2. Ensure version pins are not specified in your builder config for collector components, or that they match the versions defined in your OBI `go.mod` file.

3. Check your OBI `go.mod` file to see which collector component versions it depends on:

    ```shell
    grep "go.opentelemetry.io/collector" go.mod
    ```

    Then add those same versions to your `builder-config.yaml` for other components.
OBI requires elevated privileges to run. You have two options:
Run the Collector with `sudo`:

```shell
sudo ./otelcol-obi --config collector-config.yaml
```

Alternatively, use `setcap` to grant only the required capabilities:

```shell
sudo setcap cap_sys_admin,cap_sys_ptrace,cap_dac_read_search,cap_net_raw,cap_perfmon,cap_bpf,cap_checkpoint_restore=ep ./otelcol-obi
```

Then run without sudo:

```shell
./otelcol-obi --config collector-config.yaml
```

Verify the capabilities were set:

```shell
getcap ./otelcol-obi
```

In Kubernetes, ensure the Pod's security context has the required Linux capabilities:

```yaml
securityContext:
  capabilities:
    add:
      - SYS_ADMIN
      - SYS_PTRACE
      - BPF
      - NET_RAW
      - CHECKPOINT_RESTORE
      - DAC_READ_SEARCH
      - PERFMON
```

If the Collector fails at startup with a permissions error, it doesn't have the required capabilities. Ensure you're running with sudo or the proper Kubernetes security context shown above.
1. Check the OBI receiver configuration:

    ```yaml
    receivers:
      obi:
        discovery:
          poll_interval: 30s
        instrument:
          - exe_path: /path/to/app # Verify the path is correct
    ```

2. Verify service discovery in the Collector logs:

    ```shell
    grep "discovered service" collector.log
    ```

3. Confirm eBPF programs are loaded using `bpftool`:

    ```shell
    # In the Collector container
    bpftool prog show
    ```
Causes: Large telemetry volume or instrumenting too many processes.

Solutions:

1. Configure appropriate batch sizes to reduce export overhead:

    ```yaml
    processors:
      batch:
        timeout: 200ms
        send_batch_size: 512
        send_batch_max_size: 1024
    ```

2. Be more selective with instrumentation and limit which services OBI instruments:

    ```yaml
    receivers:
      obi:
        instrument:
          targets:
            - service_name: 'web-app'
            - service_name: 'api-service'
    ```

    This reduces telemetry volume by only instrumenting specific services instead of all processes.
Follow the configuration section to build a Collector with OBI receiver.
Map your standalone OBI configuration to Collector format:
Standalone OBI:

```yaml
# obi-config.yaml
otel_traces_export:
  endpoint: http://backend:4318
open_port: 8080
```

Collector with OBI receiver:

```yaml
# collector-config.yaml
receivers:
  obi:
    instrument:
      - open_port: 8080

processors:
  batch:

exporters:
  otlp:
    endpoint: backend:4317

service:
  pipelines:
    traces:
      receivers: [obi]
      processors: [batch]
      exporters: [otlp]
```

Then switch over:

- Stop standalone OBI
- Start the Collector with the OBI receiver
- Verify telemetry flow in your backend
- Explore Collector processors for data transformation
- Learn about Collector deployment patterns
- Configure sampling strategies for traces
- Set up service discovery to auto-instrument services