| title | Trace-log correlation |
|---|---|
| linkTitle | Trace-log correlation |
| weight | 35 |
| description | Learn how OBI correlates application logs with distributed traces for faster debugging and troubleshooting. |
| cSpell:ignore | BPFFS PYTHONUNBUFFERED ringbuffer |
OpenTelemetry eBPF Instrumentation (OBI) correlates application logs with distributed traces by enriching JSON logs with trace context. OBI does not export logs; it writes enriched logs back to the same stream while traces are exported via OTLP.
Trace-log correlation connects two complementary observability signals:
- Traces: Show the flow of requests across services with timing and structure
- Logs: Provide detailed event information and application state
With OBI trace-log correlation, logs from instrumented processes are automatically enriched with trace context:
- Trace ID: Links a log entry to the distributed trace
- Span ID: Links a log entry to a specific trace span
This enables your observability backend to correlate logs with their originating traces without any code changes to your application.
OBI uses eBPF to inject trace context into application logs at the kernel level:
- Trace capturing: OBI captures trace context (trace ID and span ID) for all traced operations
- Log interception: OBI intercepts write syscalls to capture application logs
- Context injection: For JSON-formatted logs, OBI injects `trace_id` and `span_id` fields
- Log export: Enriched logs keep flowing through your existing logging pipeline
- Backend linking: Your observability backend links logs to traces using these IDs
OBI performs correlation at the kernel level without modifying application binaries:
- Uses kernel eBPF probes to intercept write operations
- Maintains file descriptor caching for performance
- Works with any logging framework that writes JSON logs
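The injection step can be illustrated with a small Python sketch. This is not how OBI is implemented (the real enrichment happens in eBPF at the write syscall), but it shows the transformation: JSON log lines gain `trace_id` and `span_id` fields, while anything that is not a JSON object passes through untouched.

```python
import json


def enrich_log_line(line: str, trace_id: str, span_id: str) -> str:
    """Mimic OBI's context injection: add trace fields to JSON log
    objects, pass plain-text lines through unchanged."""
    try:
        record = json.loads(line)
    except json.JSONDecodeError:
        return line  # plain text: not enriched
    if not isinstance(record, dict):
        return line  # JSON, but not an object: not enriched
    record["trace_id"] = trace_id
    record["span_id"] = span_id
    return json.dumps(record)


enriched = enrich_log_line(
    '{"level": "info", "message": "Request processed"}',
    "4bf92f3577b34da6a3ce929d0e0e4736",
    "00f067aa0ba902b7",
)
```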
Trace-log correlation is enabled when trace export is configured and log enrichment is enabled for selected services.
```yaml
# Enable trace export
otel_traces_export:
  endpoint: http://otel-collector:4318/v1/traces

# Select services to instrument
discovery:
  instrument:
    - open_ports: '8380'

# Enable log enrichment for the same services
ebpf:
  log_enricher:
    services:
      - service:
          - open_ports: '8380'
```

Log enrichment behavior can be further configured under `ebpf.log_enricher`:

- `cache_ttl`: time-to-live for cached file descriptors
- `cache_size`: maximum number of cached file descriptors
- `async_writer_workers`: number of async writer shards
- `async_writer_channel_len`: queue size per shard
OBI enriches JSON logs for services listed under `ebpf.log_enricher.services`. Keep service selectors aligned with `discovery.instrument` so enrichment tracks the same processes.
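As an illustration, the tuning knobs above might be combined like this. The values shown are examples, not documented defaults:

```yaml
ebpf:
  log_enricher:
    services:
      - service:
          - open_ports: '8380'
    cache_ttl: 30m                 # drop cached file descriptors after 30 minutes
    cache_size: 4096               # cap the file descriptor cache
    async_writer_workers: 4        # number of async writer shards
    async_writer_channel_len: 1024 # queue size per shard
```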
Trace-log correlation requires JSON-formatted logs. OBI injects `trace_id` and `span_id` fields into JSON log objects:
Before OBI:

```json
{ "level": "info", "message": "Request processed", "duration_ms": 125 }
```

After OBI enrichment:

```json
{
  "level": "info",
  "message": "Request processed",
  "duration_ms": 125,
  "trace_id": "4bf92f3577b34da6a3ce929d0e0e4736",
  "span_id": "00f067aa0ba902b7"
}
```

Plain text logs are passed through unchanged and are not enriched with trace context.
The log enricher only sees trace context when the log write happens on the request-handling thread. Runtimes that buffer stdout asynchronously can break this assumption.
- Python in Docker commonly needs `PYTHONUNBUFFERED=1`
- .NET `Console.Out` is buffered by default when stdout is a pipe; use a `StreamWriter` with `AutoFlush = true`
- ASP.NET Core's default `Microsoft.Extensions.Logging.AddConsole()` pipeline is not compatible because it writes from a background thread
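As a minimal sketch, a Python service can keep log writes on the request-handling thread by flushing each JSON line as it is emitted. Setting `PYTHONUNBUFFERED=1` achieves the same effect for all stdout writes:

```python
import json


def log_json(level: str, message: str, **fields) -> None:
    """Write one JSON log line and flush immediately, so the write
    syscall happens on the calling (request-handling) thread."""
    entry = {"level": level, "message": message, **fields}
    print(json.dumps(entry), flush=True)  # flush avoids stdout buffering


log_json("info", "Request processed", duration_ms=125)
```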
Traces must be exported and log enrichment enabled:

```yaml
otel_traces_export:
  endpoint: http://collector:4318/v1/traces # Required

ebpf:
  log_enricher:
    services:
      - service:
          - open_ports: '8380' # Required
```

Trace-log correlation requires Linux with specific kernel features:
- Linux kernel 6.0+ (required for trace-log correlation)
- Supported architectures: x86_64, ARM64
- BPFFS mount: The kernel must have the BPF filesystem mounted at `/sys/fs/bpf`
- Non-security-locked-down kernel: Requires a kernel that is not running in security lockdown mode (typical for most production distributions)
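The prerequisites above can be spot-checked from a shell. Output varies by distribution, and the lockdown file is only present on kernels built with lockdown support:

```shell
# Kernel version (needs 6.0 or newer)
uname -r

# Architecture (x86_64 or aarch64)
uname -m

# BPF filesystem mounted at /sys/fs/bpf
mount | grep /sys/fs/bpf || echo "bpffs not mounted"

# Lockdown state, if exposed; "[none]" means not locked down
cat /sys/kernel/security/lockdown 2>/dev/null || true
```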
Applications must use a logging framework configured to output JSON. Examples:
{{< tabpane text=true persist=lang >}}
{{% tab header="Python" lang=python %}}

```python
import json
import logging

class JSONFormatter(logging.Formatter):
    def format(self, record):
        log_entry = {
            'timestamp': self.formatTime(record),
            'level': record.levelname,
            'message': record.getMessage(),
            'module': record.module,
        }
        return json.dumps(log_entry)

logger = logging.getLogger()
handler = logging.StreamHandler()
handler.setFormatter(JSONFormatter())
logger.addHandler(handler)
```

{{% /tab %}}
{{% tab header="Go (using zap)" lang=go %}}

```go
import "go.uber.org/zap"

logger, _ := zap.NewProduction() // Outputs JSON by default
defer logger.Sync()

logger.Info("Request processed", zap.Duration("duration", 125*time.Millisecond))
```

{{% /tab %}}
{{% tab header="Java (using Logback)" lang=java %}}

```xml
<appender name="FILE" class="ch.qos.logback.core.ConsoleAppender">
  <encoder class="net.logstash.logback.encoder.LogstashEncoder"/>
</appender>
```

{{% /tab %}}
{{% tab header="Node.js (using pino)" lang=javascript %}}

```javascript
const pino = require('pino');
const logger = pino();

logger.info({ duration_ms: 125 }, 'Request processed');
```

{{% /tab %}}
{{< /tabpane >}}
OBI enriches logs in-place. Use your existing log forwarder or collector to ship logs to your backend.
- Minimal overhead: Correlation uses eBPF kernel probes with efficient file descriptor caching
- Cache limits: File descriptor cache has size and TTL limits to prevent unbounded memory usage
- Async processing: Log enrichment uses asynchronous workers to avoid overflowing the kernel ringbuffer
- JSON only: Plain text logs are not enriched with trace context
- File descriptor cache: Cached for performance, with configurable TTL (default: 30 minutes)
- Span-aligned only: Logs enriched only while a span is active; logs outside span scope are not enriched.
- Verify JSON format: Ensure the application outputs valid JSON logs

  ```shell
  # Check for malformed JSON
  cat app.log | jq empty && echo "Valid JSON" || echo "Invalid JSON"
  ```

- Verify trace export and log enrichment:

  ```yaml
  otel_traces_export:
    endpoint: http://collector:4318/v1/traces

  ebpf:
    log_enricher:
      services:
        - service:
            - open_ports: '8380'
  ```

- Verify Linux kernel: Trace-log correlation requires Linux

  ```shell
  uname -s # Must return "Linux"
  ```

- Check log pipeline: Verify your log forwarder is shipping logs to your backend
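To confirm enrichment end to end, a short Python check (a sketch; the `app.log` path is an assumption about where your logs land) can count recent log lines that carry both injected fields:

```python
import json


def count_enriched(lines) -> int:
    """Count JSON log lines that carry both injected trace fields."""
    enriched = 0
    for line in lines:
        try:
            record = json.loads(line)
        except json.JSONDecodeError:
            continue  # plain-text lines are never enriched
        if isinstance(record, dict) and "trace_id" in record and "span_id" in record:
            enriched += 1
    return enriched


# Example usage against a captured log file (path is an assumption):
# with open("app.log") as f:
#     print(count_enriched(f), "enriched log lines")
```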
- Set up export destinations for traces and metrics
- Explore OBI as a Collector receiver for centralized processing