diff --git a/content/en/profiler/_index.md b/content/en/profiler/_index.md
index a9ace1ae357..3553bde8ddf 100644
--- a/content/en/profiler/_index.md
+++ b/content/en/profiler/_index.md
@@ -74,6 +74,10 @@ Profiling your service to visualize all your stack traces in one place takes jus
{{< partial name="profiling/profiling-languages.html" >}}
+### Instrument your host (host profiling)
+
+{{< partial name="profiling/profiling-host.html" >}}
+
## Guide to using the profiler
The [Getting Started with Profiler][1] guide takes a sample service with a performance problem and shows you how to use Continuous Profiler to understand and fix the problem.
diff --git a/content/en/profiler/enabling/_index.mdoc.md b/content/en/profiler/enabling/_index.mdoc.md
index 3f44c07254f..72826cbd61d 100644
--- a/content/en/profiler/enabling/_index.mdoc.md
+++ b/content/en/profiler/enabling/_index.mdoc.md
@@ -9,6 +9,11 @@ content_filters:
     label: "Runtime"
     show_if:
       - prog_lang: ["java"]
+  - trait_id: mode
+    option_group_id: profiler_host_mode_options
+    label: "Mode"
+    show_if:
+      - prog_lang: ["host"]
 aliases:
- /tracing/faq/profiling_migration/
- /tracing/profiler/enabling/
@@ -82,6 +87,11 @@ further_reading:
{% partial file="profiler/enabling/ddprof.mdoc.md" /%}
{% /if %}
+
+{% if equals($prog_lang, "host") %}
+{% partial file="profiler/enabling/host.mdoc.md" /%}
+{% /if %}
+
## Not sure what to do next?
The [Getting Started with Profiler][1] guide takes a sample service with a performance problem and shows you how to use Continuous Profiler to understand and fix the problem.
diff --git a/content/en/profiler/enabling/full_host.md b/content/en/profiler/enabling/full_host.md
deleted file mode 100644
index cf0a7300bd2..00000000000
--- a/content/en/profiler/enabling/full_host.md
+++ /dev/null
@@ -1,99 +0,0 @@
----
-title: Enabling the Full-Host Profiler
-private: true
-further_reading:
- - link: 'getting_started/profiler'
- tag: 'Documentation'
- text: 'Getting Started with Profiler'
----
-
-{{< callout url="https://www.datadoghq.com/product-preview/full-host-profiler/" btn_hidden="false" header="Join the Preview!" >}}
-Full-Host is in Preview.
-{{< /callout >}}
-
-The Full-Host Profiler is an eBPF-based profiling solution built on OpenTelemetry that sends profiling data to Datadog using the Datadog Agent. It supports symbolication for compiled languages and is optimized for containerized environments such as Docker and Kubernetes.
-
-### Use cases
-
-The Full-Host Profiler is particularly valuable for:
-
-- Profiling open source software components that aren't instrumented with Datadog SDKs.
-- Analyzing performance across multi-language processes and runtimes.
-
-## Requirements
-
-Supported operating systems
-: Linux (5.4+ for amd64, 5.5+ for arm64)
-
-Supported architecture
-: `amd64` or `arm64` processors
-
-Serverless
-: `full-host` is not supported on serverless platforms, such as AWS Lambda.
-
-Debugging information
-: Symbols should be available locally or can be uploaded in CI with `datadog-ci`
-
-## Installation
-
-
-<div class="alert alert-info">Always set <code>DD_SERVICE</code> for each service you want to profile and identify separately. This ensures accurate attribution and more actionable profiling data. To learn more, see <a href="#service-naming">Service naming</a>.</div>
-
-The Full-Host Profiler is distributed as a standalone executable.
-
-### Container environments
-For hosts running containerized workloads, Datadog recommends running the profiler inside a container:
-
-- **Kubernetes**: Follow the [running in Kubernetes][7] instructions.
-- **Docker**: Follow the [running in Docker][8] instructions.
-- **Container image**: Available from the [container registry][5].
-
-
-### Non-container environments
-
-For hosts without container runtimes, follow the instructions for [running directly on the host][9].
-
-## Service naming
-When using full-host profiling, Datadog profiles all processes on the host. Each process's service name is derived from its `DD_SERVICE` environment variable.
-
-If `DD_SERVICE` is set, the profiler uses the value of `DD_SERVICE` as the service name. This is the recommended and most reliable approach.
-
-If `DD_SERVICE` is not set, Datadog infers a service name from the binary name. For interpreted languages, this is the name of the interpreter. For example, for a service written in Java, the full-host profiler sets the service name to `service:java`.
-{{< img src="profiler/inferred_service_example.png" alt="Example of an inferred services within Profiling" style="width:50%;">}}
-
-If multiple services are running under the same interpreter (for example, two separate Java applications on the same host), and neither sets `DD_SERVICE`, Datadog groups them together under the same service name. Datadog cannot distinguish between them unless you provide a unique service name.
-
-## Debug symbols
-
-For compiled languages (such as Rust, C, C++, Go, etc.), the profiler uploads local symbols to Datadog for symbolication, ensuring that function names are available in profiles. For Rust, C, and C++, symbols need to be available locally (unstripped binaries).
-
-For binaries stripped of debug symbols, it's possible to upload symbols manually or in the CI:
-
-1. Install the [datadog-ci][12] command line tool.
-2. Provide a [Datadog API key][10] through the `DD_API_KEY` environment variable.
-3. Set the `DD_SITE` environment variable to your [Datadog site][11].
-4. Install the `binutils` package, which provides the `objcopy` CLI tool.
-5. Run:
- ```
- DD_BETA_COMMANDS_ENABLED=1 datadog-ci elf-symbols upload ~/your/build/symbols/
- ```
-
-
-## What's next?
-
-After installing the Full-Host Profiler, see the [Getting Started with Profiler][6] to learn how to use Continuous Profiler to identify and fix performance problems.
-
-## Further reading
-
-{{< partial name="whats-next/whats-next.html" >}}
-
-[2]: https://github.com/DataDog/dd-otel-host-profiler/releases/
-[3]: https://app.datadoghq.com/profiling
-[4]: /getting_started/tagging/unified_service_tagging
-[5]: https://github.com/DataDog/dd-otel-host-profiler/pkgs/container/dd-otel-host-profiler/.
-[6]: /getting_started/profiler/
-[7]: https://github.com/DataDog/dd-otel-host-profiler/blob/main/doc/running-in-kubernetes.md
-[8]: https://github.com/DataDog/dd-otel-host-profiler/blob/main/doc/running-in-docker.md
-[9]: https://github.com/DataDog/dd-otel-host-profiler/blob/main/doc/running-on-host.md
-[10]: https://app.datadoghq.com/organization-settings/api-keys
-[11]: /getting_started/site/
-[12]: https://github.com/DataDog/datadog-ci?tab=readme-ov-file#how-to-install-the-cli
diff --git a/customization_config/en/option_groups/profiler.yaml b/customization_config/en/option_groups/profiler.yaml
index b5db984436d..9f92e2c83fd 100644
--- a/customization_config/en/option_groups/profiler.yaml
+++ b/customization_config/en/option_groups/profiler.yaml
@@ -12,6 +12,14 @@ profiler_language_options:
   - id: c
   - id: cpp
   - id: rust
+  - id: host
+
+# Mode option groups — only shown when Host is selected (via show_if)
+
+profiler_host_mode_options:
+  - id: bundled
+    default: true
+  - id: standalone
 # Runtime option groups — only shown when Java is selected (via show_if)
diff --git a/customization_config/en/options/general.yaml b/customization_config/en/options/general.yaml
index acaf1de824a..1236ee07988 100644
--- a/customization_config/en/options/general.yaml
+++ b/customization_config/en/options/general.yaml
@@ -1,6 +1,6 @@
# The list of allowed option IDs for content filter selections.
-# This is to enforce consistency between various selections
-# the user can make, ensuring that the selection
+# This is to enforce consistency between various selections
+# the user can make, ensuring that the selection
# will reliably be applied to all relevant pages.
#
# For example, if the user chooses "linux_apt" as their operating system
@@ -8,7 +8,7 @@
# for that same exact selection ID, rather than a slightly different
# selection such as "linuxapt" or "apt_linux".
#
-# To add a new option, add a new item to the list
+# To add a new option, add a new item to the list
# and give it a unique ID.
options:
@@ -333,7 +333,7 @@ options:
- label: Expo
id: expo
-
+
- label: CodePush
id: codepush
@@ -425,7 +425,7 @@ options:
id: traces
- label: Unity
- id: unity
+ id: unity
- label: Wget
id: wget
@@ -456,7 +456,7 @@ options:
- label: Splunk Forwarders (TCP)
id: splunk_forwarders
-
+
- label: Sumo Logic Hosted Collector
id: sumo_logic_hosted_collector
@@ -528,3 +528,13 @@ options:
 - label: GraalVM Native Image
   id: graalvm_native_image
+
+- label: eBPF Profiling
+  id: host
+
+- label: Bundled
+  id: bundled
+
+- label: Standalone
+  id: standalone
+
diff --git a/customization_config/en/traits/general.yaml b/customization_config/en/traits/general.yaml
index 0b8a359844e..9890441e88e 100644
--- a/customization_config/en/traits/general.yaml
+++ b/customization_config/en/traits/general.yaml
@@ -23,6 +23,11 @@ traits:
   type: text
   internal_notes: The source of a software library that the customer is trying to use in their code, such as an SDK. This is most often a package manager like NPM. But it could also have a more general value, like "CDN" or "GitHub".
+- id: mode
+  label: "Mode"
+  type: text
+  internal_notes: Deployment mode for a product. For example, bundled (with the Datadog Agent) or standalone (OTel-based, no Agent).
+
 - id: mobile_os
   label: "OS"
   type: text
diff --git a/layouts/partials/profiling/profiling-host.html b/layouts/partials/profiling/profiling-host.html
new file mode 100644
index 00000000000..67b0f663237
--- /dev/null
+++ b/layouts/partials/profiling/profiling-host.html
@@ -0,0 +1,15 @@
+{{ $dot := . }}
+
diff --git a/layouts/shortcodes/mdoc/en/profiler/enabling/host.mdoc.md b/layouts/shortcodes/mdoc/en/profiler/enabling/host.mdoc.md
new file mode 100644
index 00000000000..ed871edd2f2
--- /dev/null
+++ b/layouts/shortcodes/mdoc/en/profiler/enabling/host.mdoc.md
@@ -0,0 +1,542 @@
+
+
+{% alert level="warning" %}
+The host profiler is in Preview and subject to change.
+{% /alert %}
+
+Host profiling collects CPU and memory profiles at the OS level across all processes, regardless of language or runtime.
+
+Select the mode that matches your infrastructure:
+
+- **Bundled**: the host profiler runs alongside the Datadog Agent, which forwards profiles to Datadog. Choose this mode if you already run the Agent or are willing to deploy it.
+- **Standalone**: the host profiler exports profiles through OpenTelemetry (OTel) with no Datadog Agent required. Choose this mode if your stack is fully OTel-based (Helm charts or the OTel Operator).
+
+## Requirements
+
+Supported operating systems
+: Linux (kernel 5.10+)
+
+Supported architecture
+: `amd64` or `arm64` processors
+
+Serverless
+: The host profiler is not supported on serverless platforms such as AWS Lambda.
+
+Kubernetes restrictions
+: The host profiler is not supported on GKE Autopilot or GKE GDC.
+
+Debugging information
+: For compiled languages (C, C++, Rust, Go), symbols must be available locally or uploaded with `datadog-ci`. See [Debug symbols](#debug-symbols).
+
+{% if equals($mode, "bundled") %}
+
+## Bundled mode
+
+In bundled mode, the host profiler runs as a sidecar container inside the Datadog Agent DaemonSet. The Agent collects and forwards profiles to Datadog.
+
+### Installation
+
+{% tabs %}
+
+{% tab label="Datadog Operator" %}
+
+1. Add the `agent.datadoghq.com/host-profiler-enabled: "true"` annotation to your `DatadogAgent` resource:
+
+   ```yaml
+   apiVersion: datadoghq.com/v2alpha1
+   kind: DatadogAgent
+   metadata:
+     name: datadog
+     annotations:
+       agent.datadoghq.com/host-profiler-enabled: "true"
+   spec:
+     global:
+       credentials:
+         apiKey: <DATADOG_API_KEY>
+   ```
+
+2. Apply the manifest:
+
+   ```bash
+   kubectl apply -f datadog-agent.yaml
+   ```
+
+The Operator configures the host profiler sidecar with the required Linux capabilities (`BPF`, `PERFMON`, `SYS_PTRACE`, `SYS_RESOURCE`, `DAC_READ_SEARCH`, `SYSLOG`, `CHECKPOINT_RESTORE`), mounts `/sys/kernel/tracing`, and installs the seccomp profile.
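+
+For reference, the sidecar's security settings configured by the Operator are roughly equivalent to the following (an illustrative sketch assembled from the capability list above; the profile name shown is an assumption, and the Operator generates all of this for you):
+
+```yaml
+securityContext:
+  capabilities:
+    add:
+      - BPF
+      - PERFMON
+      - SYS_PTRACE
+      - SYS_RESOURCE
+      - DAC_READ_SEARCH
+      - SYSLOG
+      - CHECKPOINT_RESTORE
+  seccompProfile:
+    type: Localhost
+    localhostProfile: ebpf-profiler
+```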
+
+{% /tab %}
+
+{% tab label="Helm" %}
+
+1. Add the following to your Helm values file:
+
+   ```yaml
+   datadog:
+     apiKey: <DATADOG_API_KEY>
+     site: <DATADOG_SITE>
+     hostProfiler:
+       enabled: true
+   ```
+
+2. Deploy or upgrade the chart:
+
+   ```bash
+   helm upgrade --install datadog-agent datadog/datadog \
+     -f values.yaml
+   ```
+
+The chart deploys the host profiler as a sidecar container in the Agent DaemonSet and configures the required capabilities and seccomp profile.
+
+{% alert %}
+Replace `<DATADOG_SITE>` with your Datadog site (for example, `datadoghq.com`). See [Datadog sites][1].
+{% /alert %}
+
+{% /tab %}
+
+{% /tabs %}
+
+### Configuration
+
+**Log level**
+
+{% tabs %}
+
+{% tab label="Datadog Operator" %}
+
+```yaml
+spec:
+  override:
+    nodeAgent:
+      containers:
+        host-profiler:
+          env:
+            - name: DD_LOG_LEVEL
+              value: debug
+```
+
+{% /tab %}
+
+{% tab label="Helm" %}
+
+```yaml
+agents:
+  containers:
+    hostProfiler:
+      logLevel: debug
+```
+
+{% /tab %}
+
+{% /tabs %}
+
+**Resource limits**
+
+{% tabs %}
+
+{% tab label="Helm" %}
+
+```yaml
+agents:
+  containers:
+    hostProfiler:
+      resources:
+        requests:
+          cpu: 100m
+          memory: 200Mi
+        limits:
+          cpu: 200m
+          memory: 400Mi
+```
+
+{% /tab %}
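+
+{% tab label="Datadog Operator" %}
+
+An equivalent override for Operator deployments might look like this (a sketch based on the `DatadogAgent` container override pattern shown above for the log level; resource values are illustrative):
+
+```yaml
+spec:
+  override:
+    nodeAgent:
+      containers:
+        host-profiler:
+          resources:
+            requests:
+              cpu: 100m
+              memory: 200Mi
+            limits:
+              cpu: 200m
+              memory: 400Mi
+```
+
+{% /tab %}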
+
+{% /tabs %}
+
+{% /if %}
+
+{% if equals($mode, "standalone") %}
+
+## Standalone mode
+
+In standalone mode, the host profiler runs as an OpenTelemetry Collector DaemonSet and exports profiles directly to Datadog without a Datadog Agent.
+
+### Prerequisites
+
+Before deploying the host profiler, provision the seccomp profile on every node. This profile allows the host profiler to run without `privileged: true`.
+
+1. Create a namespace and a ConfigMap containing the seccomp profile:
+
+   ```yaml
+   apiVersion: v1
+   kind: Namespace
+   metadata:
+     name: ebpf-profiler
+   ---
+   apiVersion: v1
+   kind: ConfigMap
+   metadata:
+     name: ebpf-profiler-seccomp
+     namespace: ebpf-profiler
+   data:
+     ebpf-profiler: |
+       {
+         "defaultAction": "SCMP_ACT_ERRNO",
+         "architectures": [
+           "SCMP_ARCH_X86_64",
+           "SCMP_ARCH_AARCH64"
+         ],
+         "syscalls": [
+           {
+             "names": [
+               "accept4", "access", "arch_prctl", "bind", "bpf", "brk",
+               "clone", "clone3", "close", "connect", "dup3",
+               "epoll_create1", "epoll_ctl", "epoll_pwait", "epoll_wait",
+               "eventfd2", "execve", "exit", "exit_group", "faccessat2",
+               "fcntl", "fstat", "fstatfs", "fsync", "futex", "getcwd",
+               "getdents64", "getpeername", "getpid", "getrandom",
+               "getrlimit", "getsockname", "getsockopt", "gettid", "ioctl",
+               "listen", "lseek", "madvise", "mmap", "mprotect", "munmap",
+               "nanosleep", "newfstatat", "openat", "openat2",
+               "perf_event_open", "pidfd_open", "pidfd_send_signal", "pipe2",
+               "prctl", "pread64", "prlimit64", "process_vm_readv", "read",
+               "readlinkat", "recvmsg", "restart_syscall", "rseq",
+               "rt_sigaction", "rt_sigprocmask", "rt_sigreturn",
+               "sched_getaffinity", "sched_yield", "sendto",
+               "set_robust_list", "set_tid_address", "setrlimit",
+               "setsockopt", "sigaltstack", "socket", "statfs", "statx",
+               "sysinfo", "tgkill", "uname", "waitid", "write"
+             ],
+             "action": "SCMP_ACT_ALLOW"
+           },
+           {
+             "names": ["kill"],
+             "action": "SCMP_ACT_ALLOW",
+             "args": [{"index": 1, "value": 0, "op": "SCMP_CMP_EQ"}],
+             "comment": "allow process liveness check via kill(pid, 0)"
+           }
+         ]
+       }
+   ```
+
+   Apply it:
+
+   ```bash
+   kubectl apply -f ebpf-profiler-seccomp.yaml
+   ```
+
+2. Deploy a DaemonSet that copies the seccomp profile to each node:
+
+   ```yaml
+   apiVersion: apps/v1
+   kind: DaemonSet
+   metadata:
+     name: ebpf-profiler-seccomp-installer
+     namespace: ebpf-profiler
+   spec:
+     selector:
+       matchLabels:
+         app: ebpf-profiler-seccomp-installer
+     template:
+       metadata:
+         labels:
+           app: ebpf-profiler-seccomp-installer
+       spec:
+         initContainers:
+           - name: install-seccomp
+             image: busybox:1.36
+             securityContext:
+               privileged: true
+             command:
+               - cp
+               - /profile/ebpf-profiler
+               - /host-seccomp/ebpf-profiler
+             volumeMounts:
+               - name: profile
+                 mountPath: /profile
+                 readOnly: true
+               - name: host-seccomp
+                 mountPath: /host-seccomp
+         containers:
+           - name: pause
+             image: gcr.io/google-containers/pause:3.1
+         volumes:
+           - name: profile
+             configMap:
+               name: ebpf-profiler-seccomp
+           - name: host-seccomp
+             hostPath:
+               path: /var/lib/kubelet/seccomp
+               type: DirectoryOrCreate
+   ```
+
+   ```bash
+   kubectl apply -f ebpf-profiler-seccomp-installer.yaml
+   ```
+
+   This DaemonSet copies the seccomp profile to `/var/lib/kubelet/seccomp/ebpf-profiler` on each node.
+
+### Installation
+
+{% tabs %}
+
+{% tab label="OTel Operator" %}
+
+1. Install the OpenTelemetry Operator if not already deployed:
+
+   ```bash
+   helm repo add open-telemetry https://open-telemetry.github.io/opentelemetry-helm-charts
+   helm install opentelemetry-operator open-telemetry/opentelemetry-operator \
+     --namespace opentelemetry-operator-system \
+     --create-namespace
+   ```
+
+2. Create an `OpenTelemetryCollector` resource in DaemonSet mode:
+
+   ```yaml
+   apiVersion: opentelemetry.io/v1beta1
+   kind: OpenTelemetryCollector
+   metadata:
+     name: datadog-host-profiler
+     namespace: ebpf-profiler
+   spec:
+     mode: daemonset
+     image: ghcr.io/open-telemetry/opentelemetry-collector-releases/opentelemetry-collector-ebpf-profiler:latest
+     hostPID: true
+     securityContext:
+       privileged: false
+       readOnlyRootFilesystem: true
+       allowPrivilegeEscalation: false
+       capabilities:
+         add:
+           - BPF
+           - PERFMON
+           - SYS_PTRACE
+           - SYS_RESOURCE
+           - DAC_READ_SEARCH
+           - SYSLOG
+           - CHECKPOINT_RESTORE
+         drop:
+           - ALL
+       seccompProfile:
+         type: Localhost
+         localhostProfile: ebpf-profiler
+     env:
+       - name: DD_API_KEY
+         valueFrom:
+           secretKeyRef:
+             name: datadog-secret
+             key: api-key
+       - name: DD_SITE
+         value: <DATADOG_SITE>
+     volumeMounts:
+       - name: tmpdir
+         mountPath: /tmp
+     volumes:
+       - name: tmpdir
+         emptyDir: {}
+     config:
+       receivers:
+         profiling: {}
+       exporters:
+         otlphttp:
+           profiles_endpoint: https://intake.profile.${env:DD_SITE}/v1development/profiles
+           metrics_endpoint: https://otlp.${env:DD_SITE}/v1/metrics
+           headers:
+             dd-api-key: ${env:DD_API_KEY}
+       service:
+         pipelines:
+           profiles:
+             receivers: [profiling]
+             exporters: [otlphttp]
+   ```
+
+3. Apply the manifest:
+
+   ```bash
+   kubectl apply -f host-profiler-collector.yaml
+   ```
+
+{% /tab %}
+
+{% tab label="Helm" %}
+
+1. Add the OpenTelemetry Helm repository:
+
+   ```bash
+   helm repo add open-telemetry https://open-telemetry.github.io/opentelemetry-helm-charts
+   helm repo update
+   ```
+
+2. Create a values file (`values-host-profiler.yaml`):
+
+   ```yaml
+   mode: daemonset
+
+   image:
+     repository: ghcr.io/open-telemetry/opentelemetry-collector-releases/opentelemetry-collector-ebpf-profiler
+
+   command:
+     name: otelcol-ebpf-profiler
+     extraArgs:
+       - --feature-gates=+service.profilesSupport
+
+   presets:
+     profiling:
+       enabled: true
+     kubernetesAttributes:
+       enabled: true
+
+   hostPID: true
+
+   securityContext:
+     privileged: false
+     readOnlyRootFilesystem: true
+     allowPrivilegeEscalation: false
+     capabilities:
+       add:
+         - BPF
+         - PERFMON
+         - SYS_PTRACE
+         - SYS_RESOURCE
+         - DAC_READ_SEARCH
+         - SYSLOG
+         - CHECKPOINT_RESTORE
+       drop:
+         - ALL
+     seccompProfile:
+       type: Localhost
+       localhostProfile: ebpf-profiler
+
+   initContainers:
+     - name: install-seccomp
+       image: busybox:1.36
+       securityContext:
+         readOnlyRootFilesystem: true
+         allowPrivilegeEscalation: false
+         capabilities:
+           drop: ["ALL"]
+       command:
+         - cp
+         - /profile/ebpf-profiler
+         - /host-seccomp/ebpf-profiler
+       volumeMounts:
+         - name: seccomp-profile
+           mountPath: /profile
+           readOnly: true
+         - name: host-seccomp
+           mountPath: /host-seccomp
+
+   extraVolumeMounts:
+     - name: tmpdir
+       mountPath: /tmp
+
+   extraVolumes:
+     - name: tmpdir
+       emptyDir: {}
+     - name: seccomp-profile
+       configMap:
+         name: ebpf-profiler-seccomp
+     - name: host-seccomp
+       hostPath:
+         path: /var/lib/kubelet/seccomp
+         type: DirectoryOrCreate
+
+   extraEnvs:
+     - name: DD_API_KEY
+       valueFrom:
+         secretKeyRef:
+           name: datadog-secret
+           key: api-key
+     - name: DD_SITE
+       value: <DATADOG_SITE>
+
+   config:
+     receivers:
+       profiling:
+         symbol_uploader:
+           enabled: true
+           symbol_endpoints:
+             - site: ${env:DD_SITE}
+               api_key: ${env:DD_API_KEY}
+     exporters:
+       otlphttp:
+         profiles_endpoint: https://intake.profile.${env:DD_SITE}/v1development/profiles
+         metrics_endpoint: https://otlp.${env:DD_SITE}/v1/metrics
+         headers:
+           dd-api-key: ${env:DD_API_KEY}
+     service:
+       pipelines:
+         profiles:
+           receivers: [profiling]
+           exporters: [otlphttp]
+   ```
+
+3. Store your Datadog API key as a Kubernetes secret:
+
+   ```bash
+   kubectl create secret generic datadog-secret \
+     --from-literal=api-key=<DATADOG_API_KEY> \
+     --namespace ebpf-profiler
+   ```
+
+4. Deploy the chart:
+
+   ```bash
+   helm install datadog-host-profiler open-telemetry/opentelemetry-collector \
+     --namespace ebpf-profiler \
+     -f values-host-profiler.yaml
+   ```
+
+{% alert %}
+Replace `<DATADOG_SITE>` with your Datadog site (for example, `datadoghq.com`). See [Datadog sites][1].
+{% /alert %}
+
+{% /tab %}
+
+{% /tabs %}
+
+{% /if %}
+
+## Service naming
+
+The host profiler collects profiles from every process on the host and determines each process's service name from its `DD_SERVICE` environment variable.
+
+If `DD_SERVICE` is not set, the profiler infers the service name from the binary name. For interpreted languages, this is the interpreter name (for example, `java` for a Java process). If multiple services share the same interpreter and none set `DD_SERVICE`, their profiles are grouped under the same inferred name.
+
+Set `DD_SERVICE` on each workload you want to identify separately:
+
+```yaml
+env:
+  - name: DD_SERVICE
+    value: my-service
+```
+
+Set `DD_ENV` and `DD_VERSION` for richer filtering in the Profiler UI.
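+
+For example, the full set of unified service tags on a Kubernetes container might look like this (service, environment, and version values are illustrative):
+
+```yaml
+env:
+  - name: DD_SERVICE
+    value: my-service
+  - name: DD_ENV
+    value: production
+  - name: DD_VERSION
+    value: "1.2.3"
+```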
+
+## Debug symbols
+
+For compiled languages (C, C++, Rust, Go), the host profiler uploads local debug symbols to Datadog for symbolization. Binaries must include debug symbols (not stripped) for function names to appear in profiles.
+
+To upload symbols from stripped binaries:
+
+1. Install the [datadog-ci CLI][2].
+2. Set your API key and site:
+   ```bash
+   export DD_API_KEY=<DATADOG_API_KEY>
+   export DD_SITE=<DATADOG_SITE>
+   ```
+3. Upload symbols:
+   ```bash
+   DD_BETA_COMMANDS_ENABLED=1 datadog-ci elf-symbols upload /path/to/build/symbols/
+   ```
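+
+The same upload can run as a CI step after your build. For example, a GitHub Actions sketch (the step name, symbol path, and secret name are illustrative assumptions):
+
+```yaml
+- name: Upload debug symbols to Datadog
+  env:
+    DD_API_KEY: ${{ secrets.DD_API_KEY }}
+    DD_SITE: datadoghq.com
+  run: DD_BETA_COMMANDS_ENABLED=1 datadog-ci elf-symbols upload ./build/symbols/
+```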
+
+## Verification
+
+After deploying the host profiler, profiles appear on the [Datadog Profiler page][3] within a few minutes. If profiles do not appear, see the [Profiler Troubleshooting][4] guide.
+
+[1]: /getting_started/site/
+[2]: https://github.com/DataDog/datadog-ci
+[3]: https://app.datadoghq.com/profiling
+[4]: /profiler/profiler_troubleshooting/