New Blog Post: Inside the LLM Call — GenAI Observability with OpenTelemetry #9707

@JamesNK

Description

Blog post proposal

Title: Inside the LLM Call: GenAI Observability with OpenTelemetry

Author: James Newton-King (Microsoft)

SIG: GenAI Observability

Summary

This post demonstrates GenAI observability in practice by:

  1. Introducing the Semantic Conventions for Generative AI — what they cover (model names, token counts, finish reasons, prompt/completion content, tool calls) and how they build on earlier work such as the AI Agent Observability post.

  2. Showing how to export GenAI telemetry from tools developers already use. The walkthrough uses VS Code Copilot Chat as the primary example, and also references Claude Code and OpenAI Codex as other tools with OTel support.

  3. Introducing the Aspire Dashboard as a lightweight, free telemetry viewer for local development. The post walks through starting the dashboard via Docker, connecting it to VS Code, and exploring the collected GenAI traces, metrics, and structured logs — including Aspire's GenAI telemetry visualizer for chat-style rendering of prompts, responses, and tool calls.

The goal is to give readers a hands-on path from "I have an LLM-powered app" to "I can see every model call, token count, and tool invocation" in under 10 minutes.
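For readers who want a preview of that path, the local loop sketches down to two steps. The Aspire Dashboard image name and port mapping below are the commonly documented defaults, and the Claude Code variables are one example of an OTel-enabled tool's configuration; verify both against current documentation:

```shell
# 1. Start the Aspire Dashboard: UI on 18888, OTLP/gRPC ingestion on
#    container port 18889 (mapped here to the conventional host port 4317).
docker run --rm -it -p 18888:18888 -p 4317:18889 \
  mcr.microsoft.com/dotnet/aspire-dashboard:latest

# 2. Point an OTel-enabled tool at the dashboard. Claude Code, for example,
#    reads the standard OTEL_* variables (names assumed from its telemetry
#    docs; check your installed version).
export CLAUDE_CODE_ENABLE_TELEMETRY=1
export OTEL_EXPORTER_OTLP_PROTOCOL=grpc
export OTEL_EXPORTER_OTLP_ENDPOINT=http://localhost:4317
```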

A draft PR will follow shortly.
