---
title: 'One Flag, Every Chunk: Debug Logging Lands in TanStack AI'
published: 2026-04-22
excerpt: "Your AI pipeline is a black box: a missing chunk, a middleware that doesn't fire, a tool call with mystery args. TanStack AI now ships pluggable, category-toggleable debug logging across every activity and adapter. Flip one flag and the pipeline prints itself."
authors:
  - Alem Tuzlak
---

![Debug Logging for TanStack AI](/blog-assets/debug-logging-for-tanstack-ai/header.png)
You kick off a `chat()` call. A chunk goes missing. A middleware you wrote last week doesn't seem to fire. A tool gets called with arguments you can't explain. Your stream finishes, the UI looks wrong, and you have no idea which layer lied to you.

Up until now, your options were limited. You could wrap the SDK in a tracing platform, spend a day wiring OpenTelemetry, or sprinkle `console.log` into your own code and hope the problem lives where you can see it. None of these helps when the bug is **inside** the pipeline: a raw provider chunk that got dropped, a middleware that mutated config, a tool call the agent loop reissued.

TanStack AI now has a built-in answer. **Flip one flag and the entire pipeline prints itself.**
## Turn it on

Add `debug: true` to any activity call:

```typescript
import { chat } from '@tanstack/ai'
import { openaiText } from '@tanstack/ai-openai/adapters'

const stream = chat({
  adapter: openaiText(),
  model: 'gpt-4o',
  messages: [{ role: 'user', content: 'Hello' }],
  debug: true,
})
```

Every internal event now prints to the console, prefixed with an emoji-tagged category so you can scan dense streaming logs without squinting:

```
📤 [tanstack-ai:request] 📤 activity=chat provider=openai model=gpt-4o messages=1 tools=0 stream=true
🔁 [tanstack-ai:agentLoop] 🔁 run started
📥 [tanstack-ai:provider] 📥 provider=openai type=response.output_text.delta
📨 [tanstack-ai:output] 📨 type=TEXT_MESSAGE_CONTENT
🧩 [tanstack-ai:middleware] 🧩 hook=onOutput
🔧 [tanstack-ai:tools] 🔧 tool=getTodos phase=before
```

That is the whole setup. No exporters, no sidecar, no dashboard. Just what your pipeline is actually doing, right now, in the terminal you already have open.
## Eight categories, not a log level

Most logging libraries give you `debug`, `info`, `warn`, `error` and ask you to pick one. That mapping is wrong for an AI pipeline. **The noise isn't a severity, it's a source.** When you're chasing a tool bug you don't want provider chunks. When you're chasing a provider bug you don't want middleware chatter.

So `debug` accepts a config object where every category toggles independently:

```typescript
chat({
  adapter: openaiText(),
  model: 'gpt-4o',
  messages,
  debug: { middleware: false }, // everything except middleware
})
```

Omitted categories default to `true`, so the common case is "turn off the one thing that's drowning you." Every category maps to a real pipeline surface:
| Category     | What it logs                                                   |
| ------------ | -------------------------------------------------------------- |
| `request`    | Outgoing call to a provider (model, message count, tool count) |
| `provider`   | Every raw chunk or frame from the provider SDK                 |
| `output`     | Every chunk or result yielded to the caller                    |
| `middleware` | Inputs and outputs around every middleware hook                |
| `tools`      | Before and after tool call execution                           |
| `agentLoop`  | Agent-loop iterations and phase transitions                    |
| `config`     | Config transforms returned by middleware `onConfig` hooks      |
| `errors`     | Every caught error anywhere in the pipeline                    |
Chat-only categories like `tools` and `agentLoop` simply never fire for `summarize()` or `generateImage()`, because those pipelines have no tools and no agent loop. You don't have to think about it.
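The default-to-true behavior is easy to picture as a small normalization step. The sketch below is illustrative, not the library's actual internals; `resolveDebug` and `DebugCategory` are hypothetical names for how a `debug` value of `true`, `false`, or a partial category object could be resolved into per-category flags:

```typescript
// Hypothetical sketch of normalizing a TanStack AI-style `debug` option.
// Names here (DebugCategory, resolveDebug) are illustrative only.
type DebugCategory =
  | 'request' | 'provider' | 'output' | 'middleware'
  | 'tools' | 'agentLoop' | 'config' | 'errors'

const ALL_CATEGORIES: DebugCategory[] = [
  'request', 'provider', 'output', 'middleware',
  'tools', 'agentLoop', 'config', 'errors',
]

type DebugOption = boolean | Partial<Record<DebugCategory, boolean>>

function resolveDebug(debug: DebugOption): Record<DebugCategory, boolean> {
  const resolved = {} as Record<DebugCategory, boolean>
  for (const cat of ALL_CATEGORIES) {
    if (typeof debug === 'boolean') {
      // `debug: true` turns everything on; `debug: false` everything off
      resolved[cat] = debug
    } else {
      // Omitted categories default to true: you only list what to mute
      resolved[cat] = debug[cat] ?? true
    }
  }
  return resolved
}

// `debug: { middleware: false }` → everything on except middleware
const flags = resolveDebug({ middleware: false })
```

The key design point the sketch captures: passing an object opts you *out* of categories, never *in*, which is why "mute the noisy one" is a one-line change.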
## Pipe it anywhere

`console.log` is the right default for local work. It is the wrong default for production, where you want structured JSON going to a log shipper, not ANSI colors going to stdout.

Pass your own `Logger` and the entire category system routes through it:
```typescript
import type { Logger } from '@tanstack/ai'
import pino from 'pino'

const pinoLogger = pino()
const logger: Logger = {
  debug: (msg, meta) => pinoLogger.debug(meta, msg),
  info: (msg, meta) => pinoLogger.info(meta, msg),
  warn: (msg, meta) => pinoLogger.warn(meta, msg),
  error: (msg, meta) => pinoLogger.error(meta, msg),
}

chat({
  adapter: openaiText(),
  model: 'gpt-4o',
  messages,
  debug: { logger },
})
```

The `Logger` interface is four methods. Anything that writes a line of text fits. Pino, winston, bunyan, a `fetch` to a logging service, a no-op that forwards to your existing observability layer. All valid.
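As a sketch of the "`fetch` to a logging service" option: the factory below builds a four-method logger that posts each line as JSON. The `Logger` interface is declared locally so the snippet stands alone (it mirrors the shape above), and the endpoint, payload shape, and fire-and-forget policy are all placeholder choices, not anything the SDK prescribes:

```typescript
// Illustrative fetch-backed Logger. Endpoint and payload are placeholders;
// a production version would batch, retry, and cap in-flight requests.
type LogFn = (msg: string, meta?: unknown) => void
interface Logger { debug: LogFn; info: LogFn; warn: LogFn; error: LogFn }

function httpLogger(endpoint: string, send: typeof fetch = fetch): Logger {
  const line =
    (level: string): LogFn =>
    (msg, meta) => {
      // Fire-and-forget: never block the pipeline on log delivery
      void send(endpoint, {
        method: 'POST',
        headers: { 'content-type': 'application/json' },
        body: JSON.stringify({ level, msg, meta, ts: Date.now() }),
      }).catch(() => {})
    }
  return {
    debug: line('debug'),
    info: line('info'),
    warn: line('warn'),
    error: line('error'),
  }
}
```

Injecting `send` keeps the logger testable without a network, which is also how you'd point it at an existing observability client instead of raw `fetch`.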
### Your logger can't break the pipeline

This is the detail we lost sleep over. If your custom logger throws (a cyclic-meta `JSON.stringify`, a transport that rejects synchronously, a typo in a bound `this`), the exception should **not** bubble up and mask the real error that triggered the log call in the first place.

Internally, every user-logger invocation is wrapped in a try/catch. A broken logger silently drops the log line. Your actual pipeline error still reaches you through thrown exceptions and `RUN_ERROR` chunks, exactly where you were looking for it.
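The guarantee is simple to state in code. This is a sketch of the idea, not the library's actual source; `shield` is a hypothetical name, and `Logger` is declared locally to keep the snippet self-contained:

```typescript
// Sketch of the try/catch guarantee: wrap every method of a user-supplied
// logger so a throwing logger drops the line instead of crashing the caller.
type LogFn = (msg: string, meta?: unknown) => void
interface Logger { debug: LogFn; info: LogFn; warn: LogFn; error: LogFn }

function shield(logger: Logger): Logger {
  const wrap = (fn: LogFn): LogFn => (msg, meta) => {
    try {
      fn(msg, meta)
    } catch {
      // Swallow logger failures: the pipeline's own errors still surface
      // through thrown exceptions and error chunks, not through the logger.
    }
  }
  return {
    debug: wrap(logger.debug.bind(logger)),
    info: wrap(logger.info.bind(logger)),
    warn: wrap(logger.warn.bind(logger)),
    error: wrap(logger.error.bind(logger)),
  }
}
```

Binding each method before wrapping also covers the "typo in a bound `this`" case: the wrapped function always calls the original with its own receiver.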
If you need to know when your own logger is failing, guard inside your implementation. The SDK will not guess how loud you want logger failures to be.
## Every activity, every provider

Debug logging isn't a chat-only feature. Every activity in TanStack AI accepts the same option:

```typescript
summarize({ adapter, text, debug: true })
generateImage({ adapter, prompt: 'a cat', debug: { logger } })
generateSpeech({ adapter, text, debug: { request: true } })
generateTranscription({ adapter, audio, debug: true })
generateVideo({ adapter, prompt, debug: true })
```
Realtime session adapters (`openaiRealtime`, `elevenlabsRealtime`) take it too.

On the provider side, **every adapter in every provider package is wired through the structured logger**: OpenAI, Anthropic, Gemini, Grok, Groq, Ollama, OpenRouter, fal, and ElevenLabs. 25 adapters total. Zero ad-hoc `console.*` calls remain in adapter source code. Whether you're debugging an Anthropic text stream or an ElevenLabs realtime session, the output shape is the same.

## The small decisions that add up

A few of these decisions look cosmetic but matter once you're staring at a thousand-line log.

**Emoji prefixes on both sides of the tag.** `📨 [tanstack-ai:output] 📨 ...` reads faster than raw brackets in a dense stream, and your eye can hop categories without parsing text.

**`console.dir` with `depth: null`.** Node's default console formatting stops at depth 2, so nested provider chunks render as `[Object]` and you lose the thing you were trying to see. Debug logs surface the entire structure. In browsers, the raw object still lands in DevTools for interactive inspection.
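You can see the depth cliff for yourself with `util.inspect`, which is what `console.dir` uses under the hood. The nested object here is a stand-in for a deeply nested provider chunk:

```typescript
// Demo of Node's depth-2 default vs depth: null.
// `chunk` stands in for a deeply nested provider payload.
import { inspect } from 'node:util'

const chunk = { a: { b: { c: { d: 'the value you actually wanted' } } } }

const truncated = inspect(chunk)              // default depth: 2
const full = inspect(chunk, { depth: null })  // what the debug logs use
```

With the default, `truncated` collapses the innermost object to `[Object]`; with `depth: null`, `full` prints the whole structure, nested string and all.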
**Errors log unconditionally.** You don't have to remember to turn them on. If you really want total silence, `debug: false` or `debug: { errors: false }` does it. Otherwise errors flow through even when you haven't asked for any other category.

**Internal devtools middleware is muted.** If you have the TanStack AI devtools middleware installed, its own hooks don't flood the `middleware` category. You see the middleware **you** wrote, not the plumbing.

Each of these is a small call on its own. Together they're the difference between "debug output I actually read" and "debug output I pipe to `/dev/null` within ten seconds."

## Getting it

Debug logging ships in the latest `@tanstack/ai`. It's additive, backward-compatible, and available on every activity today. No config file, no exporter, no platform.

Upgrade, add `debug: true` to the call you can't explain, and read the output.

For the full reference, see the [Debug Logging guide](https://tanstack.com/ai/latest/docs/advanced/debug-logging). And if you want an even faster turnaround: TanStack AI ships an agent skill under `packages/typescript/ai/skills/` so your LLM-powered dev tools can discover the flag on their own.

One flag. Every chunk. Your streams are no longer a black box.