Tail Consumer Limitations in `wrangler dev`
I'm building a tail consumer worker for observability/tracing and hit some limitations in
wrangler dev that make local development challenging:
Issues with tail() API in local dev:
- item.scriptName is always null → service name shows "unknown-service"
- item.wallTime and item.cpuTime are always 0 → duration shows 0ms (Spectre attack protection)
- No way to identify which worker generated the trace
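For reference, a minimal sketch of the kind of tail() consumer I mean (assuming @cloudflare/workers-types for the TraceItem type; the commented values are what I observe in local dev):

```ts
// Sketch of a minimal tail consumer; in `wrangler dev` the commented fields
// come back sanitized as described above.
export default {
  async tail(events: TraceItem[], env: unknown, ctx: ExecutionContext) {
    for (const item of events) {
      console.log({
        service: item.scriptName ?? "unknown-service", // scriptName is null locally
        wallTime: item.wallTime,                       // 0 locally
        cpuTime: item.cpuTime,                         // 0 locally
        outcome: item.outcome,
      });
    }
  },
};
```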
tailStream() API:
I can see tailStream() is implemented in workerd (src/workerd/api/tail-worker-test.wd-test) and works with the streaming_tail_worker compatibility flag, but wrangler dev explicitly ignores it (found in IGNORED_KEYS in workers-sdk).
Question:
Is there a timeline for tailStream() support in wrangler dev? The streaming API would solve the timing issues since event.timestamp would be populated even in local dev.
For now I'm accepting these limitations in local dev (hoping production will work), but it significantly impacts the local development experience for observability tooling.
Environment: wrangler 4.37.1, workerd compatibility date 2025-09-13
Found a Workaround!
We solved the local dev limitations without waiting for tailStream() support!
The Solution
Emit trace metadata via structured logging before Cloudflare sanitizes the TraceItem fields.
Key insight: console.log() happens before field sanitization, and logs are preserved in TraceItem.logs[].
How It Works
Source worker - wrap your fetch handler with middleware:
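Here's a sketch of the middleware (names like withTraceLogging and the metadata shape are illustrative, not a fixed API; adapt to your own handler):

```ts
// Illustrative middleware: emit a structured metadata object via console.log()
// before returning, so it survives into TraceItem.logs[] on the tail side.
interface TraceMeta {
  __trace: true;       // marker the tail consumer looks for
  service: string;     // stands in for the sanitized item.scriptName
  durationMs: number;  // measured locally, stands in for wallTime
  url: string;
  status: number;
}

type FetchHandler = (req: Request, env: unknown, ctx: ExecutionContext) => Promise<Response>;

function withTraceLogging(serviceName: string, handler: FetchHandler): FetchHandler {
  return async (req, env, ctx) => {
    const start = Date.now();
    const res = await handler(req, env, ctx);
    const meta: TraceMeta = {
      __trace: true,
      service: serviceName,
      durationMs: Date.now() - start,
      url: req.url,
      status: res.status,
    };
    console.log(meta); // log the object itself, not JSON.stringify(meta)
    return res;
  };
}

export default {
  fetch: withTraceLogging("my-service", async () => new Response("ok")),
};
```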
Tail consumer - extract from logs:
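And a sketch of the consumer side (again illustrative; the __trace marker just has to match whatever the middleware emits):

```ts
// Illustrative tail consumer: recover the metadata from TraceItem.logs[]
// instead of relying on the sanitized scriptName/wallTime fields.
export default {
  async tail(events: TraceItem[]) {
    for (const item of events) {
      for (const log of item.logs) {
        // wrangler dev wraps console.log arguments in an array, so unwrap it.
        const payload = Array.isArray(log.message) ? log.message[0] : log.message;
        if (payload && typeof payload === "object" && (payload as { __trace?: boolean }).__trace) {
          const meta = payload as { service: string; durationMs: number; url: string; status: number };
          console.log(`${meta.service} ${meta.url} -> ${meta.status} in ${meta.durationMs}ms`);
        }
      }
    }
  },
};
```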
Results
Before: the tail consumer reported "unknown-service" with a 0ms duration, since item.scriptName was null and item.wallTime was 0.
After: it reports the real service name and a measured duration, pulled from the metadata carried in TraceItem.logs[].
Works in both local dev AND production!
Note: Log plain objects rather than JSON strings so the data is preserved as structured data through console.log(). Also, wrangler dev wraps console.log arguments in arrays, so check Array.isArray(log.message) in the tail consumer.
Sharing in case anyone else hits the same issue!