Agent Network Streaming Issue (AI SDK v5)
Hi guys, I have an old production app (50K+ users) that I'm trying to convert into an AI assistant chatbot. I built a working version with the Vercel AI SDK, but decided to migrate to Mastra (greedy, I know). I've been stuck for 3 weeks on `agent.network()` streaming, and it's still broken after the 0.21 upgrade. Need guidance urgently - my fallback is to revert to the AI SDK, but that wastes 3 weeks of migration work. I really want to stay with Mastra.
Setup
- Mastra 0.21.0 + AI SDK v5, orchestrator with 8 sub-agents (Gmail, SEO, Sheets, Calendar, Ghost, Web, Voting, General), OpenRouter gpt-4o-mini
What We Observe
✅ What Works
- Agent routing is flawless - orchestrator correctly identifies and delegates to appropriate sub-agents
- Sub-agents execute successfully - tools called, operations completed
- Terminal logs show proper execution flow: orchestrator → sub-agent → tool execution → completion
❌ Streaming Issues
1. No text-delta chunks during execution
- The stream only emits `data-network` metadata chunks
- Zero `text-delta` chunks throughout the entire execution
- No real-time text updates to the UI
2. Text only available after full completion
- Text appears at the `networkResult.result.result` path
- It arrives as a single blob after all agents finish
- It is not streamed incrementally
3. Frontend receives raw metadata
- Without filtering, the UI displays raw `data-network` JSON
- After a page refresh, database-saved chunks appear as `[object Object]` in the UI
Current Workaround
This works but defeats the purpose of streaming - the user sees nothing until the full execution completes (sketch below).
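Roughly what the workaround looks like - a minimal sketch; `agent`, `messages`, and the exact chunk/result shapes are assumptions based on the observations above, not confirmed Mastra API:

```ts
// Drain the network stream, ignoring the data-network metadata chunks,
// then pull the final text out of the completed result.
const networkStream = await agent.network(messages); // agent/messages in scope

let networkResult: any;
for await (const chunk of networkStream) {
  // only data-network metadata chunks arrive here, never text-delta
  if (chunk.type === 'data-network') networkResult = chunk.data;
}

// the full text only exists here, after every sub-agent has finished
const finalText = networkResult?.result?.result;
```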
Questions
1. Is `agent.network()` designed to stream text-delta chunks in real time?
- Or is it purely for internal routing with final result extraction?
- Should we expect incremental text updates during sub-agent execution?
2. What should `data-network` chunks contain, and how should they be handled?
- Should these be filtered from UI display?
- Are they being saved to database incorrectly?
- Should they only be used internally by Mastra?
3. Are we using the right API for UI streaming?
- Should multi-agent streaming use `agent.stream()` with manual routing logic instead?
- Is `network()` intended for backend orchestration only?
4. What's the expected data flow?
- Should orchestrator stream its own "thinking" text while routing?
- Should sub-agent responses be streamed back through the network stream?
- Or is the pattern: route silently → extract final result → display?
What We Need
Confirmation on the intended streaming behavior of `agent.network()` for real-time UI updates. The routing works perfectly - we're just trying to understand whether we're using the right API pattern for streaming multi-agent responses to users.
@Romain @Ward I would really appreciate your assistance on this. 🙏


23 Replies
📝 Created GitHub issue: https://github.com/mastra-ai/mastra/issues/8915
I need to try that out on my end. I haven't had the chance to try the new transformers to see how they work with useChat, but I believe you need to handle the custom parts/chunks manually.
Would mean a lot if you could confirm whether this is something I'm doing wrong or it's just not supported yet.
It's blocking me from proceeding with other work until I can solve this.
> but I believe you need to handle the custom parts/chunks
Any examples?
No, I think it's just an oversight, the text-deltas should be streamed as they get generated
checking with the team
We will get this sorted before next release
Thanks man! It means a lot to hear that you guys care!
Thank you for the patience 🙏
Hi,
Ctx: https://discord.com/channels/1309558646228779139/1428004322154647686/1428425396151390348
The problem I was struggling with was that the Assistant UI couldn't render the output from NetworkRoute.
Please take a look at the attached video. You can see that nothing is being displayed, right?
By the way, the app shown in the video was created by combining Assistant UI's "Separate Server Integration" with Mastra's NetworkRoute.
To see what was happening, I output the parts generated by `for await (const part of toAISdkFormat(result, { from: 'network' }))` in network-route.ts to the terminal using `console.log("[network-route] part:", JSON.stringify(part, null, 2));`.
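For reference, the instrumentation amounts to this - a sketch; the import path and the surrounding handler are assumptions on my side:

```ts
import { toAISdkFormat } from '@mastra/ai-sdk'; // import path assumed

// Dump every transformed part to the terminal so we can see exactly what
// reaches the AI SDK layer. `result` is the network stream result already
// in scope inside network-route.ts.
for await (const part of toAISdkFormat(result, { from: 'network' })) {
  console.log('[network-route] part:', JSON.stringify(part, null, 2));
}
```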
Then, as shown in the screenshot, I discovered several issues. (I'll attach the logs to this post too.)
To put it simply, all the Parts are being bundled into a Custom Data Part named `data-network`. What's particularly problematic is that you can't tell who generated these Parts and when. For example, you can see that the second-to-last and third-to-last outputs are exactly the same. In reality they differ as follows, but you can't distinguish them at all from the data alone:
- Part generated when the Routing Agent finished
- Part generated when the Network execution finished
So I went to look more closely at the upstream processing. I inserted `console.log("[network-route] part:", JSON.stringify(part, null, 2));` just before `switch (payload.type)` in the `transformNetwork` function in transformer.ts and output it to the terminal, just like before.
Then I discovered that it contained information that clearly shows who generated the chunk and when.
In other words, as shown in the photo (ignore the Japanese text mixed in), you can identify a kind of namespace.
Who's crushing this information? That's right... it's `transformNetwork`.
It could have been easily solved if they had used a naming convention like `"data-network-routing-agent-start"` or `"data-network-agent-execution-event-start"`, following the pattern `"data-network-"` + the `type` from `NetworkChunkType`.
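Sketching that naming idea (just an illustration of the proposed convention, not Mastra's actual code):

```ts
// Namespace each custom data part with the original NetworkChunkType
// instead of collapsing everything into a single "data-network" part.
// The chunk shape here is assumed from the logs above.
function toCustomPartType(chunk: { type: string }): string {
  return `data-network-${chunk.type}`;
  // e.g. "data-network-routing-agent-start"
  //      "data-network-agent-execution-event-start"
}
```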
So, to solve my problem, I took the following approach:
- Implement my own `transformNetwork` that converts `NetworkChunkType` -> Primitive Data Part (not Custom Data Part); see the sketch below.
- Implement my own `NetworkRoute` that uses that conversion process.
To summarize:
- 😣 `transformNetwork` converts structured data into flat data. Therefore, it lacks the information needed to render on the frontend.
- 🙏 `transformNetwork` should be modified to create Custom Data Parts without destroying the original data structure.
That's all 👍
So in the end, did you get it to stream network responses?
No, all I did was deliver Primitive UIMessages to the frontend.
If you look at the logs I shared earlier, you can see that `agent.network()` doesn't output `text-delta` like `agent.stream()` does.
So, as you reported, a streamed response isn't possible 😭
That's what I meant in #general when I said "But I don't think my solution will help with your problem."
https://discord.com/channels/1309558646228779139/1309558648476930100/1428772136075530361
I understand.
I appreciate you sharing such a detailed response and your struggle.
Let's try what Romain suggested in the #general channel, so we can give him feedback on whether it fixed our problem or not.
OK 💪 😤
hello @Romain @shitaro2021 @Ward it seems that it's working! 🎉
I need to test more, but this is the first try and it worked.
Nice!!! Thanks for testing it out @! .kinderjaje !
I'm going to see now whether it's possible to stream the response, since it's currently arriving all at once, or maybe the response is just too short.
agent.network() Usage Tracking Not Working
Environment
- Mastra: `0.0.0-ai-sdk-network-text-delta-20251017172601` (snapshot for the network streaming fix)
- Provider: `@openrouter/ai-sdk-provider` (direct, not through the Mastra gateway)
- Models: OpenRouter with GPT-4o-mini, Grok, etc.
- Use case: Credit-based billing system - need accurate token usage for each network call
What We Found
The Problem
`agent.network()` returns zero usage data - both `networkStream.usage` and all chunk/result objects have no usage information.
What We Tried
1. ✅ Verified the OpenRouter config with `createOpenRouter({ extraBody: { usage: { include: true } } })` (see the sketch after this list)
2. ✅ Inspected all stream chunks - the `finish` chunk is completely empty: `{"type":"finish"}`
3. ✅ Inspected `data-network` chunks - they contain agent output but no usage
4. ✅ Checked `networkStream.result.steps` - no usage in any step metadata
5. ✅ Tried the `onRouteComplete` callback - it never fires
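Item 1's config in full, as a sketch (the model id and env var are placeholders):

```ts
import { createOpenRouter } from '@openrouter/ai-sdk-provider';

// Ask OpenRouter to include usage accounting in its responses.
const openrouter = createOpenRouter({
  apiKey: process.env.OPENROUTER_API_KEY, // placeholder env var
  extraBody: { usage: { include: true } },
});

const model = openrouter('openai/gpt-4o-mini'); // placeholder model id
```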
What We See
The snapshot build doesn't track or expose usage data anywhere.
What We Need
Accurate token usage tracking for `agent.network()` calls so we can bill users correctly.
Right now we can only track direct `agent.stream()` calls (those work fine), but network routing usage is invisible.
Questions
1. Is this a known limitation of the snapshot build? Should we wait for a proper release with usage tracking?
2. What's the recommended approach? Should we:
- Instrument individual agents to report usage to a collector (see the sketch after these questions)?
- Use direct `agent.stream()` calls for credit-sensitive operations?
- Wait for Mastra to add network-level usage tracking?
3. Does using OpenRouter directly (not through the Mastra gateway) affect this? We're using `@openrouter/ai-sdk-provider` with `createOpenRouter()`.
4. Is there a hidden API or callback we're missing? We've checked chunks, result, callbacks - usage data doesn't exist anywhere.
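The collector idea from question 2, sketched - all names and shapes here are ours, not Mastra's:

```ts
// Per-run usage collector: each agent reports its own token usage, and the
// network-level total is summed at billing time.
type Usage = { inputTokens: number; outputTokens: number };

const usageByRun = new Map<string, Usage[]>();

export function recordUsage(runId: string, usage: Usage): void {
  const entries = usageByRun.get(runId) ?? [];
  entries.push(usage);
  usageByRun.set(runId, entries);
}

export function totalUsage(runId: string): Usage {
  return (usageByRun.get(runId) ?? []).reduce(
    (acc, u) => ({
      inputTokens: acc.inputTokens + u.inputTokens,
      outputTokens: acc.outputTokens + u.outputTokens,
    }),
    { inputTokens: 0, outputTokens: 0 },
  );
}
```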
We're building a production credit system and can't deploy without accurate usage tracking. Any guidance appreciated!
@Romain or @Ward is there a way to track usage data with the Agent Network approach?
@shitaro2021 did you try it - does it work for you? I'm able to get a response, but not a streamed one.
hey guys, this was a long night of trying to make Agent Network stream, but I failed. Did you @Romain or @Ward manage to stream the response?
agent.network() streaming issue - chunks batching instead of streaming character-by-character
I'm using the snapshot version `0.0.0-ai-sdk-network-text-delta-20251017172601` to get streaming working with `agent.network()` and running into a critical streaming performance issue.
What I'm trying to do:
- Stream responses from `agent.network()` to the frontend using AI SDK v5's `useChat()` hook
- Get character-by-character streaming like `agent.stream()` provides
- Display hundreds of text-delta chunks in real time as they arrive
What's working:
✅ Backend sends hundreds of text-delta chunks (confirmed in network tab)
✅ `agent.stream()` works perfectly - smooth character-by-character streaming
✅ All chunks arrive at the browser (visible in DevTools network tab)
What's NOT working:
❌ `agent.network()` chunks appear in 2-3 large batches instead of streaming incrementally
❌ Text appears all at once or in huge chunks, not character-by-character
❌ Frontend shows "2 big chunks" despite backend sending 300+ individual chunks
Current implementation:
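(The original snippet isn't shown here; presumably it's something like this reconstruction - the handler shape, `agent`, and the `toAISdkFormat` import path are assumptions:)

```ts
import { createUIMessageStream, createUIMessageStreamResponse } from 'ai';
import { toAISdkFormat } from '@mastra/ai-sdk'; // import path assumed

export async function POST(req: Request) {
  const { messages } = await req.json();
  const networkStream = await agent.network(messages); // agent assumed in scope

  const stream = createUIMessageStream({
    execute: async ({ writer }) => {
      // forward every transformed part to the UI message stream
      for await (const part of toAISdkFormat(networkStream, { from: 'network' })) {
        writer.write(part);
      }
    },
  });

  return createUIMessageStreamResponse({ stream });
}
```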
Root cause analysis:
The `for await` loop batches chunks before writing them. With `agent.stream()`, I can just do the following (sketch below):
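(Again a reconstruction rather than the original snippet; it relies on the `toUIMessageStreamResponse()` helper that, as noted below, the network stream lacks:)

```ts
export async function POST(req: Request) {
  const { messages } = await req.json();
  const stream = await agent.stream(messages); // agent assumed in scope
  // one-liner: hand the stream straight to useChat as UI message chunks
  return stream.toUIMessageStreamResponse();
}
```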
But `MastraAgentNetworkStream` doesn't have a `.toUIMessageStreamResponse()` method (I checked the types).
Questions:
1. Is `.toUIMessageStreamResponse()` coming to `MastraAgentNetworkStream` in a future snapshot?
2. Is there a way to read the `MastraAgentNetworkStream` directly, without `toAISdkFormat()` batching?
3. Should I use `networkStream.getReader()` to read chunks directly instead of via async iteration? (Sketched below.)
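(Question 3's idea as a sketch, assuming `MastraAgentNetworkStream` is a standard `ReadableStream`:)

```ts
// Read the raw stream with a ReadableStream reader instead of iterating
// through toAISdkFormat(), to check whether batching happens upstream of it.
const reader = networkStream.getReader();
try {
  while (true) {
    const { done, value } = await reader.read();
    if (done) break;
    console.log('raw network chunk:', value);
  }
} finally {
  reader.releaseLock();
}
```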
Environment:
- Mastra snapshot: 0.0.0-ai-sdk-network-text-delta-20251017172601
- AI SDK: v5
- Next.js 15 (App Router)
- Frontend: `useChat()` hook with `streamProtocol: 'data'`
The network tab confirms chunks are arriving perfectly - this is purely a backend transformation/batching issue. Any guidance would be hugely appreciated! 🙏
In simple words, I'm able to get the response all at once, but not a streaming one.
Hey @! .kinderjaje ! Could you share a sample of the chunks you're receiving frontend side?
Thanks! Could you try this way and let me know if that works better?
Hey @Romain, here are my findings:
I implemented it exactly as you recommended:
This works - the response appears in the UI and the HTTP connection stays open properly. However, I'm still experiencing the batching issue where text appears in 2-3 large chunks instead of streaming character-by-character like `agent.stream()` does.
What I'm Seeing
Backend logs:
- ✅ 631 total chunks processed
- ✅ Hundreds of individual `text-delta` chunks sent
- ✅ All chunks arrive at browser (confirmed in Network tab)
Frontend behavior:
- ❌ Text appears in 2-3 large batches instead of character-by-character
- ❌ Not the smooth streaming experience that `agent.stream()` provides
Network tab shows individual text-delta chunks arriving:
But the frontend receives `data-network` chunks instead:
So `toAISdkFormat()` is buffering all the individual `text-delta` chunks and aggregating them into `data-network` metadata chunks. The `useChat()` hook doesn't render `data-network` chunks as streaming text - it expects `text-delta` chunks for character-by-character streaming.
My frontend config (sketched below):
- `experimental_throttle: 0` (no throttling)
- Using `DefaultChatTransport` with AI SDK v5
- `useChat()` hook from `@ai-sdk/react`
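(That config as code - a sketch; the endpoint path is a placeholder:)

```tsx
'use client';
import { useChat } from '@ai-sdk/react';
import { DefaultChatTransport } from 'ai';

export function Chat() {
  // Client-side update throttling is disabled, so any batching we still
  // see must be coming from the server side.
  const { messages, sendMessage } = useChat({
    transport: new DefaultChatTransport({ api: '/api/chat' }), // placeholder path
    experimental_throttle: 0,
  });

  // ...render `messages` and call `sendMessage` on submit
  return null;
}
```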
The Core Issue
The frontend receives `data-network` chunks (which contain aggregated network metadata), but `useChat()` expects `text-delta` chunks to render streaming text. The `data-network.data.output` contains the full aggregated text, not incremental chunks.
Question
How do I get `text-delta` chunks from `agent.network()` for character-by-character streaming? `agent.stream()` sends `text-delta` chunks perfectly, but I need the intelligent routing from `agent.network()`. Is there a different approach, or a flag I should use with `toAISdkFormat()` to emit `text-delta` instead of `data-network`?
Environment:
- Mastra: 0.0.0-ai-sdk-network-text-delta-20251017172601
- AI SDK: v5
- Next.js: 15 (App Router)
@Romain maybe you missed my previous question, because I asked 2 things in the same thread. They're all related to the Agent Network, which is why I kept them here. But if you want, I can open a separate thread for it.
Edit: I actually created a new thread: https://discord.com/channels/1309558646228779139/1429448714484846703