Real-time progress streaming from Mastra tools
Problem: Need real-time progress streaming from Mastra tools (15+ min execution). Currently, agent responses only appear after the tool completes.
Setup:
- Using agent.stream() with format: 'aisdk'
- Tool uses writer.write() for progress events
- Frontend: Vercel AI SDK's useChat hook
What we tried:
1. writer.write({ type: "custom-progress", ... }) - events don't reach the frontend (filtered by the AI SDK conversion)
2. @mastra/ai-sdk's toAISdkFormat() - errors with undefined chunks
3. Custom stream wrapping - breaks existing streaming
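To make the failure in attempt 1 concrete, here is a minimal, self-contained TypeScript model of what a conversion layer that only maps known chunk types does to custom events. The chunk shapes and type names here are illustrative assumptions, not the actual AI SDK or Mastra internals:

```typescript
// Illustrative model of the symptom: a conversion step that only maps
// chunk types it knows about will silently drop custom ones.
type Chunk = { type: string; payload?: unknown };

const KNOWN_TYPES = new Set(["text-delta", "tool-call", "tool-result"]);

// Stand-in for the Mastra -> AI SDK stream conversion (assumed behavior).
function toUiStream(chunks: Chunk[]): Chunk[] {
  return chunks.filter((c) => KNOWN_TYPES.has(c.type));
}

const emitted: Chunk[] = [
  { type: "text-delta", payload: "Working" },
  { type: "custom-progress", payload: { pct: 10 } }, // written via writer.write()
  { type: "tool-result", payload: { ok: true } },
];

const received = toUiStream(emitted);
// The custom-progress chunk never reaches the frontend.
console.log(received.map((c) => c.type)); // → [ 'text-delta', 'tool-result' ]
```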
Terminal logs show: tool execution happens AFTER the HTTP response closes (POST 200), so progress events can't stream.
Questions:
1. Is there a way to stream custom tool progress events with AI SDK format?
2. Should we use Mastra native streaming (streamVNext()) instead? What do we lose (memory, useChat integration)?
3. Is PR #8672 the official solution? When will it merge?
4. How do other Mastra users handle long-running tool progress?
Constraint: Must maintain conversational AI responses and memory integration. Can't just return raw tool outputs.
📝 Created GitHub issue: https://github.com/mastra-ai/mastra/issues/8777
@Romain sorry for pinging mate, but I need your confirmation on this issue. Do you think it's related to PR #8672 and upcoming release?
In simple terms, I have a process that runs for, let's say, 10+ minutes. Once the agent starts it, it's just a simple process, a script executing something. I would like to show real-time progress updates while it's working, so the user doesn't stare at a blank screen for 10 minutes before the process is over.
Any suggestion?
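Stripped of any framework, the pattern being asked for can be sketched like this: the long task reports progress through a callback as it works, so the caller can forward each update to the UI instead of waiting for the final result. runLongTask and the Progress shape are hypothetical names for illustration; in real Mastra code the callback's role would be played by the tool's writer:

```typescript
// Framework-agnostic sketch: a long-running task emits progress updates
// as it goes, rather than only returning a final value at the end.
type Progress = { step: number; total: number; message: string };

function runLongTask(onProgress: (p: Progress) => void): string {
  const total = 5; // stands in for the ~10-minute script
  for (let step = 1; step <= total; step++) {
    // ...do one slice of the real work here...
    onProgress({ step, total, message: `step ${step}/${total} done` });
  }
  return "final result";
}

const updates: Progress[] = [];
// In Mastra, this push would be a writer call that streams to the frontend.
const result = runLongTask((p) => updates.push(p));
console.log(updates.length, result); // → 5 final result
```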
Hey, sorry for the late response. We are pushing up a fix with documentation to support your use case. It should go out today in an alpha
We are adding a new function called custom to the writer to support your use case
we have a PR open for it here now https://github.com/mastra-ai/mastra/pull/8922
GitHub
Support writing custom top level stream chunks by TheIsrael1 · Pull Request
Today, writer.write emits tool-output chunks, which don't translate well with UI frameworks, e.g. AI SDK. This PR adds support for writing custom top level stream chunks with new writer.custo...
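A toy local mock of the difference the PR describes, assuming (per the PR description) that writer.write nests payloads inside tool-output chunks while the new custom function emits them as top-level stream chunks. This is purely illustrative; the real Mastra writer API may differ:

```typescript
// Local mock contrasting the two writer behaviors described in the PR.
type StreamChunk = { type: string; [k: string]: unknown };

function makeWriter(sink: StreamChunk[]) {
  return {
    // Today (assumed): the payload is nested inside a tool-output chunk,
    // which UI-side conversions may not know how to render.
    write(payload: Record<string, unknown>) {
      sink.push({ type: "tool-output", output: payload });
    },
    // After the PR (assumed shape): the payload is a top-level chunk the
    // frontend can match on directly.
    custom(payload: StreamChunk) {
      sink.push(payload);
    },
  };
}

const sink: StreamChunk[] = [];
const writer = makeWriter(sink);

writer.write({ type: "custom-progress", pct: 40 });
writer.custom({ type: "custom-progress", pct: 40 });

console.log(sink[0].type); // → tool-output (nested; dropped by UI conversion)
console.log(sink[1].type); // → custom-progress (top-level; UI can handle it)
```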
Hey Ward, thank you for the response mate, it means a lot not to have to look for a workaround solution. I am keeping an eye on it
Appreciate the link to the PR Dero!
@Ward do you know if this PR is included in the snapshot I am testing, the one that Romain created? https://discord.com/channels/1309558646228779139/1309558648476930100/1428797677788987414
Sorry, it isn't 🫤 We can only trigger a snapshot for a specific PR. You would need an alpha to get all the fixes at once, but these PRs aren't merged into main yet, so that won't work...
Ok, no problem Romain, just wanted to know, so I don't try for nothing.
thanks
We'll get you an alpha in an hour or 2
Awesome! I saw a PR from you recently. Or maybe it was about https://discord.com/channels/1309558646228779139/1428243966310088724
Anyway, I am here for a while, then I'll sleep and test
Ok it's published
if you use @alpha you'll see these changes
for the core package or AI SDK?
You should do all of the mastra packages 😉