Mastra•4w ago
Vulcano

chatRoute + useChat stream unstable

I am using ai-sdk's useChat and mastra's chatRoute together, and sometimes the SSE stream closes without an error, but also without sending the [DONE] event. Could this be an ai-sdk issue or a mastra issue? I will work on a repro repository, as the problem seems to happen under specific circumstances (never happens on my first message + agent's tool call, but happens always on the second) Short summary of my setup: - Frontend uses useChat. - The frontend provides client tools to the mastra agent for specific types of messaging to the client (custom HITL React components we made) - Backend has chatRoute, routing directly to an agent that can start or resume a workflow with a tool. - The agent calls a workflow, which suspends and the agent calls one of the client tools. - One of the tools requires quite a large payload of inputData, and the stream sometimes (read: very often, but at random spots) breaks and the SSE HTTP request connection closes during the streaming of tool-input-delta parts.
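[Editor's note] When debugging a symptom like this, it can help to capture the raw SSE response (e.g. with curl) and check mechanically whether the terminator ever arrived. Below is a minimal, hypothetical helper for that; the function name is ours, and it only assumes the data-stream convention described in the thread, where each event is a `data:` line and the stream ends with `data: [DONE]`.

```typescript
// Hypothetical helper: scan a captured SSE transcript and report whether
// the stream terminated properly. If `sawDone` is false, the connection
// was cut before the protocol-level end-of-stream marker.
type SseCheck = { sawDone: boolean; lastEvent: string | null };

function checkSseTranscript(raw: string): SseCheck {
  // SSE events are separated by blank lines; payload lines start "data:".
  const dataLines = raw
    .split("\n")
    .map((line) => line.trim())
    .filter((line) => line.startsWith("data:"))
    .map((line) => line.slice("data:".length).trim());

  return {
    sawDone: dataLines.includes("[DONE]"),
    lastEvent: dataLines.length > 0 ? dataLines[dataLines.length - 1] : null,
  };
}
```

Logging `lastEvent` for a broken run would show which part type (e.g. a tool-input-delta) was in flight when the connection closed.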
11 Replies
Mastra Triager
Mastra Triager•4w ago
šŸ“ Created GitHub issue: https://github.com/mastra-ai/mastra/issues/10211 šŸ” If you're experiencing an error, please provide a minimal reproducible example to help us resolve it quickly. šŸ™ Thank you @Vulcano for helping us improve Mastra!
Abhi Aiyer
Abhi Aiyer•4w ago
hi @Vulcano how long is the stream open do you think? Every handler in the mastra server has a default timeout of 3 minutes
Vulcano
VulcanoOP•4w ago
For a single request it is definitely under 3 minutes; I have seen this issue as soon as, say, 40 seconds after the HTTP request is opened. After experimenting today, I do feel like this is an AI SDK issue, though: using the experimental throttling option in their beta release makes it much less likely to happen. I will follow the GitHub issue and keep your team updated there if I cannot reproduce it with standalone AI SDK.
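[Editor's note] The throttling option mentioned here is `useChat`'s `experimental_throttle` (a millisecond interval) in the AI SDK. Conceptually, it coalesces bursts of stream deltas so the UI re-renders at most once per interval instead of once per chunk. The sketch below illustrates the coalescing idea with hypothetical names; in the real hook a timer drives the flush, whereas here it is manual for clarity.

```typescript
// Conceptual sketch of update throttling: buffer incoming deltas and
// emit them in batches, so a burst of tiny chunks causes one update.
// Names are hypothetical; the AI SDK exposes the real behavior via the
// `experimental_throttle` option on `useChat`.
function makeThrottledAppender(flush: (text: string) => void) {
  let buffer = "";
  return {
    // Called once per stream delta (cheap: just string concat).
    push(delta: string) {
      buffer += delta;
    },
    // In the real hook, a timer calls this every N ms; empty buffers
    // produce no update at all.
    flushNow() {
      if (buffer.length > 0) {
        flush(buffer);
        buffer = "";
      }
    },
  };
}
```

The fact that throttling reduces the failure rate is consistent with Vulcano's later finding that render pressure on the frontend, not the server, was the trigger.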
Abhi Aiyer
Abhi Aiyer•4w ago
Thanks! keep me posted!
_roamin_
_roamin_•3w ago
Hey @Vulcano ! Just curious, what model provider are you using?
Vulcano
VulcanoOP•3w ago
@roamin OpenAI, in this case GPT-4o
Ward
Ward•2w ago
@Vulcano could it be that the stream contains a chunk that the useChat api does not understand and stops reading?
Vulcano
VulcanoOP•7d ago
@Ward could be, but I wouldn't think that would happen on random text-delta chunks.
_roamin_
_roamin_•6d ago
I know that sometimes the streams get unexpectedly closed by the LLM providers and you'll get some cryptic error, like "error: terminated". This is one of the reasons we had to switch our "generate" implementation from using the stream API endpoints to the non-stream ones. At first we thought it was coming from our implementation, but we were able to reproduce this behaviour using ai-sdk directly. How often are you running into this issue?
Vulcano
VulcanoOP•6d ago
We put a full pause on developing our frontend a while ago. The problem was related to slow rendering, so it was probably not a Mastra issue; I believe it was fully a frontend issue. By "frontend issue" I mean not a Mastra client issue, but something deeper in our own rendering. I haven't seen the issue since I improved the rendering/memoization efficiency of all my components. When I inspected my traces, I saw full generations, so it definitely was not the LLM provider closing the stream. If you want, you could close this for now, and if I ever see it again I will link back to this discussion.
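[Editor's note] The memoization fix described here typically means wrapping each rendered message in `React.memo` with a props comparator, so that streaming deltas only re-render the message actually being updated. The comparator below is a hypothetical sketch of that idea (the props shape is ours, not from the thread); it is a plain function so the comparison logic can be shown on its own.

```typescript
// Hypothetical props comparator for a streamed-message component.
// Paired with React.memo, it skips re-rendering any message whose id
// and content are unchanged, so only the message currently receiving
// deltas pays the render cost.
type MessageProps = { id: string; content: string };

function messagePropsEqual(prev: MessageProps, next: MessageProps): boolean {
  // Returning true tells React.memo the render can be skipped.
  return prev.id === next.id && prev.content === next.content;
}

// Assumed usage in a React component file:
//   const MemoMessage = React.memo(Message, messagePropsEqual);
```

Without this, every delta re-renders the whole message list, which matches the "slow rendering" symptom described above.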
_roamin_
_roamin_•5d ago
Sounds good, I appreciate you sharing those additional details! Definitely don't hesitate to open a new thread if you ever run into that issue again!! Thanks!