Error in LLM Execution Step TypeError: terminated

After upgrading to AI SDK v5 and these Mastra versions: "@mastra/core": "^0.17.1", "@mastra/langfuse": "^0.0.9", "@mastra/libsql": "^0.14.2", "@mastra/loggers": "^0.10.12", "@mastra/memory": "^0.15.2", "@mastra/pg": "^0.16.0", my agents in workflows keep failing with `Error in LLM Execution Step TypeError: terminated`. Before upgrading I had never seen these issues. I assume the model is failing to abide by the output schema, which is odd because I am using the same model I was using before the upgrade. It would be great to see the reasoning steps for the agent call to understand why the schemas aren't matching, as I do not have any contradictory language in my prompt. I'd also like to understand whether the upgrade itself caused this, since it seems unexpected that bumping versions would break something like this.
25 Replies
tommy (OP) · 4w ago
_roamin_ · 4w ago
Hi @tommy ! Could you try updating to the latest Mastra packages?
tommy (OP) · 4w ago
Still facing the same issues after upgrading, @Romain.
_roamin_ · 4w ago
It looks like a connectivity issue. How are you getting this error?
tommy (OP) · 4w ago
I thought it was a connectivity error as well, but after testing across multiple networks (4 so far) and experiencing no other connection errors, I am confident it is not a network issue. The error is coming from agent execution in a workflow.
_roamin_ · 4w ago
Could you share a small repro example?
tommy (OP) · 4w ago
If you configure any workflow with `generateVNext`, AI SDK 5, and the latest Mastra packages, then add a series of agents using gpt-5 of reasonable complexity, you should see similar issues. I can think about how to adjust our current workflow so that we feel comfortable sharing it. @Romain it's hard for me to adapt our current workflow into something that would make sense as a small reproducible example. If you have any example repositories with semi-complex tasks, you should be able to see the errors on the same versions.
_roamin_ · 4w ago
Sorry, but I can't reproduce this error. Where is this error coming from? Client? Mastra server?
tommy (OP) · 3w ago
@Romain It's coming from Mastra when running locally and triggering the workflow from the playground. It is possible to reproduce; I will attempt to do so by building a new workflow that does something different.

@Romain Is there a default timeout for agents running in parallel, for some sort of promise resolution? Unsure, but it seems strongly related to the agents that are failing with this error. It's hard to recreate a task that requires a significant amount of reasoning in parallel. I was able to create a workflow that fails with similar errors ~10% of the time, and I'm working on a setup where the error occurs more frequently, but because I am unsure of the root cause it is difficult.
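The parallel-timeout theory can be probed without any Mastra API at all: wrapping each parallel call in an explicit deadline makes a hung request fail with a labeled error instead of a bare `TypeError: terminated`. A minimal sketch, assuming nothing about Mastra internals (`withDeadline` and the simulated steps are hypothetical helpers):

```typescript
// Hypothetical deadline wrapper: races a piece of work against a timer so a
// hung call surfaces as a labeled timeout error instead of hanging forever.
async function withDeadline<T>(work: Promise<T>, ms: number, label: string): Promise<T> {
  let timer: ReturnType<typeof setTimeout> | undefined;
  const deadline = new Promise<never>((_, reject) => {
    timer = setTimeout(() => reject(new Error(`${label} exceeded ${ms}ms`)), ms);
  });
  try {
    return await Promise.race([work, deadline]);
  } finally {
    if (timer) clearTimeout(timer); // don't let a dangling timer keep the process alive
  }
}

// Simulated "agent steps" running in parallel, each with its own deadline.
const step = (result: string, delayMs: number) =>
  new Promise<string>((resolve) => setTimeout(() => resolve(result), delayMs));

async function main() {
  const results = await Promise.all([
    withDeadline(step("a", 10), 1_000, "step-a"),
    withDeadline(step("b", 20), 1_000, "step-b"),
  ]);
  console.log(results.join(",")); // prints "a,b"
}

main();
```

If the failing steps start throwing the labeled timeout error instead of "terminated", that would point at a hang in the underlying stream rather than a schema mismatch.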
tommy (OP) · 3w ago
@Romain Here is an example repository where I have seen the error occur a few times. It's not super consistent, but you will be able to see it with enough persistence: https://github.com/tommyOtsai/timeout-example
Unknown User · 3w ago
[message not public]
_roamin_ · 3w ago
Thanks for sharing, @tommy, will test this out! Hey @Iodine! Yes, everything is a "stream" now under the hood. It could be coming from structuredOutput, which we are going to rework pretty soon, but meanwhile, if you could share a small repro example that'd be awesome 🙏 (I know it's not always simple to create one, though...)
Unknown User · 3w ago
[message not public]
tommy (OP) · 3w ago
@Romain I believe our issues stem from inconsistency with stream. Wondering if it's possible to add `generate` back in, as the only change we made was moving to `generateVNext`, which wraps streaming, and it has caused many of our previously working workflows to fail. We still need functionality that's only available in AI SDK v5, otherwise we would just roll back. Also, for structured output, is `output` deprecated for `generateVNext`? In any case, I have tested using both `structuredOutput` and plain `output` and faced similar issues.

@Romain This is also an issue on Mastra Cloud, not just locally in the playground. In the cloud, sometimes the workflows will just hang rather than returning the type error; sometimes the type error is returned.

@Romain Any update on this? Now, in addition to the previous error, I am seeing this one:

ERROR [2025-10-02 12:55:04.963 -0400] (Mastra): Error in agent stream runId: "fe989371-d483-4b13-8276-5b1e7d3a8a01" error: {}
_roamin_ · 3w ago
Hey @tommy ! I was not able to repro yesterday, but I will spend more time on this issue later this afternoon 😉
Unknown User · 3w ago
[message not public]
tommy (OP) · 3w ago
@Romain Switching to calling the AI SDK's `generate` directly has temporarily fixed our issues. Not sure, but we believe it could be connected to this: https://github.com/mastra-ai/mastra/commit/623ffaf2d969e11e99a0224633cf7b5a0815c857 I'm not entirely familiar with the codebase, though, so I don't want to be misleading.
_roamin_ · 3w ago
Hmm, not sure it's that, because it was released in 0.19.x and you seem to be using 0.17.x 🤔 Do you get the error with 0.19.x?
tommy (OP) · 3w ago
Yes, I get the errors across versions. Yeah, I was just looking for commits that affected both `generate` and `generateVNext`.
_roamin_ · 3w ago
I still have not been able to reproduce, but you mentioned that you have a project in Mastra Cloud; does that project reproduce the error?
tommy (OP) · 3w ago
Yes, it should. Here's the slug: numerous-raspy-parrot
_roamin_ · 3w ago
I created this GitHub issue to track the problem; feel free to add anything that could help the team debug: https://github.com/mastra-ai/mastra/issues/8430 I'm also going to share your slug with the cloud team to see if they can find anything on their end.
tommy (OP) · 3w ago
Thank you!
Ville · 3w ago
I am also getting these "terminated" errors quite a bit. I think they might be due to a connection error to OpenAI, but I'm not sure. My question is: how do I catch these errors and pipe them to the frontend?
seb7wake · 3d ago
Also getting these "terminated" errors stochastically, roughly every ~100 model requests. I also see the same error when calling OpenAI directly (not using Mastra) with structured outputs, so it may be an underlying model-provider issue.
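For failures that transient (~1 in 100 requests), a narrow retry around the model call is a common mitigation. This is a sketch under the assumption that the failure really is a transient `TypeError: terminated` from the network layer; the error predicate and backoff values are guesses, not Mastra or OpenAI API:

```typescript
// Retry a request a few times with exponential backoff, but only when the
// error looks like the transient "terminated" TypeError; rethrow anything else.
async function retryOnTerminated<T>(
  fn: () => Promise<T>,
  attempts = 3,
  baseDelayMs = 250,
): Promise<T> {
  let lastErr: unknown;
  for (let i = 0; i < attempts; i++) {
    try {
      return await fn();
    } catch (err) {
      lastErr = err;
      const transient = err instanceof TypeError && /terminated/i.test(err.message);
      if (!transient || i === attempts - 1) throw err;
      await new Promise((r) => setTimeout(r, baseDelayMs * 2 ** i));
    }
  }
  throw lastErr; // unreachable; satisfies the type checker
}
```

Keeping the predicate narrow matters: retrying on every error would also retry genuine schema-validation failures, which the thread suggests should be surfaced, not masked.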
