A subsequent call generates an error about reasoning being omitted from `function_call`
I do an `agent.stream`, get the results.. all works great. Then on the very next message the user creates, this error is logged.
it's like past messages aren't being populated correctly with a reasoning value on calls to OpenAI?
this particular agent is configured to use OpenAI o3 models.
Hey @randyklex ! Would you mind sharing a small repro example? Thanks 🙏
📝 Created GitHub issue: https://github.com/mastra-ai/mastra/issues/8738
Here's a repo for this issue..
https://github.com/randyklex/mastra_error
The tools I have there - they just make database calls. So you could stub in whatever you want for those.
The first question I ask is
user: "who are the stakeholders"
that returns a good result.
followed up with
user: "list the collections"
@randyklex I've been trying for a bit to repro this but unable to, are you able to provide an actual reproduction rather than just code snippets? That would help us a lot with fixing it!
yeah let me stitch that together in a working app..
no dice on the repo.. don't ya love coding.
it's about as exact as we have in production, but..
I'll keep at it to see if I can figure out what's happening
Still can't isolate to exactly the setting or switch or combination of library versions etc.
I'm unable to run this in the playground because of the Monorepo issues and typescript transpile issues.
But I also think this has to do with the vercel ai-sdk message formatting - so that's probably why you don't see this on the playground if that's the only place you've tried reproducing?
I'm using the OpenAI o3 model. I do think reasoning has to do with it, because with gpt-4o I do not have the problem.
You have to generate reasoning parts: if you get a response that generated reasoning parts and then send another message, I almost guarantee you'll hit this error.
I even tried deleting the `step-start` parts, because I was reading that there was a problem with the order of the messages. That did not seem to fix it.
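To be concrete, the stripping I tried was roughly along these lines (a sketch, not the actual production code; the message and part shapes here are simplified stand-ins for the AI SDK types):

```typescript
// Hypothetical, simplified UI message shape; real AI SDK v5 parts carry more fields.
type UIMessagePart = { type: string; [key: string]: unknown };
type UIMessage = { role: string; parts: UIMessagePart[] };

// Drop every "step-start" part from every message before handing the
// history to the agent, leaving all other parts untouched.
function stripStepStarts(messages: UIMessage[]): UIMessage[] {
  return messages.map((m) => ({
    ...m,
    parts: m.parts.filter((p) => p.type !== "step-start"),
  }));
}
```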
IMHO this is a problem in the message translation to and from vercel ai sdk.
The client side is "@ai-sdk/react": "^2.0.68",
I really want to work with you on this - just really lack the knowledge of the internals, so I can only report the observed effect.
I will try to build a more robust nextjs frontend and use the vercel Ai SDK.. but that will take time.
I can also show exactly the UI message parts that are being sent back - perhaps that would help show how those messages are being converted back into model calls?
Yeah, it would help to give as much info as possible
This is what the message parts look like coming up from the client.
These generate the error.
https://github.com/randyklex/mastra_error/blob/master/client-message-parts.json
Then if I refresh, and try again, these are the message parts that are sent up (loaded from the database)
https://github.com/randyklex/mastra_error/blob/master/client-message-parts-from-db.json
The big difference here is the missing `reasoning` parts.
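In other words, the problematic shape can be spotted like this (a sketch with simplified, hypothetical part types; the `tool-` prefix stands in for the AI SDK's per-tool part types and is an assumption, not the exact schema):

```typescript
// Hypothetical, simplified message shape for illustration.
type Part = { type: string; [key: string]: unknown };
type Message = { role: string; parts: Part[] };

// Flag assistant messages that contain tool-call parts but no reasoning
// parts - the shape the history takes after being reloaded from the database.
function messagesMissingReasoning(messages: Message[]): Message[] {
  return messages.filter(
    (m) =>
      m.role === "assistant" &&
      m.parts.some((p) => p.type.startsWith("tool-")) &&
      !m.parts.some((p) => p.type === "reasoning"),
  );
}
```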
Finally just for reference, this is the error message;
https://github.com/randyklex/mastra_error/blob/master/error.txt
we're working on scaffolding a nextjs app with Vercel AI SDK v5 calling this agent.. but in the meantime - are these UI messages helpful at all?
We are going with a workaround.
Instead of passing all messages from the chat client, we're just getting the last user message - which is probably the most correct thing to do anyway.
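The workaround amounts to something like this (a sketch with simplified message shapes; the idea is just to forward the newest user message and let the agent's memory supply prior context):

```typescript
// Hypothetical, simplified message shape for illustration.
type Message = { role: string; parts: { type: string; text?: string }[] };

// Walk the client-side history backwards and return only the most
// recent user message, instead of replaying the whole conversation.
function lastUserMessage(messages: Message[]): Message | undefined {
  for (let i = messages.length - 1; i >= 0; i--) {
    if (messages[i].role === "user") return messages[i];
  }
  return undefined;
}
```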
Oh yes, you shouldn't be passing conversation history as messages to the agent, memory should handle that for you