A subsequent call generates an error about reasoning being omitted from `function_call`

I call agent.stream and get the results, and everything works great. Then on the very next message the user sends, this error is logged:
Error creating stream [Error [AI_APICallError]: Item 'fc_0a038568becf80fb0068e9630e5fd081978df905e0edaed49a' of type 'function_call' was provided without its required 'reasoning' item: 'rs_0a038568becf80fb0068e9630ad5248197a0938f405d5fad2e'.] {
  cause: undefined,
  url: 'https://api.openai.com/v1/responses',
  requestBodyValues: [Object],
  statusCode: 400,
  responseHeaders: [Object],
  responseBody: '{\n' +
    '  "error": {\n' +
    `    "message": "Item 'fc_0a038568becf80fb0068e9630e5fd081978df905e0edaed49a' of type 'function_call' was provided without its required 'reasoning' item: 'rs_0a038568becf80fb0068e9630ad5248197a0938f405d5fad2e'.",\n` +
    '    "type": "invalid_request_error",\n' +
    '    "param": "input",\n' +
    '    "code": null\n' +
    '  }\n' +
    '}',
  isRetryable: false,
  data: [Object]
}
It's as if past messages aren't being populated correctly with a reasoning value on calls to OpenAI. This particular agent is configured to use OpenAI o3 models.
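For context on the error itself: with reasoning models, the OpenAI Responses API expects each replayed function_call item in the input to be accompanied by the reasoning item it was produced with. A rough sketch of the shapes involved (the item IDs are hypothetical placeholders and the exact fields are inferred from the error message, not taken from this thread):

// Hypothetical replay of conversation history to the Responses API.
// IDs are placeholders; the real ones come from the previous response.
const input = [
  { role: 'user', content: 'who are the stakeholders' },

  // The reasoning item the model emitted before calling the tool...
  { type: 'reasoning', id: 'rs_PLACEHOLDER', summary: [] },

  // ...must be present alongside the function_call that references it.
  // Dropping the reasoning item while keeping the function_call is what
  // triggers the 400 "provided without its required 'reasoning' item" error.
  {
    type: 'function_call',
    id: 'fc_PLACEHOLDER',
    call_id: 'call_PLACEHOLDER',
    name: 'getStakeholders',
    arguments: '{}',
  },
  {
    type: 'function_call_output',
    call_id: 'call_PLACEHOLDER',
    output: '{"stakeholders": []}',
  },
]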
_roamin_
_roamin_2w ago
Hey @randyklex ! Would you mind sharing a small repro example? Thanks 🙏
randyklex
randyklexOP7d ago
Here's a repo for this issue: https://github.com/randyklex/mastra_error. The tools I have there just make database calls, so you could stub in whatever you want for those. The first question I ask is user: "who are the stakeholders", which returns a good result, followed up with user: "list the collections".
Daniel Lew
Daniel Lew6d ago
@randyklex I've been trying for a bit to repro this but have been unable to. Are you able to provide an actual reproduction rather than just code snippets? That would help us a lot with fixing it!
randyklex
randyklexOP5d ago
Yeah, let me stitch that together in a working app.. no dice on the repo.. don't ya love coding. It's about as exact as what we have in production, but I'll keep at it to see if I can figure out what's happening.

I still can't isolate it to an exact setting, switch, or combination of library versions. I'm unable to run this in the playground because of the monorepo and TypeScript transpile issues. But I also think this has to do with the Vercel AI SDK message formatting, so that's probably why you don't see this in the playground, if that's the only place you've tried reproducing.

I'm using the OpenAI o3 model, and I do think reasoning has to do with it, because with gpt-4o I do not have the problem. You have to generate reasoning parts: if you get a response that generated reasoning parts and then send another message, I almost guarantee you'll step on this error. I even tried deleting step-start parts, because I was reading that was a problem with the order of the messages; that did not seem to fix it.

IMHO this is a problem in the message translation to and from the Vercel AI SDK. The client side is "@ai-sdk/react": "^2.0.68". The server route looks like this:
// NOTE: the opening of this handler wasn't included in the paste; the
// signature is reconstructed to match the trailing brace below.
export async function POST(req: Request) {
  const { messages, roomId, viewingEntity } = await req.json()

  // userId and runtimeContext are set up earlier in the handler (elided)
  runtimeContext.set([...])

  const agent = mastra.getAgent('chatAgent')
  const stream = await agent.stream(messages, {
    runtimeContext,
    providerOptions: {
      openai: { reasoningEffort: 'medium', reasoningSummary: 'auto' },
    },
    // maxSteps: 20, // I had to comment this out because it doesn't exist (Luke)
    memory: { thread: `${userId}-${roomId}`, resource: userId },
  })

  return createUIMessageStreamResponse({
    stream: toAISdkFormat(stream),
  })
}
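One workaround sometimes tried in this situation (a sketch of my own, not something confirmed in the thread) is to strip the reasoning and step-start parts from the client-side UI messages before handing them to agent.stream, so stale provider item IDs never get replayed:

// Sketch: drop reasoning and step-start parts from AI SDK v5 UIMessages
// before passing them to agent.stream. Assumes the UIMessage shape from
// the 'ai' package ({ role, parts: [{ type, ... }] }).
import type { UIMessage } from 'ai'

function stripReasoningParts(messages: UIMessage[]): UIMessage[] {
  return messages.map((message) => ({
    ...message,
    parts: message.parts.filter(
      (part) => part.type !== 'reasoning' && part.type !== 'step-start',
    ),
  }))
}

// const stream = await agent.stream(stripReasoningParts(messages), { ... })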
I really want to work with you on this; I just lack knowledge of the internals, so I can only report the observed effect. I will try to build a more robust Next.js frontend that uses the Vercel AI SDK, but that will take time. I can also show exactly the UI message parts that are being sent back; perhaps that would help show how those messages are being converted back into model calls?
Daniel Lew
Daniel Lew5d ago
Yeah, it would help to give as much info as possible.
randyklex
randyklexOP5d ago
This is what the message parts look like coming up from the client; these generate the error: https://github.com/randyklex/mastra_error/blob/master/client-message-parts.json

Then if I refresh and try again, these are the message parts that are sent up (loaded from the database): https://github.com/randyklex/mastra_error/blob/master/client-message-parts-from-db.json

The big difference here is the missing reasoning parts. Finally, just for reference, this is the error message: https://github.com/randyklex/mastra_error/blob/master/error.txt
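To make that difference concrete, here is a hypothetical illustration (not the contents of the linked files) of a reasoning part surviving in the live UI messages but disappearing after a reload from the database:

// Hypothetical UIMessage fragments; the real ones are in the linked JSON files.
// Live, in-memory assistant message: reasoning part present alongside the tool call.
const liveAssistantParts = [
  { type: 'reasoning', text: '...' },
  { type: 'tool-getStakeholders', toolCallId: 'call_PLACEHOLDER', state: 'output-available' },
  { type: 'text', text: 'Here are the stakeholders...' },
]

// After reloading from the database: the reasoning part is gone, so replaying
// this history produces a function_call with no matching reasoning item.
const reloadedAssistantParts = [
  { type: 'tool-getStakeholders', toolCallId: 'call_PLACEHOLDER', state: 'output-available' },
  { type: 'text', text: 'Here are the stakeholders...' },
]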
randyklex
randyklexOP3d ago
We're working on scaffolding a Next.js app with Vercel AI SDK v5 calling this agent, but in the meantime, are these UI messages helpful at all?

We're going with a workaround. Instead of passing all messages from the chat client, we just pass the last user message, which is probably the most correct thing to do anyway:
// Extract only the last user message - Mastra's memory will handle conversation history.
// This prevents function_call/reasoning mismatch errors with reasoning models (o3).
const lastUserMessage = messages[messages.length - 1]

const agent = mastra.getAgent('chatAgent')
const stream = await agent.stream(lastUserMessage, {
  runtimeContext,
  providerOptions: {
    openai: { reasoningEffort: 'medium', reasoningSummary: 'auto' },
  },
  // maxSteps: 20, // I had to comment this out because it doesn't exist (Luke)
  memory: { thread: `${userId}-${roomId}`, resource: userId },
})
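One small hardening of that workaround (an editor's suggestion, not from the thread): the last element of messages isn't guaranteed to be a user message, so searching backwards for the most recent user role is slightly safer:

// Sketch: find the most recent user message instead of blindly taking the
// last element, in case the client ever sends a trailing assistant message.
const lastUserMessage = [...messages]
  .reverse()
  .find((message) => message.role === 'user')

if (!lastUserMessage) {
  return new Response('No user message in payload', { status: 400 })
}

// const stream = await agent.stream(lastUserMessage, { ... })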
Daniel Lew
Daniel Lew3d ago
Oh yes, you shouldn't be passing the conversation history as messages to the agent; memory should handle that for you.
