Streaming reasoning w/ 0.14.1 and ai v5 SDK
I'm using o3 and added sendReasoning: true to the toUIMessageStreamResponse call, but I do not get any actual reasoning text.
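Roughly what the route handler looks like (simplified sketch, not my exact code - the agent name and paths are placeholders, and the stream call may be named differently depending on the Mastra version):
```ts
// app/api/chat/route.ts - simplified sketch, placeholders throughout
import { mastra } from '@/mastra';

export async function POST(req: Request) {
  const { messages } = await req.json();

  // 'myAgent' is a placeholder name; the agent is configured with the o3 model
  const agent = mastra.getAgent('myAgent');
  const stream = await agent.stream(messages);

  // sendReasoning: true is supposed to forward reasoning parts to the client
  return stream.toUIMessageStreamResponse({ sendReasoning: true });
}
```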
See the screenshot of the in-browser console showing the reasoning part.
I did see this post, but it makes it seem like the issue was already resolved:
https://discord.com/channels/1309558646228779139/1399493184358318274
Hey! Have you tried passing the reasoning option in providerOptions when calling stream(...)?
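Something along these lines (untested sketch - the exact keys depend on the provider; for OpenAI reasoning models it would be the openai provider options):
```ts
// Untested sketch - option names depend on the provider / AI SDK version
const stream = await agent.stream(messages, {
  providerOptions: {
    openai: {
      reasoningEffort: 'medium',
      // reasoningSummary asks for summarized reasoning text to be returned
      reasoningSummary: 'detailed',
    },
  },
});

return stream.toUIMessageStreamResponse({ sendReasoning: true });
```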
thanks Romain - had not tried that. Will def give it a try.
Adding providerOptions did not get the reasoning part streaming in the UI. Created a GitHub issue: https://github.com/mastra-ai/mastra/issues/7060
Hi @randyklex, we found the issue: providerOptions was not being propagated properly. A fix is in the latest @mastra/core@alpha going out Tuesday.
thank you @Abhi Aiyer
Tested this today with the just-released 0.16.0 version.
Given the same setup as above with providerOptions and the o3 model, I still do not see any text in the streamed reasoning parts.
I do see the "thinking" part appear in the chat dialog and see it end; I just never get the actual reasoning text.
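For reference, the client is just mapping over the message parts, roughly like this (simplified sketch; the Reasoning and Response component names are placeholders):
```tsx
// Simplified sketch of the client render - component names are placeholders
{message.parts.map((part, i) => {
  if (part.type === 'reasoning') {
    // the reasoning part arrives, but part.text never fills in for me
    return <Reasoning key={i} text={part.text} />;
  }
  if (part.type === 'text') {
    return <Response key={i}>{part.text}</Response>;
  }
  return null;
})}
```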
Just a poke - I still do not see any reasoning text.
Maybe I'm misunderstanding what "thinking" is supposed to be?
I kind of expect to see tokens generated - but perhaps this is internal model thinking time, and I'm not supposed to see tokens?
Here's what I expect:
- the "Thinking" part shows in the UI
- the model generates tokens, and those show up
- "thinking" ends and that part collapses
What I get:
- the "thinking" part is displayed in the UI
- no tokens are displayed
- the "thinking" component finishes and the model generates tool calls or the final response