MastraAI•2mo ago
randyklex

Streaming reasoning w/ 0.14.1 and ai v5 SDK

I'm using o3 and added sendReasoning: true to toUIMessageStreamResponse, but I do not get any actual reasoning text. See the screenshot for the in-browser console output of the reasoning part. I did see this post, but it makes it seem like the issue was resolved: https://discord.com/channels/1309558646228779139/1399493184358318274
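For context, the route handler looks roughly like this (a sketch; the file path, agent name, and Mastra import are illustrative):
// app/api/chat/route.ts (illustrative) - the agent streams with format "aisdk"
// and the result is returned via toUIMessageStreamResponse with sendReasoning: true
import { mastra } from "@/mastra" // assumed local Mastra instance

export async function POST(req: Request) {
  const { messages } = await req.json()
  const agent = mastra.getAgent("myAgent") // agent name is illustrative

  const stream = await agent.streamVNext(messages, { format: "aisdk" })

  return stream.toUIMessageStreamResponse({ sendReasoning: true })
}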
6 Replies
randyklex
randyklexOP•2mo ago
Here's the UI code for the MessageContent components:
<Conversation>
  <ConversationContent>
    {messages.map((message) => (
      <Message from={message.role} key={message.id}>
        <MessageContent>
          {message.parts.map((part, i) => {
            switch (part.type) {
              case 'text':
                return <Response key={`${message.id}-${i}`}>{part.text}</Response>
              case 'reasoning':
                return (
                  <Reasoning key={`${message.id}-${i}`} isStreaming={part.state === 'streaming'}>
                    <ReasoningTrigger />
                    <ReasoningContent>{part.text}</ReasoningContent>
                  </Reasoning>
                )
              default:
                if (part.type.startsWith('tool-')) {
                  const toolCall = part as ToolUIPart
                  return (
                    <Tool key={`${message.id}-${i}`}>
                      <ToolHeader type={toolCall.type} state={toolCall.state} />
                      <ToolContent>
                        <ToolInput input={toolCall.input} />
                        <ToolOutput
                          errorText={toolCall.errorText}
                          output={JSON.stringify(toolCall.output, null, 2)}
                        />
                      </ToolContent>
                    </Tool>
                  )
                }
                return null
            }
          })}
        </MessageContent>
      </Message>
    ))}
  </ConversationContent>
  <ConversationScrollButton />
</Conversation>
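For reference, this is roughly the reasoning part shape the 'reasoning' case above relies on, and what I expect to see in the browser console (only the fields the component reads; other fields omitted):
// AI SDK v5 UI message reasoning part, as consumed by <Reasoning> above
// the problem in this thread: the part shows up, but text stays empty
const exampleReasoningPart = {
  type: 'reasoning' as const,
  state: 'streaming' as const, // flips to 'done' once the reasoning stream ends
  text: '...reasoning text accumulated so far...',
}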
_roamin_
_roamin_•2mo ago
Hey! Have you tried passing the reasoning option in the providerOptions when calling stream(...)?
const stream = myAgent.streamVNext(messages, {
  providerOptions: {
    openai: {
      reasoningEffort: "low",
    },
  },
});
randyklex
randyklexOP•2mo ago
thanks Romain - had not tried that. Will definitely give it a try.
const stream = await agent.streamVNext(messages, {
  format: "aisdk",
  //abortSignal: abortController.signal,
  runtimeContext,
  providerOptions: {
    openai: { reasoningEffort: "low" },
  },
  //maxSteps: 10,
  //memory: { thread: `${userId}-${roomId}`, resource: userId },
})
adding providerOptions did not get the reasoning part streaming in the UI, though.
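One thing I can try to narrow this down is draining the stream server-side and logging what comes through (a debugging sketch only; it assumes the aisdk-format stream exposes an AI SDK-style fullStream async iterable, and it consumes the stream instead of returning it):
// debug only: drain the stream and log anything reasoning-related
const debugStream = await agent.streamVNext(messages, {
  format: "aisdk",
  runtimeContext,
  providerOptions: {
    openai: { reasoningEffort: "low" },
  },
})

for await (const chunk of debugStream.fullStream) {
  // if nothing reasoning-related ever shows up here, the provider isn't emitting
  // reasoning text at all and the UI wiring is not the problem
  if (chunk.type.includes('reasoning')) {
    console.log(chunk.type, chunk)
  }
}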
Mastra Triager
Mastra Triager•2mo ago
GitHub
[DISCORD:1409624739739074570] Streaming reasoning w/ 0.14.1 and ai ...
This issue was created from Discord post: https://discord.com/channels/1309558646228779139/1409624739739074570 I'm using o3 and added sendReasoning: true to the toUIMessageStreamResponse and do...
Abhi Aiyer
Abhi Aiyer•2mo ago
Hi @randyklex, we found the issue: providerOptions was not being propagated properly. The fix is in the latest @mastra/core@alpha going out Tuesday.
randyklex
randyklexOP•2w ago
thank you @Abhi Aiyer šŸ‘‹ tested this today w/ the 0.16.0 version just released. Given the same setup above w/ providerOptions and using the o3 model, I still do not see any text in the reasoning parts streamed. I do see the "thinking" part display in the chat dialog, and see it end, I just never get the actual reasoning text.

šŸ‘‹ just a poke. I still do not see any reasoning text. Maybe I'm misunderstanding what "thinking" is supposed to be? I kind of expect to see tokens generated - but perhaps this is internal model thinking time, and I'm not supposed to see tokens?

Here's what I expect:
- the "Thinking" part shows in the UI
- the model generates tokens, and those show up
- "thinking" ends and the part collapses

What I get:
- the "thinking" part is displayed in the UI
- no tokens are displayed
- the "thinking" component finishes and the model generates tool calls or the final response
