How to persist partial step output to memory for single-step agent invocations
Hi Folks,
I was wondering if it's possible to stop an agent stream midway while also preserving the tokens that were streamed up to the point the user stopped the stream.
So essentially:
1. The user sends a message.
2. The agent streams about 200 tokens and the user clicks stop.
3. The agent stops generating and we save those 200 tokens to memory.
Currently, if I stop the stream from the frontend and abort using an AbortController on the backend, the partial message is not persisted.
I asked about this before and someone recommended savePerStep, but that only saves completed steps (where each step is an agent-to-LLM call, AFAIK). My agents only make a single LLM call, and I want to save the response to memory as it streams.
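For what it's worth, one pattern that sidesteps the problem is to buffer the chunks yourself on the backend and flush the buffer in a `finally` block, so the partial text is persisted whether the stream completes or is aborted. This is a minimal sketch in plain TypeScript, not Mastra's API: `persistToMemory` is a hypothetical stand-in for whatever storage call you use, and `fakeTokenStream` stands in for the real LLM stream.

```typescript
// Simulated token stream that stops yielding once the client aborts.
async function* fakeTokenStream(tokens: string[], signal: AbortSignal) {
  for (const t of tokens) {
    if (signal.aborted) return;
    yield t;
  }
}

// Accumulate streamed chunks; persist whatever was buffered on exit,
// whether the stream finished normally or was aborted midway.
async function streamWithPartialSave(
  tokens: string[],
  signal: AbortSignal,
  persistToMemory: (text: string) => Promise<void>, // hypothetical storage hook
): Promise<string> {
  let buffered = "";
  try {
    for await (const chunk of fakeTokenStream(tokens, signal)) {
      buffered += chunk;
    }
  } finally {
    // Runs on completion AND on abort, so partial output is always saved.
    await persistToMemory(buffered);
  }
  return buffered;
}
```

The key design point is that the save lives in `finally` rather than after the loop, so an abort (or a thrown `AbortError`) still triggers persistence of the tokens received so far.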
Created GitHub issue: https://github.com/mastra-ai/mastra/issues/10344
If you're experiencing an error, please provide a minimal reproducible example to help us resolve it quickly.
Thank you @Kartik for helping us improve Mastra!