MastraAI

The TypeScript Agent Framework. From the team that brought you Gatsby: prototype and productionize AI features with a modern JavaScript stack.

Workflow runId in logs

Hello everyone, I was wondering if you have to do anything special to get logs correctly annotated with the workflow run ID. I am using the PinoLogger, and whilst logs are working, none of the logs generated during a workflow run have a runId. This makes retrieving logs for a specific run via the getLogsByRunId function impossible....
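
Until run IDs are attached automatically, one generic workaround is to bind the run ID onto a child logger yourself. This is a minimal sketch using plain pino (which PinoLogger wraps), with a hypothetical runId variable you already hold from starting the run; it is not built-in Mastra behavior, just manual binding.

```ts
import pino from "pino";

// Hypothetical: the run ID you received when you started the workflow run.
declare const runId: string;

const baseLogger = pino({ name: "mastra-workflow" });

// Every line logged through this child carries the runId binding,
// so log queries can later filter on it.
const runLogger = baseLogger.child({ runId });

runLogger.info("step started");
runLogger.error({ err: new Error("boom") }, "step failed");
```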

calling default*Options dynamic configuration option is not logical

My tests with console.log demonstrate that only defaultVNextStreamOptions is called, regardless of whether agent.generate or agent.stream was invoked. ```
gitlabAgent.generate
defaultVNextStreamOptions called
...
```
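
For reference, a minimal sketch of the kind of instrumentation described, assuming the dynamic (function-valued) form of defaultGenerateOptions / defaultStreamOptions / defaultVNextStreamOptions that the post exercises; the agent name and model are placeholders.

```ts
import { Agent } from "@mastra/core/agent";
import { openai } from "@ai-sdk/openai";

// Each dynamic option logs when it is evaluated, making it visible which
// one a given call path (generate vs. stream) actually resolves.
export const gitlabAgent = new Agent({
  name: "gitlab-agent",
  instructions: "You answer questions about GitLab projects.",
  model: openai("gpt-4o-mini"),
  defaultGenerateOptions: () => {
    console.log("defaultGenerateOptions called");
    return {};
  },
  defaultStreamOptions: () => {
    console.log("defaultStreamOptions called");
    return {};
  },
  defaultVNextStreamOptions: () => {
    console.log("defaultVNextStreamOptions called");
    return {};
  },
});
```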

Integration Home Assistant with Mastra?

Does anyone know of a better way to integrate Home Assistant's Voice Assistant with Mastra than building a Mastra MCP server and connecting HA to it? Most probably some deeper knowledge of HA will be needed here, but that's the only way I found to get it working… Do we have any Home Assistant users here? ...

There is no way to use RuntimeContext as actual DI with playground

I was trying to use RuntimeContext as a DI container, putting a GitLab client instance in it to simplify mocking and testing tools.
const gitlabClient = runtimeContext.get("gitlabClient") as GitLabClient;
...
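
A minimal sketch of the DI pattern described here, assuming the documented RuntimeContext set/get API and the runtimeContext option on agent calls; GitLabClient is the poster's own type and stands in for any injected service.

```ts
import { RuntimeContext } from "@mastra/core/runtime-context";
import type { Agent } from "@mastra/core/agent";
// Hypothetical module: the poster's GitLab client wrapper.
import type { GitLabClient } from "./gitlab-client";

// Build the dependency once, register it in a RuntimeContext, and pass that
// context into the call so tools can resolve the same instance.
export async function runWithInjectedClient(agent: Agent, client: GitLabClient) {
  const runtimeContext = new RuntimeContext();
  runtimeContext.set("gitlabClient", client);

  // Inside a tool's execute, the instance can be read back:
  //   const gitlabClient = runtimeContext.get("gitlabClient") as GitLabClient;
  return agent.generate("List my open merge requests", { runtimeContext });
}
```

In tests the same function can be called with a mocked client, which is the point of the pattern; the open question in the thread is how to provide such a context when running from the playground.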

createTool only uses inputSchema for validation

The Tool class accepts 4 schemas, and it seems only inputSchema is used for actual validation. My expectation was that all schemas would be used to enforce runtime safety and better error reporting....
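
For context, a minimal createTool sketch with the two most common schemas, assuming the documented createTool signature; in the behavior the post describes, only inputSchema is enforced at runtime, while outputSchema documents the shape without rejecting a mismatching return value.

```ts
import { createTool } from "@mastra/core/tools";
import { z } from "zod";

export const weatherTool = createTool({
  id: "get-weather",
  description: "Returns the current temperature for a city",
  inputSchema: z.object({
    city: z.string().min(1),
  }),
  outputSchema: z.object({
    temperatureC: z.number(),
  }),
  execute: async ({ context }) => {
    // context is validated and typed from inputSchema.
    return { temperatureC: 21 }; // placeholder value, no real API call
  },
});
```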

runtimeContext type safety

Documentation claims:
Pass runtime configuration variables to tools through a type-safe runtimeContext.
I may be missing something, but I can't see how it is type-safe. The type definition for the execute option of createTool uses RuntimeContext without any generics, effectively storing everything as unknown....
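
One possible workaround (not a built-in feature): declare the expected shape once and read values through a typed helper, so the unavoidable cast lives in a single place.

```ts
import { RuntimeContext } from "@mastra/core/runtime-context";

// Hypothetical shape of the values your app puts into the context.
type MyRuntimeValues = {
  gitlabClient: { baseUrl: string };
  messageId: string;
};

function getTyped<K extends keyof MyRuntimeValues>(
  runtimeContext: RuntimeContext,
  key: K,
): MyRuntimeValues[K] {
  // The underlying store is still untyped; the cast is centralized here.
  return runtimeContext.get(key as string) as MyRuntimeValues[K];
}

// Usage inside a tool's execute:
//   const messageId = getTyped(runtimeContext, "messageId"); // typed as string
```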

How to achieve LLM structured output in Mastra ai-sdk compatible streaming.

https://ai-sdk.dev/docs/ai-sdk-ui/streaming-data#streaming-data-from-the-server I want to pass the structured output in the response to useChat along with the streaming output. Even though the Mastra agent call generates the right structured output, only the content key is streamed to the client side, even though .getFullOutput().response has the messages along with ``` messages: [ [Object] ],...
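
One direction, assuming the AI SDK 4.x helpers from the linked page (createDataStreamResponse, writeData, mergeIntoDataStream): write the structured payload into the data stream yourself so useChat picks it up via its data field. Plain streamText stands in for the Mastra agent call here just to show the data-stream mechanics; how you obtain the structured object from the Mastra result is left as a placeholder.

```ts
import { createDataStreamResponse, streamText } from "ai";
import { openai } from "@ai-sdk/openai";

export async function POST(req: Request) {
  const { messages } = await req.json();

  return createDataStreamResponse({
    execute: (dataStream) => {
      const result = streamText({
        model: openai("gpt-4o-mini"),
        messages,
        onFinish: ({ text }) => {
          // Placeholder: write whatever structured payload you computed
          // (e.g. from your agent's full output) alongside the text stream.
          dataStream.writeData({ summary: text.slice(0, 100) });
        },
      });

      // Forward the token stream; the client's useChat exposes the
      // writeData entries through its `data` field.
      result.mergeIntoDataStream(dataStream);
    },
  });
}
```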

Tools getting called sequentially

I've noticed that sometimes our tools are getting called sequentially when running an agent inside a workflow step, and I'm unsure why this is. We're really trying to cut down on latency and this is a big blocker. In the cases where we run tools sequentially, in our datadog traces, I see multiple workflow.agentic-loop.step.executionWorkflow with each calling a tool (see screenshot). In cases where the same agent runs the tools in parallel, I see a single workflow.agentic-loop.step.executionWorkflow that calls multiple tools (workflow.executionWorkflow.step.toolCallStep). I was hoping there was a flag I could set to force parallel tool calls within my workflow step. I know maxSteps is an option on an agent, but it defaults to 1 anyways so that probably isn't it. Any advice here would be amazing!...

Stream order issue in nested agent via tool call

```
execute: async ({ context, runtimeContext }) => {
  const messageId = runtimeContext.get("messageId") as NewChatRuntimeContext["messageId"];
  const stream = await SandboxAgent.stream(
    [...
```

Playground incorrect rendering for workflow execution

There are 2 major issues with rendering workflow execution that I am facing: (1) all .map() step calls are rendered as mapping_undefined, and (2) if a workflow has several mapping steps, they all render the same execution state at the same time. Let's imagine I have a workflow: ``` workflow...

Playground mastra 0.14.1-alpha.0 resets the state of agent page every time I switch back to it

Whenever I switch between tabs in my browser and return to the Mastra playground, it resets the state of the agent page. ```
"dependencies": {
  "@ai-sdk/amazon-bedrock": "^3.0.30",
...
```

I'm not sure auth, middleware and input/output processors difference

hi, team! I'm a bit curious about the auth feature. https://mastra.ai/ja/docs/auth Do you have any plan to provide a user guide or a customizable auth class, like the input/output processors? I know Mastra provides some auth providers via the auth package, and more providers are being added. In some scenarios I would need to build my own auth feature, e.g. for AWS Cognito, something from Microsoft, next-auth, etc. Roughly speaking, we have to add JWT token verification or cookie decryption to verify whether the API requester has the right to access our service, and additionally populate the runtime context from JWT token claims such as custom attributes. ...
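
A rough sketch of that flow as custom server middleware, under two assumptions that should be checked against the docs: that middleware entries accept a { path, handler } object with a Hono-style (c, next) handler, and that the per-request runtime context is available via c.get("runtimeContext"). verifyJwt is a hypothetical helper (e.g. wrapping jose), not part of Mastra.

```ts
import { Mastra } from "@mastra/core";
// Hypothetical helper that validates the token and returns its claims, or null.
import { verifyJwt } from "./auth/verify-jwt";

export const mastra = new Mastra({
  // ...agents, workflows, storage, etc.
  server: {
    middleware: [
      {
        path: "/api/*",
        handler: async (c, next) => {
          const token = c.req.header("Authorization")?.replace("Bearer ", "");
          const claims = token ? await verifyJwt(token) : null;
          if (!claims) {
            return new Response("Unauthorized", { status: 401 });
          }

          // Assumption: the per-request runtime context is exposed on the
          // request context; copy claim data into it for agents/tools to read.
          const runtimeContext = c.get("runtimeContext");
          runtimeContext.set("userId", claims.sub);
          runtimeContext.set("tenant", claims["custom:tenant"]);

          await next();
        },
      },
    ],
  },
});
```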

Duplicate responses coming from Mastra

I sometimes notice my streamVNext agent outputting two text outputs on the same run. I have verified this by confirming that two text-start chunk types get sent in the stream with the same runId but different message IDs. For context, I'm on the latest mastra version. Is there any flag I can set on streamVNext so this doesn't happen? Or is it completely up to the model provider and Mastra has no control here? Please let me know, as I'm currently manually filtering out the rest of the stream after I detect the first text-start....

`.dowhile` and `.dountil` `runCount` is stuck at `-1`

I don't know why this is a thing, but it is. How long has this been broken for, or is this a recent regression? Is this a known issue? Also, dowhile and dountil are the same function with the condition inverted, so why have two? I was under the impression that dowhile ran its condition check before the loop whereas dountil ran its check after the loop; in that case separate functions would be warranted. ```typescript...
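
For reference, a sketch of both loop helpers, assuming the vNext workflow API (createWorkflow/createStep, with .dowhile/.dountil taking a step and an async condition). As the post observes, the two appear to differ only in the condition's polarity.

```ts
import { createStep, createWorkflow } from "@mastra/core/workflows";
import { z } from "zod";

const incrementStep = createStep({
  id: "increment",
  inputSchema: z.object({ count: z.number() }),
  outputSchema: z.object({ count: z.number() }),
  execute: async ({ inputData }) => ({ count: inputData.count + 1 }),
});

// dowhile: repeat the step WHILE the condition stays true.
export const whileWorkflow = createWorkflow({
  id: "while-loop",
  inputSchema: z.object({ count: z.number() }),
  outputSchema: z.object({ count: z.number() }),
})
  .dowhile(incrementStep, async ({ inputData }) => inputData.count < 10)
  .commit();

// dountil: repeat the step UNTIL the condition becomes true.
export const untilWorkflow = createWorkflow({
  id: "until-loop",
  inputSchema: z.object({ count: z.number() }),
  outputSchema: z.object({ count: z.number() }),
})
  .dountil(incrementStep, async ({ inputData }) => inputData.count >= 10)
  .commit();
```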

"Internal agent did not generate structured output"

I am getting: ```
[StructuredOutputProcessor] Structuring failed: Internal agent did not generate structured output
[StructuredOutputProcessor] Structured output processing failed: [StructuredOutputProcessor] Structuring failed: Internal agent did not generate structured output
...
```

Better errors please!!

Hey guys, can we get better errors in Mastra? We are admittedly novices with web streams, but we can work with the information in errors, and this one does not help me at all. ```
ERROR [2025-10-03 21:10:37.593 +0530] (Mastra): Error in agent stream
    runId: "1a6b58be-52d0-4c76-b3d4-a1708f1d69d8"
...
```

Removing the expected functionality for generate broke my workflow

I'm curious why generate is now just the lightest-weight wrapper around stream (https://github.com/mastra-ai/mastra/blob/01605282391c6be2c570f9fdcc808e2b063806cd/packages/core/src/agent/agent.ts#L3307). This feels like an opinionated choice, but it was neither made transparent that when you use generate you are now actually using stream (which ended up breaking our workflow), nor justified as to why this opinion is held. By using the function call generate, one would expect a different underl...

Langfuse tracing not working

I'm getting errors from the langfuse SDK when trying to enable the langfuse tracing: ```
[Langfuse SDK] Unknown error: SyntaxError: Failed to parse JSON
    at json (null)
    at processTicksAndRejections (null)
...
```

Updated to Mastra 0.19 everything breaks

This is starting to feel like a recurring pattern. I am avoiding updating Mastra now because every time a new release is pushed it feels like it is not tested, and it introduces new issues that break our existing workflows. There is no context here, this is just the output we are getting. It worked fine in 0.16.0, and now it's broken. Versions 0.17.0 and 0.18.0 were also broken, so we skipped updating to those, but now we are missing out on core features because they get added in later versions that have regressive behaviors. ```...