MastraAI

The TypeScript Agent Framework. From the team that brought you Gatsby: prototype and productionize AI features with a modern JavaScript stack.

Input processor can't add non-user-role message

I have tried to add a system-role message in my input processor and it led to this error:
```
Error executing step prepare-memory-step: Error: [Agent:gitlab-agent] - Input processor error
    at Agent.__runInputProcessors (file:///Users/[REDACTED]/node_modules/@mastra/core/dist/chunk-DQISKQDE.js:672:2179)...
```
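If the goal is just to inject extra system-level guidance per request, dynamic instructions resolved from runtimeContext may sidestep the restriction that input processors can only add user-role messages. A minimal sketch, assuming instructions can be a function of runtimeContext (the projectSlug key is illustrative; the agent name matches the error above):
```ts
import { Agent } from "@mastra/core/agent";
import { openai } from "@ai-sdk/openai";

export const gitlabAgent = new Agent({
  name: "gitlab-agent",
  model: openai("gpt-4o-mini"),
  // Instructions resolved per request cover many cases where an input
  // processor would otherwise need to append a system message.
  instructions: ({ runtimeContext }) => {
    const project = runtimeContext.get("projectSlug") as string | undefined;
    return [
      "You are a GitLab assistant.",
      project ? `Focus on the project ${project}.` : "",
    ].join("\n");
  },
});
```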

Dependency update inconsistency

We have been using Mastra since core version 15 or so, and basically every time there is an update we run into issues doing npm install. It always leads to dependency conflicts between core and libsql as soon as a new version is out...

Playground throws an error about use of streamVNext

I am not using streamVNext for streaming; my packages are updated in my codebase, but the playground throws this error:
```
ERROR [2025-10-08 11:49:24.392 +0530] (Mastra): Error in streamVNext generate: streamVNext has been renamed to stream. Please use stream instead.
```
It's a bug in the playground tool-calling and agent-calling import...

@mastra/ai-sdk chatRoute with middleware context

Hey all, I'm trying to use the chatRoute from the Mastra ai-sdk package, and I have middleware configured that grabs some additional metadata my backend sends and adds it to the runtime context. When the agent is called with a user's chat, my middleware sees and sets the runtimeContext successfully, but the agent's runtimeContext is empty. Is this a known issue, or perhaps something I'm missing with this setup? This is from the docs here: https://mastra.ai/en/docs/frameworks/agentic-uis/ai-sdk ```...
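For reference, a minimal sketch of the server-level middleware pattern in question, assuming the plain-function middleware form and that c.get("runtimeContext") exposes the per-request context (the header name and key are illustrative):
```ts
import { Mastra } from "@mastra/core";
import { myAgent } from "./agents/my-agent"; // hypothetical agent module

export const mastra = new Mastra({
  agents: { myAgent },
  server: {
    middleware: [
      async (c, next) => {
        // Copy request metadata into the per-request runtimeContext so that
        // agents and tools handling this request can read it.
        const runtimeContext = c.get("runtimeContext");
        const orgId = c.req.header("x-org-id");
        if (orgId) {
          runtimeContext.set("orgId", orgId);
        }
        await next();
      },
    ],
  },
});
```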

Message Persistence using CopilotRuntime

How does this work when handling conversation history? How can we properly load previous conversations without continuously accumulating them as new inputs? I’m not entirely sure where to find more information about this....

Workflow execution strangeness

Hi, I am currently working on a Mastra workflow and I am having an issue where the workflow progresses fine but then, midway through, it appears to restart seemingly for no reason. The really strange thing is that the step before the restart doesn't error, and actually completes later on. It's like two branches of the workflow are running in parallel. If I look at traces I can see that there are in fact two traces created with the same runId: the first trace where the workflow runs all the way through successfully, and then a second execution trace that starts midway through the first execution and fails (the first step checks some state in a CMS and fails because it has already been created by the first execution). Has anyone else seen this behaviour? I am running the workflow with workflow.startAsync from the client SDK; perhaps that could have something to do with it....

Lazy load Tools

Hey everyone, I’ve been experimenting with Mastra’s Agent setup and ran into something unexpected. Even when I send a very simple message like "hello", the token usage is extremely high - for example:...
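If the overhead comes from every tool definition being sent with each request, one mitigation is resolving tools per request instead of registering them all statically. A sketch, assuming the Agent config accepts a tools function of runtimeContext (the tool names and role key are illustrative):
```ts
import { Agent } from "@mastra/core/agent";
import { openai } from "@ai-sdk/openai";
import { searchTool, reportTool, adminTool } from "../tools"; // hypothetical tool modules

export const assistantAgent = new Agent({
  name: "assistant",
  instructions: "You are a helpful assistant.",
  model: openai("gpt-4o-mini"),
  // Only expose the tools the current request actually needs, so a simple
  // "hello" does not pay the token cost of every tool definition.
  tools: ({ runtimeContext }) => {
    const role = runtimeContext.get("role") as string | undefined;
    return role === "admin"
      ? { searchTool, reportTool, adminTool }
      : { searchTool };
  },
});
```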

Error saveMessageToMemory

[Error: HTTP error! status: 400 - {"error":"Memory is not initialized"}] when calling
```
await client.saveMessageToMemory()
```
from the Mastra client SDK...
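As far as I can tell, this 400 appears when the agent on the server has no memory instance attached. A minimal sketch of wiring one up, assuming LibSQL storage (model, names, and the database path are illustrative):
```ts
import { Agent } from "@mastra/core/agent";
import { Memory } from "@mastra/memory";
import { LibSQLStore } from "@mastra/libsql";
import { openai } from "@ai-sdk/openai";

// Without a memory instance (and its storage), memory endpoints such as
// saveMessageToMemory respond with "Memory is not initialized".
export const supportAgent = new Agent({
  name: "support-agent",
  instructions: "You are a helpful support assistant.",
  model: openai("gpt-4o-mini"),
  memory: new Memory({
    storage: new LibSQLStore({ url: "file:../mastra.db" }),
  }),
});
```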

Why does @ag-ui/mastra have no exported member 'registerCopilotKit'?

I installed the latest @ag-ui/mastra version to test it out with the set context and used it in agents, tools, instructions and workflows. I am following the docs here: https://mastra.ai/en/docs/frameworks/agentic-uis/copilotkit Are the docs outdated?...

Can you wait to update the docs until you cut a release?

The docs are out of sync because you guys merged in a PR that updated the docs but haven't cut the release yet. (https://github.com/mastra-ai/mastra/pull/8542) <-- The docs reflect this PR but there hasn't been a release for 3 days. The out of sync docs: https://mastra.ai/en/reference/workflows/workflow#constructor-parameters...

Workflow runId in logs

Hello everyone, I was wondering if you have to do anything special to get logs correctly annotated with the workflow run ID. I am using the PinoLogger and, whilst logs are working, none of the logs generated during a workflow run have a runId. This makes retrieving logs for a specific run impossible via the getLogsByRunId function....
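One workaround is to thread your own correlation id through runtimeContext and attach it to log metadata inside steps. A sketch under that assumption (the step, schemas, and the correlationId key are illustrative):
```ts
import { createStep, createWorkflow } from "@mastra/core/workflows";
import { z } from "zod";

const fetchStep = createStep({
  id: "fetch-data",
  inputSchema: z.object({ url: z.string() }),
  outputSchema: z.object({ status: z.number() }),
  execute: async ({ inputData, mastra, runtimeContext }) => {
    const logger = mastra?.getLogger();
    // correlationId is set by the caller on the runtimeContext passed to the run,
    // so log lines from this run can be filtered even without a runId field.
    const correlationId = runtimeContext.get("correlationId");
    logger?.info("fetching", { correlationId, url: inputData.url });
    const res = await fetch(inputData.url);
    return { status: res.status };
  },
});

export const loggingWorkflow = createWorkflow({
  id: "logging-workflow",
  inputSchema: z.object({ url: z.string() }),
  outputSchema: z.object({ status: z.number() }),
})
  .then(fetchStep)
  .commit();
```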

The default*Options dynamic configuration options are not called logically

My tests with console.log demonstrate that only defaultVNextStreamOptions is called, regardless of whether agent.generate or agent.stream was called.
```
gitlabAgent.generate
defaultVNextStreamOptions called
...
```

Integrating Home Assistant with Mastra?

Does anyone know a better way to integrate Home Assistant's Voice Assistant with Mastra than building a Mastra MCP server and connecting HA to it? Most probably some deeper knowledge of HA will be needed here, but that's the only way I found to get it working… Do we have any Home Assistant users here?...

There is no way to use RuntimeContext as actual DI with the playground

I was trying to use RuntimeContext as a DI container to hold a GitLab client instance, to simplify mocking and testing tools.
```
const gitlabClient = runtimeContext.get("gitlabClient") as GitLabClient;
```
...
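For context, the programmatic version of this DI pattern does work; the catch is that the playground can only populate runtimeContext with serializable values, so a live client instance cannot be injected there. A sketch, assuming RuntimeContext is exported from @mastra/core/runtime-context in your version (the GitLabClient interface and agent import are illustrative):
```ts
import { RuntimeContext } from "@mastra/core/runtime-context";
import { gitlabAgent } from "../agents/gitlab-agent"; // hypothetical agent module

// Illustrative client interface; in tests this can be a mock.
interface GitLabClient {
  getMergeRequest(id: number): Promise<{ title: string }>;
}

const fakeGitlabClient: GitLabClient = {
  getMergeRequest: async (id) => ({ title: `MR #${id}` }),
};

const runtimeContext = new RuntimeContext();
runtimeContext.set("gitlabClient", fakeGitlabClient);

// Works when calling the agent programmatically; the playground UI has no way
// to inject a non-serializable instance like this.
await gitlabAgent.generate("Summarize merge request 42", { runtimeContext });
```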

createTool only uses inputSchema for validation

The Tool class accepts four schemas, and it seems only inputSchema is used for actual validation. My expectation was that all schemas would be used to enforce runtime safety and better error reporting....
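A workaround, until the other schemas are enforced, is to validate the result against outputSchema inside execute. A minimal sketch (the tool and the upstream fetch are illustrative):
```ts
import { createTool } from "@mastra/core/tools";
import { z } from "zod";

const outputSchema = z.object({
  temperatureC: z.number(),
  source: z.string().url(),
});

export const weatherTool = createTool({
  id: "get-weather",
  description: "Returns the current temperature for a city.",
  inputSchema: z.object({ city: z.string().min(1) }),
  outputSchema,
  execute: async ({ context }) => {
    const raw = await fetchWeather(context.city);
    // inputSchema is validated by the framework; validate the output explicitly
    // so a malformed upstream response fails loudly instead of propagating.
    return outputSchema.parse(raw);
  },
});

// Hypothetical upstream call, only here to keep the sketch self-contained.
async function fetchWeather(city: string): Promise<unknown> {
  return {
    temperatureC: 21,
    source: `https://example.com/weather?city=${encodeURIComponent(city)}`,
  };
}
```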

runtimeContext type safety

Documentation claims:
Pass runtime configuration variables to tools through a type-safe runtimeContext.
I may be missing something, but I can't see how it is type safe. The type of the execute option's runtimeContext in createTool is RuntimeContext without any generics, effectively storing everything as unknown....
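Until the execute signature takes a generic, one option is a small typed accessor that centralizes the casts. A sketch, assuming RuntimeContext is exported from @mastra/core/runtime-context (the AppRuntimeContext shape is illustrative):
```ts
import { RuntimeContext } from "@mastra/core/runtime-context";

// Everything I intend to store in runtimeContext, in one place (illustrative).
type AppRuntimeContext = {
  userId: string;
  temperatureScale: "celsius" | "fahrenheit";
};

// The single place where the unknown-to-typed cast happens.
function getCtx<K extends keyof AppRuntimeContext>(
  runtimeContext: RuntimeContext,
  key: K,
): AppRuntimeContext[K] {
  return runtimeContext.get(key) as AppRuntimeContext[K];
}

// Usage inside a tool's execute handler:
//   const scale = getCtx(runtimeContext, "temperatureScale"); // "celsius" | "fahrenheit"
```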

How to achieve LLM structured output with Mastra's ai-sdk-compatible streaming?

https://ai-sdk.dev/docs/ai-sdk-ui/streaming-data#streaming-data-from-the-server I want to pass the structured output in the response to useChat along with the streamed output. The Mastra agent call generates the right structured output, but only the content key is streamed to the client side, even though .getFullOutput().response has the messages along with
```
messages: [ [Object] ],
```
...
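For comparison, the non-streaming path exposes the structured object directly; a sketch assuming the output option on generate with a Zod schema (the agent and schema are illustrative). How to surface the same object alongside the streamed text for useChat is the open question:
```ts
import { z } from "zod";
import { weatherAgent } from "../agents/weather-agent"; // hypothetical agent module

const schema = z.object({
  summary: z.string(),
  temperatureC: z.number(),
});

const result = await weatherAgent.generate("What's the weather in Berlin?", {
  output: schema,
});

// result.object is typed by the schema; the question above is how to stream
// this to the client in an ai-sdk-compatible way instead of only the text.
console.log(result.object.summary, result.object.temperatureC);
```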

Tools getting called sequentially

I've noticed that sometimes our tools get called sequentially when running an agent inside a workflow step, and I'm unsure why. We're really trying to cut down on latency, and this is a big blocker. In the cases where tools run sequentially, our Datadog traces show multiple workflow.agentic-loop.step.executionWorkflow spans, each calling a single tool (see screenshot). In cases where the same agent runs the tools in parallel, I see a single workflow.agentic-loop.step.executionWorkflow that calls multiple tools (workflow.executionWorkflow.step.toolCallStep). I was hoping there was a flag I could set to force parallel tool calls within my workflow step. I know maxSteps is an option on an agent, but it defaults to 1 anyway, so that probably isn't it. Any advice here would be amazing!...

Stream order issue in nested agent via tool call

```
execute: async ({ context, runtimeContext }) => {
  const messageId = runtimeContext.get("messageId") as NewChatRuntimeContext["messageId"];
  const stream = await SandboxAgent.stream(
    [...
```
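One way to keep chunks from interleaving is to drain the nested agent's textStream inside the tool before returning. A minimal sketch, assuming the streamed result exposes a textStream async iterable (the tool, agent, and schemas are illustrative):
```ts
import { createTool } from "@mastra/core/tools";
import { z } from "zod";
import { sandboxAgent } from "../agents/sandbox-agent"; // hypothetical agent module

export const sandboxTool = createTool({
  id: "sandbox-tool",
  description: "Delegates a prompt to a nested agent and returns its full text.",
  inputSchema: z.object({ prompt: z.string() }),
  outputSchema: z.object({ text: z.string() }),
  execute: async ({ context }) => {
    const stream = await sandboxAgent.stream(context.prompt);
    let text = "";
    // Consume the nested stream to completion so its chunks are not interleaved
    // with the outer agent's own stream.
    for await (const chunk of stream.textStream) {
      text += chunk;
    }
    return { text };
  },
});
```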

Playground incorrect rendering for workflow execution

There are two major issues with rendering workflow execution that I am facing. All .map() step calls are rendered as mapping_undefined. If a workflow has several mapping steps, they render the same execution state at the same time. Let's imagine I have a workflow:
```
workflow...
```
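Roughly this shape; a minimal reconstruction with hypothetical steps (stepA/stepB and the ids are mine, not from the original post) showing two .map() calls that the playground renders as mapping_undefined with mirrored state:
```ts
import { createStep, createWorkflow } from "@mastra/core/workflows";
import { z } from "zod";

const stepA = createStep({
  id: "step-a",
  inputSchema: z.object({ value: z.number() }),
  outputSchema: z.object({ doubled: z.number() }),
  execute: async ({ inputData }) => ({ doubled: inputData.value * 2 }),
});

const stepB = createStep({
  id: "step-b",
  inputSchema: z.object({ label: z.string() }),
  outputSchema: z.object({ label: z.string() }),
  execute: async ({ inputData }) => ({ label: inputData.label.toUpperCase() }),
});

export const mappingWorkflow = createWorkflow({
  id: "mapping-workflow",
  inputSchema: z.object({ value: z.number() }),
  outputSchema: z.object({ label: z.string() }),
})
  .then(stepA)
  // First mapping step: renders as "mapping_undefined" in the playground.
  .map(async ({ inputData }) => ({ label: `doubled:${inputData.doubled}` }))
  .then(stepB)
  // Second mapping step: also "mapping_undefined", mirroring the first one's state.
  .map(async ({ inputData }) => ({ label: `${inputData.label}!` }))
  .commit();
```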