Mastra

The TypeScript Agent Framework. From the team that brought you Gatsby: prototype and productionize AI features with a modern JavaScript stack.


When using a reasoning model: Invalid 'input[23].id': 'rs_050b68fa644c883c01692ee29668b0819a9e6cfedb1

Error in agent stream { ai:mastra:dev: error: APICallError [AI_APICallError]: Invalid 'input[23].id': 'rs_050b68fa644c883c01692ee29668b0819a9e6cfedb1f6011b5'. Expected an ID that begins with 'msg'....

beta version

When will the Mastra beta version pick up the latest 0.24.6 stable-branch fixes? The last beta release we got was "@mastra/core": "1.0.0-beta.5".

Browser compatibility issues when importing types from @mastra/ai-sdk in Vite frontend

Hi! After tinkering with basic examples over the last few months, I'm building a proper Mastra project! I have a monorepo with a separate Vite + React frontend and a Mastra server (not using Next.js or another full-stack framework). I'm running into browser compatibility issues when trying to use Mastra types in my frontend code. My setup: server - Mastra backend with built-in Hono server...

Workflow run stuck in `running` state

We have a workflow run that appears to be orphaned: it's stuck in a running state, but it's not actually running. We started this workflow at Mon, 01 Dec 2025 00:49:22 GMT. Over 24 hours later, it is still returning "status": "running". A workflow with this input typically takes between 1 and 5 minutes at most....
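Until the root cause is found, a heuristic watchdog can at least surface such runs for manual cleanup. The sketch below is plain TypeScript with a hypothetical run shape, not a Mastra API:

```typescript
// Heuristic watchdog for orphaned workflow runs: if a run still reports
// "running" far beyond its typical duration, flag it as likely orphaned.
// `WorkflowRunInfo` is a hypothetical shape, not a Mastra type.
interface WorkflowRunInfo {
  runId: string;
  status: "running" | "success" | "failed" | "suspended";
  startedAt: Date;
}

function isLikelyOrphaned(
  run: WorkflowRunInfo,
  maxExpectedMs: number,
  now: Date = new Date(),
): boolean {
  if (run.status !== "running") return false;
  return now.getTime() - run.startedAt.getTime() > maxExpectedMs;
}

// Example: a run that typically takes 1-5 minutes, started over 24 hours ago.
const stuck = isLikelyOrphaned(
  { runId: "r1", status: "running", startedAt: new Date("2025-12-01T00:49:22Z") },
  5 * 60 * 1000,
  new Date("2025-12-02T01:00:00Z"),
);
```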

Can I make `mastra` CLI transpile additional entrypoints?

I have the following setup:

```
apps
|- mastra
  |- src...
```

Agent Network Understanding - Structured Outputs

I am trying to understand whether agent networks can: create structured output, use output processors in the sub-agents, and access the sub-agents' message responses (which is what output processors use). ...

Recreating Anthropic's Tool Search Tool

Hi, as our tools balloon in size, we were wondering if there's a good way to recreate Anthropic's Tool Search Tool with Mastra. Blog post: https://www.anthropic.com/engineering/advanced-tool-use We are limited to using open source models, so having a good way to create this search without overloading context would be amazing. ...
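One way to approximate the pattern without Anthropic's API is a small searchable registry: expose a single search tool that scores tool descriptions against a query and inject only the matches into context. The sketch below is illustrative TypeScript, not a Mastra API:

```typescript
// "Tool search" sketch: instead of loading every tool into the model's
// context, keep a registry of name + description and return only the
// best keyword matches for a query. All names here are illustrative.
interface ToolSummary {
  name: string;
  description: string;
}

function searchTools(registry: ToolSummary[], query: string, limit = 3): ToolSummary[] {
  const terms = query.toLowerCase().split(/\s+/).filter(Boolean);
  return registry
    .map((tool) => {
      const haystack = `${tool.name} ${tool.description}`.toLowerCase();
      // Score = number of query terms found in the tool's name/description.
      const score = terms.filter((t) => haystack.includes(t)).length;
      return { tool, score };
    })
    .filter((r) => r.score > 0)
    .sort((a, b) => b.score - a.score)
    .slice(0, limit)
    .map((r) => r.tool);
}

const registry: ToolSummary[] = [
  { name: "get_weather", description: "Fetch the current weather for a city" },
  { name: "send_email", description: "Send an email to a recipient" },
  { name: "search_docs", description: "Search internal documentation" },
];

const hits = searchTools(registry, "weather city");
```

A real setup would wrap `searchTools` itself as the only always-available tool, then attach the returned tools to the agent for the next step.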

Support for passing `projectId` for Langsmith Integration

Hi Mastra team, we're using the LangSmith AI Tracing integration and have a large project with traces from multiple sources. Is there a way to specify a custom project ID or set environment variables to control which LangSmith project the traces are recorded under? I haven't found this option in the exporter config or documentation. Is this supported, or is there a recommended approach for multi-project tracing? Appreciate any guidance! Package versions used: ```json {...

Disable Colors for PinoLogger

I am using PinoLogger and I wonder how I can disable the colors and modify the formatting (such as adding filename:linenumber).
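For reference, plain pino disables colors through pino-pretty transport options. Whether `@mastra/loggers`' PinoLogger forwards these options is worth verifying against its docs, so treat this as a sketch of the underlying pino configuration shape:

```typescript
// Plain pino configuration that disables ANSI colors via pino-pretty.
// Whether PinoLogger accepts/forwards these transport options is an
// assumption to verify; this only shows the pino-level shape.
import pino from "pino";

const logger = pino({
  transport: {
    target: "pino-pretty",
    options: {
      colorize: false,            // disable ANSI colors
      translateTime: "SYS:standard",
      // pino-pretty has no built-in filename:line field; that usually
      // requires a custom messageFormat or a mixin adding caller info.
    },
  },
});
```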

Resuming a suspended workflow in a parallel branch while the other branch is still running

Hi. I have a situation with two branches: one that I can suspend and resume in a loop (branch B), and another that ends right after completing its only step (branch A). I attached a screenshot to make it easier to understand. I am encountering the following problem:
- If branch B suspends before branch A finishes, and I resume it before branch A finishes, then if branch A ends while branch B is running its second iteration, that iteration is stopped/lost.
- If branch B suspends before branch A finishes and I wait for branch A to finish, I can then resume branch B without problems. ...

Scorer doesn't use custom gateway.

We have to use a custom gateway internally, so I created one following https://mastra.ai/models/v1/gateways/custom-gateways. When creating my own Agent it works perfectly, but I found that the prebuilt Scorer doesn't resolve the model from the gateway. I traced it to here; I think it's because it doesn't receive the mastra instance: https://github.com/mastra-ai/mastra/blob/b0f3c9861c05ef07f282d5e70bfa2111b9f109c1/packages/core/src/evals/base.ts#L547 I'm using:...

Can we store user feedback with the response generated by the agent in the storage?

Scenario: a user asks a question, we generate a response and send it to the user, and that response is also stored in the DB by default. Now, the response is not what the user wanted, and we want to capture that so we can understand the efficiency and accuracy of our agent. Is it possible to store user feedback with the response/workflow output in the DB, so we can fetch that data back later and analyze it?...
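One common pattern, sketched below with an in-memory stand-in, is a separate feedback record keyed by the stored message's id, so it can be joined back to the response later for analysis. The shape and names here are illustrative, not a Mastra API:

```typescript
// Feedback stored alongside agent responses, keyed by message id.
// In practice this would be a table/collection next to Mastra's
// message storage rather than an in-memory Map.
type Verdict = "helpful" | "not_helpful";

interface Feedback {
  messageId: string;
  verdict: Verdict;
  comment?: string;
  createdAt: Date;
}

class FeedbackStore {
  private byMessage = new Map<string, Feedback>();

  record(messageId: string, verdict: Verdict, comment?: string): void {
    this.byMessage.set(messageId, {
      messageId,
      verdict,
      comment,
      createdAt: new Date(),
    });
  }

  get(messageId: string): Feedback | undefined {
    return this.byMessage.get(messageId);
  }
}

const store = new FeedbackStore();
store.record("msg_123", "not_helpful", "Answer missed the question");
```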

Any reason why I cannot create a base configuration and instantiate Mastra with it?

This works fine:

```
export const mastra = new Mastra({
  workflows: { weatherWorkflow, vetPetPlanWorkflow },
...
```
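For what it's worth, sharing a base configuration is usually just an object-spread pattern. The sketch below uses a stand-in type rather than Mastra's real config type (with the real class, `ConstructorParameters<typeof Mastra>[0]` would give you the config type); the main pitfall is that a shallow spread replaces nested records wholesale:

```typescript
// Stand-in for the Mastra constructor config; not the real type.
interface MastraLikeConfig {
  workflows?: Record<string, unknown>;
  agents?: Record<string, unknown>;
}

const baseConfig: MastraLikeConfig = {
  workflows: { weatherWorkflow: {}, vetPetPlanWorkflow: {} },
};

// A shallow spread of `baseConfig` alone would drop nothing, but
// overriding `workflows` without re-spreading it would replace the
// whole record. Merge nested records explicitly when extending:
const merged: MastraLikeConfig = {
  ...baseConfig,
  workflows: { ...baseConfig.workflows, extraWorkflow: {} },
};
```

With the real class this would end in `export const mastra = new Mastra(merged);`.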

Make tool validation async to allow async refinements

Having the ability to run async zod refinements as part of tool calls would be a good get. Since working memory is also modeled as a tool call, this makes it easy to guide how agents update memory and enforce domain-specific validations without inventing another abstraction. Raised a PR showing what I am looking for: https://github.com/mastra-ai/mastra/pull/10678 ...

V1 Beta Bug? Fresh Quickstart Fails — ./a2a/store Not Exported by @mastra/server

Hey! I think there's a packaging issue in the V1 beta. A fresh quickstart using `bun create mastra@beta` fails immediately when running `bun run dev` / `mastra dev`:...

What is the better approach for STT and TTS at Mastra level with Voice on mobile?

We have a React Native CLI app (without Expo), where we send 5-10 s audio chunks after the user taps the Voice mode button in the mobile app. We are not going for STS (speech-to-speech). Do we implement a WebSocket and connect it to Mastra (backend), or do we have to go with the node-stream-only method? Don't you think each chunk will have HTTP overhead?...
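A rough way to quantify the HTTP-vs-WebSocket concern is to compare per-chunk framing overhead to payload size. The numbers below (header size, sample rate) are illustrative assumptions, not measurements:

```typescript
// Back-of-envelope comparison of per-chunk transport overhead. With
// per-chunk HTTP requests, every chunk pays request/response headers
// (often hundreds of bytes with cookies/auth); a WebSocket binary frame
// adds only a few bytes of framing after the one-time handshake.
function overheadRatio(payloadBytes: number, perMessageOverheadBytes: number): number {
  return perMessageOverheadBytes / (payloadBytes + perMessageOverheadBytes);
}

// Assumption: 5 s of 16 kHz, 16-bit mono PCM ≈ 160,000 bytes per chunk.
const chunkBytes = 16000 * 2 * 5;
const httpOverhead = overheadRatio(chunkBytes, 800); // assumed ~800 B of headers
const wsOverhead = overheadRatio(chunkBytes, 8);     // ~2-10 B of frame header
```

At this chunk size the byte overhead is small either way; the stronger argument for a WebSocket is usually per-request latency and connection churn, not raw bytes.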

Pass custom fetch to mastra client

Hi, I would like to request a parameter on the mastra client for a custom "fetch", much like in ai-sdk. My use case: I'm using Tauri and I need to pass Tauri's custom fetch in order to not get timeout errors on macOS. Tauri uses the macOS WebView, which is based on Safari, and Safari establishes a default timeout that can't be changed unless you use the Tauri fetch plugin.

```ts
public chat = new Chat({
  transport: new DefaultChatTransport({
...
```
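As a sketch of what such a parameter enables, here is a generic wrapper that gives any fetch implementation an explicit AbortController timeout while keeping the standard signature, so a client accepting a `fetch` option could take it unchanged. `FetchLike` is a stand-in type, not an ai-sdk or mastra export:

```typescript
// Wrap any fetch-compatible function with an explicit timeout via
// AbortController, preserving the standard (input, init) signature.
type FetchLike = (input: string | URL | Request, init?: RequestInit) => Promise<Response>;

function withTimeout(baseFetch: FetchLike, timeoutMs: number): FetchLike {
  return async (input, init) => {
    const controller = new AbortController();
    const timer = setTimeout(() => controller.abort(), timeoutMs);
    try {
      // Note: this replaces any caller-provided signal; merging signals
      // (e.g. AbortSignal.any) is left out to keep the sketch short.
      return await baseFetch(input, { ...init, signal: controller.signal });
    } finally {
      clearTimeout(timer);
    }
  };
}

// Usage: pass `withTimeout(tauriFetch, 120_000)` wherever a client
// accepts a custom fetch implementation.
```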

How to handle MCP elicitations using MCPClient?

Hey team, I am not able to handle elicitation requests using MCPClient, even after sending them from the tool on the server. I am initialising the MCPClient as follows:...

MongoDb storage error in beta release

Hello everyone! I have a question about using MongoDB storage in the beta version. When creating the storage using MongoDBStore:
```

Structured output breaks for gemini flash 2.5

I get the following error: "Function calling with a response mime type: 'application/json' is unsupported". I don't get this error with Flash 2.0. This is when using Mastra agents....
Mastra Community - Answer Overflow