Mastra

The TypeScript Agent Framework. From the team that brought you Gatsby: prototype and productionize AI features with a modern JavaScript stack.

How to access sub-agent tool results using the AI SDK

Hi, we have several agents whose tool results (structured outputs) are displayed on the frontend as intermediate steps. We're now building a mega-agent that orchestrates these agents and should continue to surface their tool results in the UI. However, the AI SDK stream transformer is dropping those tool results. It currently only forwards data-* custom events from sub-agents, which prevents the tool outputs from coming through. Here are the sub-agent-level events (https://mastra.ai/docs/agents/networks#agent-output); I'm wondering if there is any way of getting access to them. (I am open to writing a custom transformer for the time being)...
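Until the built-in transformer forwards these, one interim option is a pass-through TransformStream that re-emits nested tool results as the data-* parts the UI already renders. This is only a sketch: the chunk shape (`type`, `payload`) and the `data-subagent-tool-result` event name are assumptions, so inspect a few real chunks from your stream before relying on it.

```typescript
// Pass-through transformer: forwards every chunk unchanged and additionally
// surfaces sub-agent tool results as a custom data-* part.
// Assumptions: chunks carry a string `type` and an optional `payload`.
const forwardSubAgentToolResults = new TransformStream<any, any>({
  transform(chunk, controller) {
    if (typeof chunk?.type === "string" && chunk.type.includes("tool-result")) {
      controller.enqueue({
        type: "data-subagent-tool-result", // hypothetical custom event name
        data: chunk.payload ?? chunk,
      });
    }
    // Always forward the original chunk so nothing else in the stream changes.
    controller.enqueue(chunk);
  },
});

// Usage (illustrative): pipe the agent's stream through the transformer
// before handing it to whichever AI SDK response helper you already use.
// const transformed = agentStream.pipeThrough(forwardSubAgentToolResults);
```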

Entry/Wrapper needs memory so that other agents downstream can have memory, why?

I have a setup where we had an entryWorkflow with 2 agents being called. AgentA has memory and AgentB does not. Can we have conversational chat with this setup? From what I have found, we cannot have previous context in a workflow, regardless of whether the agents inside it have memory. Is that right? To resolve that, I had to introduce a wrapperAgent with memory which now calls my entryWorkflow. But my wrapperAgent does not just call the workflow and return the raw output from the execution. After a few chat turns, it starts interpreting and sometimes appending previous answers to the new result, which breaks the stability. If I don't give memory to this wrapperAgent in order to stabilise it, so that it just returns the execution output without previous context, then the downstream agentA for some reason does have context of the previous conversation....

I want to assign agent_id to the span of `chat {model}` as well.

The Agent Run span has agent_id but not gen_ai.usage. The LLM operation span `chat {model}` has gen_ai.usage but not agent_id. What is the best way to aggregate gen_ai.usage by agent_id?...
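One workable approach, sketched below, is to export the spans and do the join yourself: attribute each `chat {model}` span's gen_ai.usage to the nearest ancestor span that carries agent_id. The field names (spanId, parentSpanId, attribute keys) are assumptions and need to be matched to your exporter's output.

```typescript
// Join LLM usage spans to their owning agent by walking parent spans.
// Field names are assumptions -- adjust to your exported trace format.
type Span = {
  spanId: string;
  parentSpanId?: string;
  name: string;
  attributes: Record<string, unknown>;
};

function usageByAgent(spans: Span[]): Record<string, number> {
  const byId = new Map(spans.map((s) => [s.spanId, s] as const));
  const totals: Record<string, number> = {};

  for (const span of spans) {
    const tokens = Number(span.attributes["gen_ai.usage.total_tokens"] ?? 0);
    if (!tokens) continue;

    // Climb parents until we find a span that carries agent_id.
    let current: Span | undefined = span;
    while (current && current.attributes["agent_id"] === undefined) {
      current = current.parentSpanId ? byId.get(current.parentSpanId) : undefined;
    }
    const agentId = String(current?.attributes["agent_id"] ?? "unknown");
    totals[agentId] = (totals[agentId] ?? 0) + tokens;
  }
  return totals;
}
```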

Streaming from a workflow step when using Inngest

I'm currently struggling with streaming an agent response from a workflow when running it through the Inngest Dev Server. For the Mastra workflow I'm using the workflowRoute with patched streaming support as described in this post. This works fine when using Mastra standalone and consuming it with AI-SDK on the frontend. ...

Access AI Tracing of a particular trace id programmatically without mastra studio

Is it possible to access the AI traces directly via the Mastra instance, instead of using the Mastra Client SDK, Mastra Studio, or Mastra Cloud? I want to export the traces as JSON and perform some analysis on them to create charts (token usage, etc.)

The resourceId and threadId for chatRoute and copilotkit integration should be able to contain authe

In the integration with the AI SDK, memory information is passed from the frontend. However, when using session authentication, this makes it possible to view other users' messages. https://mastra.ai/docs/frameworks/agentic-uis/ai-sdk#streaming With CopilotKit's integration, the resourceId is fixed, so in applications used by multiple users, all users can view the same message history. https://mastra.ai/docs/frameworks/agentic-uis/copilotkit...
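Until there is first-class support, the usual mitigation is to derive the resource and thread ids on the server from the authenticated session rather than trusting the request body. A rough sketch: `getSessionUser`, the agent name, and the import path are hypothetical, and the `memory: { resource, thread }` options should be checked against your Mastra version (older releases used top-level `resourceId`/`threadId`).

```typescript
import { mastra } from "../mastra"; // your Mastra instance (path is illustrative)

// Hypothetical session lookup for whatever auth you use (session cookie, Firebase, ...).
declare function getSessionUser(req: Request): Promise<{ id: string } | null>;

export async function streamChatForSession(req: Request, messages: unknown) {
  const user = await getSessionUser(req);
  if (!user) throw new Error("Unauthorized");

  const agent = mastra.getAgent("chatAgent"); // assumes an agent registered as "chatAgent"

  // The resource comes from the session, never from the request body, and
  // threads are namespaced per user so one user cannot read another user's
  // history even by guessing a thread id.
  return agent.stream(messages as any, {
    memory: { resource: user.id, thread: `${user.id}:default` },
  });
}
```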

Mastra wasn't able to build your project. Please add ... to your externals

Even though I added the packages to externals, I still get the error:
```
const newMastra = new Mastra({
  ...mastraConfig,
  bundler: {
    externals: ["mssql", "ioredis"],
  },
});
```
#16 12.68 INFO [2025-11-09 03:14:25.628 +0000] (Mastra CLI): Optimizing dependencies......

Dynamically Reload Prompt/Agents

Hi team, is there a way to dynamically reload the agent instructions (or the agent itself)? We're using Langfuse for prompt management and would love to automate the propagation of prompt changes from Langfuse to Mastra. Our current workaround is to simply trigger an infra-level restart of the pod (we don't have much volume yet, so it's fine). I'm wondering if there is a way to do this without manual intervention? I'm picturing a simple custom route implementation: ``` apiRoutes: [...
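One low-tech pattern is to keep the Langfuse prompt in a small TTL cache and resolve the agent instructions from it on each run, so prompt changes propagate without a restart. A sketch, assuming your Mastra version accepts a function for `instructions`; the prompt name, TTL, and model are placeholders.

```typescript
import { Agent } from "@mastra/core/agent";
import { openai } from "@ai-sdk/openai";
import { Langfuse } from "langfuse";

const langfuse = new Langfuse(); // reads LANGFUSE_* env vars

// Simple module-level cache so every run doesn't hit Langfuse.
let cached = { text: "You are a helpful assistant.", fetchedAt: 0 };
const TTL_MS = 60_000;

async function currentInstructions(): Promise<string> {
  if (Date.now() - cached.fetchedAt > TTL_MS) {
    const prompt = await langfuse.getPrompt("support-agent"); // Langfuse-managed prompt
    cached = { text: prompt.prompt, fetchedAt: Date.now() };
  }
  return cached.text;
}

// Assumption: recent Mastra versions evaluate a function passed as
// `instructions` per run; if yours doesn't, resolve the prompt before each
// generate/stream call instead.
export const supportAgent = new Agent({
  name: "support-agent",
  instructions: async () => currentInstructions(),
  model: openai("gpt-4o-mini"), // placeholder model
});
```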

chatRoute + useChat stream unstable

I am using ai-sdk's useChat and mastra's chatRoute together, and sometimes the SSE stream closes without an error, but also without sending the [DONE] event. Could this be an ai-sdk issue or a mastra issue? I will work on a repro repository, as the problem seems to happen under specific circumstances (it never happens on my first message + agent's tool call, but always happens on the second). Short summary of my setup: - Frontend uses useChat....

Langfuse integration via @mastra/langfuse - tags and cached token count

Current versions: "@mastra/core": "0.24.1", "@mastra/langfuse": "0.2.3", "@mastra/memory": "0.15.11",...

chatRoute + useChat not working for tool suspension in @mastra/ai-sdk@beta

When using chatRoute with @ai-sdk useChat hook, tool suspension events (tool-call-suspended) are not emitted, causing the frontend to never receive them. This makes HITL (Human-in-the-Loop) flows impossible with the...

Tracing prompts with Langfuse

I am using Langfuse for prompt management, and I can consume the prompts just fine. However, I cannot get the tracing to link the prompts to the ones created in Langfuse (and their versions). Here is an example of how I set it up ```typescript const headerAuditing = createStep({ id: "header-auditing", description: "Diagnose the header",...

Mastra node-audio

Error when trying to use mastra node-audio: ...

GeminiLiveVoice: Tool calls work but args always empty - Expected behavior?

Hi Mastra team! 👋 I'm having trouble getting tool arguments to populate in GeminiLiveVoice. Tools are called but args are always empty. Am I missing some configuration? Current Setup:...

Cannot send base64 images to Gemini in Mastra v1.0 Beta [Solved]

I scaffolded a new project with pnpm create mastra@beta. ``` "dependencies": { "@mastra/core": "1.0.0-beta.2", "@mastra/evals": "1.0.0-beta.0",...

Nested Branching

Does anyone have a good example of how to do nested branching? I am seeing (probably old) info out there that says to create nested workflows, but it's not working at all, and I spent a good deal of my afternoon on this. My flow is simple, and I would think this would be simple too. An agent calls a tool that wraps a workflow. The first step has an A or B path. If A, it needs to run another tool and then it has another A/B branch.
.branch([
  [async ({ inputData }: { inputData: { intent: string } }) => inputData.intent === 'create', fullStep],
  [async ({ inputData }: { inputData: { intent: string } }) => inputData.intent === 'refine', individualStep],
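One commonly suggested shape is to put the second A/B branch inside a child workflow and point the first branch at it. A sketch along those lines, with placeholder steps and deliberately simplified schemas; whether a committed workflow can be passed directly to `.branch` may depend on your Mastra version, and wrapping it with `createStep` is the fallback.

```typescript
import { createWorkflow, createStep } from "@mastra/core/workflows";
import { z } from "zod";

// Shared schema passed through every step so branch conditions always see
// both intent fields (schemas are illustrative).
const flowSchema = z.object({ intent: z.string(), subIntent: z.string() });

// Minimal placeholder steps; replace the execute bodies with your real tools.
const makeStep = (id: string) =>
  createStep({
    id,
    inputSchema: flowSchema,
    outputSchema: flowSchema,
    execute: async ({ inputData }) => inputData,
  });

const runExtraTool = makeStep("run-extra-tool");
const pathA = makeStep("path-a");
const pathB = makeStep("path-b");
const individualStep = makeStep("individual-step");

// Inner workflow: the second tool call, then the second A/B branch.
const createFlow = createWorkflow({
  id: "create-flow",
  inputSchema: flowSchema,
  outputSchema: flowSchema,
})
  .then(runExtraTool)
  .branch([
    [async ({ inputData }) => inputData.subIntent === "a", pathA],
    [async ({ inputData }) => inputData.subIntent === "b", pathB],
  ])
  .commit();

// Outer workflow: the first branch targets either the nested workflow above
// or a plain step. If your version won't accept a workflow here, use
// createStep(createFlow) instead.
export const intentWorkflow = createWorkflow({
  id: "intent-workflow",
  inputSchema: flowSchema,
  outputSchema: flowSchema,
})
  .branch([
    [async ({ inputData }) => inputData.intent === "create", createFlow],
    [async ({ inputData }) => inputData.intent === "refine", individualStep],
  ])
  .commit();
```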

Parallel Steps `writer.custom` throwing writer is locked

Works for sequential steps, but I'm getting the error for parallel steps.

PineconeVector not assignable to MastraVector<VectorFilter> in Mastra configuration

Hi, I’m not sure if this is intended or a bug, but passing a PineconeVector instance into the main Mastra instance under vectors throws this TypeScript error:
Type 'PineconeVector' is not assignable to type 'MastraVector<VectorFilter>'.
Original Code:...

Firebase Auth - Dev Bypass

https://mastra.ai/docs/auth/firebase I need a way to keep auth in development (so user auth data keeps getting added to the context) but still disable it in the sense that it doesn't block access... This is in order to let things through, like local scripts that don't use auth, while still keeping auth working for the things that do...
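A common way to get this behaviour is a "soft auth" check: always decode the Firebase token when one is present so user data still lands on the context, but only reject missing or invalid tokens outside development. A sketch below; how it plugs into Mastra's auth provider or server middleware depends on your setup, and the env check is illustrative.

```typescript
import { getAuth } from "firebase-admin/auth"; // assumes firebase-admin is initialized elsewhere

const isDev = process.env.NODE_ENV !== "production";

// Returns the decoded user when a valid token is present; in development,
// missing or invalid tokens fall through as `null` instead of blocking the
// request, so local scripts without auth still get through.
export async function resolveUser(authorizationHeader?: string) {
  const token = authorizationHeader?.replace(/^Bearer\s+/i, "");

  if (!token) {
    if (isDev) return null;
    throw new Error("Unauthorized");
  }

  try {
    return await getAuth().verifyIdToken(token); // real firebase-admin call
  } catch (err) {
    if (isDev) return null; // don't block in dev, just skip the user context
    throw err;
  }
}
```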