Mastra Community - Answer Overflow

Clerk Auth w/ Public Routes

This might be a dumb question (sorry if it is), but I'm trying to figure out how to implement public routes with the MastraAuthClerk class. I see that the experimental_auth config has a public field, but the type suggests it takes either a MastraAuthConfig or a MastraAuthProvider, and the provider side doesn't seem to expose the same option to make a route public. Basically I just need to expose a health route. Appreciate any help 🙏
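
A minimal sketch of the config branch of that union, assuming (as the question's reading of the types suggests) that experimental_auth can take a MastraAuthConfig whose public field lists unauthenticated routes; the provider-wrapping key and the rule shape here are guesses, not a confirmed API:

```ts
import { Mastra } from "@mastra/core";
import { MastraAuthClerk } from "@mastra/auth-clerk";

// Sketch only: `provider` and the [method, path] rule shape are assumptions.
export const mastra = new Mastra({
  server: {
    experimental_auth: {
      provider: new MastraAuthClerk(),  // hypothetical field wrapping the provider
      public: [["GET", "/health"]],     // hypothetical rule exposing the health route
    },
  },
});
```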

Mastra won't build: "ZodOptional" is not exported by ".mastra/.build/zod.mjs"

Hey guys, is there a way to get around this error?

```sh
λ bun run build
$ mastra build
INFO [2025-11-19 22:44:14.986 +0800] (Mastra CLI): Analyzing dependencies......
```

Missing Qdrant for Mastra v1

When can I expect @mastra/qdrant 1.0.0-beta.0? https://www.npmjs.com/package/@mastra/qdrant?activeTab=versions Right now I'm unable to work on the v1 version and need to downgrade 😭...

Client SDK for AI-SDK: Chunk Transformers not handling workflow-step-output

Source: https://github.com/mastra-ai/mastra/blob/main/client-sdks/ai-sdk/src/transformers.ts. Is there any reason `workflow-step-output` is not explicitly handled here? I believe `writer.write(...)` chunks from (nested) workflow steps do not reach the frontend because of this.
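
A minimal sketch of the workaround this implies: a fallback transformer that forwards workflow-step-output chunks to the frontend as AI SDK data parts instead of dropping them. The chunk shape and part naming below are assumptions, not the actual transformers.ts types:

```ts
// Assumed minimal chunk shape; see transformers.ts for the real types.
type WorkflowChunk = { type: string; payload?: unknown };

// Forward workflow-step-output chunks (e.g. nested writer.write(...) output)
// as a custom data part instead of silently dropping them.
export function transformWorkflowStepOutput(chunk: WorkflowChunk) {
  if (chunk.type !== "workflow-step-output") return null; // leave others alone
  return { type: "data-workflow-step-output", data: chunk.payload };
}
```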

Can we use Google Cloud Spanner as a vector DB in Mastra?

I am currently using PostgreSQL with the pgvector extension to manage my vector DB. My org has an active Spanner instance in GCP, and I wanted to check whether I can use Spanner for this purpose.
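
No official Spanner store is mentioned here, but Spanner's GoogleSQL does ship vector distance functions, so one hedged option is a thin adapter around @google-cloud/spanner exposing the query shape a vector store needs. The schema and ids below are purely illustrative:

```ts
import { Spanner } from "@google-cloud/spanner";

// Illustrative only: project/instance/database ids and the table schema
// (a documents table with an ARRAY<FLOAT64> embedding column) are assumptions.
const db = new Spanner({ projectId: "my-project" })
  .instance("my-instance")
  .database("my-db");

// Nearest-neighbor lookup using Spanner's built-in COSINE_DISTANCE.
export async function queryVectors(queryVector: number[], topK = 10) {
  const [rows] = await db.run({
    sql: `SELECT id, content,
                 COSINE_DISTANCE(embedding, @q) AS distance
          FROM documents
          ORDER BY distance
          LIMIT @k`,
    params: { q: queryVector, k: topK },
  });
  return rows;
}
```

Whether this can be plugged into Mastra's vector-store interface depends on the version; the pgvector setup above remains the safer default.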

Studio doesn't show external providers

Hey! Quick question: Studio isn't showing my external providers. I'm using multiple model providers (AWS Bedrock, Azure, and Google Vertex), but none of them appear in the model list inside Mastra Studio. Is there something special I need to do to make external provider models visible in Studio?...

GeminiLiveVoice Vertex AI WebSocket Connection Fails - @mastra/voice-google-gemini-live

Hi team! 👋 I'm experiencing WebSocket connection failures when using Vertex AI mode with @mastra/voice-google-gemini-live. The Problem...

Does `mastra build` include the playground (Studio) in the output path?

I would like to expose Studio to our SMEs, but when I hit a deployed `mastra build` service, the playground is not served at the main path (/). Are there any flags I need to add so it gets bundled? Thank you...

How to access sub-agent tool results using the AI SDK

Hi, we have several agents whose tool results (structured outputs) are displayed on the frontend as intermediate steps. We're now building a mega-agent that orchestrates these agents and should continue to surface their tool results in the UI. However, the AI SDK stream transformer is dropping those tool results: it currently only forwards data-* custom events from sub-agents, which prevents the tool outputs from coming through. Here are the sub-agent-level events (https://mastra.ai/docs/agents/networks#agent-output), and I'm wondering if there is any way of getting access to them. (I am open to writing a custom transformer for the time being)...
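
Until the stock transformer forwards these, a custom transformer along the lines the question hints at could re-emit sub-agent tool results as the data-* events that do get through. Event and field names below are assumptions based on the excerpt, not the actual stream types:

```ts
// Assumed chunk shape for nested sub-agent events in the stream.
type NetworkChunk = {
  type: string;
  payload?: { toolName?: string; result?: unknown };
};

// Re-emit sub-agent tool results as a custom data-* part, since the
// excerpt notes only data-* events currently reach the frontend.
export function forwardSubAgentToolResults(chunk: NetworkChunk) {
  if (chunk.type !== "tool-result") return null; // pass everything else through
  return {
    type: "data-subagent-tool-result",
    data: { toolName: chunk.payload?.toolName, result: chunk.payload?.result },
  };
}
```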

Why does the entry/wrapper agent need memory for a downstream agent to have memory?

I have a setup with an entryWorkflow that calls 2 agents. AgentA has memory and AgentB does not. Can we have conversational chat with this setup? From what I have found, a workflow cannot carry previous context regardless of whether the agents inside it have memory. Is that right? To work around that, I introduced a wrapperAgent with memory, which now calls my entryWorkflow. But the wrapperAgent does not just call the workflow and return its raw output: after a few chat turns, it starts interpreting and sometimes appending previous answers to the new result, which breaks stability. If I don't give memory to this wrapperAgent so as to stabilize it (so it returns the execution output without previous context), then the downstream AgentA for some reason does have context of the previous conversation....

I want to assign agent_id to the span of `chat {model}` as well.

The Agent Run span has agent_id but not gen_ai.usage; the LLM operation span `chat {model}` has gen_ai.usage but not agent_id. What is the best way to aggregate gen_ai.usage by agent_id?...
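
Lacking a built-in join, one approach is to do it at analysis time: record each trace's agent_id from its agent-run span, then attribute the gen_ai.usage values on the `chat {model}` spans in the same trace to that agent. A sketch over exported span JSON (field names are assumptions about the export shape):

```ts
// Assumed export shape: a flat list of spans with trace ids and attributes.
type Span = { traceId: string; attributes: Record<string, unknown> };

export function usageByAgent(spans: Span[]): Map<string, number> {
  // Pass 1: remember which agent owns each trace (from the agent-run span).
  const agentByTrace = new Map<string, string>();
  for (const s of spans) {
    const id = s.attributes["agent_id"];
    if (typeof id === "string") agentByTrace.set(s.traceId, id);
  }
  // Pass 2: roll token usage from the LLM spans up onto that agent.
  const totals = new Map<string, number>();
  for (const s of spans) {
    const agent = agentByTrace.get(s.traceId);
    const tokens = s.attributes["gen_ai.usage.total_tokens"];
    if (agent && typeof tokens === "number") {
      totals.set(agent, (totals.get(agent) ?? 0) + tokens);
    }
  }
  return totals;
}
```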

Streaming from a workflow step when using Inngest

I'm currently struggling with streaming an agent response from a workflow when running it through the Inngest Dev Server. For the Mastra workflow I'm using the workflowRoute with patched streaming support as described in this post. This works fine when using Mastra standalone and consuming it with AI-SDK on the frontend. ...

Access AI tracing for a particular trace ID programmatically without Mastra Studio

Is it possible to access the AI traces directly via the Mastra instance, instead of using the Mastra Client SDK, Mastra Studio, or Mastra Cloud? I want to export the traces as JSON and perform some analysis on them to create charts (token usage, etc.).

The resourceId and threadId for the chatRoute and CopilotKit integrations should be able to contain authentication context

In the integration with the AI SDK, memory information is passed from the frontend. However, when using session authentication, this makes it possible to view other users' messages. https://mastra.ai/docs/frameworks/agentic-uis/ai-sdk#streaming With CopilotKit's integration, the resourceId is fixed, so in applications used by multiple users, all users can view the same message history. https://mastra.ai/docs/frameworks/agentic-uis/copilotkit...
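
A hedged sketch of the usual fix: own the resourceId on the server by deriving it from the verified session in a custom route, instead of trusting whatever the client sends. registerApiRoute is Mastra's custom-route helper; the auth helper, agent name, and the memory option shape (which varies across versions) are assumptions:

```ts
import { registerApiRoute } from "@mastra/core/server";

// Hypothetical helper standing in for your session check.
declare function getSessionUser(req: Request): Promise<{ id: string } | null>;

export const chat = registerApiRoute("/chat", {
  method: "POST",
  handler: async (c) => {
    const user = await getSessionUser(c.req.raw);
    if (!user) return c.json({ error: "unauthorized" }, 401);

    const { messages, threadId } = await c.req.json();
    const agent = c.get("mastra").getAgent("myAgent"); // agent name assumed

    // resourceId comes from the verified session, never from the client,
    // so one user cannot read another user's threads.
    const result = await agent.generate(messages, {
      memory: { resource: user.id, thread: threadId },
    });
    return c.json({ text: result.text });
  },
});
```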

Mastra wasn't able to build your project. Please add ... to your externals

Even though I added the packages to externals, I still get the error:

```ts
const newMastra = new Mastra({
  ...mastraConfig,
  bundler: {
    externals: ["mssql", "ioredis"],
  },
});
```

```
#16 12.68 INFO [2025-11-09 03:14:25.628 +0000] (Mastra CLI): Optimizing dependencies......
```

Dynamically Reload Prompt/Agents

Hi team, is there a way to dynamically reload the agent instructions (or the agent itself)? We're using Langfuse for prompt management and would love to automate the propagation of prompt changes from Langfuse to Mastra. Our current workaround is to simply trigger an infra-level restart of the pod (we don't have much volume yet, so it's fine). I'm wondering if there is a way to do this without manual intervention? I'm picturing a simple custom route implementation:

```
apiRoutes: [...
```
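
A lighter-weight angle than a reload route, assuming instructions can be a (possibly async) function the way Mastra's dynamic-agent docs suggest: resolve the prompt from Langfuse per run with a short client-side cache TTL, so edits propagate without a restart. The prompt name, model id, and TTL are placeholders:

```ts
import { Agent } from "@mastra/core/agent";
import { Langfuse } from "langfuse";

const langfuse = new Langfuse(); // reads LANGFUSE_* env vars

export const supportAgent = new Agent({
  name: "support-agent",           // placeholder name
  model: "openai/gpt-4o-mini",     // placeholder model id
  instructions: async () => {
    // Langfuse caches prompts client-side; a short TTL means edits in
    // Langfuse show up within ~60s without an infra-level restart.
    const prompt = await langfuse.getPrompt("support-agent-prompt", undefined, {
      cacheTtlSeconds: 60,
    });
    return prompt.prompt;
  },
});
```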

chatRoute + useChat stream unstable

I am using the AI SDK's useChat and Mastra's chatRoute together, and sometimes the SSE stream closes without an error, but also without sending the [DONE] event. Could this be an ai-sdk issue or a mastra issue? I will work on a repro repository, as the problem seems to happen under specific circumstances (it never happens on my first message + agent tool call, but always happens on the second). Short summary of my setup: - Frontend uses useChat....

Langfuse integration via mastra/langfuse - tags and cached token count

Current versions:

```json
"@mastra/core": "0.24.1",
"@mastra/langfuse": "0.2.3",
"@mastra/memory": "0.15.11",
```
...

chatRoute + useChat not working for tool suspension in @mastra/ai-sdk@beta

When using chatRoute with the AI SDK's useChat hook, tool suspension events (tool-call-suspended) are not emitted, so the frontend never receives them. This makes HITL (Human-in-the-Loop) flows impossible with the...