MastraAI

The TypeScript Agent Framework. From the team that brought you Gatsby: prototype and productionize AI features with a modern JavaScript stack.

Publish Mastra Agent as OpenAI API endpoints?

Is there anything inherently wrong with the idea of exposing Mastra's Agent functionality as OpenAI-compatible endpoints? It's super simple. I wanted to just plug in the frontend part from another library, or use one of these agent chat solutions to trigger workflows and take advantage of these awesome Mastra features. How would you approach that?...

Memory Debugging - Is there a way to inspect what is stored in memory easily?

Is there an argument, flag, or example with one of the loggers that shows how to see what sits in memory?
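
I'm not aware of a dedicated debug flag, but one hedged approach: query the thread and log what comes back. The sketch below only assumes a `query({ threadId, resourceId })` method returning `{ messages }`, which is the recall API shape in recent `@mastra/memory` versions; verify it against your installed version.

```typescript
// Sketch: dump whatever a Memory-like store holds for a thread.
// The `query` shape is an assumption based on recent @mastra/memory versions.
type StoredMessage = { role: string; content: unknown };
type MemoryLike = {
  query(args: { threadId: string; resourceId: string }): Promise<{ messages: StoredMessage[] }>;
};

// Render one stored message as a readable log line.
function formatMessage(m: StoredMessage): string {
  const body = typeof m.content === "string" ? m.content : JSON.stringify(m.content);
  return `[${m.role}] ${body}`;
}

// Fetch and print everything stored for a thread; returns the lines for reuse.
async function dumpThread(memory: MemoryLike, threadId: string, resourceId: string): Promise<string[]> {
  const { messages } = await memory.query({ threadId, resourceId });
  const lines = messages.map(formatMessage);
  lines.forEach((l) => console.log(l));
  return lines;
}
```

With a real `Memory` instance this would be `await dumpThread(memory, threadId, resourceId)` from any script that shares your storage config.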

Mastra Cloud analyze failures with ESM/CJS interop (fetch-blob, formdata-polyfill, Pinecone v6)

Summary: Our project deployed successfully on 2025-10-13. Since then, Mastra Cloud deployments fail during “Analyze dependencies” due to ESM/CJS interop in transitive dependencies (not our app code). Local mastra build succeeds; only Cloud analyze fails. Environment: Mastra Cloud dynamic deployer banner: “fix-cloud-peer-deps-loggers for core version 0.18.0”...

Shared DB pool

Does Mastra support using a shared DB pool for memory rather than its own pool?

How to configure stopWhen conditions in Mastra agents?

Built a web automation agent with 4 tools (navigate, observe, act, extract), and it's working great: actions execute smoothly. The issue is that the agent stops after exactly 5 tool calls even when the task isn't finished. For complex workflows (like multi-step booking flows), I need it to keep going until the task is actually complete. What I need is a way to configure stopWhen conditions or max steps so the agent can complete longer tasks. Looking for config options when creating the agent; I've tried these two but nothing works. ...
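
A hedged sketch of the kind of option that controls this: Mastra agents have defaulted to a small per-call step limit (which would explain stopping after exactly 5 tool calls), and recent versions accept a `maxSteps` option on `generate`/`stream`. The option name and default are assumptions; verify against your installed version.

```typescript
// Sketch: raise the tool-call step limit per request. Assumes Mastra's
// agent.generate accepts a `maxSteps` option (default ~5) -- verify against
// your installed version.
const generateOptions = {
  // Allow up to 25 tool-call steps before the agent stops.
  maxSteps: 25,
};

// Usage (assuming an existing `webAgent`):
//   const result = await webAgent.generate("Book the flight end to end", generateOptions);
```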

save message to memory

I've got a setup where a user hits a chat interface (AI SDK useChat setup). I'm using initial messages to populate the feed, but those messages aren't saved to memory. I generate them from an agent.generate call. I tried adding memory to that, but I don't want the prompt to land in memory, just the response from the LLM, as if it was starting the conversation. Does anyone have a solution for this using the Mastra SDK, or do I need to create my own function to write to memory?...
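
One hedged pattern: generate without memory attached, then persist only the assistant turn yourself. This assumes `Memory#saveMessages({ messages })` exists (it does in recent `@mastra/memory` versions, but check the exact message fields your version expects); the `asOpener` helper below is hypothetical.

```typescript
import { randomUUID } from "node:crypto";

// Sketch: build an assistant-only message so the hidden prompt never lands in
// memory. Field names follow recent @mastra/memory shapes -- verify for yours.
type SavedAssistantMessage = {
  id: string;
  threadId: string;
  resourceId: string;
  role: "assistant";
  content: string;
  type: "text";
  createdAt: Date;
};

function asOpener(threadId: string, resourceId: string, text: string): SavedAssistantMessage {
  return {
    id: randomUUID(),
    threadId,
    resourceId,
    role: "assistant",
    content: text,
    type: "text",
    createdAt: new Date(),
  };
}

// Usage (assuming an agent and a Memory instance):
//   const { text } = await agent.generate(hiddenPrompt); // no memory here, so the prompt is not saved
//   await memory.saveMessages({ messages: [asOpener(threadId, resourceId, text)] });
```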

build passes but runtime failing -- maybe mongodb?

I have been running Mastra without issue in a Lambda for a while, but I'm stuck in some kind of ESM build hell now and have no idea why. I went from .13 => .21. Oddly, it works fine with npm run dev, but if I run build and then start, it has a ton of issues with what seem to be MongoDB driver optional packages....

Passing Parameters to Agent API

We are building an agent that needs some parameters passed through the /generate API call. RuntimeContext combined with middleware seems to do the trick, but is this the right approach? All interactions are being intercepted for parameters that are only needed by one agent, ...

Support for GoogleVertex AI

Hello, I see docs for the Google Generative AI API; are there any docs for Google Vertex AI?...
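
I can't point to a dedicated Mastra doc page, but since Mastra accepts AI SDK model providers, the Vertex provider from the AI SDK should slot in. A sketch, assuming `@ai-sdk/google-vertex` is installed and your GCP credentials (ADC) are configured; the project/location values are placeholders:

```typescript
// Sketch: wire a Mastra agent to Vertex AI via the AI SDK provider.
// Assumes @ai-sdk/google-vertex is installed and ADC credentials are set up.
import { createVertex } from "@ai-sdk/google-vertex";
import { Agent } from "@mastra/core/agent";

const vertex = createVertex({
  project: "my-gcp-project", // placeholder
  location: "us-central1",   // placeholder
});

export const vertexAgent = new Agent({
  name: "vertex-agent",
  instructions: "You are a helpful assistant.",
  model: vertex("gemini-1.5-pro"),
});
```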

Workflow Snapshots

I am basically trying to reconnect to a running workflow. I use watch and pass it the runId, but when I reconnect I am not aware of the current state of the workflow, so my UI can't reflect it until the next stream event comes through. I tried a workaround where I fetch the snapshot, which tells me where I am so I can initially show that state, but from what I've seen the snapshots do not handle parallel steps well. I ran this test to find the problem. Here was the test workflow: export const testSimpleWorkflow = createWorkflow({...

npm start fails on Azure deployment

Hello, I got this error when deploying to Azure

Conversation history without user messages

Recently, I've faced an issue where the list of messages passed to the AI model consists of only assistant messages plus the last message from the user. Thus, the AI loses context, as there are no previous user messages in the conversation history. I am using Upstash storage (all user messages are successfully stored in the database; I've verified that). I've attached screenshots from Langfuse: the first shows the current state, where only the last question comes from the user. The second screenshot shows the expected history, which includes the user messages....

Duplicate Assistant Messages with agent.network() + Memory

Summary: I'm experiencing duplicate assistant messages being saved to the database when using agent.network() with memory enabled. For each network interaction, TWO assistant messages are saved: 1. ✅ The correct user-friendly response...

agent.network() Usage Tracking Not Working

Environment - Mastra: 0.0.0-ai-sdk-network-text-delta-20251017172601 (snapshot for network streaming fix) - Provider: @openrouter/ai-sdk-provider (direct, not through Mastra gateway) - Models: OpenRouter with GPT-4o-mini, Grok, etc. - Use case: Credit-based billing system - need accurate token usage for each network call...

Should I host my own postgresql pgvector with mastra cloud

I'm using pgvector locally; my agents use that plus Drizzle ORM to store data. Does Mastra Cloud have Postgres available, or should I host it myself? And if I host it myself, what region is a good option?

How to use GPT-5 mini with tools? Getting "required reasoning item" errors with Memory

Hey! Getting this error when using GPT-5 mini with tools + memory: ``` Item 'fc_...' of type 'functioncall' was provided without its required 'reasoning' item: 'rs...'....

Multi-Index Strategy for Mastra semanticRecall?

I’ve been reviewing Mastra’s semanticRecall implementation and from the createEmbeddingIndex logic it appears that, per embedding dimension, a single shared vector index is used. In a multi-tenant SaaS scenario or as data volume grows, the following needs typically emerge: - Separating indexes by tenant or even by thread...

Advice needed for Mastra AI "backend" and Convex on frontend

Hi! We’re currently testing different frameworks to see what fits our setup best. Our MVP was built entirely with Convex — covering the backend, agent components, and frontend. The nice thing about Convex is how easy it is to manage everything while keeping the frontend reactive. For example, we store threads, messages, logs, and tool call results directly in Convex, which lets us display them in a chat-like UI. Now that we’re experimenting with Mastra, we’re running into a few architectural decisions — one of them being where and how to store and display messages, tool call statuses, and results. One idea I’m considering is connecting a Convex HTTP endpoint (if possible) to Mastra’s workflow run stream endpoint. This way, we could listen to the stream, save the run results and steps into a Convex table, and automatically persist and display everything in the UI....

having problem with Configure your Mastra instance to include CopilotKit’s runtime endpoint

Hello everyone, not sure if this is the right place, but I tried following this guide: https://mastra.ai/en/docs/frameworks/agentic-uis/copilotkit. When I try to include the CopilotKit runtime in my Mastra server,...

Can't use OpenAI WebSearch tool

Hi there. I can't seem to use OpenAI's WebSearch tool from the AI SDK. I get this error (cut off due to Discord's text limit): ``` ......