MastraAI

The TypeScript Agent Framework

From the team that brought you Gatsby: prototype and productionize AI features with a modern JavaScript stack.

Passing Parameters to Agent API

We are building an agent that needs some parameters passed through the /generate API call. Using RuntimeContext together with middleware seems to do the trick, but is this the right approach? All interactions are being intercepted for these parameters, which are only needed for one agent, ...
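One way to avoid intercepting every interaction is to scope the middleware to the one agent's route and have it copy the request value into the runtime context. The sketch below is hypothetical: the header name, the `userTier` key, and the stub context/request types are illustrative stand-ins, not Mastra's API (the commented-out server config shows the assumed shape; check the server middleware docs for your version).

```typescript
// Minimal stand-ins for the real request/context objects, so the
// handler logic can be shown (and tested) in isolation.
type RuntimeCtx = { set(key: string, value: unknown): void };
type RequestLike = { header(name: string): string | undefined };

// Handler body: copies a request header into the runtime context.
// Because the middleware is registered with a path, it only runs for
// the one agent's route and other interactions are not intercepted.
export function applyTierFromHeader(c: { req: RequestLike; ctx: RuntimeCtx }): void {
  const tier = c.req.header("x-user-tier"); // hypothetical header name
  if (tier) c.ctx.set("userTier", tier);    // hypothetical context key
}

// Assumed Mastra server config shape (verify against the docs):
// server: {
//   middleware: [{
//     path: "/api/agents/myAgent/*", // scope to the one agent
//     handler: async (c, next) => {
//       const runtimeContext = c.get("runtimeContext");
//       const tier = c.req.header("x-user-tier");
//       if (tier) runtimeContext.set("userTier", tier);
//       await next();
//     },
//   }],
// }
```

Scoping by path keeps the passthrough cost limited to the single agent that needs the parameter.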

Support for Google Vertex AI

Hello, I see docs related to the Google generative API. Are there any docs for Google Vertex AI?...

Workflow Snapshots

I am basically trying to reconnect to a running workflow. I use watch and pass it the runID, but the issue is that when I reconnect, I am not aware of the current state of the workflow, so my UI cannot reflect the workflow's state until the next stream event comes through. I tried a workaround where I fetch the snapshot, which tells me where I am so I can initially show that state, but from what I saw the snapshots do not handle parallel steps well. I ran a test to find the problem. Here was the test workflow: export const testSimpleWorkflow = createWorkflow({...
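For reconnect scenarios like this, one approach is to fold the snapshot's per-step records into an initial UI state before any new stream events arrive. The sketch below is hypothetical: the snapshot shape (a flat record of step id to `{ status }`) is an assumption, and real Mastra snapshots may nest parallel branches differently, but the grouping idea carries over.

```typescript
// Assumed snapshot shape: step id -> current status. Parallel steps
// simply appear as multiple ids with status "running".
type StepStatus = "running" | "success" | "failed" | "suspended" | "waiting";
type Snapshot = { context: Record<string, { status: StepStatus }> };

// Group step ids by status so the UI can render an initial view
// immediately on reconnect, before the next stream event.
export function summarizeSnapshot(snap: Snapshot): Record<StepStatus, string[]> {
  const byStatus: Record<StepStatus, string[]> = {
    running: [], success: [], failed: [], suspended: [], waiting: [],
  };
  for (const [stepId, step] of Object.entries(snap.context)) {
    byStatus[step.status].push(stepId);
  }
  return byStatus;
}
```

Once the stream resumes, each incoming event can update the same grouped structure in place.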

npm start on Azure deployment fail

Hello, I got this error when deploying to Azure.

Conversation history without user messages

Recently, I've faced an issue where the list of messages passed to the AI model consists of only assistant messages plus the last message from the user. The AI therefore loses context, as there are no previous user messages in the conversation history. I am using Upstash storage (all user messages are successfully stored in the database; I've verified that). I've attached screenshots from Langfuse: the first shows the current state, where only the last question comes from the user. The second screenshot shows the expected history, which includes the user messages....
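When debugging a regression like this, a small guard over the outgoing message list can catch assistant-only histories before the model call. This is a generic helper, not a Mastra API; the message shape below is the usual role/content pair.

```typescript
// Generic chat-message shape (not tied to any particular framework).
type ChatMessage = { role: "user" | "assistant" | "system"; content: string };

// Returns true if the history handed to the model contains at least one
// user turn BEFORE the final (current) message. A false result means
// the remembered history degenerated to assistant-only messages, the
// symptom described above.
export function hasPriorUserContext(messages: ChatMessage[]): boolean {
  const history = messages.slice(0, -1); // drop the new user question
  return history.some((m) => m.role === "user");
}
```

Logging or asserting on this before each call makes it easy to pin down whether the storage layer or the recall step is dropping the user turns.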

Duplicate Assistant Messages with agent.network() + Memory

Summary I'm experiencing duplicate assistant messages being saved to the database when using agent.network() with memory enabled. For each network interaction, TWO assistant messages are saved: 1. ✅ The correct user-friendly response...

agent.network() Usage Tracking Not Working

Environment - Mastra: 0.0.0-ai-sdk-network-text-delta-20251017172601 (snapshot for network streaming fix) - Provider: @openrouter/ai-sdk-provider (direct, not through Mastra gateway) - Models: OpenRouter with GPT-4o-mini, Grok, etc. - Use case: Credit-based billing system - need accurate token usage for each network call...

Should I host my own postgresql pgvector with mastra cloud

I'm using pgvector locally; my agents use it plus Drizzle ORM to store data. Does Mastra Cloud have Postgres available, or should I host it myself? And if I host it myself, what region is a good option?

How to use GPT-5 mini with tools? Getting "required reasoning item" errors with Memory

Hey! Getting this error when using GPT-5 mini with tools + memory: ``` Item 'fc_...' of type 'functioncall' was provided without its required 'reasoning' item: 'rs...'....

Multi-Index Strategy for Mastra semanticRecall?

I’ve been reviewing Mastra’s semanticRecall implementation and from the createEmbeddingIndex logic it appears that, per embedding dimension, a single shared vector index is used. In a multi-tenant SaaS scenario or as data volume grows, the following needs typically emerge: - Separating indexes by tenant or even by thread...
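One way to express the multi-index idea is a deterministic naming scheme that derives a separate vector index per tenant (or per thread) on top of the embedding dimension. The sketch below is hypothetical: this naming convention is an illustration of the separation strategy, not how Mastra's createEmbeddingIndex actually names indexes.

```typescript
// Derive one vector index name per tenant, rather than one shared
// index per embedding dimension. Naming scheme is an assumption.
export function tenantIndexName(opts: {
  base: string;       // e.g. "memory_messages"
  dimension: number;  // embedding dimension of the model in use
  tenantId: string;   // tenant (or thread) identifier
}): string {
  // Normalize the tenant id so it stays a safe index identifier.
  const tenant = opts.tenantId.toLowerCase().replace(/[^a-z0-9_]/g, "_");
  return `${opts.base}_${opts.dimension}_${tenant}`;
}
```

With a scheme like this, tenant isolation and per-tenant index lifecycle (creation, deletion, size limits) become straightforward, at the cost of more indexes to manage as tenants grow.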

Advice needed for Mastra AI "backend" and Convex on frontend

Hi! We’re currently testing different frameworks to see what fits our setup best. Our MVP was built entirely with Convex — covering the backend, agent components, and frontend. The nice thing about Convex is how easy it is to manage everything while keeping the frontend reactive. For example, we store threads, messages, logs, and tool call results directly in Convex, which lets us display them in a chat-like UI. Now that we’re experimenting with Mastra, we’re running into a few architectural decisions — one of them being where and how to store and display messages, tool call statuses, and results. One idea I’m considering is connecting a Convex HTTP endpoint (if possible) to Mastra’s workflow run stream endpoint. This way, we could listen to the stream, save the run results and steps into a Convex table, and automatically persist and display everything in the UI....
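The listen-and-persist idea above can be sketched as a fold over the run's events into rows suitable for upserting into a Convex table. This is a hypothetical sketch: the event shape (`type`, `stepId`, `payload`) is an assumption about what the workflow run stream emits, and in practice you would iterate the live stream and upsert on each event rather than fold a finished array.

```typescript
// Assumed stream event shape -- adapt to the actual chunk types.
type RunEvent = {
  type: "step-start" | "step-result" | "finish";
  stepId?: string;
  payload?: unknown;
};
// Row shape you might store in a Convex table per step.
type StepRow = { stepId: string; status: "running" | "done"; result?: unknown };

// Fold received events into per-step rows. In a live setup, each
// branch below is the natural point to upsert the row into Convex so
// the frontend stays reactive as the run progresses.
export function collectRun(events: RunEvent[]): StepRow[] {
  const rows = new Map<string, StepRow>();
  for (const ev of events) {
    if (ev.type === "step-start" && ev.stepId) {
      rows.set(ev.stepId, { stepId: ev.stepId, status: "running" });
    } else if (ev.type === "step-result" && ev.stepId) {
      rows.set(ev.stepId, { stepId: ev.stepId, status: "done", result: ev.payload });
    }
  }
  return [...rows.values()];
}
```

Keyed upserts (by step id) make the persistence idempotent, which matters if the HTTP endpoint reconnects and replays part of the stream.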

Problem configuring a Mastra instance to include CopilotKit's runtime endpoint

Hello everyone, not sure if this is the right place, but I tried following this guide https://mastra.ai/en/docs/frameworks/agentic-uis/copilotkit and when I try to include the CopilotKit runtime in my Mastra server,...

Can't use OpenAI WebSearch tool

Hi there. I can't seem to use OpenAI's WebSearch tool from the AI SDK. I get this error (cut off due to Discord's text limit): ``` ......

Why is the memory property in Agent unable to get the runtime context?

I logged the runtime context from both Memory and Instructions. It’s being received correctly by Instructions, but not by Memory. Please see my attachments...

Getting internal server error in Mastra cloud instance. No error logs.

Hi folks — I’m getting an error when attempting to call a custom endpoint on our mastra cloud instance. It 500s but there are no error logs in the dashboard. Happy to share more info in a private dm. I don’t see a Cloud tag. Is this the right spot?

get workflow status on frontend

What's the best way to query a workflow's status / steps from the frontend if you just have the run ID?

Human in the Loop Workflows

Issues & Learnings from Building Human-in-the-Loop Workflow in Mastra What We’re Trying to Do We’re building human-in-the-loop workflows that can: ...

Agents/Workflows not appearing in Mastra Cloud

Despite appearing and building locally, my agents and workflows are not showing up in the playground on Mastra Cloud. When I make this request: curl --location 'https://<deployment>.mastra.cloud/api/agents' I get the response: ``` { "error": "Internal Server Error" }...

Get threadId associated with agent memory

I have a workflow that calls an agent with memory, configured to generate a thread title. How can I get access to the threadId automatically created for that agent once it's been used?...
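A common workaround for this is to generate the thread id up front in the workflow and pass it into the agent call, so there is nothing to recover afterwards. The sketch below is hypothetical: the options shape (`memory: { thread, resource }`) is an assumption about the agent call signature, so check the agent memory docs for the exact field names in your version.

```typescript
import { randomUUID } from "node:crypto";

// Build memory options with an explicit thread id. If none is given,
// generate one, and return it alongside the options so the workflow
// step always knows which thread the agent wrote to.
export function buildMemoryOptions(resourceId: string, threadId?: string) {
  const thread = threadId ?? randomUUID();
  return {
    threadId: thread, // keep a copy for the workflow's own records
    options: { memory: { thread, resource: resourceId } },
  };
}

// Usage (assumed call shape, verify against the agent docs):
// const { threadId, options } = buildMemoryOptions(userId);
// await agent.generate("Give this thread a title", options);
// // `threadId` is now known to the workflow without any lookup.
```

Supplying the id yourself also makes the step idempotent on retries, since re-running it targets the same thread instead of creating a new one.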

Images are not kept in the thread context

Hey everyone, I’m running into an issue with Mastra when using it for image generation. I’m building an agent that generates images through a chat interface, using the Nano Banana image-to-image model. The problem is that image generation only works correctly if the image generation tool is called within the same message where the user uploads the image. Has anyone faced this before or found a way to make it work across multiple messages (e.g., when the image is uploaded in one message and processed in the next)?...