MastraAI

The TypeScript Agent Framework. From the team that brought you Gatsby: prototype and productionize AI features with a modern JavaScript stack.

RAG pipeline example - workflow

Is it just me, or are all RAG pipelines better represented as Mastra workflows? That way you can run them from anywhere with strong typings. Is that OK, or is it an anti-pattern?...
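
For what it's worth, the appeal of the workflow representation is that each stage's output type feeds the next stage's input type. A minimal sketch using plain typed composition (the `Step`/`then` names are illustrative, not Mastra's actual API):

```typescript
// Sketch: a RAG pipeline as a chain of typed steps, mimicking the shape of a
// workflow. The Step/then names are illustrative, not Mastra's API.

type Step<I, O> = (input: I) => O;

// compose two steps so the first's output type feeds the second's input type
function then<I, M, O>(first: Step<I, M>, next: Step<M, O>): Step<I, O> {
  return (input) => next(first(input));
}

// toy RAG stages: chunk -> score (stand-in for embedding similarity) -> pick top
const chunk: Step<string, string[]> = (doc) => doc.split(". ").filter(Boolean);
const score: Step<string[], { text: string; score: number }[]> = (chunks) =>
  chunks.map((text) => ({ text, score: text.length }));
const top: Step<{ text: string; score: number }[], string> = (scored) =>
  [...scored].sort((a, b) => b.score - a.score)[0].text;

const pipeline = then(then(chunk, score), top);
const answerContext = pipeline("Short chunk. A much longer chunk with more detail");
```

With real Mastra workflows the same idea applies: typed step schemas are what let you invoke the pipeline from anywhere with end-to-end typings.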

Anthropic System Prompt Caching Regression

It seems that Anthropic prompt caching has regressed from the AI SDK implementation. I read the source code for the Agent class in Mastra, and in multiple places the providerOptions are not passed for system-role messages. The workaround would have been to pass the system-role messages as part of the message list when .streamVNext is called, as below: ```typescript...

How to address several running instances of a workflow

Suppose I have a workflow with a waitForEvent step:

```
import { createWorkflow, createStep } from "@mastra/core/workflows";
import { z } from "zod";
...
```
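
One way to think about addressing a specific run among several concurrent instances is to key pending waiters by runId, so an event resumes exactly one run. This is a self-contained sketch of that routing idea; the `RunRegistry` class is hypothetical, not Mastra's implementation of waitForEvent/sendEvent:

```typescript
// Sketch: route events to exactly one of many concurrent workflow runs by
// keying pending waiters on runId. RunRegistry is hypothetical.

type Waiter = (payload: unknown) => void;

class RunRegistry {
  // runId -> eventName -> pending waiter
  private waiters = new Map<string, Map<string, Waiter>>();

  waitForEvent(runId: string, eventName: string, onEvent: Waiter): void {
    const byEvent = this.waiters.get(runId) ?? new Map<string, Waiter>();
    byEvent.set(eventName, onEvent);
    this.waiters.set(runId, byEvent);
  }

  sendEvent(runId: string, eventName: string, payload: unknown): boolean {
    const waiter = this.waiters.get(runId)?.get(eventName);
    if (!waiter) return false; // no such run is waiting on this event
    this.waiters.get(runId)!.delete(eventName);
    waiter(payload);
    return true;
  }
}

const registry = new RunRegistry();
const resumed: string[] = [];
registry.waitForEvent("run-A", "approval", (p) => resumed.push(`A:${p}`));
registry.waitForEvent("run-B", "approval", (p) => resumed.push(`B:${p}`));
registry.sendEvent("run-B", "approval", "ok"); // only run-B resumes
```

The practical takeaway is that whatever mechanism you use, the run identifier is the routing key: sending an event without one would have to broadcast to every waiting instance.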

Mastra RAG with sources: advice needed

I have a bunch of Q&A pairs that I want to build a simple RAG agent on. I am using PostgreSQL + PGVector + Mastra. Bottom line: I want to get the sources as well, meaning I want to show the citations or the original Q&A pair in my UI (assistant-ui). It is super important, and with Mastra I don't see how it is possible right now. Is that true? ...
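
Pending a built-in answer, one pattern is to keep citation metadata alongside each embedded chunk so it travels with retrieval results to the UI. A self-contained sketch (toy vectors stand in for PGVector; the `Chunk` shape and `retrieve` helper are assumptions, not Mastra's API):

```typescript
// Sketch: store source metadata next to each embedded chunk so retrieval can
// return citations to the UI. Cosine similarity over toy vectors stands in
// for PGVector; Chunk and retrieve() are illustrative shapes, not Mastra APIs.

interface Chunk {
  text: string;
  vector: number[];
  source: { id: string; question: string }; // citation shown in the UI
}

function cosine(a: number[], b: number[]): number {
  const dot = a.reduce((s, v, i) => s + v * b[i], 0);
  const norm = (v: number[]) => Math.sqrt(v.reduce((s, x) => s + x * x, 0));
  return dot / (norm(a) * norm(b));
}

function retrieve(query: number[], store: Chunk[], k: number) {
  return [...store]
    .sort((x, y) => cosine(query, y.vector) - cosine(query, x.vector))
    .slice(0, k)
    .map((c) => ({ text: c.text, citation: c.source })); // sources travel with the hits
}

const store: Chunk[] = [
  { text: "Answer about billing", vector: [1, 0], source: { id: "qa-1", question: "How is billing done?" } },
  { text: "Answer about auth", vector: [0, 1], source: { id: "qa-2", question: "How does auth work?" } },
];
const hits = retrieve([0.9, 0.1], store, 1);
```

With PGVector the equivalent is storing the Q&A identifier in the chunk's metadata column and selecting it back with the similarity query, so the agent (or your UI layer) can render citations from the hit list.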

Mastra products

Are there going to be extra Mastra products, such as the agent builder (I saw it on GitHub), a no-code workflow builder, or something else? Or something I can embed in my website, like a no-code builder? Basically any extra product that is not the Mastra library and not Mastra Cloud...

Mastra serverless concurrency

Is Mastra serverless? Meaning, can I deploy my own Mastra server Docker container with 100 instances that are all connected to the same DB (PostgreSQL), and it's OK?

Working Memory Updates Not Always Additive

Hi there, while testing a fairly simple use case of building a profile-like schema in the agent working memory, I noticed that most of the time when the memory update is invoked (in the playground), information is lost from previous turns or chats. The agent turn making the update seems to replace the working memory that is already there. I was looking for a setting for PUT/PATCH-style logic (full replace vs. upsert) but could not find one. Is this intended behavior? In a YouTube video demo from Mastra, I saw a template working memory being added to in thread-scoped working memory. I wonder if schema-bound, resource-scoped working memory doesn't function the same way? ...
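
To illustrate the behavior being described, here is the difference between a full-replace (PUT-style) update and an additive merge (PATCH/upsert-style). The `mergeMemory` helper below is hypothetical, not a Mastra setting:

```typescript
// Sketch: full-replace (PUT-style) vs additive merge (PATCH/upsert-style)
// working-memory updates. mergeMemory() is a hypothetical helper.

type Profile = Record<string, unknown>;

function mergeMemory(existing: Profile, update: Profile): Profile {
  const out: Profile = { ...existing };
  for (const [key, value] of Object.entries(update)) {
    const prev = out[key];
    const bothObjects =
      prev !== null && typeof prev === "object" && !Array.isArray(prev) &&
      value !== null && typeof value === "object" && !Array.isArray(value);
    // recurse into nested objects; scalars and arrays are overwritten
    out[key] = bothObjects ? mergeMemory(prev as Profile, value as Profile) : value;
  }
  return out;
}

const existing: Profile = { name: "Ada", prefs: { theme: "dark" } };
const update: Profile = { prefs: { lang: "en" } };

const replaced = update;                      // what a full-replace turn does
const merged = mergeMemory(existing, update); // what an additive turn should do
```

If the agent is emitting the whole schema on every turn, anything it omits disappears under replace semantics, which matches the symptom described above.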

Why does the persisted Message Thread implementation ignore messages with role "system"?

For example, I access the Mastra agents via POST /agents/<agentId>/stream:

```
{
  // if these parameters are present, messages with role "system" are ignored
  "threadId": "sample-tread-id123433345435555",
  "resourceId": "sample-tread-id123433345435555",
  ...
```

WebSocket Connection Errors in Prod Environment

We are receiving the following error when using Drizzle to connect to Neon via a WebSocket: Error: All attempts to open a WebSocket to connect to the database failed. We never see this issue locally, so we're wondering if there is an issue with the environment....

Workflow inputs resulting from a zod intersection are not rendered in the playground

```
const type1 = z.object({ type: z.string().describe("the type") });
const workflowInput = z.intersection(type1, z.object({
  prop: z.string().describe("the prop")
...
```
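
A common workaround while intersections don't render is to flatten the two object shapes into a single object schema before using it as the workflow input; with zod that would be `type1.merge(...)` or `.extend(...)` instead of `z.intersection`. Here is a dependency-free sketch of the flattening idea, with plain descriptor objects standing in for zod schemas:

```typescript
// Sketch: flatten two object shapes into one flat input shape, rather than
// describing their combination as an intersection. Plain descriptor objects
// stand in for zod schemas here.

type Shape = Record<string, { kind: string; description: string }>;

function mergeShapes(a: Shape, b: Shape): Shape {
  return { ...a, ...b }; // one flat object, renderable as plain fields
}

const type1: Shape = { type: { kind: "string", description: "the type" } };
const extra: Shape = { prop: { kind: "string", description: "the prop" } };
const workflowInputShape = mergeShapes(type1, extra);
```

A flat object schema carries the same fields as the intersection but is trivially introspectable, which is presumably why the playground renders one and not the other.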

Mastra deploy error - cannot find @mastra/cloud

Hello, I'm getting the following error on https://cloud.mastra.ai/lorenzos-team/dashboard/projects/enough-sparse-engine/runtime/deployment/2f571fbb-1478-44bd-9b59-8d7a57668dff:

```
[INFO] [09/09/2025, 06:04:59 PM] - node:internal/modules/package_json_reader:266
...
```

experimental_output support for other models

I would love to see this supported for models other than OpenAI's. I am not sure, but I suspect that under the hood it depends on the structured_output option of OpenAI models, so naturally it doesn't work with other models like Gemini. ...

Memory (RAM) issues

I have a workflow where one step enriches every page on a website (sometimes 600 or more) with wider business-context information, and after that a forEach step handles each page of the website and examines the HTML/attributes. With a very large site, I got this error in the Mastra logs (attached). I have a few questions:...
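
On the memory question: one common mitigation is to process pages in fixed-size batches so only a bounded number are in flight at once, instead of fanning out over all 600+ pages simultaneously. A sketch of that idea (the `processBatches` helper is illustrative, not a Mastra forEach option):

```typescript
// Sketch: bound peak memory by handling pages in fixed-size batches.
// processBatches() is an illustrative helper, not a Mastra forEach option.

async function processBatches<T, R>(
  items: T[],
  batchSize: number,
  worker: (item: T) => Promise<R>,
): Promise<R[]> {
  const results: R[] = [];
  for (let i = 0; i < items.length; i += batchSize) {
    const batch = items.slice(i, i + batchSize);
    // only `batchSize` items are in flight (and resident) at a time
    results.push(...(await Promise.all(batch.map(worker))));
  }
  return results;
}

// toy usage: "enrich" five pages, two at a time
const enriched = await processBatches(["a", "b", "c", "d", "e"], 2, async (page) => page.toUpperCase());
```

The trade-off is throughput vs. resident memory: a batch size of 10-20 usually keeps RAM flat while still overlapping I/O, whereas an unbounded map holds every page's HTML in memory at once.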

How do Traces work? (Mastra Cloud/local)

I never see traces. Is that normal? Do I have to do anything? I only have logs.

Mastra vNext Network Stream Format Incompatibility with AI SDK

Problem: when using Mastra's new vNext networks with the AI SDK's useChat hook, stream parsing fails with the error: Failed to parse stream string. Invalid code {"type". Root cause:...

Logging in the console

Hey folks, moving the discussion here to avoid spamming general. What's the easiest path to logging the HTTP requests emitted by Mastra agents? By default the playground logs traces, but the LLM call seems tied to a "getMostRecentUserMessage" event, which only shows the model message, not the full HTTP request. A log drain should work, but I don't see docs for a console drain, and by default on a new app I don't see logs in the console. The default recommended logger is Pino, which I think will stick to outputting JSON data, not HTTP requests. The lack of HTTP logging is the #1 pain point I feel when using anything that wraps HTTP requests (looking at n8n, namely 😬 huge pain to debug), and I'd be eager to get rid of this issue in Mastra's context. Using mastra@latest (11.0.3 alpha), the log drain is not recognized despite Pino being set up. What is the expected behaviour? Should logs appear in the console?...
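
As a stopgap for seeing outgoing HTTP traffic, you can wrap the fetch implementation so every request is logged before it is sent. This is a generic interception sketch, not a Mastra log drain; the `logged` array stands in for a real logger such as Pino, and the URL is a placeholder:

```typescript
// Sketch: wrap fetch so every outgoing HTTP request is logged before it is
// sent. Generic interception pattern, not a Mastra log drain; `logged`
// stands in for a real logger such as Pino.

type FetchLike = (url: string, init?: { method?: string; body?: string }) => Promise<unknown>;

const logged: string[] = [];

function withHttpLogging(fetchImpl: FetchLike): FetchLike {
  return (url, init) => {
    logged.push(`${init?.method ?? "GET"} ${url}`); // runs synchronously, pre-flight
    return fetchImpl(url, init);
  };
}

// fake transport so the sketch is self-contained; swap in globalThis.fetch for real use
const fakeFetch: FetchLike = async () => ({ ok: true });
const fetchWithLogs = withHttpLogging(fakeFetch);
void fetchWithLogs("https://example.com/v1/chat", { method: "POST", body: "{}" });
```

Because the interception happens at the transport layer rather than in the framework's event stream, it captures the full request regardless of how the library above it structures its logging.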

Tool calling & Structured output

When calling my agent through the Mastra client SDK, it does not use the tools I provided, but with the same prompt in the playground chat it does call the tool. Why? ```ts...

Support for format: 'aisdk' in streamVNext for mastra/client-js

It appears that { format: 'aisdk' } is not supported in client-js.

Streaming reasoning w/ 0.14.1 and ai v5 SDK

I'm using o3 and added sendReasoning: true to toUIMessageStreamResponse, but I do not get any actual reasoning text. See the screenshot for the in-browser console view of the reasoning part. I did see this post, but it makes it seem like the issue is resolved....

Mastra as a Remote Agent for AG UI using CopilotKit seems not to be working?

Here are my dependencies:

```
"@ag-ui/mastra": "^0.0.8",
"@copilotkit/react-core": "1.9.3",
"@copilotkit/react-ui": "1.9.3",
"@copilotkit/runtime": "1.9.3",
...
```