MastraAI

The TypeScript Agent Framework

From the team that brought you Gatsby: prototype and productionize AI features with a modern JavaScript stack.


Token consumption observability

I am still struggling to understand how I can measure token consumption for a given thread per model in use, especially when we have workflows/tools that call other agents. Can we outline some documentation about this topic?
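As a reference point, here is a minimal sketch of reading usage for a single top-level agent call, assuming the AI SDK-style usage object on Mastra's generate result; the thread/resource ids and the per-model map are illustrative, and agents invoked from inside tools/workflows are exactly the uncovered part:

```
import { mastra } from './mastra'; // your Mastra instance

// Accumulate token usage per model for one thread (top-level calls only;
// agents called from inside tools/workflows are not captured here).
const usageByModel: Record<string, number> = {};

const agent = mastra.getAgent('orchestratorAgent');
const result = await agent.generate('Summarize the latest ticket', {
  threadId: 'thread-123', // illustrative ids
  resourceId: 'user-42',
});

const modelId = 'gpt-4o-mini'; // whichever model the agent is configured with
usageByModel[modelId] = (usageByModel[modelId] ?? 0) + (result.usage?.totalTokens ?? 0);
console.log(usageByModel);
```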

Response.response.messages reflects output processor changes; Response.steps[0].response.messages doesn't

I use a couple of custom output processors. I recently noticed that when I redact certain things, the redactions only show up in certain parts of the response object returned from the agent.

MASTRA_STORAGE_POSTGRES_STORE_INIT_FAILED + MASTRA_STORAGE_PG_STORE_CREATE_TABLE_FAILED

I am suddenly getting these errors when trying to fetch resources using the following:
```
const oa = mastra.getAgent("orchestratorAgent");
const memory = await oa.getMemory();
const thread = await memory?.query({...
```
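For comparison, a minimal sketch of the documented memory.query() call, assuming the @mastra/memory API; the thread id and selectBy window are illustrative:

```
import { mastra } from './mastra'; // your Mastra instance

const oa = mastra.getAgent('orchestratorAgent');
const memory = await oa.getMemory();

// query() reads the stored messages for one thread
const result = await memory?.query({
  threadId: 'thread-123', // illustrative id
  selectBy: { last: 20 }, // assumed option shape
});
console.log(result?.messages.length);
```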

How to dictate assistant message ID when using Agent + Memory?

We have a use case where we need to persist some data after generation in our DB, and want to associate it with the assistant message ID. It would be best if we could dictate that ID to begin with. But it would be okay if we could also just retrieve it from agent.stream. We're on 0.21.0. Using generateMessageId does not seem to work for persistence. I could not find a Memory class property that does this either....
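A sketch of the retrieval half, assuming the response messages carry an id field; db.saveArtifact and the surrounding variables are hypothetical:

```
const result = await agent.generate(userInput, {
  threadId: 'thread-123',
  resourceId: 'user-42',
});

// Take the id of the newest assistant message off the result (assumed to
// match the record Memory persisted).
const assistantMessage = result.response.messages
  .filter((m) => m.role === 'assistant')
  .at(-1);

await db.saveArtifact({
  messageId: assistantMessage?.id, // hypothetical persistence call
  payload: generatedData,
});
```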

Saving context on verbose tool calls

I had an idea to save some tokens on tool calling. I created a workflow that maps the output of verbose tool calls to a more compact and friendly text format. Apparently, the model receives the entire workflow execution log with all intermediate values passed. Issues identified: 1. Lack of documentation clarity: It is not clear from the documentation that the model receives the full execution log....
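One way to guarantee the model only sees the compact form is to do the mapping inside the tool boundary and return nothing else; a sketch, with fetchVerboseRecords as a hypothetical stand-in for the verbose work:

```
import { createTool } from '@mastra/core/tools';
import { z } from 'zod';

declare function fetchVerboseRecords(q: string): Promise<{ title: string }[]>; // hypothetical helper

export const compactLookupTool = createTool({
  id: 'compact-lookup',
  description: 'Looks up records and returns a compact summary',
  inputSchema: z.object({ query: z.string() }),
  outputSchema: z.object({ summary: z.string() }),
  execute: async ({ context }) => {
    const raw = await fetchVerboseRecords(context.query);
    // Only the mapped, compact text leaves the tool; intermediate values
    // never reach the model.
    return { summary: raw.map((r) => r.title).join('; ') };
  },
});
```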

"No traces found" in Workflows' Traces menu

I enabled observability as described in the docs' OTEL Tracing page (https://mastra.ai/en/docs/observability/otel-tracing) and I can browse workflow traces in the Observability tab of the Mastra UI, but when I am looking at a specific workflow and click the Traces button at the top of the screen, it says "No traces found". I am under the impression that these two should represent the same traces – is this not the case? If so, how do I enable traces in the workflow tab as well?...
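For reference, the shape of the telemetry block from the OTEL tracing page; the service name and endpoint are placeholders:

```
import { Mastra } from '@mastra/core';

export const mastra = new Mastra({
  // ...agents, workflows, storage...
  telemetry: {
    serviceName: 'my-app',
    enabled: true,
    sampling: { type: 'always_on' },
    export: {
      type: 'otlp',
      endpoint: 'http://localhost:4318', // your collector
    },
  },
});
```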

memory metadata

I've got a setup where I'm doing conversation agents with different personalities. The user can switch personalities mid-conversation. I'm wondering: is there a way to persist this information with the memory on messages so that I can display the personality name and maybe an avatar? I can't find any documentation (please point me there if there is some) on how to add metadata to messages and then use it with AI SDK useChat. Thanks!...
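Metadata does exist at the thread level; a sketch using createThread from @mastra/memory (the personality fields are illustrative, and carrying this through to AI SDK useChat is the unanswered part):

```
import { Memory } from '@mastra/memory';

const memory = new Memory(/* storage config */);

const thread = await memory.createThread({
  resourceId: 'user-42',
  title: 'Chat with Ada',
  metadata: { personality: 'ada', avatar: '/avatars/ada.png' }, // illustrative
});
```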

Possible to restart completed workflow from specific step?

Hi, I am just wondering if it's possible to restart a workflow that has already finished (without suspending) from a specific step? My use case: I have a workflow that is a pipeline, with each step building upon the output of the previous step. Sometimes the initial 4 steps produce great outputs, but the last 2 might be unsatisfactory for whatever reason, and it would be nice to be able to restart the workflow from the last two steps rather than running the entire thing again. I tried just calling resume on the workflow with the specific step, but it crashes the server with the error "This workflow run was not suspended"....
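One workaround sketch, absent a documented restart-from-step API: split the pipeline into head and tail workflows, persist the head's output, and re-run only the tail (names and schemas are illustrative):

```
import { createWorkflow, createStep } from '@mastra/core/workflows';
import { z } from 'zod';

const stepFive = createStep({
  id: 'step-five',
  inputSchema: z.object({ draft: z.string() }),
  outputSchema: z.object({ refined: z.string() }),
  execute: async ({ inputData }) => ({ refined: inputData.draft.trim() }),
});

// Tail workflow containing only the last steps; feed it the saved output of
// step four whenever they need to be re-run.
export const tailWorkflow = createWorkflow({
  id: 'tail-workflow',
  inputSchema: z.object({ draft: z.string() }),
  outputSchema: z.object({ refined: z.string() }),
})
  .then(stepFive)
  .commit();
```

Re-running the tail is then a normal run, e.g. const run = await tailWorkflow.createRunAsync(); await run.start({ inputData: savedStepFourOutput }); (method names vary across versions; older releases use createRun()).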

Issue adding metadata to memory threads for user/agent separation

Hi, we’re experiencing an issue when trying to add metadata to the memory threads. Our goal is to separate threads for each agent based on the user who is using it, so that every agent maintains isolated context per user. ...
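A common convention sketch: derive a deterministic thread id per (agent, user) pair and scope memory with resourceId; the id scheme and variables here are illustrative:

```
// One thread per (agent, user) pair keeps each agent's context isolated
const threadId = `agent:${agentId}:user:${userId}`;

const result = await agent.generate(input, {
  threadId,
  resourceId: userId, // groups all of this user's threads
});
```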

Can't access auth when calling an agent from MCP

Hello, I am facing a problem and would be happy to get your help! 🙂 Working with Express, I have created the following MCP:
```
export const startMcp = async (app: INestApplication) => {
  const mastraService = app.get(MastraService, { strict: false });
...
```
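A sketch of passing request auth into the agent call via RuntimeContext so tools can read it (the import path has moved between releases; older versions export it from @mastra/core/di):

```
import { RuntimeContext } from '@mastra/core/runtime-context';

const runtimeContext = new RuntimeContext();
runtimeContext.set('authToken', tokenFromRequest); // pulled from the MCP/HTTP request

const result = await agent.generate(messages, { runtimeContext });

// Inside a tool:
// execute: async ({ context, runtimeContext }) => {
//   const token = runtimeContext.get('authToken');
//   ...
// }
```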

Playground renders streaming response out of order

While streaming output with tool calls, the playground renders regular chat output into the initial message even if it happens after a tool call. This is clearly visible on thread reload, when everything is eventually rendered in the real order. Note: we are missing a Playground option in tags...

Playground when mastra is embedded

When I run Mastra embedded in an API route, as the assistant-ui docs suggest (https://www.assistant-ui.com/docs/runtimes/mastra/full-stack-integration), I cannot start the playground. Running mastra dev complains with:
INFO [2025-10-16 12:57:58.803 +0200] (Mastra CLI): [Mastra Dev] - Starting server...
file:///.../.mastra/output/index.mjs:27...
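A sketch of the layout mastra dev expects: a standalone src/mastra/index.ts that exports the instance, which the API route then imports, so the playground and the embedded route share one Mastra (agent name is illustrative):

```
// src/mastra/index.ts: the entry point that `mastra dev` builds and serves
import { Mastra } from '@mastra/core';
import { chatAgent } from './agents/chat-agent'; // illustrative agent

export const mastra = new Mastra({
  agents: { chatAgent },
});
```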

playground tries to load not existent workflow

My agent is defined as follows:
```
export const gitlabAgent = new Agent({
  name: 'gitlabAgent',
```

Agent Network Streaming Issue (AI SDK v5)

Hi guys, I have an old production app (50K+ users) that I am trying to convert to an AI assistant chat bot. I built a working version with the Vercel AI SDK, but decided to migrate to Mastra (greedy). I've been stuck for 3 weeks on agent.network() streaming; it is still broken after the 0.21 upgrade. I need guidance urgently; my fallback is to revert to the AI SDK, but that wastes 3 weeks of migration work. I really want to stay with Mastra. Setup: Mastra 0.21.0 + AI SDK v5, orchestrator with 8 sub-agents (Gmail, SEO, Sheets, Calendar, Ghost, Web, Voting, General), OpenRouter gpt-4o-mini ...

Progressive Text Streaming Broken in Workflow Steps After stream() Migration

Problem: After migrating from streamVNext() to stream(), text streaming from agents within workflow steps no longer propagates to the workflow's stream. Evidence:...
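A sketch for isolating where the chunks stop: consume textStream inside the step itself, which should still be progressive even if the outer workflow stream is not forwarding (agent name and schemas are illustrative):

```
import { createStep } from '@mastra/core/workflows';
import { z } from 'zod';

const draftStep = createStep({
  id: 'draft',
  inputSchema: z.object({ topic: z.string() }),
  outputSchema: z.object({ text: z.string() }),
  execute: async ({ inputData, mastra }) => {
    const agent = mastra.getAgent('writerAgent');
    const stream = await agent.stream(`Write about ${inputData.topic}`);

    let text = '';
    for await (const chunk of stream.textStream) {
      text += chunk; // log here to check whether chunks arrive progressively
    }
    return { text };
  },
});
```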

Why does LMStudio have documentation and Ollama does not?

I like the idea behind the project and I’ve started implementing it in my NextJS project, but I can’t find any documentation explaining how to properly integrate Ollama. There’s even an example showing how LMStudio is invoked in the Mastra agent (“lmstudio/openai/gpt-oss-20b”). I don’t understand why Ollama doesn’t get the same treatment....
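Pending official docs, a sketch of one common route: Ollama serves an OpenAI-compatible API at /v1, so the standard OpenAI provider can point at it (there is also a community ollama-ai-provider package):

```
import { createOpenAI } from '@ai-sdk/openai';
import { Agent } from '@mastra/core/agent';

const ollama = createOpenAI({
  baseURL: 'http://localhost:11434/v1',
  apiKey: 'ollama', // Ollama ignores the key, but the provider requires one
});

export const localAgent = new Agent({
  name: 'localAgent',
  instructions: 'You are a helpful local assistant.',
  model: ollama.chat('llama3.1'), // any model pulled into Ollama
});
```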

Mastra Cloud deployment failing with `ERR_MODULE_NOT_FOUND` for `instrumentation.mjs`

BLUF: Mastra Cloud deployment is failing with ERR_MODULE_NOT_FOUND for instrumentation.mjs; the file is not generated during the cloud build despite working locally. ---

updateWorkingMemory tool parameter mismatch

Tool Parameter Mismatch
LLM sends: { personalInfo: {...}, jobPreferences: {...} }
Validation expects: { memory: { personalInfo: {...}, jobPreferences: {...} } }
...
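A sketch of pinning the expected shape with an explicit working-memory schema (field names mirror the report above; whether this resolves the wrapping mismatch is untested):

```
import { Memory } from '@mastra/memory';
import { z } from 'zod';

const memory = new Memory({
  options: {
    workingMemory: {
      enabled: true,
      schema: z.object({
        personalInfo: z.object({ name: z.string().optional() }),
        jobPreferences: z.object({ role: z.string().optional() }),
      }),
    },
  },
});
```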

How to customize tracing metadata for Agents invoked with ChatRoute?

Is it possible to add metadata to the tracingContext when an agent is invoked using the ChatRoute supplied by @mastra/ai-sdk? The docs here mention how to update the span metadata, but I don't see how I could do this for an agent invoked this way....
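A sketch of the span-metadata update the docs describe, applied inside a tool that the ChatRoute-invoked agent calls; doing it at the route level itself is exactly the open question:

```
import { createTool } from '@mastra/core/tools';
import { z } from 'zod';

export const taggedTool = createTool({
  id: 'tagged-tool',
  description: 'Tags the current tracing span with request metadata',
  inputSchema: z.object({ query: z.string() }),
  outputSchema: z.object({ ok: z.boolean() }),
  execute: async ({ context, tracingContext }) => {
    tracingContext?.currentSpan?.update({
      metadata: { source: 'chat-route', query: context.query }, // illustrative
    });
    return { ok: true };
  },
});
```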