MastraAI
The TypeScript Agent Framework
From the team that brought you Gatsby: prototype and productionize AI features with a modern JavaScript stack.
Token consumption observability
Response.response.messages reflects output processor changes; Response.steps[0].response.messages doesn't
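To make the reported discrepancy concrete, here is a self-contained sketch using mock objects only. The shapes (`response.messages`, `steps[0].response.messages`) are taken from the report above; everything else (the types, the `processedMessages` helper, the sample content) is hypothetical and not the real Mastra API.

```typescript
// Mock shapes assumed from the report, not the real Mastra types.
type Message = { role: string; content: string };

interface MockStep { response: { messages: Message[] } }
interface MockResult {
  response: { messages: Message[] }; // reportedly includes output-processor edits
  steps: MockStep[];                 // reportedly still holds the unprocessed messages
}

// Workaround pattern: always read from the top level, which the report
// says is the copy that output processors have rewritten.
function processedMessages(result: MockResult): Message[] {
  return result.response.messages;
}

const result: MockResult = {
  response: { messages: [{ role: "assistant", content: "REDACTED" }] },
  steps: [{ response: { messages: [{ role: "assistant", content: "secret-token" }] } }],
};

console.log(processedMessages(result)[0].content); // "REDACTED"
```

If the report is accurate, code that inspects `steps[0].response.messages` (e.g. for logging) would see pre-processor content, which matters when processors are used for redaction.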
MASTRA_STORAGE_POSTGRES_STORE_INIT_FAILED + MASTRA_STORAGE_PG_STORE_CREATE_TABLE_FAILED
How to dictate assistant message ID when using Agent + Memory?
We're on 0.21.0. Using generateMessageId with agent.stream does not seem to work for persistence. I could not find a Memory class property that does this either....

Saving context on verbose tool calls
"No traces found" in Workflows' Traces menu

memory metadata
Possible to restart completed workflow from specific step?
Issue adding metadata to memory threads for user/agent separation
Playground observability renders broken link for workflow id

Can't access auth when calling an agent from MCP
Playground renders streaming response out of order
Playground when Mastra is embedded
Playground tries to load a non-existent workflow
Agent Network Streaming Issue (AI SDK v5)
agent.network() streaming is still broken after the 0.21 upgrade. Need guidance urgently: my option is to revert to AI SDK, but that wastes 3 weeks of migration work. Really want to stay with Mastra.
Setup
- Mastra 0.21.0 + AI SDK v5, orchestrator with 8 sub-agents (Gmail, SEO, Sheets, Calendar, Ghost, Web, Voting, General), OpenRouter gpt-4o-mini
...
Progressive Text Streaming Broken in Workflow Steps After stream() Migration
Why does LM Studio have documentation and Ollama does not?

Mastra Cloud deployment failing with `ERR_MODULE_NOT_FOUND` for `instrumentation.mjs`
The file is not being generated during the cloud build despite working locally.
---
...updateWorkingMemory tool parameter mismatch
Tool Parameter Mismatch
LLM sends: { personalInfo: {...}, jobPreferences: {...} }
Validation expects: { memory: { personalInfo: {...}, jobPreferences: {...} } }
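One way to see the mismatch is a defensive normalizer that wraps the LLM's flat arguments under the expected `memory` key before validation. The key name `memory` and the field names come from the report above; the `normalizeArgs` helper and everything else is a hypothetical sketch, not the real updateWorkingMemory implementation.

```typescript
type FlatArgs = Record<string, unknown>;

// If the LLM already nested its payload under `memory`, pass that through;
// otherwise wrap the flat object so validation sees the expected shape.
function normalizeArgs(args: FlatArgs): { memory: FlatArgs } {
  if ("memory" in args && typeof args.memory === "object" && args.memory !== null) {
    return { memory: args.memory as FlatArgs };
  }
  return { memory: args };
}

// Flat shape, as the LLM reportedly sends it:
const llmArgs: FlatArgs = { personalInfo: { name: "Ada" }, jobPreferences: { remote: true } };

console.log(JSON.stringify(normalizeArgs(llmArgs)));
// {"memory":{"personalInfo":{"name":"Ada"},"jobPreferences":{"remote":true}}}
```

The same normalizer is a no-op for models that do emit the nested shape, which makes it a reasonable interim shim while the schema/prompt mismatch is unresolved.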
How to customize tracing metadata for Agents invoked with ChatRoute?
How do I access tracingContext when an agent is invoked using the ChatRoute supplied by @mastra/ai-sdk? Docs here mention how to update the span metadata, but I don't see how I could do this for an agent invoked this way....
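The general pattern being asked for can be sketched independently of the library: wrap the route's handler so custom metadata is attached to the active span before delegating. All names here (`Span`, `withTracingMetadata`, the handler signature) are hypothetical stand-ins, not the @mastra/ai-sdk API; they only illustrate the wrapping approach.

```typescript
// Minimal stand-in for a tracing span that carries mutable metadata.
interface Span { metadata: Record<string, unknown> }

type Handler = (span: Span, input: string) => string;

// Returns a handler that injects metadata into the span, then delegates.
function withTracingMetadata(meta: Record<string, unknown>, handler: Handler): Handler {
  return (span, input) => {
    Object.assign(span.metadata, meta); // attach custom metadata to the active span
    return handler(span, input);
  };
}

const span: Span = { metadata: {} };
const wrapped = withTracingMetadata({ userId: "u-123" }, (_s, input) => `echo: ${input}`);

console.log(wrapped(span, "hi")); // "echo: hi"
console.log(span.metadata.userId); // "u-123"
```

Whether ChatRoute exposes a hook where such a wrapper could be applied is exactly the open question in the thread above.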