MastraAI


The TypeScript Agent Framework. From the team that brought you Gatsby: prototype and productionize AI features with a modern JavaScript stack.

Cannot use networks in playground due to resourceId missing

I have a simple network and I'm not able to use it in the playground due to this error: Unknown Error: TypeError: Cannot read properties of undefined (reading 'resourceId'). The docs say Mastra will pass resourceId automatically, but I am stuck...

AI tracing instance already registered

Using Mastra with Next.js and getting the above error whenever I hot reload in a development environment. Super annoying, as I have to restart the server every time I make any changes. Is there any way to disable this behaviour? Ideally I don't want to resort to patching the package.
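A common generic workaround for "already registered" errors under Next.js hot reload (the same pattern Prisma recommends for its client) is to cache the instance on globalThis so module re-evaluation reuses it instead of constructing a second one. This is a sketch, not Mastra's API: `createTracing` and `__tracing` are hypothetical placeholders for whatever sets up your tracing instance.

```typescript
// Hot-reload guard sketch: cache a singleton on globalThis so Next.js
// dev-mode module re-evaluation reuses the existing instance instead of
// registering a second one. `createTracing` is a placeholder, not a real
// Mastra export.
type Tracing = { id: number };

let instanceCount = 0;
function createTracing(): Tracing {
  instanceCount += 1;
  return { id: instanceCount };
}

// Augment globalThis with a hypothetical cache slot.
const globalStore = globalThis as typeof globalThis & { __tracing?: Tracing };

function getTracing(): Tracing {
  // Only construct (and register) once per process, even across hot reloads.
  if (!globalStore.__tracing) {
    globalStore.__tracing = createTracing();
  }
  return globalStore.__tracing;
}
```

Whether this helps depends on where the duplicate registration happens; if it occurs inside the library itself, a guard like this around your own instance creation is the usual place to start.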

Distinguish Step Level Retries / Workflow Level Retries

I am trying to find a way, when using retries inside a step, to distinguish between workflow-level retries and step-level retries...
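The distinction the question is after can be illustrated with a plain TypeScript sketch (this is not Mastra's retry API): a step-level retry re-runs only the failing step within the same run, while a workflow-level retry starts a fresh run, resetting the step counter. Threading both counters into the step is one way to tell them apart.

```typescript
// Sketch of two-level retries with explicit counters. A step can inspect
// `workflowRun` vs `stepAttempt` to know which level triggered the re-run.
type Attempt = { workflowRun: number; stepAttempt: number };

function runWithRetries(
  step: (a: Attempt) => void,
  stepRetries: number,
  workflowRetries: number,
): Attempt[] {
  const log: Attempt[] = [];
  for (let run = 1; run <= workflowRetries + 1; run++) {
    // Each workflow-level run gets a fresh step-attempt counter.
    for (let attempt = 1; attempt <= stepRetries + 1; attempt++) {
      const a: Attempt = { workflowRun: run, stepAttempt: attempt };
      log.push(a);
      try {
        step(a);
        return log; // success: no further retries at either level
      } catch {
        // failure: fall through to the next step attempt, then next run
      }
    }
  }
  return log;
}
```

With `stepRetries: 1` and `workflowRetries: 1`, an always-failing step executes four times: two step attempts in run 1, then two in run 2.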

Disable thinking/reasoning with Ollama provider (+ Ollama cloud)

Hey, I want to disable thinking/reasoning using ollama-ai-provider-v2 for an agent (new Agent(...). I am using Ollama Cloud + gpt-oss:20b-cloud. I tried the multiple CoreSystemMessages approach but it's not working: https://mastra.ai/en/reference/agents/agent#multiple-coresystemmessages...

type annotation error and property 'suspend' missing

Upgraded from "@mastra/core": "0.17.1" to "@mastra/core": "0.18.0" and getting these errors: ``` Property 'suspend' is missing in type '{ context: { industry: string; }; runtimeContext: RuntimeContext<unknown>; }' but required in type 'ToolExecutionContext<ZodObject<{ industry: ZodString; ...

Has anyone had success setting up Sessions & Users in Langfuse using Mastra

I'm trying to create observability with Sessions (i.e. Mastra threads) and Users (i.e. Mastra Resources). Has anyone had success with this so far? I was able to configure it properly to get basic LLM traces, but ideally I'd like to view these as threads so it's easier to track my users' conversations (internal company use case). I've found a bunch of different packages/frameworks for doing this (i.e. langfuse-vercel vs. mastra/langfuse vs. opentelemetry) and it's difficult to reconcile the documentation. If anyone else is also trying to solve this, let's collab...

How is writer.write supposed to be used?

Hey, I am trying to stream more progress data with Mastra using writer.write, but facing these issues: https://github.com/mastra-ai/mastra/issues/7782 Are there some streaming best practices or example repo I could follow?...
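Independent of Mastra's specific `writer.write` semantics, the underlying pattern is streaming typed progress chunks alongside the final result. A minimal web-streams sketch (names like `Chunk`, `produce`, and `consume` are illustrative, not Mastra's API) looks like this:

```typescript
// Generic progress-streaming sketch: the producer writes tagged chunks,
// and the consumer distinguishes progress updates from the final result
// by the `type` discriminator.
type Chunk =
  | { type: 'progress'; message: string }
  | { type: 'result'; data: string };

async function produce(writer: WritableStreamDefaultWriter<Chunk>) {
  // Emit intermediate progress events before the final payload.
  await writer.write({ type: 'progress', message: 'fetching data' });
  await writer.write({ type: 'progress', message: 'analyzing' });
  await writer.write({ type: 'result', data: 'done' });
  await writer.close();
}

async function consume(readable: ReadableStream<Chunk>): Promise<Chunk[]> {
  const seen: Chunk[] = [];
  const reader = readable.getReader();
  while (true) {
    const { done, value } = await reader.read();
    if (done || value === undefined) break;
    seen.push(value);
  }
  return seen;
}
```

Wiring the two ends through a `TransformStream` (identity transform) gives a working in-process pipe; in an HTTP setting the readable side would instead back the response body.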

Can't configure model settings when using GPT-5

Hi everyone, I have a question. Previously, when I was using GPT-4.1, I was able to configure Agent Settings in the Mastra playground. However, after switching to GPT-5, the option to configure agent settings is no longer available (I can only see the regular model settings like temperature, top-p, etc.). ...
useChat tool streaming

I'm using the AI SDK v5 useChat to set up an interface to Mastra agents. I'm also trying to use client-side tools to show React components based on tool actions. What I'm finding when I console.log the messages is that all I get is type == text, and I see no tool calls making it from the Mastra agent. I've got console logs in the tools and all are working fine, and I see the results in the output from the agent, but I have no tool calls coming across. I tried to follow Tool Streaming and expose them via the writer, but I just get type errors in the client....
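For context on what the client should be seeing: in AI SDK v5, each UIMessage carries a `parts` array, and tool activity arrives as parts whose `type` is prefixed with "tool-" (or "dynamic-tool" for tools not known statically). A small filter makes it easy to check whether any tool parts survive the transport; the `Part` type below is a simplified stand-in for the SDK's UIMessage part, so treat the exact shapes as assumptions.

```typescript
// Simplified stand-in for an AI SDK v5 UIMessage part.
type Part = { type: string; [key: string]: unknown };

// Collect tool-related parts from a message's parts array. If this always
// returns an empty array while text parts arrive, tool events are being
// dropped before they reach the client.
function toolParts(parts: Part[]): Part[] {
  return parts.filter(
    (p) => p.type.startsWith('tool-') || p.type === 'dynamic-tool',
  );
}
```

Logging `toolParts(message.parts)` per message is a quick way to confirm whether the problem is on the streaming side or in the client-side rendering.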

Traces of structured output parser end up in a different tracing context

This report is about Langfuse AI tracing specifically. I have a workflow step and use the step's tracingContext to call an agent's generateVNext; the agent's span is nicely placed within the workflow tracing context in Langfuse. However, if you use structured output, the trace ends up as a separate trace in Langfuse. I would prefer (and I think most if not all people would) that the structured output parser trace ended up within the context of the agent run span.

Can I assign provider options for input/output processors?

Hi, I often use some of the provided input/output processors; they are quite useful for simple guard features. I have a question about these processors: is there an option to assign provider options to the provided processors? https://mastra.ai/ja/docs/agents/input-processors https://mastra.ai/ja/docs/agents/output-processors I already checked the main-branch processor implementations, ...

Typescript Error while creating MCPServer with workflows

I have an MCP server defined in server.ts, then I'm registering everything in index.ts ...

Mastra Cloud Deployment Failing Due to @mastra/cloud Version Incompatibility

I can't deploy my project to Mastra Cloud because of a version conflict that might be caused by your deployment system. The Problem: Your packages have incompatible version requirements: - @mastra/cloud requires @mastra/core versions 0.10.7-0.14.0 ...

AI Tracing Storage Issues

In the documentation, it says that you should use PostgreSQL in production. When using AI tracing, you get the following error when trying to set your storage to a PostgreSQL solution: "AI tracing is not supported by this storage adapter (PostgresStore)". Is it OK to use LibSQL in production for AI tracing?...

AI tracing issue with Langfuse- prompt tracing not working

Running into an issue where the tracing is not properly correlating with the prompts, so I can't follow the linked generations in Langfuse. Here is the relevant Langfuse documentation for using the Vercel AI SDK, which does work. I have recreated the issue here: https://github.com/tommyOtsai/test-ai-tracing...

Put Agent Memory on a Different Model than Response?

Is there a way to put agent memory on a different model than the response? For example, I may want my agent to respond to a customer using a high quality / expensive / longer running model but I'd ideally like it to write to memory with a fast cheap model.

streamVNext converting image URLs to embedded content - need URLs as strings for tools

Problem: When I pass image URLs as file parts, the agent receives only the embedded image content for visual analysis (it treats it as “embedded directly” ?) instead of also retaining the original URL string. While embedding the content is fine for visual analysis, losing the URL is problematic. The agent should still have access to the original URL so it can pass it to other tools, rather than only seeing the embedded content and losing the source URL. What I need: ...

Mastra client js SDK types export missing

I am using this code: const myWorkflow: Workflow = await mastraClient.getWorkflow(workflowId); but the Workflow type is not exported by the Mastra client JS SDK...
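Until the type is exported upstream, a common TypeScript workaround is to derive the return type from the method itself with `Awaited<ReturnType<...>>`. The sketch below uses a hypothetical `FakeClient` in place of MastraClient so it stands alone; the pattern is what matters.

```typescript
// Stand-in for a client whose method return type isn't exported.
class FakeClient {
  async getWorkflow(id: string) {
    return { id, steps: [] as string[] };
  }
}

// Derive the unexported type from the method signature: the derived type
// is structurally whatever getWorkflow's promise resolves to.
type Workflow = Awaited<ReturnType<FakeClient['getWorkflow']>>;

// The derived alias can now be used in annotations as if it were exported.
const example: Workflow = { id: 'wf-1', steps: ['step-a'] };
```

Applied to the original code, that would be `type Workflow = Awaited<ReturnType<MastraClient['getWorkflow']>>`, assuming MastraClient itself is exported.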

Mastra React SDK

It is complicated to connect to a Mastra instance only via the client JS SDK; I need to write my own React hooks and manage that myself. It should be simple for Mastra to create a React SDK on top of the existing client JS SDK. This would REALLY help, please...