Raid55

Anthropic System Prompt Caching Regression

It seems that Anthropic prompt caching has regressed relative to the AI SDK implementation.

I read the source code for the Agent class in Mastra, and it seems that in multiple places providerOptions is not passed through for system-role messages.
The workaround would have been to pass the system-role message as part of the message list when .streamVNext is called, as below:
// agentDouble0Seven: a Mastra Agent instance defined elsewhere in the app
const stream = await agentDouble0Seven.streamVNext(
  [
    {
      role: "system",
      providerOptions: {
        anthropic: { cacheControl: { type: "ephemeral" } }, // what enables the "checkpoint" caching that anthropic uses
      },
      content: `...Long System Prompt over 1.2k tokens (minimum required for caching)...`,
    },
    {
      role: "user",
      content: "Do the cool AI thing you do",
    },
  ],
  {
    format: "aisdk",
  },
);

but it seems that Mastra layers its own logic over the AI SDK, and that logic drops the added providerOptions.
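
For context on what those providerOptions ultimately have to become on the wire: Anthropic's Messages API takes the system prompt as content blocks, and the cache checkpoint is a cache_control marker on a block. A minimal sketch with the raw Anthropic SDK (assuming @anthropic-ai/sdk; this bypasses both Mastra and the AI SDK):

import Anthropic from "@anthropic-ai/sdk";

const client = new Anthropic();

const response = await client.messages.create({
  model: "claude-sonnet-4-20250514",
  max_tokens: 1024,
  // The system prompt as a content block with the cache checkpoint marked
  // via cache_control; this is what providerOptions should translate into.
  system: [
    {
      type: "text",
      text: `...Long System Prompt over 1.2k tokens...`,
      cache_control: { type: "ephemeral" },
    },
  ],
  messages: [{ role: "user", content: "Do the cool AI thing you do" }],
});

// Anthropic reports cache activity in the usage block of the response.
console.log(response.usage.cache_creation_input_tokens);
console.log(response.usage.cache_read_input_tokens);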

To be sure this was not an AI SDK issue, I tried the following code, and it succeeded in caching my system prompt:
import { streamText } from "ai";
import { anthropic } from "@ai-sdk/anthropic";

const result = streamText({
  model: anthropic("claude-sonnet-4-20250514"),
  messages: [
    {
      role: "system",
      providerOptions: {
        anthropic: { cacheControl: { type: "ephemeral" } },
      },
      content: `...Long System Prompt over 1.2k tokens (minimum required for caching)...`,
    },
    {
      role: "user",
      content: "Do the cool AI thing you do",
    },
  ],
});
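
To confirm that the cache is actually hit, the provider metadata on the result can be inspected after the stream finishes. A minimal sketch, assuming the @ai-sdk/anthropic provider exposes Anthropic's cache accounting under providerMetadata.anthropic (field names may differ across provider versions):

// Consume the stream so the request actually completes.
for await (const chunk of result.textStream) {
  process.stdout.write(chunk);
}

// providerMetadata resolves once the response has finished.
const metadata = await result.providerMetadata;
// Tokens written to the cache on the first call, and tokens read from it
// on subsequent calls within the cache TTL.
console.log(metadata?.anthropic?.cacheCreationInputTokens);
console.log(metadata?.anthropic?.cacheReadInputTokens);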


I am using Mastra version 0.16.0 with AI SDK v5 (5.0.33).

I may be doing something wrong, or may have just misread the source code; there is a lot in the Agent class pertaining to memory, title generation, and other Mastra features, which may have led me to the wrong conclusion.

As a side note, it would also be nice to be able to provide this providerOption somewhere when initializing the Agent class. Since instructions is a required param, I can't leave it blank and rely on the workaround above.
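
Purely to illustrate the ask, something like the sketch below at construction time would cover it. The defaultSystemProviderOptions field is hypothetical, not a real Mastra option; name, instructions, and model are the normal Agent params:

import { Agent } from "@mastra/core/agent";
import { anthropic } from "@ai-sdk/anthropic";

const agentDouble0Seven = new Agent({
  name: "double-0-seven",
  instructions: `...Long System Prompt over 1.2k tokens...`,
  model: anthropic("claude-sonnet-4-20250514"),
  // Hypothetical option, does not exist in Mastra today: the Agent would
  // attach these providerOptions to the system message it builds from
  // the instructions above.
  defaultSystemProviderOptions: {
    anthropic: { cacheControl: { type: "ephemeral" } },
  },
});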