updateWorkingMemory tool parameter mismatch

Tool Parameter Mismatch
LLM sends: { personalInfo: {...}, jobPreferences: {...} }
Validation expects: { memory: { personalInfo: {...}, jobPreferences: {...} } }
Hey there, I am trying to use a working memory schema, and although my agent tried to make the tool call 4 times, each time it gives the same error. Here is my stream call with the memory config:
const stream = await agent.stream([
  { role: "user", content: userMessage }
], {
  memory: {
    thread: generateIdWithDate(phone),
    resource: phone,
  },
  tracingContext,
  runtimeContext,
  tracingOptions: {
    metadata: {
      userId: phone,
      sessionId: generateIdWithDate(phone),
      messageId: "<REDACTED_MESSAGE_ID>",
      agentName: "OnboardingAgent",
      routeType: "onboarding",
      userMessage,
      hasMediaContext: Boolean(hasMedia && mediaResult),
    },
  },
});
and here is the agent's memory config:
memory: new Memory({
  storage: getPostgresStore(),
  vector: getQdrantVectorStore(),
  embedder: openai.embedding("text-embedding-3-small"),
  options: {
    lastMessages: 20,
    semanticRecall: {
      topK: 3,
      messageRange: 1,
      scope: 'resource',
    },
    workingMemory: {
      enabled: true,
      scope: 'resource',
      schema: userMemorySchema,
    },
  },
}),
Any help is appreciated 🙏🏻
Mastra Triager
This issue was created from Discord post: https://discord.com/channels/1309558646228779139/1428127724446159000
_roamin_ (7d ago)
Hi @souvikinator! Are you using the latest Mastra packages?
souvikinator (OP, 7d ago)
using:
"@mastra/core": "^0.20.0",
"@mastra/memory": "^0.15.5",
"@mastra/core": "^0.20.0",
"@mastra/memory": "^0.15.5",
_roamin_ (6d ago)
Thanks! What model/LLM are you using?
souvikinator (OP, 6d ago)
gpt-5-chat-latest. Also wanted to know what's the recommended and reliable way to keep building up working memory. I noticed that with structured working memory, if I add some information and later the agent tries adding more information into working memory, it removes the info that I added. I don't think this happens with templated working memory. Checked logs again, and this validation error is quite frequent:
WARN [2025-10-17 12:27:11.820 +0530] (Mastra-Worker): Tool input validation failed for 'updateWorkingMemory'
toolName: "updateWorkingMemory"
errors: {
  "_errors": [],
  "memory": {
    "_errors": [
      "Required"
    ]
  }
}
args: {
  "personalInfo": {
    "location": "New York, USA"
  },
  "jobPreferences": {
    "workSetup": [
      "Remote"
    ]
  }
}
It did manage to make the correct tool call to update the memory on the 2nd try.
_roamin_ (6d ago)
That's interesting; I'm surprised gpt-5 isn't able to populate the JSON properly. I'll check with the team. When you say you add information to the working memory, do you mean manually?
souvikinator (OP, 6d ago)
Yes. I want to route to different agents in a workflow based on whether the user is onboarded or not. The onboarding agent is supposed to mark the user as onboarded in working memory.
_roamin_ (6d ago)
Yeah, I've seen people run into this issue before. Unfortunately we don't support updating the working memory manually very well at the moment; there are sometimes weird race conditions that mean the agent doesn't receive the correct version of the current state of the memory. We have plans to improve our memory. It was mainly designed to be used by the LLM, but then we saw people also want to be able to update it manually. When you get this error, LLMs usually retry the tool call; is it not doing that right now?
souvikinator (OP, 6d ago)
It does most of the time; on rare occasions it doesn't, and that's hard to recreate. Understood. What are the chances of the LLM replacing existing working memory while updating it with new information? From what I understand, templated working memory seems to be more reliable.
_roamin_ (6d ago)
Yes, that's because there is no validation happening on the template memory; it's just a string. When you use a schema, JSON validation is involved.
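For reference, the two styles differ only in the `workingMemory` options. A sketch of both shapes (the template text is made up; `userMemorySchema` is the user's own zod schema):

```typescript
// Template-based: working memory is a free-form Markdown string that the
// model rewrites wholesale; nothing is rejected, so updates never fail.
workingMemory: {
  enabled: true,
  template: `# User Profile
- Location:
- Work setup:`,
},

// Schema-based: every update must parse against the zod schema, so a
// malformed tool call (like unwrapped args) is rejected with a validation error.
workingMemory: {
  enabled: true,
  schema: userMemorySchema,
},
```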
souvikinator (OP, 4d ago)
Any idea on this one? I am constantly facing this problem where the agent, while updating the working memory, removes other data already in the working memory. Not sure how to go about it; any help is appreciated 🙏🏻 One possible solution (using structured working memory in my case): create dedicated tools for updating specific parts of the schema. Each tool would only add or modify information, never remove it, and the prompt would explicitly instruct the agent to use these tools for all memory updates.
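The "add or modify, never remove" idea can be sketched generically with a deep-merge helper. The function name and shape are hypothetical, not part of Mastra's API; a dedicated tool's `execute` could read the current working memory, merge the update into it, and write the result back, so fields the update doesn't mention survive.

```typescript
// JSON value type for working-memory contents.
type Json = null | boolean | number | string | Json[] | { [k: string]: Json };

// Hypothetical merge helper: nested objects are merged recursively, arrays
// are unioned (earlier entries are never dropped), and for scalar conflicts
// the update wins. Keys present only in `existing` are always preserved.
function mergeWorkingMemory(existing: Json, update: Json): Json {
  if (Array.isArray(existing) && Array.isArray(update)) {
    // Union via JSON serialization so duplicate entries collapse.
    const seen = new Set([...existing, ...update].map((v) => JSON.stringify(v)));
    return [...seen].map((v) => JSON.parse(v) as Json);
  }
  if (
    existing !== null && typeof existing === "object" && !Array.isArray(existing) &&
    update !== null && typeof update === "object" && !Array.isArray(update)
  ) {
    const out: { [k: string]: Json } = { ...existing };
    for (const [key, value] of Object.entries(update)) {
      out[key] = key in out ? mergeWorkingMemory(out[key], value) : value;
    }
    return out;
  }
  // Scalar (or mismatched-shape) conflict: take the new value.
  return update;
}
```

With this, an update like `{ personalInfo: { location: "New York, USA" } }` would overwrite only `location`, while sibling fields such as a previously stored name or `onboarded` flag remain intact.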
