Accessing messages inside a tool callback

Hey everyone, I’m new to Mastra and have a quick question. From what I understand, Mastra automatically passes messages to the agent (like in the config below). What I’m trying to figure out is: can I access those messages from within a tool callback? Here’s a simplified example:
const agent = new Agent({
  ...

  tools: async ({ runtimeContext, mastra }) => {
    // How can I read the messages here that were passed to createCompletion?
    return {};
  },

  memory: new Memory({
    storage: new PostgresStore({
      connectionString: process.env.DATABASE_URL,
    }),
    vector: new PgVector({
      connectionString: process.env.DATABASE_URL,
    }),
    embedder: openai.embedding("text-embedding-3-small"),
    options: {
      lastMessages: 2,
      semanticRecall: {
        topK: 3,
        messageRange: 2,
        scope: "thread",
        indexConfig: {
          type: "hnsw",
          metric: "dotproduct",
          m: 16,
          efConstruction: 64,
        },
      },
      threads: {
        generateTitle: true,
      },
    },
  }),
});
Any hints on how to tap into the messages inside the tool context would be appreciated!
6 Replies
_roamin_ · 5d ago
Hi @Adnan A.! You can access the messages via the second param of the execute handler, like this:
tools: async ({ runtimeContext, mastra }, options) => {
  // messages are in the options params
  options?.messages

  return {};
},

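That is, in the execute handler of a tool defined with createTool - something like the sketch below. The exact shape of the second options argument is an assumption here (AI SDK-style toolCallId / messages / abortSignal), so verify it against your Mastra version:

import { createTool } from "@mastra/core/tools";
import { z } from "zod";

// Sketch: a tool whose execute handler reads the in-flight conversation from
// its second argument. The options object is assumed to follow the AI SDK
// tool-execution options (toolCallId, messages, abortSignal).
const searchTool = createTool({
  id: "search",
  description: "Search the knowledge base",
  inputSchema: z.object({ query: z.string() }),
  execute: async ({ context }, options) => {
    console.log("messages seen by this tool call:", options?.messages);
    return { results: [] };
  },
});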

Adnan A. (OP) · 5d ago
Hi @Romain! Thanks for getting back to me. I tried what you suggested, like this:
tools: async ({ runtimeContext, mastra }, options) => {
  console.log('options', options?.messages);
  return {};
},
But in the console, options?.messages is undefined - actually, options itself is completely undefined. Then I tried:
tools: async (...args) => {
  console.log('args', args);
  return {};
},
And I got this output:
args [
  {
    runtimeContext: RuntimeContext { registry: Map(0) {} },
    mastra: Mastra {}
  }
]
Any idea how I can access the messages from here? I think there's a bit of a misunderstanding - you're referring to the execute handler of a tool built with createTool, but I'm talking about the tools property inside an Agent instance definition:
const agent = new Agent({
  tools: async ({ runtimeContext, mastra }) => {
    // i mean here..
    return {};
  },
});
Guria · 5d ago
The place you're asking about is evaluated at agent creation time, so there are no messages at that step yet. Am I understanding correctly that you want to dynamically override which tools are available based on the messages passed to the agent?
Adnan A. (OP) · 5d ago
Hi @Guria! That’s correct - I need access to both the messages coming from the request body and the ones retrieved by the memory instance (semantic matches). This would help me implement dynamic tool selection, since I have a large set of tools and passing them all normally costs a lot of input tokens. Do you have any suggestions or possible solutions for this?
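Concretely, the selection step I have in mind is just plain TypeScript once the messages are available from somewhere - something like this hypothetical helper (tool names, the keyword matching, and the message shape are only for illustration):

// Hypothetical helper: pick the subset of tools whose declared keywords
// appear in the latest user message. Names and matching strategy are
// illustrative only.
type ToolEntry = { tool: unknown; keywords: string[] };

function selectTools(
  allTools: Record<string, ToolEntry>,
  messages: { role: string; content: string }[],
): Record<string, unknown> {
  const lastUser = [...messages].reverse().find((m) => m.role === "user");
  const text = (lastUser?.content ?? "").toLowerCase();

  const selected: Record<string, unknown> = {};
  for (const [name, entry] of Object.entries(allTools)) {
    if (entry.keywords.some((keyword) => text.includes(keyword))) {
      selected[name] = entry.tool;
    }
  }
  return selected;
}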
Guria · 5d ago
Try checking out model middleware. It lets you manipulate the context at a low level before the params are sent to the actual model instance. But be careful.
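Roughly something like the sketch below, using the AI SDK's wrapLanguageModel. The names here follow AI SDK 4.x (wrapLanguageModel, LanguageModelV1Middleware) and may differ in other versions, and passing the wrapped model as the agent's model is an assumption to verify:

import { openai } from "@ai-sdk/openai";
import { wrapLanguageModel, type LanguageModelV1Middleware } from "ai";

// Sketch of language model middleware: inspect (or alter) the call params,
// including the assembled prompt messages, right before they reach the
// underlying model.
const logMessagesMiddleware: LanguageModelV1Middleware = {
  transformParams: async ({ params }) => {
    // params.prompt holds the fully assembled messages (system prompt,
    // memory recall, request messages) at this point.
    console.log("prompt sent to the model:", JSON.stringify(params.prompt));
    return params;
  },
};

const model = wrapLanguageModel({
  model: openai("gpt-4o-mini"),
  middleware: logMessagesMiddleware,
});

// The wrapped model can then be used wherever an AI SDK model is expected,
// e.g. as the agent's `model` (assumption: Mastra accepts any AI SDK model).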
Guria · 5d ago
Language Model Middleware
Learn how to use middleware to enhance the behavior of language models
