Way to fire and forget the tool call

Hi team, wanted to know if there is a way to fire and forget tool calls. We can do this with tools that we design, but for a tool like updateWorkingMemory it slows down the whole workflow. Is there a way to fire and forget?
5 Replies
Mastra Triager•2w ago
šŸ“ Created GitHub issue: https://github.com/mastra-ai/mastra/issues/10435 šŸ” If you're experiencing an error, please provide a minimal reproducible example whenever possible to help us resolve it quickly. šŸ™ Thank you for helping us improve Mastra!
Grayson•2w ago
Hey @souvikinator ! I don't think there is a built-in 'fire and forget' API, but if you have multiple independent tools that don't depend on each other, you can use .parallel() in workflows to execute them concurrently:
const step1 = createStep({
  id: "step-1",
  execute: async () => {
    // Some operation
  },
});

const step2 = createStep({
  id: "step-2",
  execute: async () => {
    // updateWorkingMemory or other tool
  },
});

const workflow = createWorkflow({...})
  .parallel([step1, step2]) // Both run concurrently
  .commit();

Alternatively (but I think less desirable) within a step's execute function, you could trigger an async operation without awaiting it:
const step = createStep({
  execute: async ({ inputData }) => {
    // Fire and forget - don't await
    Promise.resolve()
      .then(async () => {
        await updateWorkingMemory(...);
      })
      .catch((error) => {
        // Handle errors appropriately
        console.error('Background operation failed:', error);
      });
    // Continue immediately with the main flow
    return { result: "continued without waiting" };
  },
});
āš ļø Warning: This approach has trade-offs: No guarantee the operation completes Errors may not be properly handled Memory operations may not persist before workflow completion Definitely open to API feedback if the parallel solution isn't a good fit!
Abhi Aiyer•2w ago
updateWorkingMemory is auto-executed by an Agent; it doesn't fire and forget, though. I'm not sure it's a good idea to do so, because we need the operations to complete to correctly update working memory. @souvikinator what does your agent setup look like?
souvikinatorOP•2w ago
@Abhi Aiyer totally understandable. We have workflows with some pre-check steps, and we are using WhatsApp as our interface. A few things I have noticed:
1. During tool calls, the agent gets really slow. I understand it depends on the model (using Sonnet 4 here).
2. updateWorkingMemory takes a bit of time because it regenerates the whole input. Maybe it could update only specific parts (but I guess that comes with the overhead of using a tool to find the right place to update and then generating what to update it with).
Also, another problem is that the injected working memory prompt is super huge, so if we could control what goes into the prompt, we could save some input tokens.
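Until there is a built-in way to control what gets injected, one workaround for the prompt-size concern is to filter the working-memory template down to the sections you actually need before it reaches the model. This is a plain TypeScript sketch; `trimTemplate` is a hypothetical helper, not a Mastra API:

```typescript
// Hypothetical sketch: keep only selected "## " sections of a markdown
// working-memory template so the injected prompt stays small.
function trimTemplate(template: string, keep: string[]): string {
  // Split on "## " headings; each chunk starts with its section title.
  const sections = template.split(/^## /m).filter(Boolean);
  return sections
    .filter((s) => keep.some((title) => s.startsWith(title))) // rough prefix match
    .map((s) => "## " + s.trim())
    .join("\n\n");
}

const template = [
  "## Profile",
  "name, role",
  "",
  "## Preferences",
  "tone, language",
  "",
  "## History",
  "long running summary that eats input tokens",
].join("\n");

console.log(trimTemplate(template, ["Profile", "Preferences"]));
// prints only the Profile and Preferences sections
```

Note the prefix match is rough (e.g. "App" would also match "Appendix"); an exact-title comparison would be safer in real use.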
Abhi Aiyer•2w ago
I can see if I can find anything about 1. Do you have a large working memory template? Haha, yeah, the prompt is bigger, but honestly not that big; much of the size comes from the working memory template. There was a world where retrieving working memory was a tool call, but that was even slower.
