Is working memory essentially just an LLM call wrapped in a tool that runs at the start of every other LLM call? I've been looking into Mem0 and its Mastra integration, and weighing the pros and cons of using it (or Supermemory) versus Mastra's built-in memory. @_roamin_
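To make the question concrete, here is a minimal sketch of the pattern being asked about: a "working memory" blob that an extra LLM call updates before every main call, then gets injected into the system prompt. This is purely illustrative; `callLLM`, `updateWorkingMemory`, and `chat` are hypothetical names, not Mastra's or Mem0's actual API, and the stub just echoes input instead of hitting a model.

```typescript
type Message = { role: "system" | "user" | "assistant"; content: string };

// Stub standing in for a real model call (hypothetical; a real
// implementation would hit an LLM endpoint).
function callLLM(messages: Message[]): string {
  return `response to: ${messages[messages.length - 1].content}`;
}

let workingMemory = ""; // persisted across turns (a DB in a real system)

// Step 1: an extra LLM call merges the new message into the memory blob.
function updateWorkingMemory(userMessage: string): void {
  workingMemory = callLLM([
    { role: "system", content: "Merge the new message into the running user profile." },
    { role: "user", content: `memory: ${workingMemory}\nnew: ${userMessage}` },
  ]);
}

// Step 2: the main call, with memory injected into the system prompt.
function chat(userMessage: string): string {
  updateWorkingMemory(userMessage); // runs at the start of every call
  return callLLM([
    { role: "system", content: `Known about the user:\n${workingMemory}` },
    { role: "user", content: userMessage },
  ]);
}
```

Whether a given library does the update as a tool the model can call, as a fixed pre-call step like this, or asynchronously after the turn is exactly the kind of design difference the frameworks vary on.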