memory_messages table. However, there is no way to specify which index to use: it always defaults to ivfflat with cosine distance, even though the underlying vector code accepts these options. The memory also calls setupIndex regularly and tries to rebuild the index with a different size. Unfortunately, we have so many messages that this rebuild times out, and the chat regularly breaks because of it. It would be great if we could configure which index the memory uses; then we could switch to hnsw with inner product, since OpenAI embeddings are normalized and inner-product search would perform better.
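For reference, the index we would like the memory to create would look roughly like this in plain pgvector DDL. This is only a sketch: the column name `embedding` and the index name are assumptions, and the `m` / `ef_construction` values are pgvector's defaults, not tuned settings.

```sql
-- Sketch of the desired index, assuming the embedding column on
-- memory_messages is called "embedding".
-- hnsw instead of ivfflat, and vector_ip_ops (inner product)
-- instead of vector_cosine_ops, since the embeddings are normalized.
CREATE INDEX CONCURRENTLY memory_messages_embedding_hnsw_idx
  ON memory_messages
  USING hnsw (embedding vector_ip_ops)
  WITH (m = 16, ef_construction = 64);
```

Unlike ivfflat, hnsw has no list count tied to the table size, so there would also be no need to periodically rebuild the index as the number of messages grows.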