How much RAM should I allocate to JanusGraph when using the built-in Lucene index?
Apologies if this has been asked before. I'm running a minimalist setup of ScyllaDB + JanusGraph with the built-in Lucene index on a single machine with 32 GB of RAM (the database is small, roughly 30 GB). I'm trying to figure out the optimal memory split between JanusGraph and Scylla, primarily for maximum read performance.
The problem I've run into is that despite setting -Xmx to 8 GB, I see JanusGraph using up to 12.5 GB and running out of memory.
I am currently using JanusGraph 1.1 with db-cache-size=0.25, Lucene as the index backend, Xmx=8g, and all other settings at their defaults.
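For reference, the relevant part of my janusgraph.properties looks roughly like this (hostname and index directory are placeholders, not my actual values):

```
# Storage backend: ScyllaDB speaks the Cassandra CQL protocol
storage.backend=cql
storage.hostname=127.0.0.1

# Built-in Lucene index backend (runs inside the JanusGraph JVM)
index.search.backend=lucene
index.search.directory=/var/lib/janusgraph/index

# Database-level cache; values between 0 and 1 are a fraction of the heap
cache.db-cache=true
cache.db-cache-size=0.25
```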
I'm wondering: when using the built-in Lucene backend, does it share heap space with JanusGraph, and does increasing Xmx help? Is db-cache-size=0.25 too large now that I have doubled the default Xmx?
If you have experience with such a setup: how much extra memory should I reserve on top of Xmx as a safety margin, and what configuration would give JanusGraph and Lucene the best performance?
Thank you!
EDIT:
Slightly increasing Xmx and setting db-cache-size to a fixed value in bytes has improved stability.
JanusGraph still exceeds Xmx according to systemctl, but it never seems to exceed it by more than 6 GB, which I can account for.
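In case it helps anyone, this is roughly the change I made (the byte value is an example, not a recommendation — JanusGraph interprets cache.db-cache-size values greater than 1 as an absolute size in bytes):

```
# Fixed cache size in bytes instead of a fraction of the heap
cache.db-cache=true
cache.db-cache-size=2147483648
```

combined with a slightly larger -Xmx.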
Yes, the built-in Lucene index shares the same heap with JanusGraph.
> The problem I've run into is despite setting -Xmx to 8gb, I see Janusgraph using up to 12.5gb and running out of memory.

This is strange: the JVM should throw an OutOfMemoryError as soon as JanusGraph requests more than 8 GB of heap. How do you know JanusGraph was using 12.5 GB? Anyway, increasing -Xmx should help.
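One likely explanation for RSS exceeding -Xmx: the heap limit only caps Java objects, while direct ByteBuffers, metaspace, thread stacks, GC/JIT overhead, and Lucene's memory-mapped index files all live off-heap, and systemctl counts them toward the service. If you want a more predictable total footprint, you can cap the off-heap pools with standard HotSpot flags along these lines (values are illustrative, not tuned; this assumes your launch script reads a JAVA_OPTIONS-style variable):

```shell
# Cap heap, direct buffers, and metaspace so the total JVM footprint
# is closer to a known upper bound
export JAVA_OPTIONS="-Xmx8g -XX:MaxDirectMemorySize=1g -XX:MaxMetaspaceSize=256m"
```

Note that mmap'd index files show up in RSS but are reclaimable page cache, so RSS alone can overstate real memory pressure.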
Solution
Lowering the db-cache-size config a bit sounds reasonable and worth trying.