Memory leak?

Multiplayer, 1.21.8, "DistantHorizons-2.3.4-b-1.21.8-fabric-neoforge.jar". Hi, I'm not an expert at debugging memory issues; on the surface I only know how to run the heapdump command and load the result into a tool like Eclipse Memory Analyzer. I recently ran into a memory leak after upgrading from 1.21.5 -> 1.21.8. Initially I had no memory issues, but I did a clean install of Windows right before upgrading to 1.21.8, and I have not tested whether the issue is also present on 1.21.5 with a clean install. I ran /sparkc heapdump and imported the dump into Eclipse Memory Analyzer, which flagged DH as the leak suspect (see attached screenshot). I disabled the mod by removing it (disabling rendering did nothing), and now Minecraft runs smoothly without any memory leak. Could other mods be interfering with DH, or is this a DH issue?
spookysbottoms
spookysbottomsOP2w ago
Please also let me know if you want me to provide a list of server and client mods. I also noticed that the in-game text kept saying DH was loading chunks.
hardester
hardester2w ago
Cc @BackSun, @Jckf, @Builderb0y. In the meantime, send over your game logs while we wait for the mentioned users to give detailed answers about the Java inner workings. (Sorry for the ping in advance, we need someone knowledgeable in Java stuff.)
Builderb0y
Builderb0y2w ago
I wonder if concurrent hash maps ever shrink... probably not tbh. regular hash maps don't either last time I checked. this caused a lot of problems for me back in 1.10.2 as well. well, concurrent hash map is still just as unreadable as I remember it being, and its javadocs say the table can grow, but don't say it can shrink. so I'm assuming it can't shrink. thanks java.
hardester
hardester2w ago
It can't shrink? Or more like it won't shrink?
Builderb0y
Builderb0y2w ago
"no one ever implemented shrinking logic". if you fill the table with a million elements, and then empty it again, the backing table will still have a size of 1 million, even when every element in the table is null. fastutil's hash maps will shrink the table when it gets under a quarter full, to save memory. but they don't have a concurrent variant of their hash maps.
hardester
hardester2w ago
Oh, so effectively, if the element is null, new data can overwrite it or something? Or you're forced to append to the map?
Builderb0y
Builderb0y2w ago
are you familiar with how hash maps work in general?
hardester
hardester2w ago
I guess not, hence why I'm asking for your explanation.
Builderb0y
Builderb0y2w ago
I wrote a tl;dr and it was 2633 characters long. I guess I'll paste it in multiple parts.

tl;dr: you have a backing "table" which is just an array. let's say it starts at size 16. in java the default element for a newly-allocated array is null (or 0 or false, depending on the type of array, but in this case it's an array of objects, so the default element is null), which means that the array is basically a pointer to 16 null pointers in a row, somewhere in memory.

to insert a key/value pair, you first compute the "hash code" of the key. the hash code is an effectively random number which depends on the key in some way. importantly, the same key always produces the same hash code. the key doesn't need to be a number, it just needs to be possible to derive a number from it in some way. since we said our table had a size of 16, we want to compute the hash code modulo 16 next, and whatever number we land on, that's where we insert the key and value into the table. this will replace one of the null pairs with a non-null pair.

and to remove a pair, we do the same thing: compute the hash code of the key, modulo 16, and set the pair at that location in the table back to null again. setting one of the elements in the table to null or non-null does not change the size of the table. it's still 16 pointers in a row, just some of them are the null pointer, and others aren't.

there are many other nuances that arise from the question "what happens when 2 distinct keys happen to have the same hash code, modulo 16?". this is called a "collision", and there are a few different ways to deal with those cases, but this is a tl;dr, so I'm ignoring all such keys for now. under this assumption, if you insert 16 pairs into the table, every element in the table is now a non-null pointer, and you can't insert a 17th pair. to solve this, you allocate a new table (typically twice as large as the old table), and re-insert every key at the position of its hash code modulo 32 this time. in practice, the table size typically doubles when it gets about 75% full. this ties into the collision thing from earlier. and fastutil's hash maps have an additional constraint that if the table is ever less than 25% full, the backing table halves in size to save memory.

but concurrent hash maps don't do that. so if you insert a million pairs, the table will grow to a size of 1 or 2 million. and if you remove all the pairs, the table stays at size 1 or 2 million, only now each pair in the table is null. future inserts can of course overwrite these null pairs with non-null pairs, but once again, this doesn't change the table size. it's still 1 or 2 million entries long, and it's still wasting a huge amount of memory. the table in the screenshot above is taking up nearly 5 GB, so I reckon it had vastly more than a million pairs in it at some point. maybe a few hundred million.
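To make the tl;dr concrete, here is a toy sketch in Java of the same scheme: one flat backing table, the slot chosen by hash code modulo the table length, doubling at roughly 75% full, and no shrinking. Collisions are deliberately ignored, exactly as in the explanation above, so this is an illustration rather than a usable map:

```java
/** Toy illustration of the tl;dr above: flat table, hash % length indexing, grows but never shrinks.
 *  Collisions are deliberately ignored, so this is not a real map. */
public class ToyHashMap {
    private Object[] keys = new Object[16];   // the backing "table": 16 null slots to start
    private Object[] values = new Object[16];
    private int size;

    private int slot(Object key, int tableLength) {
        // hash code modulo table length picks the slot (real maps also mix the hash bits)
        return Math.floorMod(key.hashCode(), tableLength);
    }

    public void put(Object key, Object value) {
        if (size + 1 > keys.length * 3 / 4) grow(); // double once roughly 75% full
        int i = slot(key, keys.length);
        if (keys[i] == null) size++;                // (a collision would clobber another key here)
        keys[i] = key;
        values[i] = value;
    }

    public void remove(Object key) {
        int i = slot(key, keys.length);
        if (keys[i] != null) {
            keys[i] = null;    // the slot goes back to null...
            values[i] = null;
            size--;            // ...but the table itself stays the same length
        }
    }

    private void grow() {
        Object[] oldKeys = keys, oldValues = values;
        keys = new Object[oldKeys.length * 2];      // allocate a table twice as large
        values = new Object[oldValues.length * 2];
        size = 0;
        for (int i = 0; i < oldKeys.length; i++) {
            if (oldKeys[i] != null) {
                put(oldKeys[i], oldValues[i]);      // re-insert at hash % newLength
            }
        }
    }

    public int tableLength() { return keys.length; } // never decreases, no matter how much you remove
}
```

Removing every key afterwards leaves `tableLength()` exactly where the peak left it, which is the same behavior the screenshot shows at a much larger scale.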
spookysbottoms
spookysbottomsOP2w ago
oh wow I did not expect an answer to the thread lol
spookysbottoms
spookysbottomsOP2w ago
https://mclo.gs/Xx4oGQ6 Here's my logs. This was after I generated the heap dump.
spookysbottoms
spookysbottomsOP2w ago
https://mclo.gs/vHTCwm4 This is when I generated the heap dump through Spark.
hardester
hardester2w ago
The log is spammed with Distant Horizons warnings saying it received a heightmap that's outside the limit. Does the server have a world generator or datapack that changes the world height?
BackSun
BackSun2w ago
There have been a few reports that some of the ConcurrentHashMaps DH uses are never cleared and can cause leaks. I haven't had time to look into the issue any more than that, but I believe it is accurate. I think the issue is specifically due to how level references are saved when connecting to a multiplayer server.
Builderb0y
Builderb0y2w ago
well, it won't matter if it's cleared or not (see above). you need to literally construct a new map and let the old one get GC'd to reduce the memory of a concurrent hash map.
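As a sketch of that difference (the class and field names here are hypothetical, not DH's actual code):

```java
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.ConcurrentMap;

class LevelCache {
    // hypothetical field standing in for whatever per-level map might be leaking
    private volatile ConcurrentMap<Long, Object> sections = new ConcurrentHashMap<>();

    void resetButKeepTable() {
        sections.clear(); // the entries are gone, the oversized backing table is not
    }

    void resetAndReleaseTable() {
        // swapping in a fresh map lets the old one (and its huge table) be GC'd,
        // provided nothing else still holds a reference to it
        sections = new ConcurrentHashMap<>();
    }
}
```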
Jckf
Jckf2w ago
If the expected peak size is low, that might be fine
Builderb0y
Builderb0y2w ago
clearly the peak size is not low :P
Jckf
Jckf2w ago
Possibly only because of the leak
Builderb0y
Builderb0y2w ago
though the expected peak size might be low.
Jckf
Jckf2w ago
AFAIK the internal capacity of a ConcurrentHashMap only grows to fit the peak number of items currently in use. If you keep removing items once they're no longer needed, then the internal capacity is likely to remain relatively small
Builderb0y
Builderb0y2w ago
if you keep the peak size small, the table won't grow beyond what the peak requires.
Jckf
Jckf2w ago
Exactly. I can't really think of a scenario where recreating the hash map to reduce the footprint would be the proper way to deal with it. Unless you have a known bursty load that stays low most of the time, but sometimes peaks high.
Builderb0y
Builderb0y2w ago
yeah. ideally though you'd just keep the peak size small to begin with. re-allocating the map is only really an option when keeping the peak size small isn't an option.
BackSun
BackSun2w ago
To my knowledge the peak size isn’t the issue, just that items aren’t properly removed. But either way this is all conjecture until I have time to actually look into the problem.
spookysbottoms
spookysbottomsOP2w ago
If I remember correctly, I had installed Tectonic v3 because I really wanted increased heights, and it turned out there was an option in Tectonic's settings to enable that, so I enabled it. I tried regenerating the chunks in that specific place with Chunky, but it did not work. The only thing that changed was that the red text message that pops up when you can't place blocks above a certain y-level is no longer there. I can't place blocks beyond the original y-level either because it'll just "rubberband" me. I'm not sure how to regenerate heightmaps, if that's the correct term for it.
Sandman
Sandman2w ago
VRAM also has similar issues, likewise when connected to the server. I suspect the memory leak is also causing a VRAM leak.
spookysbottoms
spookysbottomsOP3d ago
Are there any updates on the issue? Would love to use DH, but unfortunately I'm limited because of this. 🥲
BackSun
BackSun3d ago
I have been unable to reproduce any memory leak issues.
spookysbottoms
spookysbottomsOP3d ago
I believe I've identified the source of my memory leak. I did a binary search of my mods and discovered that the Chunks Fade In mod was causing the issue. So far, no issues for the last 5-6 minutes in both singleplayer (I was able to reproduce the issue in singleplayer) and multiplayer. Disabling the mod in Mod Menu/its config did nothing; I had to remove it entirely. 10 minutes later, it is still normal. I wonder why Chunks Fade In is causing issues.
