✅ High memory consumption issue
Hey!
Scenario:
- Loop
- Download a blob (zip file) from an Azure Blob Storage container (download file size: 5 MB max.) (via a stream which gets disposed)
- Unzip the zip file (unzipped file size: 50 MB max.) (via a stream which gets disposed)
- Deserialize the file content (JSON) (via a stream which gets disposed)
- Do something with the deserialized data (rough sketch of the loop below)
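Roughly, the loop looks like this (simplified sketch, not the actual implementation; it assumes Azure.Storage.Blobs, System.IO.Compression and System.Text.Json, and BlobProcessor, MyPayload and Process(...) are placeholder names):
```cs
using System.Collections.Generic;
using System.IO;
using System.IO.Compression;
using System.Text.Json;
using System.Threading.Tasks;
using Azure.Storage.Blobs;

// Simplified sketch of the loop described above; all domain names are placeholders.
public sealed class BlobProcessor
{
    public async Task RunLoopAsync(BlobContainerClient containerClient, IEnumerable<string> blobNames)
    {
        foreach (var blobName in blobNames)
        {
            BlobClient blobClient = containerClient.GetBlobClient(blobName);

            // Download the ~5 MB zip as a stream; disposed deterministically.
            await using Stream zipStream = await blobClient.OpenReadAsync();

            // Unzip (up to ~50 MB uncompressed) and open the JSON entry as a stream.
            using var archive = new ZipArchive(zipStream, ZipArchiveMode.Read);
            await using Stream jsonStream = archive.Entries[0].Open();

            // Deserialize straight from the stream, then do something with the data.
            MyPayload? payload = await JsonSerializer.DeserializeAsync<MyPayload>(jsonStream);
            Process(payload);
        }
    }

    private void Process(MyPayload? payload) { /* do something with the deserialized data */ }
}

public sealed record MyPayload();
```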
Observation:
We noticed that this loop leads to high memory consumption. That is fine per se, but the memory usage does not really go down to a reasonable level, even after several hours. First guess: we are missing some Dispose() calls. I wasn't able to find any missing ones, even with 3rd-party Roslyn analyzers. Nothing. Another interesting fact: the memory usage stays quite high, but it does not look like a leak because it's not really growing over time. It just stays at that high level.
I also profiled the scenario with dotMemory: it's a lot of unmanaged memory that is causing the issue.
What does help? The GC is triggered when taking a memory snapshot, which causes a lot of the unmanaged memory to be released.
Actual question:
Are we doing something wrong, or can this just be normal behavior? Is this a valid case for calling GC.Collect()?
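For reference, the kind of forced collection I have in mind would be a one-off experiment like this (standard System.Runtime APIs; whether it's advisable is exactly what I'm asking):
```cs
// One-off experiment (not a recommendation): force a full, blocking, compacting
// collection including the large object heap, then check whether the working set drops.
using System;
using System.Runtime;

GCSettings.LargeObjectHeapCompactionMode = GCLargeObjectHeapCompactionMode.CompactOnce;
GC.Collect(GC.MaxGeneration, GCCollectionMode.Forced, blocking: true, compacting: true);
GC.WaitForPendingFinalizers();

Console.WriteLine($"Working set after forced GC: {Environment.WorkingSet / (1024 * 1024)} MB");
```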
What GPT says:

TIL: ASP.NET uses the server GC by default -> optimized for throughput, not necessarily for minimal memory usage
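Quick sanity check for which GC flavor is actually active (sketch; server GC can also be switched off via the ServerGarbageCollection MSBuild property or "System.GC.Server" in runtimeconfig.json if footprint matters more than throughput):
```cs
// Print which GC mode the process is actually running with.
using System;
using System.Runtime;

Console.WriteLine($"Server GC:    {GCSettings.IsServerGC}");
Console.WriteLine($"Latency mode: {GCSettings.LatencyMode}");
```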
The GC should be triggered when under memory pressure. Idk if the app actually gets the correct memory information, because it is deployed as a pod in a k8s cluster. I'll figure that out. We are already seeing OOM exceptions, but idk if the GC takes them into account
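Sketch of how I plan to check whether the runtime actually sees the pod limit (GC.GetGCMemoryInfo is available since .NET Core 3.0; if cgroup limits are picked up, the total should roughly match the pod limit rather than the node's physical memory):
```cs
// Does the GC see the container's memory limit?
using System;

GCMemoryInfo info = GC.GetGCMemoryInfo();
Console.WriteLine($"Total available:     {info.TotalAvailableMemoryBytes / (1024 * 1024)} MB");
Console.WriteLine($"High load threshold: {info.HighMemoryLoadThresholdBytes / (1024 * 1024)} MB");
Console.WriteLine($"Current memory load: {info.MemoryLoadBytes / (1024 * 1024)} MB");
Console.WriteLine($"Managed heap size:   {info.HeapSizeBytes / (1024 * 1024)} MB");
```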

This is also valuable information
I think this one is solved then. I'll leave it open for a bit in case someone wants to add some thoughts :>
Do you know what the pod's memory limit looks like? Is the stream ever realized into memory by chance?
If yes, then these sizes might end up in the LOH
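To illustrate what I mean about the LOH (numbers are made up to match the scenario): if the ~50 MB unzipped content were ever buffered into a byte[] or MemoryStream, the backing array would be far above the 85,000-byte threshold and land straight on the large object heap:
```cs
// Illustration only: any buffer >= 85,000 bytes goes to the large object heap,
// which the GC reports as generation 2 and does not compact by default.
using System;

byte[] small = new byte[80_000];      // below the threshold -> small object heap
byte[] large = new byte[50_000_000];  // ~50 MB buffer -> LOH

Console.WriteLine(GC.GetGeneration(small)); // 0
Console.WriteLine(GC.GetGeneration(large)); // 2 (LOH is reported as gen 2)
```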
Oh, I just saw the remark regarding unmanaged memory
"Do you know what the pod's memory limit looks like?"
The current limit is around 1.4 GB and I see a memory usage of like 3-4 GB max.
We did face issues with unmanaged memory and kafka at work a while ago, and ended up looking at the heap in windbg
In our case we potentially got lucky because we actually found some readable segments pointing to a libkafka issue
Yeah, that's a good point. A bug in the 3rd-party NuGet packages could have been an alternative explanation 😅
To be fair: the operations are quite complex, so I wouldn't be surprised if there is an issue in the implementation somewhere
If you're still trying to dig in, I'd definitely get a dump first and start exploring in windbg with
!heap -s
With UST enabled you should also get stack traces
Thanks! GPT also recommended some tools:
Windows Performance Toolkit (WPA + WPR)
Not familiar with those sadly 😄
Me neither
dotMemory can display the amount of unmanaged memory, but nothing more than that
They suggest using dotTrace with "native memory allocation", apparently
Learned a lot in the last 2 hours 😅
Thanks a lot, Mr. Sossenbinder! Case closed :>