C# · 6d ago
Spiced

✅ Is there a more efficient alternative to ConcurrentQueue?

https://github.com/NoubarKay/MiniOps.Nucleus I have provided the code. You can find the exact usage inside RequestStore. It basically stores a list of requests until I flush them out after a second. Any help would be appreciated!
mtreit
mtreit6d ago
If you want to support multiple producer / single consumer semantics (like, you can have a dedicated background task for flushing the in-memory queue), then BlockingCollection or Channels might be an even nicer fit.
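A minimal sketch of the Channels approach described above, for a multi-producer / single-consumer metrics buffer. The names `Record`, `FlushLoopAsync`, and `InsertBatch` are illustrative, not from the MiniOps.Nucleus repo:

```csharp
using System.Threading.Channels;

// Unbounded MPSC channel: many request threads write, one background task drains.
var channel = Channel.CreateUnbounded<string>(new UnboundedChannelOptions
{
    SingleReader = true,   // one dedicated flusher task
    SingleWriter = false   // many concurrent request handlers
});

// Producer side, called from request-handling code (hypothetical name):
void Record(string metric) => channel.Writer.TryWrite(metric);

// Consumer side: wakes as soon as data arrives (no polling), then drains a batch.
async Task FlushLoopAsync(CancellationToken ct)
{
    var batch = new List<string>(capacity: 1024);
    while (await channel.Reader.WaitToReadAsync(ct))
    {
        while (batch.Count < 1024 && channel.Reader.TryRead(out var item))
            batch.Add(item);
        // InsertBatch(batch);  // hypothetical: your Dapper bulk insert goes here
        batch.Clear();
    }
}
```

Because `WaitToReadAsync` only completes when an item is written, the flusher does no work and burns no CPU while the queue is empty.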
Spiced
SpicedOP6d ago
Currently I have background jobs that run every x seconds and flush memory into the DB (insert the rows from memory into the DB). Would Channels be a better fit? More efficient and faster?
mtreit
mtreit6d ago
Also your file name is IRequestStore.cs but there's no interface and instead it's the actual RequestStore class implementation.
Spiced
SpicedOP6d ago
Yes that will be fixed in the upcoming release
mtreit
mtreit6d ago
It would probably be faster, yes, because there is no polling.
Spiced
SpicedOP6d ago
Hmm, but what if I actually do want to poll, like every second, to send data to the streaming line graph? How would that work?
mtreit
mtreit6d ago
Why not send data as soon as it's available instead of waiting for one second? I don't know your requirements.
Spiced
SpicedOP6d ago
Okay, but what would I do if I get like 500 requests a second? Wouldn't that be too much?
mtreit
mtreit6d ago
I mean, if what you currently have works fine I wouldn't spend a lot of time trying to optimize it until you know it's a bottleneck. If you are doing proper batching you should be able to handle 500 requests per-second easily I think. If anything, processing the results faster is less likely to cause issues at high RPS because you don't let a giant queue build up.
Spiced
SpicedOP6d ago
I found the bottleneck around 50 requests a minute. There SignalR primarily starts to lag, and the queue lags too, which I know for a fact isn't from my inserts because they take approximately 2 ms or less.
mtreit
mtreit6d ago
50 requests per minute is so little that if you have a bottleneck there...you have a bug somewhere.
Spiced
SpicedOP6d ago
I think it's more of a SignalR thing honestly? Because I know it's not my inserts or fetches. What do you honestly think?
mtreit
mtreit6d ago
I have not used SignalR but I thought it supports something like a few thousand requests per second per CPU core. That is, it should be able to support a huge number of RPS. If you are achieving less than one request per-second I don't think it's SignalR. You have some fundamental bug or flaw in your design. You need to profile the code or instrument it to log where all of the time is being spent. I would start with a performance profiler like PerfView or the one Visual Studio has to see if it shows any obvious hot spots.
Spiced
SpicedOP6d ago
I can use dotProfile, but what would be the most probable issue?
mtreit
mtreit6d ago
Impossible to say off the top of my head. You need to answer the question: where is the time actually being spent? I would write a stress test app and use that to help investigate.
Spiced
SpicedOP6d ago
Okay, so what I can do is just shut off SignalR and see if there is another bottleneck?
mtreit
mtreit6d ago
Give it a try.
Spiced
SpicedOP6d ago
So it was definitely not from my inserts or SignalR. What I mainly changed was keeping my DB connection open instead of creating a new one each second. That made the whole system very much more stable in handling HUGE amounts of data.
mtreit
mtreit6d ago
Database connections should use pooling already. You generally shouldn't have to deal with that yourself.
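To illustrate the pooling point: with ADO.NET providers, pooling is on by default, so opening a "new" connection per flush is cheap because `Dispose` returns the connection to the pool rather than closing the socket. A sketch with Dapper (the table, columns, and connection string are hypothetical, not from the repo):

```csharp
using Dapper;
using Microsoft.Data.SqlClient;

// "New" connection per batch is fine when pooling is enabled (the default):
// Dispose hands the underlying connection back to the pool for reuse.
const string sql =
    "INSERT INTO RequestMetrics (Path, DurationMs) VALUES (@Path, @DurationMs)";

async Task InsertBatchAsync(string connectionString, IEnumerable<object> batch)
{
    await using var conn = new SqlConnection(connectionString);
    // Dapper expands an IEnumerable parameter into one command per element.
    await conn.ExecuteAsync(sql, batch);
}
```

Holding a single long-lived connection can also work, but then you own reconnect logic on transient failures; the pool handles that for you.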
Spiced
SpicedOP6d ago
Hmm, that was an oversight on my end since I'm using Dapper. But in general, using a single connection made everything much, much more stable.
mtreit
mtreit6d ago
Glad you tracked it down 🙂
Spiced
SpicedOP6d ago
That was one of many issues I want to improve. Although we are at 400 MB of memory. That isn't good, no?
mtreit
mtreit6d ago
Nah it's probably totally fine.
Spiced
SpicedOP6d ago
Well, 10k requests a second, with 400 MB of memory, with absolutely zero lag.
mtreit
mtreit6d ago
GC tends to hold onto memory to re-use it. It makes the memory usage look higher in tools like Task Manager but it's often the case that either the actual memory is "free" as far as GC is concerned, or you have objects that survived to Gen2 and GC hasn't bothered to run a full Gen2 collection since there is not memory pressure. Again, I recommend stress testing the service. Hammer it with a hundred parallel requests in a loop for an hour. If you don't see memory usage growing unbounded you probably don't have any memory "leaks" or the like to worry about.
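The stress test suggested above can be sketched as a small harness: run a hit function from many parallel workers for a fixed duration, then watch memory in Task Manager or a profiler. The endpoint URL and worker count are assumptions for illustration:

```csharp
// Run `hit` from many parallel workers until the duration elapses.
async Task StressAsync(Func<CancellationToken, Task> hit, int workers, TimeSpan duration)
{
    using var cts = new CancellationTokenSource(duration);
    await Parallel.ForEachAsync(
        Enumerable.Range(0, workers),
        async (_, _) =>
        {
            while (!cts.IsCancellationRequested)
                await hit(cts.Token);  // one request per loop iteration
        });
}

// Example wiring against a hypothetical local endpoint:
// using var client = new HttpClient();
// await StressAsync(ct => client.GetAsync("http://localhost:5000/ping", ct),
//                   workers: 100, duration: TimeSpan.FromHours(1));
```

If memory rises, then drops when GC kicks in, and stays bounded over the run, there is likely no leak to worry about.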
Spiced
SpicedOP6d ago
So just keep pushing it with 10k requests for an hour and checking if it passes 400 MB?
mtreit
mtreit6d ago
Basically. You'll probably see the memory usage go up and then back down when GC kicks in. You shouldn't see it growing in a linear line that never comes down until you run out of memory. If you are worried about memory usage, it's a good test.
Spiced
SpicedOP6d ago
I want to make this lightweight AND efficient, so yes, I kind of am :p
Spiced
SpicedOP6d ago
So this is normal?
mtreit
mtreit6d ago
Is what normal? Using 384 MB of memory? Really depends on what your service is doing.
Spiced
SpicedOP6d ago
No, I meant the ramp up then down.
mtreit
mtreit6d ago
Yes that usually shows when GC kicks in.
Spiced
SpicedOP6d ago
Okay, so what I did notice is that after a solid 5 minutes at 7k requests a minute, SOMETIMES it lags for just a couple hundred milliseconds, but other than that it seems stable. I'm guessing at that point it's the ConcurrentQueue that is the bottleneck.
mtreit
mtreit6d ago
Don't guess. Measure 🙂
Spiced
SpicedOP6d ago
Wait, how can I measure the ConcurrentQueue?
mtreit
mtreit6d ago
Use a profiler. See if operations on the queue show up as a bottleneck. Or instrument the code with your own Stopwatch measurements and log if it's ever above some threshold. Again, measure where the time is actually being spent: either with a profiler, or with instrumented performance measurements inside the code.
Spiced
SpicedOP6d ago
I'm using Stopwatch right now to check where the slowness is.
mtreit
mtreit6d ago
I once shared a relevant anecdote in a blog post: https://mtreit.com/programming,/performance,/c%23/2021/12/10/FakeOptimization.html
mtreit
mtreit6d ago
It's surprising what you find when you start actually measuring things.
Spiced
SpicedOP6d ago
Oh wow, and did you use the profiler or Stopwatch in this specific scenario?
mtreit
mtreit6d ago
I used Stopwatch inside the code and kept adding more and more measurements to more and more inner methods until I found it.
Spiced
SpicedOP6d ago
Ah damn, that's smart. Okay, I put stopwatches where I think the issue might be. I'm going to REALLY push it for a solid hour and see where the bottleneck is.
mtreit
mtreit6d ago
A profiler might have also found it but I find profilers are usually good when you have a CPU bottleneck, but often the bottleneck is actually in I/O or other things like blocking on a lock that doesn't necessarily show up in a CPU profiler. If you are just trying to measure where the time is spent, an hour is probably massive overkill 🙂
Spiced
SpicedOP6d ago
Well the thing is, the logging itself is sometimes late by a couple hundred ms, but in the logs it says there is no delay. Could it be the background job itself?
mtreit
mtreit6d ago
That sounds like a GC pause.
Spiced
SpicedOP6d ago
Thank you
mtreit
mtreit6d ago
You can run PerfView to get a GC trace to see how long the GC pauses are taking.
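PerfView gives the full GC trace; as a lighter first check, .NET 7 and later also expose cumulative GC pause time directly from the runtime:

```csharp
// .NET 7+: the runtime tracks total time managed threads were paused for GC.
TimeSpan pause = GC.GetTotalPauseDuration();
Console.WriteLine(
    $"Gen0/1/2 collections: {GC.CollectionCount(0)}/{GC.CollectionCount(1)}/{GC.CollectionCount(2)}");
Console.WriteLine($"Total GC pause so far: {pause.TotalMilliseconds:F1} ms");
```

Sampling this before and after a load burst shows how much of the wall-clock stall is attributable to GC, without attaching a profiler.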
Spiced
SpicedOP6d ago
Now, what I did was start sending my SignalR updates before my Dapper inserts, even though my Dapper pauses are like 100 ms at 20k requests a second. That means the dashboard gets real-time updates in a way that doesn't cause lag at all, which honestly feels amazing.
Unknown User
Unknown User6d ago
Message Not Public
Spiced
SpicedOP4d ago
But in that case, @TeBeCo, how will I batch aggregates for the graph in real time?
Unknown User
Unknown User4d ago
Message Not Public