Asymmetric latency from local client to CF worker
I have a very simple Worker running as a load balancer between two origins that serve the same app. I don't think the implementation details matter too much; my issue seems to lie outside of the Worker.
After getting very bad latency from my test script, which sends 100 requests in succession to the Worker, on a whim I tried measuring the latency between my client sending a request and the Cloudflare Worker receiving it (before any processing). I did the same on the way back (after processing): the latency between the Worker sending a response and my script receiving it.
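For reference, this is roughly how the script computes the two halves. The URL and header names below are placeholders for my real setup; the Worker just echoes its own Date.now() in those headers, once when it receives the request and once just before it responds.

```python
import time
import requests

# Placeholders for my actual Worker URL and the timestamp headers it echoes.
WORKER_URL = "https://my-load-balancer.example.workers.dev/"
N_REQUESTS = 100

send_ms, recv_ms = [], []

for _ in range(N_REQUESTS):
    t_send = time.time() * 1000            # local wall clock, in ms
    resp = requests.get(WORKER_URL)
    t_recv = time.time() * 1000

    worker_received = float(resp.headers["X-Worker-Received"])    # placeholder header name
    worker_responded = float(resp.headers["X-Worker-Responded"])  # placeholder header name

    send_ms.append(worker_received - t_send)    # script -> Worker half
    recv_ms.append(t_recv - worker_responded)   # Worker -> script half

    time.sleep(1)  # roughly 1 request per second

print("avg send latency (ms):", sum(send_ms) / len(send_ms))
print("avg recv latency (ms):", sum(recv_ms) / len(recv_ms))
```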
The results were quite surprising: the latency on send (script -> Worker) and on receive (Worker -> script) are almost an order of magnitude apart.
I've never seen this kind of asymmetry before, so it's quite concerning. The timers don't include any processing time, so they really should just be reporting 'over the wire' latency. Any ideas for how to debug this?
My first thought was cold-start issues, but the requests are sent one after another from the exact same machine, so I'd expect them to be processed by the same Worker instance (although I don't know if there's a way to tell?). All in, it's about 1 request per second for 100 seconds.
Some context about the runtime:
I'm in South Africa, running a Python script locally that sends off requests and collects some metrics so I can do some end-to-end performance testing. The Worker is very simple: on receiving a request, it does a lookup against a KV store for routing rules, then chooses one of two origins to route the request to. Every request is identical.
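If it helps, the routing decision is essentially the following, sketched in Python for readability rather than as the real Worker code; the origin URLs, the rules format, and the weighting are placeholders, and in the real Worker the rules come from a KV lookup on every request.

```python
import json
import random

# Placeholder origins; in the real Worker these come from the routing
# rules stored in a KV namespace.
ORIGINS = ["https://origin-a.example.com", "https://origin-b.example.com"]

def choose_origin(rules_json: str) -> str:
    """Pick one of the two origins from the (placeholder) routing rules."""
    rules = json.loads(rules_json)
    weight_a = float(rules.get("weight_a", 0.5))
    return ORIGINS[0] if random.random() < weight_a else ORIGINS[1]

# e.g. rules that send roughly 70% of the identical requests to origin A
print(choose_origin('{"weight_a": 0.7}'))
```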