Yes. I just added logging, and the saving-to-queue time is 294ms.
It started fluctuating too; I tried multiple requests ranging from 60ms to 700ms.
John Spurlock 4w ago
hmm, yea it sounds like maybe the sending worker is not as close to that queue as would be ideal, perhaps a/b test with a non-smart worker and/or a new queue?
usualdev (OP) 4w ago
Sure doing that now
John Spurlock 4w ago
you've made me curious about what kind of distribution on those calls is normal. I'm tempted to put some measuring code around my highest-traffic queues, but also don't want to touch the code around my highest-traffic queues : )
usualdev (OP) 4w ago
Here is what I did: removed Smart Placement. Response times started improving, but I'm not sure whether that was related, as I can still see longer response times.
usualdev (OP) 4w ago
The message size is 1 KB; I will try a reduced size to see if that makes a difference.
John Spurlock 4w ago
you doing this from a DO? otherwise, if you are just using wall time, keep in mind that the entry workers might be completely different. Best to snapshot the time before/after the .send and log it to the console or AE (Workers Analytics Engine).
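For illustration, a minimal sketch of what that could look like in a producer worker, assuming a hypothetical QUEUE_METRICS Analytics Engine binding alongside the TRACKING_QUEUE queue binding (just one way to record the measurement, not an official pattern):

export default {
  async fetch(request, env) {
    const data = await request.json();
    const startTime = Date.now();
    await env.TRACKING_QUEUE.send(data);
    const elapsedMs = Date.now() - startTime;
    // Write the latency to Workers Analytics Engine for later querying;
    // blobs hold string labels, doubles hold numeric values.
    env.QUEUE_METRICS.writeDataPoint({
      blobs: ["queue-send"],
      doubles: [elapsedMs],
      indexes: ["tracking"],
    });
    console.log(`saving to queue time is ${elapsedMs} ms`);
    return new Response("ok");
  },
};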
usualdev (OP) 4w ago
What is a DO? I am logging the time to send to the queue, which is in line with the response time I see in the dashboard.
saving to queue time is 198 ms
Code snippet:
const startTime = Date.now();
await env.TRACKING_QUEUE.send(data);
console.log(`saving to queue time is ${Date.now() - startTime} ms`);
John Spurlock 4w ago
oh, a durable object: https://developers.cloudflare.com/durable-objects/ These are kind of singleton workers, like mini-servers in the CF architecture. If you happened to always be sending from a DO, that would isolate out any cold start for entry workers, but of course sometimes that's not possible/desired. One of the nice things about the queue bindings in entry workers is that you can publish from any worker anywhere, no matter where it's running. It might be interesting to log the colo/instance id of the entry worker as part of your analysis.
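A rough sketch of that extra logging, assuming the same TRACKING_QUEUE producer (INSTANCE_ID here is just an illustrative per-isolate random value, not an official API):

// Generated once per isolate, so it loosely identifies which entry worker instance served the request.
const INSTANCE_ID = crypto.randomUUID();

export default {
  async fetch(request, env) {
    const data = await request.json();
    const startTime = Date.now();
    await env.TRACKING_QUEUE.send(data);
    // request.cf.colo is the IATA code of the Cloudflare location that received the request.
    console.log(`colo=${request.cf?.colo} instance=${INSTANCE_ID} saving to queue time is ${Date.now() - startTime} ms`);
    return new Response("ok");
  },
};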
usualdev (OP) 4w ago
Ah no, I'm not using a Durable Object. It is a simple producer with an API endpoint: POST data => save to queue. Nothing else. If I disable saving to the queue, the response time is <30ms.
John Spurlock 4w ago
yea, I mean you're really never going to be able to do high-frequency trading on these. CF Queues are optimized for batched/delayed processing, and I believe are built on top of DOs, which have similar cold-start request times: once they are running and connected they are good, but they are aggressively hibernated if not in use. DO websockets are pretty good for low-latency stuff on the CF stack, but you then have to write your own persistence logic if you need it to act more like a durable queue.
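To illustrate the batched processing side, here is a minimal consumer sketch, assuming TRACKING_QUEUE is also configured as this worker's consumer queue (batch size and timeout come from the queue settings):

export default {
  // Queues deliver messages to consumers in batches rather than one at a time.
  async queue(batch, env) {
    for (const msg of batch.messages) {
      // msg.body is whatever was passed to send(); acknowledge once handled.
      console.log(`consumed ${msg.id} from ${batch.queue}`);
      msg.ack();
    }
  },
};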
usualdev (OP) 4w ago
Thanks @John Spurlock that was very helpful. Much appreciated!
John Spurlock 4w ago
np, I'm sure the team is constantly looking to optimize this. I'm hoping it ends up being similar to Amazon SQS, which is incredibly old but still very good/predictable.
usualdev (OP) 4w ago
Brilliant. That's what I had in mind (the simplicity of CF with the performance of SQS).
