Also, unrelated: is there a chance we could get a sort of dynamic key as part of queues? There'd still be only one queue and one consumer script, but batching & concurrency would be done per key. Seems like an easy way to enable much more reuse. Creating a new queue for every log file that needs appending, or for every API region, etc. is annoying and feels unnecessary; they're all exact duplicates with different names just to make them work in parallel
It's not terribly clever… I just watch for it to return headers saying it's out of quota and `await scheduler.wait()` with whatever duration it gives in another header. It also sleeps if I get back a 429 (based on the retry_after value it sends). I do a bit of extra sleeping after ~4x 429s in a short period to be nice, but it's not strictly necessary. Discord has good documentation for the rate-limit headers and 429 responses it sends: https://discord.com/developers/docs/topics/rate-limits
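the gist of it as a rough TS sketch, not the real code — header names follow Discord's documented rate-limit headers, and the assumption here is that headers come in as a lowercase-keyed record:

```typescript
// Sketch of the backoff logic described above (not the actual script).
// Assumed header names, per Discord's docs: Retry-After on 429s,
// X-RateLimit-Remaining / X-RateLimit-Reset-After otherwise.

/** How long to sleep (ms) before the next request, or 0 if no wait is needed. */
function retryDelayMs(status: number, headers: Record<string, string>): number {
  // A 429 says exactly how long to wait via Retry-After (seconds).
  if (status === 429) {
    return parseFloat(headers["retry-after"] ?? "1") * 1000;
  }
  // Out of quota for the current window: wait until it resets.
  if (headers["x-ratelimit-remaining"] === "0") {
    return parseFloat(headers["x-ratelimit-reset-after"] ?? "0") * 1000;
  }
  return 0;
}

// In the Worker the result would feed straight into the scheduler:
//   const delay = retryDelayMs(res.status, headerObj);
//   if (delay > 0) await scheduler.wait(delay);
```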
ah ok :/ I have much tighter & more complex rate limits and have so far failed to implement them on top of Cloudflare without also paying a lot. Like 10k+ outstanding requests but only 200/1h allowed, and any overrun disqualifies me from getting higher limits, so
Maybe you could specify the file it should append to in the Queue message? Each message you send is an object, so you could send something like `{file: "log1.txt", line: "my log line 123"}`
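the consumer side of that would just be a small grouping step, something like this (message shape `{file, line}` as above; `appendToFile` is hypothetical, it'd be whatever your storage append actually is):

```typescript
// Hypothetical message shape, as suggested above.
interface LogMsg {
  file: string;
  line: string;
}

// Group a queue batch by target file so each file gets one append
// per batch instead of one write per message.
function groupByFile(batch: LogMsg[]): Map<string, string[]> {
  const groups = new Map<string, string[]>();
  for (const { file, line } of batch) {
    const lines = groups.get(file) ?? [];
    lines.push(line);
    groups.set(file, lines);
  }
  return groups;
}

// Consumer sketch: one append per file.
// for (const [file, lines] of groupByFile(messages)) {
//   await appendToFile(file, lines.join("\n")); // appendToFile is made up
// }
```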
yeah, and then I have a bunch of essentially duplicated logic, plus I either have to use huge batch sizes and risk overrunning the time limit, or use smaller sizes, which may end up in unreasonably small write batches
I mean, duplicating the queues has worked for me in the past, but it makes the calling code more confusing than necessary, makes the dashboard worse, and updating is annoying
I mean, all of this is solved on my side fairly easily with just a bit of TS magic & a few scripts, but it seems like a fairly easy thing to solve on the queues side
unfortunately I have no insight into what other people use queues for, but batching writes & rate limiting are the two big ones I've seen so far, and both would benefit greatly from something there
yeah, that's also what I'm doing. I think (?) it's a fairly common need, given Cloudflare-based products have no problem handing out request volumes that just kill vendors. Generally I haven't seen that many external APIs disallow hitting the rate limit outright, but they'll get unhappy if it's too much / impacting other customers.
So at least taking some care to stop that from happening is important, and queues seem like the best way to still ensure all requests make it through eventually while limiting throughput. It's just a bit awkward to implement efficiently right now
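for a hard budget like the 200/1h above, the check itself is simple enough; it's keeping the state somewhere (Durable Object, consumer memory) that's the awkward part. A minimal sliding-window sketch, with the limits just being the example numbers:

```typescript
// Sliding-window limiter sketch for a hard "N requests per window" budget
// where any overrun is penalised, so we check *before* sending.
class SlidingWindowLimiter {
  private sent: number[] = []; // timestamps (ms) of recent requests

  constructor(
    private limit: number,    // e.g. 200
    private windowMs: number, // e.g. 60 * 60 * 1000
  ) {}

  /** true if a request may be sent now; records it if so. */
  tryAcquire(now: number = Date.now()): boolean {
    // Drop timestamps that have aged out of the window.
    this.sent = this.sent.filter((t) => now - t < this.windowMs);
    if (this.sent.length >= this.limit) return false;
    this.sent.push(now);
    return true;
  }

  /** ms until the next slot frees up (0 if one is free now). */
  msUntilNextSlot(now: number = Date.now()): number {
    this.sent = this.sent.filter((t) => now - t < this.windowMs);
    if (this.sent.length < this.limit) return 0;
    return this.sent[0] + this.windowMs - now;
  }
}
```

in a queue consumer you'd retry (or delay) the batch when `tryAcquire` fails, so messages stay queued instead of being dropped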
you can also batch from a queue if the provider allows it, to avoid spending a request on each message and to better optimise the amount of data you send in each request (which I think Jacob is doing?)
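e.g. packing queued lines into as few provider calls as the provider's limits allow — something like this, with both limits invented for the example:

```typescript
// Pack queued lines into as few provider requests as possible while
// respecting a max item count and a max payload size per request
// (both limits are made up for the example).
function packBatches(
  lines: string[],
  maxItems: number,
  maxBytes: number,
): string[][] {
  const batches: string[][] = [];
  let current: string[] = [];
  let currentBytes = 0;
  for (const line of lines) {
    const size = new TextEncoder().encode(line).length;
    // Flush the current batch if adding this line would exceed a limit.
    if (
      current.length > 0 &&
      (current.length >= maxItems || currentBytes + size > maxBytes)
    ) {
      batches.push(current);
      current = [];
      currentBytes = 0;
    }
    current.push(line);
    currentBytes += size;
  }
  if (current.length > 0) batches.push(current);
  return batches;
}
```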