What happens to a currently running batch when you publish a new version of a consumer Worker? Does it finish processing that batch on the old version?
That's awesome! Sadly this particular queue is just waiting on Discord rate limits, but I'm super excited about trying more scalable workloads on Queues
One thing I'd like to try with Queues that would benefit from horizontal scaling: I have about 3 million files in R2 that I accidentally saved under the wrong paths. It would be cool to use Queues to move them all
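Something like this consumer would do the move — a minimal sketch assuming each queue message carries the old and corrected object keys (the `FILES` binding name and the `MoveMsg` shape are made up):

```ts
// Hypothetical consumer for the R2 move job: each message says where the
// object is now and where it should be. Binding/type names are invented.
interface MoveMsg {
  from: string; // wrong path the file was saved under
  to: string;   // corrected path
}

export default {
  async queue(batch: MessageBatch<MoveMsg>, env: { FILES: R2Bucket }) {
    for (const msg of batch.messages) {
      const obj = await env.FILES.get(msg.body.from);
      if (obj !== null) {
        // Copy to the correct path, then delete the misplaced original.
        await env.FILES.put(msg.body.to, obj.body, {
          httpMetadata: obj.httpMetadata,
        });
        await env.FILES.delete(msg.body.from);
      }
      msg.ack(); // don't redeliver once the move succeeded (or the source is gone)
    }
  },
};
```

A producer would just enumerate the 3M keys (e.g. via `list()`) and `send()` one `MoveMsg` per file, letting Queues fan the copies out across consumer invocations.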
Changelog:
- Limits page updated
- /platform/configuration/ updated with new properties
- wrangler updates merged from cloudflare/workers-sdk#2859
- New /learning/con...
What this PR solves / how to test: This PR adds support for new Queues consumer configuration options: `concurrency_enabled` and `max_concurrency`.
Associated docs issues/PR: in progress.
Author has inc...
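For reference, a rough sketch of where those options would sit in `wrangler.toml`, alongside the existing `[[queues.consumers]]` fields — the exact names come from the PR description, but the placement and values here are assumptions:

```toml
# Sketch only: option placement assumed from the PR description.
[[queues.consumers]]
queue = "my-queue"          # placeholder queue name
max_batch_size = 10
max_batch_timeout = 30
concurrency_enabled = true  # name per the PR; semantics assumed
max_concurrency = 5         # cap on concurrent consumer invocations
```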
Also, unrelated: is there any chance we could get a sort of dynamic key as part of Queues? There would still be only one queue and one consumer script, but batching & concurrency would be done per key. It seems like an easy way to enable much more reuse. Creating a new queue for every log file that needs appending, or for every API region, etc., is annoying and feels unnecessary — they're all exact duplicates with different names just to make them work in parallel.
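Something like this is what I mean — a purely hypothetical sketch, since no such option exists on the Queues API today:

```ts
// Purely hypothetical: there is no `key` option on Queue#send today.
// One queue, one consumer; the runtime would group batches per key so each
// log file / API region gets its own ordered, parallel stream.
interface Env {
  LOG_QUEUE: Queue; // ordinary Queues producer binding
}

export async function appendLine(env: Env, fileId: string, line: string) {
  await env.LOG_QUEUE.send(
    { line },
    // Imaginary partition-key option; cast because it isn't in the real types.
    { key: `logfile:${fileId}` } as any
  );
}
```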
It's not terribly clever… I just watch for it to return headers saying it's out of quota and await scheduler.wait() with whatever duration it gives in another header. I also sleep if I get back a 429 (based on the retry_after value it sends). I do a bit of extra sleeping after ~4 429s in a short period to be nice, but that's not strictly necessary. Discord has good documentation for its rate-limit headers and 429 responses: https://discord.com/developers/docs/topics/rate-limits
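Roughly this, as a stripped-down sketch — it assumes the Workers runtime's `scheduler.wait()` and the headers Discord documents at that link; the function name and retry cap are mine, not the actual code:

```ts
// Sketch of the approach: sleep on 429s using retry_after, and sleep
// preemptively when the quota headers say the bucket is empty.
async function discordFetch(url: string, init?: RequestInit): Promise<Response> {
  for (let attempt = 0; attempt < 5; attempt++) {
    const res = await fetch(url, init);

    if (res.status === 429) {
      // The 429 body carries retry_after in seconds (possibly fractional).
      const body = (await res.json()) as { retry_after: number };
      await scheduler.wait(body.retry_after * 1000);
      continue;
    }

    // Out of quota: wait until the bucket resets before handing back the
    // response, so the next call doesn't immediately trip a 429.
    if (
      res.headers.get("X-RateLimit-Remaining") === "0" &&
      res.headers.has("X-RateLimit-Reset-After")
    ) {
      const resetAfter = parseFloat(res.headers.get("X-RateLimit-Reset-After")!);
      await scheduler.wait(resetAfter * 1000);
    }
    return res;
  }
  throw new Error("gave up after repeated 429s");
}
```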
ah ok :/ I have much tighter & more complex rate limits and have so far failed to implement them on top of Cloudflare without also paying a lot. Like 10k+ outstanding requests but only 200/hour allowed, and any overrun disqualifies me from getting higher limits.