No, it stays in the queue. Think of queues like waiting in line for a cashier... if a cashier is open you can go to checkout; if not, you just wait until one opens up
Thank you for clarifying, so what I understand is: "a queue can hold a large number of messages in a waiting state in the backlog, and when a consumer is free it delivers up to BATCH_SIZE messages in bulk each time"
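Pretty much. For reference, a minimal consumer sketch (types are the standard Workers runtime ones from @cloudflare/workers-types; the batch size cap is the max_batch_size setting on the consumer):

```ts
// Minimal Queues consumer sketch. The runtime hands the handler up to
// max_batch_size messages per invocation; anything beyond that simply
// waits in the backlog until a consumer invocation is free.
interface Env {}

export default {
  async queue(batch: MessageBatch<unknown>, env: Env): Promise<void> {
    for (const msg of batch.messages) {
      console.log(`processing message ${msg.id}`);
      msg.ack(); // acknowledged messages are removed from the queue
    }
  },
};
```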
So, just one more question: I'm using Browser Rendering in the crawler and it has a limit of 2 new browsers per minute per account. If that limit throws an error, should I add a retry delay in the queue? And if I call msg.retry() on one message from a batch, will the queue redeliver only this message, or send another batch that includes this message?
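My reading of the Queues API is that retry() is per-message: only retried (or un-acked) messages get redelivered, and they arrive in a later batch alongside whatever else is due. A sketch of the backoff idea, with a hypothetical crawlWithBrowser helper standing in for the real work:

```ts
// Per-message retry with a delay (sketch): acked messages are not
// resent; only the retried message comes back, in a future batch.
declare function crawlWithBrowser(body: unknown, env: Env): Promise<void>; // hypothetical
interface Env {}

export default {
  async queue(batch: MessageBatch<unknown>, env: Env): Promise<void> {
    for (const msg of batch.messages) {
      try {
        await crawlWithBrowser(msg.body, env);
        msg.ack();
      } catch {
        // Back off so redelivery lands after the per-minute browser
        // limit has reset.
        msg.retry({ delaySeconds: 60 });
      }
    }
  },
};
```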
Also, I launch a browser session with { keep_alive: 600000 }, so it should stay alive for 10 minutes. I'm not calling browser.close() either; instead I'm calling browser.disconnect(). So why is it getting released after 60 seconds of inactivity?
So my understanding is that a queue can only have a single consumer. How would we handle a one-to-many situation? Let’s say I fire an event when a user signs up, so I place a “UserSignedUp” event on the queue. Now I’d like to have a bunch of consumers that are decoupled and can react to that event (notifying, saving to the database, etc). They all do incredibly important operations, so I need guaranteed delivery to each consumer. Is this possible in Cloudflare Queues?
Yes, but how are you decoupling each consumer here? Multiple consumers are still going to receive the same events; what it sounds like you want is a way to batch based on a key (defining an event type)
The only other way I can think to do it is to have the main consumer act as a “dispatcher” and have it send the event back out on a bunch of other queues, but this sounds terrible
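Something like this, assuming producer bindings named DB_QUEUE and EMAIL_QUEUE (names made up for the sketch):

```ts
// "Dispatcher" sketch: the main consumer fans each event back out onto
// per-concern queues, each of which has its own dedicated consumer.
type SignupEvent = { type: "UserSignedUp"; userId: string };

interface Env {
  DB_QUEUE: Queue<SignupEvent>; // hypothetical producer bindings
  EMAIL_QUEUE: Queue<SignupEvent>;
}

export default {
  async queue(batch: MessageBatch<SignupEvent>, env: Env): Promise<void> {
    for (const msg of batch.messages) {
      // Each downstream queue delivers its own copy, so each concern
      // gets its own retries and delivery guarantees.
      await Promise.all([
        env.DB_QUEUE.send(msg.body),
        env.EMAIL_QUEUE.send(msg.body),
      ]);
      msg.ack(); // only ack once every fan-out send has succeeded
    }
  },
};
```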
Can you describe more here? A Pub/Sub consumer is going to pull something off the topic, but it’s all the same set of messages. I don’t see how this is any more decoupled: a single “UserSignedUp” event is still going to a single consumer.
Publisher writes a UserSignedUp event to the /sign-ups topic -> consumer pulls it off the topic (messages do not persist!) -> now that one consumer has to fire off all of the downstream operations itself (write to DB, send email, etc)
I would like one consumer dedicated to writing to the DB, a separate consumer dedicated to notifications, etc.
I believe these are called “consumer groups”, like in Kafka.
Now in the future, if I want to perform some other completely decoupled operation, I can simply add another consumer group to the queue (and I don’t have to modify any of the existing consumer processes)
Then there’s the whole replay feature, in case new consumer groups need to replay the events from some set point. At that point you’re just reimplementing Kafka, but damn, that would be insanely powerful
Assuming you control all of the producers, I would probably not bother validating it on the consumer, since the producers should only ever push valid messages
Yeah, Zod is definitely the go-to hammer here. I just think it’s a little heavy-handed to validate the entire thing when you control both sides anyway
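If someone does want the guard anyway, a minimal Zod sketch (schema fields are illustrative, not from the discussion above):

```ts
import { z } from "zod";

// Illustrative schema; the real fields depend on your event shape.
const UserSignedUp = z.object({
  type: z.literal("UserSignedUp"),
  userId: z.string(),
  email: z.string().email(),
});

export default {
  async queue(batch: MessageBatch<unknown>): Promise<void> {
    for (const msg of batch.messages) {
      const parsed = UserSignedUp.safeParse(msg.body);
      if (!parsed.success) {
        // A malformed message will never parse on retry, so ack it
        // (or route it to a dead-letter queue) instead of retrying.
        console.error(parsed.error);
        msg.ack();
        continue;
      }
      // parsed.data is now fully typed as a UserSignedUp event
    }
  },
};
```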
I started to experiment with R2 Event Notifications over Queues using HTTP pull. My feedback:
1) The default API limit of 1200 reqs/5min (4 req/sec) seems way too low. I have 100+ EC2 instances working on a queue, so I just requested a limit increase to 30000 reqs/5min. I hope it will be approved. I never hit an SQS rate limit...
2) One of the most important missing features is the ability to extend the visibility timeout while working on a message. At the moment I can only set the visibility timeout when I pull a message, and at that point I might not know how long I'll need to process it.
3) Jurisdiction support is missing.
4) The R2 event should contain a jurisdiction property.
I hope that helps.
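For context, pulling over HTTP looks roughly like this (endpoint and field names are my reading of the current docs, so please verify against the API reference). Point 2 is exactly that visibility_timeout_ms has to be chosen up front:

```ts
// HTTP pull sketch. The visibility timeout is fixed at pull time; there
// is currently no call to extend it once processing has started.
const ACCOUNT_ID = "<account_id>"; // placeholders
const QUEUE_ID = "<queue_id>";
const API_TOKEN = "<api_token>";

async function pullBatch(): Promise<unknown> {
  const res = await fetch(
    `https://api.cloudflare.com/client/v4/accounts/${ACCOUNT_ID}/queues/${QUEUE_ID}/messages/pull`,
    {
      method: "POST",
      headers: {
        Authorization: `Bearer ${API_TOKEN}`,
        "Content-Type": "application/json",
      },
      // You must commit to a visibility timeout here, before you know
      // how long the work on these messages will actually take.
      body: JSON.stringify({ visibility_timeout_ms: 60_000, batch_size: 10 }),
    },
  );
  return res.json();
}
```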
@aviplayer hmm.. reading this, Cloudflare scales the number of consumers based on how many messages are being added to the queue, attempting to keep the backlog from growing faster than it can be processed. I wonder if it just behaves strangely when you have a very small number of jobs that each take a long time to execute: https://developers.cloudflare.com/queues/configuration/consumer-concurrency/
Aha, in that case it might be better to take batches and process the messages in parallel in your Worker using Promise.all, acking each message as and when it is processed
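i.e. something like this (handleJob is a placeholder for the actual work):

```ts
// Process the batch concurrently; ack each message individually so a
// slow or failing message only triggers redelivery of itself.
declare function handleJob(body: unknown): Promise<void>; // hypothetical

export default {
  async queue(batch: MessageBatch<unknown>): Promise<void> {
    await Promise.all(
      batch.messages.map(async (msg) => {
        try {
          await handleJob(msg.body);
          msg.ack(); // stays acked even if a sibling message fails
        } catch {
          msg.retry();
        }
      }),
    );
  },
};
```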