Cloudflare Developers

hello everyone, I have a question about

hello everyone, I have a question about Cloudflare Queues. We currently host a platform where we automate multiple applications by receiving webhooks, and our domain is hosted via Cloudflare. Recently we have been facing downtime due to server maintenance, so I thought of an approach: before going into downtime, we enable a Worker that has a queue connected to it and a route set up like https://{{domain_name}}/*. As soon as we enable this, the Worker is up, captures the webhooks, and stores them in the queue. The issue I am facing is that while processing the queue of around 300 webhooks, the API takes about 75 seconds to process them and send them to their respective destinations. I have tried processing them synchronously as well, but the time is approximately the same. ...
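
For reference, one way to cut that 75 seconds down is to deliver the webhooks in a batch concurrently instead of awaiting each one in turn. A minimal consumer sketch (the `url`/`payload` field names on the stored message are assumptions, not the poster's actual schema):

```ts
// Hypothetical consumer: forwards each captured webhook concurrently.
// The StoredWebhook shape (url, payload) is an assumed message format.
interface StoredWebhook {
  url: string;      // destination the webhook should be forwarded to
  payload: unknown; // original webhook body captured during the downtime window
}

export default {
  async queue(batch: MessageBatch<StoredWebhook>): Promise<void> {
    // Fan the deliveries out so one slow destination doesn't serialize the batch.
    await Promise.all(
      batch.messages.map(async (msg) => {
        try {
          const res = await fetch(msg.body.url, {
            method: "POST",
            headers: { "content-type": "application/json" },
            body: JSON.stringify(msg.body.payload),
          });
          if (!res.ok) throw new Error(`Delivery failed: ${res.status}`);
          msg.ack();   // acknowledge only the deliveries that succeeded
        } catch {
          msg.retry(); // mark failed deliveries for redelivery
        }
      }),
    );
  },
};
```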

One message is not going to hold up the

One message is not going to hold up the entire queue; you should continue onwards.

Hey team! I'm trying to edit the

Hey team! I'm trying to edit the concurrency on some of my queues, but the option doesn't exist anymore:
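
In case it helps while the dashboard option is missing: consumer concurrency can also be set from the consumer block in wrangler.toml, assuming the max_concurrency setting is what was being edited (queue name and values below are placeholders):

```toml
# Sketch of a consumer configuration; "my-queue" and the numbers are placeholders.
[[queues.consumers]]
queue = "my-queue"
max_batch_size = 10    # messages per delivered batch
max_batch_timeout = 5  # seconds to wait while filling a batch
max_concurrency = 5    # upper bound on concurrent consumer invocations
```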

Is the queues product actually

Is the queues product actually production-ready? I added around 200k records for processing, and it lagged for almost 20 hours — jobs barely got picked up and it was completely unreliable. It’s been days and those records still never got processed, so I ended up purging the queue. Because of this, I’ve stopped using Workers/Queues altogether — my VPS handles the load way better. But if anyone from the Queues/Workers team wants to dig into this, let me know and I can share all the details....

Will you add these new metrics to the

Will you add these new metrics to the Prometheus Cloudflare exporter?

I’m not sure I understand exactly, but

I’m not sure I understand exactly, but can you call your function unconditionally from your queue() handler to do this?

For a message queue, if Workers are not

For a message queue, if Workers are not used, is HTTP polling the only way to check whether messages exist?
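
For reference, pulling messages without a consumer Worker goes through the Queues HTTP pull API. A rough sketch (the endpoint path, request fields, and IDs below are assumptions to illustrate the shape; check the pull-consumer docs for the exact API):

```ts
// Rough sketch of pulling messages over HTTP (no consumer Worker).
// ACCOUNT_ID, QUEUE_ID and API_TOKEN are placeholders; the endpoint path and
// body fields are assumptions based on the pull-consumer API.
const ACCOUNT_ID = "<account_id>";
const QUEUE_ID = "<queue_id>";
const API_TOKEN = "<api_token>";

async function pullMessages(): Promise<void> {
  const res = await fetch(
    `https://api.cloudflare.com/client/v4/accounts/${ACCOUNT_ID}/queues/${QUEUE_ID}/messages/pull`,
    {
      method: "POST",
      headers: {
        authorization: `Bearer ${API_TOKEN}`,
        "content-type": "application/json",
      },
      // Ask for up to 10 messages and keep them invisible for 30s while working.
      body: JSON.stringify({ batch_size: 10, visibility_timeout_ms: 30_000 }),
    },
  );
  console.log(await res.json()); // any available messages come back in the body
}
```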

Feeling a bit alone on this problem here

Feeling a bit alone on this problem here 🙂 In the end I found that I can reproduce my problem locally with Cloudflare only, so I guess I will switch to another queue system unless someone can help me: I lose my messages after a weird delay, and I can't see why with the current Cloudflare API or UI.

I'm getting `Queue sendBatch failed: Bad

I'm getting `Queue sendBatch failed: Bad Request` despite my requests being correctly shaped. However, I am sending thousands of messages in multiple parallel sendBatch requests. Is it possible this is the 5000-messages-produced-per-second limit instead?...
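
One thing worth checking alongside the per-second limit is the per-call batch size: sendBatch only accepts a limited number of messages per call, so large arrays need to be chunked. A sketch (the chunk size of 100 is an assumption; confirm against the Queues limits page):

```ts
// Sketch: chunk a large message list before calling sendBatch.
// BATCH_LIMIT = 100 is an assumed per-call limit; check the Queues limits docs.
const BATCH_LIMIT = 100;

async function sendAll(queue: Queue, messages: unknown[]): Promise<void> {
  for (let i = 0; i < messages.length; i += BATCH_LIMIT) {
    const chunk = messages.slice(i, i + BATCH_LIMIT).map((body) => ({ body }));
    // Sending chunks sequentially also helps stay under the produce-rate limit.
    await queue.sendBatch(chunk);
  }
}
```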

I really would like to use Cloudflare

I really would like to use Cloudflare Queues instead of a third-party provider, but a few things get in the way: the consumer location is not affected by Smart Placement (round trips become a huge issue), concurrent consumers only scale at the end of a batch, and the limit of 6 simultaneous open connections per Worker instance means concurrency autoscaling doesn't work as expected and takes too long to scale up. My queue backlog becomes huge. I get that the magic of autoscaling would be great, but reading people complaining about the same thing shows that we are not there yet, or maybe we are just holding it wrong. I believe things would be much better if consumers scaled up as messages come in, or if we had a min_concurrency setting (of course I don't know how viable either would be). I'm really frustrated by the results of trying to use Queues again and again since the beta and still hitting the same problem....

How is the consumer location chosen? Is

How is the consumer location chosen? Is it affected by smart placement?

Is HTTP push on the roadmap? (sending a

Is HTTP push on the roadmap? (sending a message via HTTP without a Worker)

On the worker queue producer side, I

On the worker queue producer side, I have a tough time understanding why the worker sometimes just fails to complete the `await env.QUEUE.send()`. From testing, I noticed "warmed up" invocations of the worker don't have this issue, but when it has been idling for a while it will (for a good amount of time) just spin itself down before the `await env.QUEUE.send()` call completes. Request IDs affected: 930d4183797951e2 930d2b064f611f41...
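
Not sure it is the same root cause, but a pattern that avoids the invocation being torn down before the send completes is to either await the send before returning the Response, or hand the promise to ctx.waitUntil(). A sketch (the QUEUE binding name is a placeholder):

```ts
// Sketch: keep the invocation alive until the queue send has finished.
// "QUEUE" is a placeholder binding name.
interface Env {
  QUEUE: Queue;
}

export default {
  async fetch(req: Request, env: Env, ctx: ExecutionContext): Promise<Response> {
    const payload = await req.json();

    // Option A: await the send before responding, so nothing is cut short.
    await env.QUEUE.send(payload);

    // Option B: respond immediately but let the runtime wait for the send.
    // ctx.waitUntil(env.QUEUE.send(payload));

    return new Response("queued", { status: 202 });
  },
};
```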

Same problem here, `Consumer Delay` is

Same problem here, `Consumer Delay` is about 32 seconds. My configuration:
```
max_batch_size = 1
max_batch_timeout = 0...
```

Hi, what kind of consumer delays are

Hi, what kind of consumer delays are normal with Queues? I have a simple low-volume case where a Durable Object puts individual messages on a queue, and then a Worker picks up and processes those messages. I want to process the messages one by one without any delay, so I have configured the delay as 0 and the batch size as 1. The actual execution in my worker is very fast, 100-300 ms as expected. But there seems to be some strange delay before the worker picks up the message from the queue, and when I look at the queue metrics in the console it says "Consumer Delay: 3.4 sec". So I feel I am losing an extra 3 seconds somewhere, which means this is not OK for any customer-facing online use case. I don't have much experience with queues, so I don't know if this is normal or not, but I was expecting the added latency to be in the tens or hundreds of milliseconds....
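
One way to quantify where the time goes: each delivered message carries the timestamp it was enqueued with, so the consumer can log the enqueue-to-delivery gap per message. A small sketch:

```ts
// Sketch: log how long each message sat in the queue before delivery.
export default {
  async queue(batch: MessageBatch<unknown>): Promise<void> {
    for (const msg of batch.messages) {
      // msg.timestamp is when the message was written to the queue.
      const delayMs = Date.now() - msg.timestamp.getTime();
      console.log(`message ${msg.id} waited ${delayMs} ms before delivery`);
      msg.ack();
    }
  },
};
```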

Is Queues going to be getting any love

Is Queues going to be getting any love from dev week? Free tier?

Super cool. A few questions:

Super cool. A few questions: 1- Can we trigger it programmatically? 2- Do you plan to also allow purging based on message params? Like conditionally purging them?...

Pretty sure something got broken with

Pretty sure something got broken with this release. In my logs, I can see that adding to one of my queues started failing ~2 hours ago. The error looks like this: `internal error; reference = odmj851jl3gua27r036349h7`

Quick question if I may, but will queues

Quick question if I may, but will queues eventually add support for remote dev? Currently any worker that makes use of queues still isn't able to make use of things like quick edit, edge preview, remote dev, etc...

Is there any issue with Queues? Because

Is there any issue with Queues? Because messages in the queue are being resolved very late!