I'm running into the same issues @tom.owen had here.
My consumer does a lot of work: spinning up @cloudflare/puppeteer to screenshot HTML, calling external APIs that are sensitive to rate limiting, etc. The primary purpose of the queue is to give me a central place to avoid rate limiting without rolling complex state management.
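For context, the consumer is roughly this shape. This is a minimal sketch, not my actual code; `ScreenshotJob`, `handleJob`, and the `BROWSER` binding name are placeholders:

```ts
// Minimal sketch of a Queues consumer doing the heavy work inline.
interface Env {
  BROWSER: Fetcher; // Browser Rendering binding used by @cloudflare/puppeteer
}

interface ScreenshotJob {
  url: string;
}

async function handleJob(job: ScreenshotJob, env: Env): Promise<void> {
  // ...launch @cloudflare/puppeteer, call rate-limited external APIs, etc.
}

export default {
  async queue(batch: MessageBatch<ScreenshotJob>, env: Env): Promise<void> {
    for (const msg of batch.messages) {
      try {
        await handleJob(msg.body, env);
        msg.ack(); // ack per message so one failure doesn't retry the whole batch
      } catch (err) {
        msg.retry(); // retried up to max_retries, then delivered to the DLQ
      }
    }
  },
};
```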
So far I'm close to a week into trying to get my messages to stop instantly dumping into the dead letter queue after only a handful get processed every hour. It's clear from the logs that the messages sent to the DLQ never reach the primary consumer at all.
My producer is an R2 bucket.
I'll update everyone if I can figure this out. If I can't get it working in a few days, I'll abandon Queues and roll my own state to monitor downstream calls 🤷‍♂️
If anybody has any ideas, or @tom.owen, if you ever found a resolution, I'd be grateful for any wisdom here.
If anybody is curious: I moved @cloudflare/puppeteer out of my consumer worker and into a Durable Object, and now I have no more issues. I suspect the worker was somehow overloaded and could never close properly on Cloudflare's end, which clogged the queue and put the consumer worker on some sort of naughty-step / broken state. But I'll never know without more visibility. Regardless, if anybody else runs into this, try offloading the larger processes out of your consumer workers.
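For anyone who wants to try the same move, here's roughly what it looks like. A sketch only, assuming the RPC-style `DurableObject` base class from `cloudflare:workers` and bindings I've named `BROWSER` and `SCREENSHOTTER`; adjust to your own setup:

```ts
import puppeteer from "@cloudflare/puppeteer";
import { DurableObject } from "cloudflare:workers";

interface Env {
  BROWSER: Fetcher; // Browser Rendering binding
  SCREENSHOTTER: DurableObjectNamespace<Screenshotter>;
}

// The Durable Object owns the browser work, so the queue consumer stays light.
export class Screenshotter extends DurableObject<Env> {
  async screenshot(url: string): Promise<Uint8Array> {
    const browser = await puppeteer.launch(this.env.BROWSER);
    try {
      const page = await browser.newPage();
      await page.goto(url, { waitUntil: "networkidle0" });
      return await page.screenshot(); // PNG bytes
    } finally {
      await browser.close(); // always release the browser session
    }
  }
}

// The consumer now just forwards each message to the DO and acks.
export default {
  async queue(batch: MessageBatch<{ url: string }>, env: Env): Promise<void> {
    for (const msg of batch.messages) {
      try {
        const stub = env.SCREENSHOTTER.get(env.SCREENSHOTTER.idFromName("browser"));
        await stub.screenshot(msg.body.url);
        msg.ack();
      } catch (err) {
        msg.retry();
      }
    }
  },
};
```

A nice side effect: routing everything through one named DO instance also gives you a natural serialization point for the rate-limited downstream calls, since requests to the same instance run through a single object.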