- defer / run_at style functionality. E.g. "queue this job to run 4 hours from now".

- jobKey-style deduplication. E.g. a SEND_PENDING_NOTIFICATIONS queue that accepts a userId. When actions that result in notifications occur, producers might call env.SEND_PENDING_NOTIFICATIONS.send({ userId }, { jobKey: userId, runAt: Date.now() + 60_000 * 2 }). This queues up a job to run 2 minutes from now. If a job with the same jobKey already exists, no additional job is queued up. Thus our various producers can remain naive and simply queue SEND_PENDING_NOTIFICATIONS when relevant, and notifications get rolled up per user on a 2-minute interval.

- jobGroup-style per-group concurrency. E.g. we set concurrency=100 on our INGEST_EVENT consumer, in order to not overwhelm the stateful DB that this consumer inserts data into. One of our customers (A) has a spike in events, queuing up 10,000 jobs. The events coming in from all of our other customers end up queued behind the 10,000 events from (A). What we do now, with graphile-worker, is the equivalent of env.INGEST_EVENT.send({ ...eventInfo }, { jobGroup: customerId }). Events are processed 100 at a time, but at most one at a time per customerId, so that one customer cannot negatively impact our other customers.

Could you keep using your graphile-worker postgres queue, and trigger worker jobs within cloudflare?

I could write a fn that wraps the job queuing and checks whether the given unique key exists in kv or something before actually sending the message to the queue. If not, set the value in kv + send the message, and on the other end the queue consumer will prob have to unset that kv value when it's done processing the message. Is that what you were thinking? Not too shabby.

To give a concrete example: we have a process analytics event queue. This queue is used to validate and enrich incoming analytics events (each event is associated with a particular customer), and then insert them into our clickhouse database. Since this process interacts with our postgres and clickhouse databases, we don't want to run more than 100 at a time (just picking these numbers randomly for the example), so we set concurrency=100 on the queue. Since incoming events for all customers are processed by the same queue, a spike in incoming events from a single customer will delay the processing of events for all other customers. E.g. if all of a sudden we receive a spike of 10k events from customer A, which are processed 100 at a time, events that come in from the rest of our customers have to wait for the 10k events from customer A to finish processing. Does that make sense? Happy to try a different explanation if it still doesn't haha.
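For concreteness, here's a minimal sketch of the KV-backed dedup wrapper described above. The sendUnique helper, the PENDING_JOB_KEYS KV binding, and the 10-minute marker TTL are made up for the example; only the Queue send, KV get/put/delete, and queue-handler pieces are standard Workers APIs.

```ts
// Hypothetical bindings for the example: SEND_PENDING_NOTIFICATIONS is the queue
// from the thread, PENDING_JOB_KEYS is an assumed KV namespace used as the dedup marker.
interface Env {
  SEND_PENDING_NOTIFICATIONS: Queue<{ userId: string }>;
  PENDING_JOB_KEYS: KVNamespace;
}

// Producer side: only enqueue if no job with this key appears to be pending.
async function sendUnique(env: Env, userId: string): Promise<void> {
  const jobKey = `send-pending-notifications:${userId}`;
  const existing = await env.PENDING_JOB_KEYS.get(jobKey);
  if (existing !== null) return; // a job for this user is already queued

  // Expire the marker as a safety net in case the consumer dies before cleanup.
  await env.PENDING_JOB_KEYS.put(jobKey, "1", { expirationTtl: 600 });
  await env.SEND_PENDING_NOTIFICATIONS.send({ userId });
}

// Consumer side: clear the marker once the message has been processed.
export default {
  async queue(batch: MessageBatch<{ userId: string }>, env: Env): Promise<void> {
    for (const message of batch.messages) {
      const { userId } = message.body;
      // ...send the rolled-up notifications for this user...
      await env.PENDING_JOB_KEYS.delete(`send-pending-notifications:${userId}`);
      message.ack();
    }
  },
};
```

One caveat: KV is eventually consistent, so two producers racing on the same key can still both enqueue. It's best-effort dedup rather than a hard guarantee, which is part of why first-class jobKey support would be nice.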
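And for comparison, a rough sketch of how we express the same things with graphile-worker today, via its runAt, jobKey, and queueName options on quickAddJob. The connection string, task identifiers, and payloads are placeholders, and the option details are from memory rather than copied from our code.

```ts
import { quickAddJob } from "graphile-worker";

const ctx = { connectionString: "postgres:///my_db" };

async function examples() {
  // Defer + roll-up: run ~2 minutes from now; a pending job with the same
  // jobKey is updated in place rather than duplicated.
  await quickAddJob(ctx, "send_pending_notifications", { userId: "user_123" }, {
    jobKey: "user_123",
    runAt: new Date(Date.now() + 60_000 * 2),
  });

  // Per-customer fairness: jobs sharing a queueName run serially, while the
  // worker pool still processes many queues concurrently (the jobGroup idea above).
  await quickAddJob(ctx, "ingest_event", { customerId: "customer_A" /* ...eventInfo */ }, {
    queueName: "customer_A",
  });
}

examples().catch(console.error);
```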