Queue backlog and retry/lag times are available now in GraphQL! Thanks!

# Beta: Queue backlog data with adaptive sampling. Queues that are not being written to, or read from, will not return data, even if they have a backlog.

Do queue handlers have exactly the same limits (CPU time, etc.) as a fetch handler would have in the same Worker? Is there more time for Unbound vs. Bundled, or do they behave differently, like Durable Objects?

Are you calling env.YOUR_BINDING_NAME.send() (and passing env in as an arg)? Posting your code would help to confirm. Could you log Object.keys(env) and paste the output? DM me if you prefer.
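For reference, a minimal producer sketch of that pattern, assuming a queue producer binding named MY_QUEUE declared in wrangler.toml (the binding name, queue name, and message shape are placeholders, not from this thread):

```ts
export interface Env {
  // Placeholder: must match the `binding` name of a [[queues.producers]] entry in wrangler.toml.
  MY_QUEUE: Queue;
}

export default {
  async fetch(request: Request, env: Env): Promise<Response> {
    // Confirm the binding is actually present on env.
    console.log(Object.keys(env));

    // Publish a message through the binding on env.
    await env.MY_QUEUE.send({ url: request.url, receivedAt: Date.now() });

    return new Response('Enqueued', { status: 202 });
  },
};
```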

You can check the queue property on the batch object to know which Queue you're consuming from in that particular execution.

You can remove a consumer from the dashboard (Workers > Overview > (your worker) > Triggers > Queues (bottom of page)) or by removing it from the wrangler.toml configuration and publishing the changes (a minimal wrangler.toml sketch follows the code example below).

Would you want to call .ack() on every message (or the entire batch) when handling it? .retry() would exist for the cases where you want to explicitly, negatively acknowledge.

For now you can publish from a Worker with YOUR_QUEUE.send(). In the longer term we're exploring other on-ramps to support this natively.

The idea with explicit acknowledgement is to let you call:
- .ack() to make partial progress through a batch, so that message is not retried if the handler errors
- .retry() or .retryAll() on a message or a batch to force a retry, even if the handler returns successfully

(See the consumer sketch after the code example below.) For example, a single consumer Worker can branch on batch.queue:

```ts
export default {
  // ...
  async queue(batch: MessageBatch, env: Env, ctx: ExecutionContext) {
    if (batch.queue === 'my-first-queue') {
      doSomething();
    } else if (batch.queue === 'my-other-queue') {
      doSomethingElse();
    }
    // ...
  },
};
```
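For the consumer-removal point above, a minimal wrangler.toml sketch; the queue name, binding name, and batch settings are placeholders rather than values from the thread. Deleting the [[queues.consumers]] block and publishing again detaches the Worker as a consumer.

```toml
# Producer binding, exposed to the Worker as env.MY_QUEUE (placeholder names).
[[queues.producers]]
queue = "my-first-queue"
binding = "MY_QUEUE"

# Consumer configuration: removing this block and re-publishing the Worker
# stops it from consuming "my-first-queue".
[[queues.consumers]]
queue = "my-first-queue"
max_batch_size = 10
max_batch_timeout = 30
```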
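And a sketch of how the explicit acknowledgement calls described above might be used in a consumer. handleMessage is a hypothetical helper, and the per-message ack()/retry() and batch-level retryAll() follow the behaviour discussed in the thread rather than a confirmed final API.

```ts
// Hypothetical helper standing in for your own per-message processing.
async function handleMessage(body: unknown): Promise<void> {
  // ...
}

export default {
  async queue(batch: MessageBatch, env: Env, ctx: ExecutionContext) {
    for (const message of batch.messages) {
      try {
        await handleMessage(message.body);
        // Acknowledge this message so it is not retried, even if a later
        // message in the batch causes the handler to throw.
        message.ack();
      } catch {
        // Explicitly, negatively acknowledge just this message so only it is redelivered.
        message.retry();
      }
    }
    // To force the whole batch to be redelivered even on success:
    // batch.retryAll();
  },
};
```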