Nice!
Well... we're using one queue at the moment because we don't have a better solution for the following requirement: all of the messages in question perform operations on users in our database (either a specific user or a batch of a whole bunch of users). If these batch jobs all ran concurrently, there'd be a lot of contention, with jobs potentially trying to operate on the same users at the same time.
So we use a single queue with a concurrency of 1 for basically everything that operates on users. It could be a bit of a bottleneck... but the throughput is fine at the moment and it lets us eliminate contention between all these batch jobs. Can't think of a better way of achieving this?
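To make the shape of this concrete, here's a purely illustrative sketch of the "concurrency of 1" idea: one in-process queue drained by exactly one worker, so user-touching jobs serialize instead of contending. All names here are made up; our real setup uses a proper job queue, not threads.

```python
import queue
import threading
from typing import Callable

# Every job that touches user rows goes through this one queue.
user_jobs: "queue.Queue[Callable[[], None]]" = queue.Queue()

def worker() -> None:
    while True:
        job = user_jobs.get()
        try:
            job()  # e.g. a batch update over many user rows
        finally:
            user_jobs.task_done()

# Concurrency of 1: start exactly one worker, so jobs can never overlap.
threading.Thread(target=worker, daemon=True).start()

results = []
user_jobs.put(lambda: results.append("sync-user-42"))
user_jobs.put(lambda: results.append("recompute-all-users"))
user_jobs.join()
# Jobs run strictly in enqueue order, one at a time.
```

The trade-off is exactly the one described above: a global ordering bottleneck in exchange for zero contention on user rows.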
Some of these batch jobs are triggered basically on a CRON, so they should just be discarded if still unprocessed by the time the next CRON triggers another one.
And some batch jobs are critical events that must be ingested eventually and therefore not discarded.
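One way to sketch the two job flavours is an expiry timestamp per job: CRON jobs expire at the next scheduled trigger (so a stale one gets dropped in favour of its successor), while critical jobs carry no expiry and are never discarded. `Job` and `process` are hypothetical names for illustration only.

```python
import time
from dataclasses import dataclass
from typing import Callable, Optional

@dataclass
class Job:
    run: Callable[[], None]
    # CRON-style jobs set expires_at to the next scheduled trigger time;
    # critical jobs leave it as None so they are never discarded.
    expires_at: Optional[float] = None

def process(jobs: list[Job]) -> list[str]:
    log = []
    for job in jobs:
        if job.expires_at is not None and time.time() >= job.expires_at:
            log.append("discarded")  # stale CRON job: a newer run supersedes it
            continue
        job.run()
        log.append("ran")
    return log

done = []
stale = Job(run=lambda: done.append("cron"), expires_at=time.time() - 1)
fresh = Job(run=lambda: done.append("cron"), expires_at=time.time() + 3600)
critical = Job(run=lambda: done.append("critical"))  # no expiry: must run eventually
outcome = process([stale, fresh, critical])
```

In a real queue library this would map to something like a per-job TTL or "discard if superseded" policy rather than hand-rolled checks.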
Hopefully that clarifies our use-case.
Right now, we just concede that the CRON batch jobs will keep accruing if unprocessed, which isn't perfect, but it only happens when some part of the processing pipeline is down anyway, so it's not the biggest deal.