That's not slow, and the worker being both the producer and the consumer won't affect the speed in any way. A queue lives in a specific location, while Workers run in every datacenter Cloudflare has
there are around 300 locations that run Workers; that would be way too many queues, and it wouldn't actually be a queue at that point unless your users all happen to be in the same region
At that point, though, you might as well not have a queue unless you're doing thousands of requests per second, because chances are only a tiny fraction of your traffic shares the same datacenter
In this specific case, where I think 160 ms is too much, I use the queue to guarantee the write to D1 without compromising the original request, returning success as quickly as possible.
I'm using Durable Objects, but since the DB needs to be SQL, I didn't see any other way. Despite this, in my tests, delegating the write to a queue reduced the response time by up to 100 ms.
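The pattern described above can be sketched roughly like this: the Worker enqueues the D1 write and returns success immediately, and a queue consumer does the actual write later. `QueueLike` and the `WriteJob` shape are local stand-ins for the real Cloudflare bindings, not the actual API.

```typescript
// Assumed shape of a queue binding, for illustration only.
interface QueueLike<T> {
  send(msg: T): Promise<void>;
}

type WriteJob = { table: string; value: string };

// Producer: respond fast, defer the D1 write to the queue.
async function handleRequest(body: string, queue: QueueLike<WriteJob>): Promise<number> {
  await queue.send({ table: "events", value: body }); // cheap enqueue, no D1 round trip
  return 200; // success returned before any D1 write happens
}

// In-memory stand-in for the real Queue binding.
class FakeQueue implements QueueLike<WriteJob> {
  messages: WriteJob[] = [];
  async send(msg: WriteJob): Promise<void> {
    this.messages.push(msg);
  }
}
```

The consumer Worker would then drain the batch and run the actual D1 statements, so a slow write never sits on the request path.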
I mean, you can probably do the same number of rps to D1 that you can to Queues, given they both use Durable Objects under the hood, so you might as well just write to D1 directly instead of going through a queue
I just found a better way: calling D1 directly without compromising the request. Since the DO still lives for 10 seconds after the request, I can call save without awaiting; just to be safe, I wrap it in waitUntil. As long as there is no possibility of the write failing, there is no need for a queue.
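A minimal sketch of that fire-and-forget write: `Ctx` mirrors the shape of `ExecutionContext.waitUntil` in Workers, and `save` stands in for the D1 call; both names are assumptions for illustration.

```typescript
// Assumed shape of the Workers execution context.
interface Ctx {
  waitUntil(p: Promise<unknown>): void;
}

function handle(body: string, save: (v: string) => Promise<void>, ctx: Ctx): number {
  // No await: the response can go out immediately. waitUntil keeps the
  // worker alive until the promise settles, but it does NOT retry a
  // failed write -- the logged error below is all you get.
  ctx.waitUntil(save(body).catch(err => console.error("write failed:", err)));
  return 200;
}
```

This is the trade-off raised later in the thread: the user sees a success even if the deferred write ends up failing.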
actually I ran some tests and it does. They limited the in-memory lifetime to 10 seconds. With waitUntil I kept it alive for 30 seconds. Not sure if it can live longer; I will test
The documentation changed when it moved to its own section, and some details seem to have been forgotten (or I can't find them), but as you can see from this message in the old docs: durable-objects
waitUntil / non-awaited DO promises also have zero guarantees that the promise actually completes, and it doesn't retry any operations, so there is a chance you return a success to the user but the write never succeeds
Not sure what is happening, but I created a simple counter and I can keep it in memory for at least 2 minutes when using waitUntil with setTimeout. I am not persisting to storage. After 2 minutes it gets cleaned up.
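The keep-alive trick in that experiment boils down to registering a long setTimeout inside waitUntil, which leaves an unsettled promise around; per the observation in this thread (not any documented guarantee), that can keep the DO's in-memory state alive past the ~10 s window.

```typescript
// Ctx stands in for the Workers execution context (an assumption here).
function keepAlive(ctx: { waitUntil(p: Promise<unknown>): void }, ms: number): void {
  // The promise only settles after `ms`, so the runtime sees pending work
  // and keeps the object in memory in the meantime.
  ctx.waitUntil(new Promise<void>(resolve => setTimeout(resolve, ms)));
}
```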
any requests that hit the DO will extend how long it lives, and if there are any unsettled promises it might stay alive longer than the documented 10 s, but having an unsettled promise doesn't guarantee it will eventually be fulfilled/rejected
the core difference, to my understanding, is that it's essentially single-socket-per-connection based, meaning you can reduce the number of requests/responses between endpoints
Apologies for the late response. From my understanding, the product that you're offering is very similar in functionality to RabbitMQ, which is a product that I use a lot. I would love to be able to use your product (when it is ready for this kind of testing) as a drop-in replacement - especially in the scenario of using an edge worker to enqueue messages to the queue for asynchronous processing.
otherwise they're similar products. The big appeal of RabbitMQ for me is the AMQP protocol, which is essentially socket based, so a new connection doesn't need to be made every time messages are fetched by the consumer (or the ingestor)
and support for the AMQP protocol would mean I could use an existing library instead of having to write a whole new one (even if one is provided as an SDK)
here's a quick illustration of what I'm currently dealing with. What I'd like (in order to reduce the number of "subrequests") is to route from a Worker into a queue, and then consume the queue from my applications, which are not Workers (and are too complicated to rebuild in that environment)
Is there a way to consume an event after a 10-minute delay instead of right away? I'm communicating with an IoT device API, and sometimes the third party is unreliable or down for a couple of hours, so I have to retry every 30 minutes.
I actually have not, because I thought Queues was the recommended solution for simply processing events. I just need a way to send a command to an IoT device; if the consumer Worker's fetch call fails, I just want it to retry. Do you think I should look at it? If so, why? Thank you. I don't really understand the major difference between Queues and Pub/Sub, besides that Pub/Sub has topics and subscribers, while a queue is just a linear FIFO list of events waiting to be consumed in order.
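The delayed-delivery and retry behavior asked about here can be sketched like this: `delaySeconds` on `send()` and `retry({ delaySeconds })` on a message match the shape of the Cloudflare Queues API, but the interfaces below are local stand-ins, and `callDevice` is a hypothetical helper for the IoT fetch call.

```typescript
// Assumed stand-in for a Queue binding that supports delayed delivery.
interface DelayQueue<T> {
  send(msg: T, opts?: { delaySeconds?: number }): Promise<void>;
}

// Assumed stand-in for a message in a queue consumer batch.
interface MessageLike<T> {
  body: T;
  ack(): void;
  retry(opts?: { delaySeconds?: number }): void;
}

// Producer: first delivery happens 10 minutes after enqueue.
async function sendCommand(queue: DelayQueue<string>, cmd: string): Promise<void> {
  await queue.send(cmd, { delaySeconds: 600 });
}

// Consumer: ack on success, otherwise schedule a retry in 30 minutes.
async function consume(
  msg: MessageLike<string>,
  callDevice: (cmd: string) => Promise<boolean>,
): Promise<void> {
  if (await callDevice(msg.body)) {
    msg.ack();
  } else {
    msg.retry({ delaySeconds: 1800 });
  }
}
```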
I just want to point out that HTTP requests beyond HTTP/1.1 are also kept alive if the implementation is decent. I'd expect a future queue HTTP API to even support HTTP/3, which would let you pull a lot of messages over a single UDP "connection". I highly doubt you'll have performance problems that aren't solved by a better HTTP implementation.