Not sure what Argo Smart Routing would do here? The chain is worker <-> queue <-> worker <-> external API. I suppose doing the worker <-> external API hop via Argo would be possible (?), not sure though. But really what I'd want is for one of the two other connections to do this. Pinning either the queue or the second worker to a specific region would get me all the benefits for free, since all the latency would be hidden
Basically, create a few DOs as close to your origin as possible, then save their IDs. Whenever you need to run code close to the origin, forward the request to those DOs, and you are good to go
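Something like this, as a minimal sketch (the binding name ORIGIN_PROXY, the env var ORIGIN_PROXY_ID, and origin.example.com are all placeholders): the DO gets created near wherever the first request to it came from, so you trigger that first request from close to your origin, persist the ID string, and later route through it with idFromString.
```ts
export class OriginProxy {
  constructor(private state: DurableObjectState, private env: Env) {}

  async fetch(request: Request): Promise<Response> {
    // Runs in the colo where this DO lives, i.e. close to the origin.
    return fetch("https://origin.example.com/api", request);
  }
}

interface Env {
  ORIGIN_PROXY: DurableObjectNamespace;
  ORIGIN_PROXY_ID: string; // hypothetical: the saved DO ID string
}

export default {
  async fetch(request: Request, env: Env): Promise<Response> {
    // Look up the previously created DO by its saved ID and forward the request.
    const id = env.ORIGIN_PROXY.idFromString(env.ORIGIN_PROXY_ID);
    const stub = env.ORIGIN_PROXY.get(id);
    return stub.fetch(request);
  },
};
```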
fair enough, I'd learn about the datacenter going down anyways and could just "move" them back later. How does the routing work here? It would be worker <-> DO so I'm assuming that'd be similar to Argo since the worker would still be placed by some unknown force?
Queue/PubSub: No idea
Cron: Anywhere with low traffic, carbon-neutral datacenter if you enable Green Compute
Alarms: In the colo where your DO was created.
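For the Alarms point, a rough sketch of what that looks like (the class name Scheduler is just illustrative); the alarm() handler fires in the same colo the DO was created in:
```ts
export class Scheduler {
  constructor(private state: DurableObjectState, private env: Env) {}

  async fetch(_request: Request): Promise<Response> {
    // Schedule an alarm 60 seconds out if one isn't already pending.
    const existing = await this.state.storage.getAlarm();
    if (existing === null) {
      await this.state.storage.setAlarm(Date.now() + 60_000);
    }
    return new Response("alarm scheduled");
  }

  async alarm(): Promise<void> {
    // Runs in the colo where this DO lives.
    console.log("alarm fired");
  }
}

interface Env {}
```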
I'll ask in the Queues channel. I'd guess there's something similar going on as with Alarms, so maybe I can somehow get the queue created in my respective data center. Thanks a lot for the help already!
Just a heads-up that the #cloudflare-for-saas channel has now been archived because it didn't have much activity. If you have any questions about SaaS, feel free to ask on the Community: https://community.cloudflare.com
Could you use a Worker as a WebSocket client? Like, say I make a request to wake up a Worker, that Worker then connects to a WebSocket server and waits for messages, and when new messages come in it processes them (or a Durable Object, doesn't really matter)
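Roughly what I'm picturing, as a sketch (the binding SOCKET_CLIENT and messages.example.com are made up, and I'm assuming the connection lives in a DO since a plain Worker invocation can't hang around indefinitely):
```ts
export class SocketClient {
  private ws?: WebSocket;

  constructor(private state: DurableObjectState, private env: Env) {}

  async fetch(_request: Request): Promise<Response> {
    if (!this.ws) {
      // Outbound WebSocket: fetch with an Upgrade header, then accept the socket.
      const resp = await fetch("https://messages.example.com/feed", {
        headers: { Upgrade: "websocket" },
      });
      const ws = resp.webSocket;
      if (!ws) return new Response("upgrade failed", { status: 502 });
      ws.accept();
      ws.addEventListener("message", (event) => {
        // Process each incoming message here.
        console.log("got message", event.data);
      });
      ws.addEventListener("close", () => (this.ws = undefined));
      this.ws = ws;
    }
    return new Response("connected");
  }
}

interface Env {
  SOCKET_CLIENT: DurableObjectNamespace;
}

// "Wake up" the connection from an ordinary Worker request.
export default {
  async fetch(_request: Request, env: Env): Promise<Response> {
    const stub = env.SOCKET_CLIENT.get(env.SOCKET_CLIENT.idFromName("singleton"));
    return stub.fetch("https://do/connect");
  },
};
```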