Okay makes sense. What about DOs with unique IDs though? Their location doesn’t have to be cached, that was part of the design. Is that still true? Meaning even if that DO was temporarily resumed on another colo, would it “snap back” to the colo encoded one that is operational again?
TBH @brett_h knows the internals better than I do, but IIRC this is a known issue right now where after a failover to a different colo, we don't proactively migrate the object back.
That is my understanding... there will probably be much better coverage worldwide, but probably not every single colo. But I am not the definitive source on this.
For us, round-trip latency to the DO is what matters. If there were none in South America or Australia, for example, we'd have to keep our own servers running there, which would make our session management much more complex than switching over to Cloudflare completely.
Yes, that's what I was thinking... today only São Paulo gets my regular worker traffic, and our DOs are all in New Jersey. We will release our solution this month and keep it in closed beta until the end of the year, but after that all those incremental delays will be very bad for us: sometimes we need to wait 2-3 seconds to update one document because of several round trips.
We have other solutions that we plan to migrate to Cloudflare Workers + DOs that need more coverage worldwide, but we are using this first product to benchmark our needs before we start our main product migration.
Out of curiosity, if we supported "ephemeral objects" -- like durable objects, but with no persistent storage -- in all colos, would that solve your problem?
That would work, yes, since I would just need to delegate storage requests to another DO, but those wouldn't be in the critical latency path. Most convenient for that would be if I could create a new websocket connection from that ephemeral DO to the persistent DO.
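To make the delegation idea concrete, here is a minimal sketch of the pattern being described: an ephemeral object close to the client serving the latency-critical path, forwarding storage off the critical path to a persistent object. Plain classes stand in for Durable Object stubs (there is no Cloudflare runtime here), and the names `EphemeralDO`/`PersistentDO` are illustrative, not real APIs.

```typescript
// The "persistent" object owns the storage (stand-in for a real DO
// with durable storage in a supported colo).
class PersistentDO {
  private store = new Map<string, unknown>();
  async put(key: string, value: unknown): Promise<void> {
    this.store.set(key, value);
  }
  async get(key: string): Promise<unknown> {
    return this.store.get(key);
  }
}

// The "ephemeral" object runs close to the client and acknowledges
// writes immediately, replicating to the persistent object
// asynchronously so storage latency stays off the client's path.
class EphemeralDO {
  private cache = new Map<string, unknown>();
  private pending: Promise<void>[] = [];
  constructor(private backing: PersistentDO) {}

  // Serve reads from local state; fall back to the backing object.
  async get(key: string): Promise<unknown> {
    if (this.cache.has(key)) return this.cache.get(key);
    const value = await this.backing.get(key);
    this.cache.set(key, value);
    return value;
  }

  // Acknowledge the write locally; replicate in the background.
  put(key: string, value: unknown): void {
    this.cache.set(key, value);
    this.pending.push(this.backing.put(key, value));
  }

  // Wait for outstanding replication (e.g. before shutting down).
  async flush(): Promise<void> {
    await Promise.all(this.pending);
    this.pending = [];
  }
}
```

In the real setup the forwarding would go over a fetch or WebSocket to the persistent DO's stub rather than a direct method call, but the latency trade is the same.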
I don't think so, as the main reason that Itty-Durable exists is to remove the need for performing fetch requests to your DO, which aren't necessary for KV anyway.
yea he had mentioned having the blog post written back when the change was deployed, but holding off on the blog post until after "impact week". Although pricing around DO is not completely nailed down yet, I'd be surprised if cached reads were not billed at the same rate. I guess we'll find out : )
imo variables every time - but unfortunately it's still possible to store too much (even like 12mb each), so that a bunch of these instantiated into the same isolate will collectively blow the per-isolate memory limit. But that will depend on how you are accessing them, how many you have at once, etc
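The arithmetic behind that warning is simple: per-instance caches are duplicated for every object loaded into the isolate, so memory scales with instance count. A rough model (plain classes, no Cloudflare runtime; the per-instance size is just the ~12 MB figure from above, and the isolate limit is 128 MB for Workers at the time of writing):

```typescript
const MB = 1024 * 1024;

// Each instance holds its own ~12 MB in-memory cache; nothing is
// shared between instances.
class HeavyDO {
  buf = new Uint8Array(12 * MB);
}

// Estimate total bytes for n co-located instances (ignoring runtime
// overhead). Ten of these would need ~120 MB, already brushing a
// 128 MB isolate limit.
function estimateBytes(n: number, perInstance: number = 12 * MB): number {
  return n * perInstance;
}
```

Moving the cache to a module-level variable shared by all instances avoids the multiplication, at the cost of the instances seeing each other's data.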
Even more convenient might be an "ephemeral/durable hybrid" object that automatically delegates storage ops to elsewhere - trading storage latency for client latency if the colo doesn't support DOs directly.
Am I interpreting this correctly that it is possible a worker and a DO may be loaded into the same isolate if they are the same script, sharing the same global scope? Or put differently, every isolate loads the whole script, and then it may receive a worker fetch as well as a DO instantiation+fetch?
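The situation being asked about can be sketched like this: one module that both serves regular Worker fetches and defines a DO class. If the runtime loads a worker request and a DO instantiation into the same isolate, both see the same module-level globals. (Modeled with plain functions and classes; the request/handler shapes are simplified stand-ins for the real Workers types, not the actual API.)

```typescript
// Module-level state: one copy per isolate, visible to both the
// worker handler and any DO instances loaded alongside it.
let isolateRequestCount = 0;

// Stand-in for the script's default fetch handler.
const worker = {
  async fetch(_req: unknown): Promise<string> {
    isolateRequestCount++; // a worker fetch bumps the shared counter
    return `worker: request #${isolateRequestCount} in this isolate`;
  },
};

// Stand-in for the exported DO class in the same script.
class MyDurableObject {
  async fetch(_req: unknown): Promise<string> {
    isolateRequestCount++; // so does a DO fetch, if co-located
    return `DO: request #${isolateRequestCount} in this isolate`;
  }
}
```

If the counter keeps incrementing across worker and DO fetches, they landed in the same isolate; a fresh isolate starts the count over, so nothing about this sharing is guaranteed or durable.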