I switched to my phone hotspot and now get Melb on the cdn-cgi trace, but the Worker calls are still being routed through Perth. I think I'd better ignore this or it'll send me crazy.
Now, while I've been looking at this DO site, the Perth option has disappeared from Melbourne entirely. Hopefully someone has seen me complaining and is fixing this as we speak.
But, yes, your observation and what others said above are mostly true: a DO will, if possible, stay in the colo where it was initially created. There are cases where it will move out of that colo, for example when the colo has network issues and is taken out of the network for a while; the DOs hosted there will then move elsewhere in the same region.
In the long term, our hope is for DOs to move closer to requests more often, but that's not the case yet.
At the moment, yes, we keep a record of all created DOs, even ones without storage. As I said, though, I wouldn't depend on this, because we want to change it in the future. I'm just explaining why you're seeing the stickiness.
Has anyone used DOs to ingest/process WebRTC streams over WebSockets? I want to export a frame of a stream every X frames and send it off to be processed elsewhere. I know about Realtime and similar products, but there are other processes that would interact with this and benefit from it living on DOs.
I'm using the KV API on a SQLite-backed Durable Object, and these records don't show up in Data Studio. Has anyone heard whether this is something that will be added later?
This was mentioned by the devs as something they plan to support, at minimum as read-only. So our voices do help show there's demand for the functionality.
The class name is UserObject, but how does the Worker know where that class is? Does it scan every class in every file? Also, how do I get autocomplete when using the DO? How does TypeScript know what type of DO I'm working with when I get it?
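For what it's worth, the runtime doesn't scan files: the binding in your wrangler config maps a binding name to a class exported from the Worker's entry module, and typing that binding with the generic parameter is what gives you autocomplete. A minimal sketch, assuming the binding is called USER_OBJECT and a recent @cloudflare/workers-types:

```ts
// In wrangler.toml (shown here as a comment):
//   [[durable_objects.bindings]]
//   name = "USER_OBJECT"
//   class_name = "UserObject"   <- must match a class exported below

import { DurableObject } from "cloudflare:workers";

// Exporting the class from the entry module is how the runtime finds it.
export class UserObject extends DurableObject {
  async sayHello(): Promise<string> {
    return "hello";
  }
}

// The generic parameter is what types the stubs: stub.sayHello()
// is known to return Promise<string>.
interface Env {
  USER_OBJECT: DurableObjectNamespace<UserObject>;
}

export default {
  async fetch(request: Request, env: Env): Promise<Response> {
    const id = env.USER_OBJECT.idFromName("some-user");
    const stub = env.USER_OBJECT.get(id); // DurableObjectStub<UserObject>
    return new Response(await stub.sayHello());
  },
};
```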
I think it's pretty typical that we have a durable object represent some entity like a user or a session, so we use idFromName, and then have that DO handle things for that entity.
Once I'm in the DO, however, it doesn't know the original entity id. I'm awkwardly passing the entity id in via a public init method and then storing it in DO storage, so that when it wakes up after eviction (from a WebSocket message arriving or an alarm), it can "remember" its own entity id.
Are there other workarounds people are using that I'm not considering? Is there a less clunky way for the DO to be aware of its own entity id? Thanks!
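For illustration, a minimal sketch of that init-and-store pattern (EntityObject, init, and the storage key are all placeholder names):

```ts
import { DurableObject } from "cloudflare:workers";

export class EntityObject extends DurableObject {
  // Called by the Worker right after it creates the stub, since the DO
  // itself never learns the name it was addressed with via idFromName().
  async init(entityId: string): Promise<void> {
    await this.ctx.storage.put("entityId", entityId);
  }

  // After eviction, e.g. when an alarm fires, the id comes back from storage.
  async alarm(): Promise<void> {
    const entityId = await this.ctx.storage.get<string>("entityId");
    // ... handle the alarm for this entity ...
  }
}
```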
That's generally the best you can do at the moment. If you know when a DO is created, you could push in the DO's name only on creation; otherwise, it should be passed every time to ensure the DO has it in the event it's new.
I do it in a header for both HTTP and WebSocket upgrade requests. You can't set headers for WebSockets in the browser, but you can add them in the Worker fetch handler when it processes the WebSocket upgrade request. This is my utility for handling both HTTP and WebSocket DO requests: . If you use it, the DO will automatically get the headers on each request or WebSocket upgrade. You'll still have to pluck them out of the headers and stick them in storage. If you want to see how I do it, look at the source code here.
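In case it helps, the header trick for upgrades looks roughly like this in the Worker's fetch handler (the X-Entity-Id header and the binding name are placeholders of mine):

```ts
interface Env {
  ENTITY_OBJECT: DurableObjectNamespace;
}

export default {
  async fetch(request: Request, env: Env): Promise<Response> {
    const entityId = new URL(request.url).searchParams.get("entity") ?? "default";
    const stub = env.ENTITY_OBJECT.get(env.ENTITY_OBJECT.idFromName(entityId));

    // Browsers can't set custom headers on a WebSocket handshake, but the
    // Worker can add one before forwarding the upgrade request to the DO.
    const forwarded = new Request(request);
    forwarded.headers.set("X-Entity-Id", entityId);
    return stub.fetch(forwarded);
  },
};
```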
This is neat, thanks. I imagine this works really well if requests are always routed directly into the DO. (I've got a few other places accessing the DO, such as a scheduled handler, so perhaps for me the init method is more ergonomic because it works there too.)
Does anybody know whether a Durable Object is moved back to its original colo after it's been "moved" to a different colo during an outage (for example)? This is in reference to what Josh Howard said in this video with Aaron Francis. I would assume it does move back to its original colo, but it's one of those things that are annoying to test, haha, so I don't know the definitive answer.
23 seconds · Clipped by Jacob Marshall · Original video: "How Durable Objects and D1 Work: A Deep Dive with Cloudflare's Josh Howard" by Aaron Francis
I have a cron job (the scheduled export in the Worker entrypoint) which interacts with some DOs. In this handler, I get DO stubs using idFromName, call the init function so they know their own entity id, and then call whatever other DO methods I need. In this case it wouldn't be as nice to reimagine this as a fetch call where I inject my own headers.
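A sketch of that flow, assuming the EntityObject class from the earlier sketch plus a hypothetical runExport RPC method:

```ts
// EntityObject is the DO class from the earlier sketch (with init()
// plus a hypothetical runExport() method).
import { EntityObject } from "./entity-object";

interface Env {
  ENTITY_OBJECT: DurableObjectNamespace<EntityObject>;
}

export default {
  async scheduled(controller: ScheduledController, env: Env): Promise<void> {
    for (const entityId of ["user-a", "user-b"]) { // illustrative ids
      const stub = env.ENTITY_OBJECT.get(env.ENTITY_OBJECT.idFromName(entityId));
      await stub.init(entityId); // make sure the DO knows its own id
      await stub.runExport();    // then do the actual work over RPC
    }
  },
};
```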
Makes sense. Using Workers RPC is more efficient. If it's possible that the first access to a DO instance comes from this cron job's RPC call, then your init call is the only way I can think of to handle it too. However, if you can be sure the DO will always have been initialized before the cron job runs, you might be able to use the headers approach.
A nice pattern to handle this is to have init return an RpcTarget with the methods you want to be public, and make everything else on the DO private. This makes it impossible to access the DO without going through your init fn.
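A rough sketch of what I think that looks like, using the RpcTarget class from cloudflare:workers (names are illustrative):

```ts
import { DurableObject, RpcTarget } from "cloudflare:workers";

// The only surface callers can reach; it carries what it needs rather
// than reaching back into the DO's own methods.
class EntityApi extends RpcTarget {
  constructor(
    private storage: DurableObjectStorage,
    private entityId: string,
  ) {
    super();
  }

  async recordEvent(event: string): Promise<void> {
    await this.storage.put(`event:${Date.now()}`, { entityId: this.entityId, event });
  }
}

export class EntityObject extends DurableObject {
  // The single public entry point: every caller has to come through here.
  async init(entityId: string): Promise<EntityApi> {
    await this.ctx.storage.put("entityId", entityId);
    return new EntityApi(this.ctx.storage, entityId);
  }
}
```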
You can pipeline the init call to reduce round trips, and it can work out cheaper, since multiple methods invoked on the same RpcTarget instance count as a single session.
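On the calling side, pipelining means invoking a method directly on the promise init returns, so both calls travel together. Something like this, reusing the Env and EntityApi from the sketch above:

```ts
async function touchEntity(env: Env): Promise<void> {
  const stub = env.ENTITY_OBJECT.get(env.ENTITY_OBJECT.idFromName("user-a"));

  // Pipelined: recordEvent() is called on the promise init() returns,
  // so the two calls go out without an extra round trip.
  await stub.init("user-a").recordEvent("cron-tick");

  // Or hold the RpcTarget for several calls on one session; `using`
  // disposes the stub (ending the session) when the scope exits.
  using api = await stub.init("user-a");
  await api.recordEvent("first");
  await api.recordEvent("second");
}
```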
Are there any footguns with that approach? For instance, do such sessions time out? Or is that a non-issue when calling from a cron Worker? Might it be an issue when calling from another DO?
If you're using the "using" keyword, it should be disposed automatically. I'm unsure about max or idle timeouts. It's a good idea to have a wrapper on the calling side that does retries with backoff.
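That calling-side wrapper could be as simple as a generic exponential-backoff helper; nothing Workers-specific, just a sketch:

```ts
// Retry an async operation with exponential backoff: 100ms, 200ms, 400ms...
async function withRetries<T>(
  fn: () => Promise<T>,
  attempts = 3,
  baseDelayMs = 100,
): Promise<T> {
  let lastError: unknown;
  for (let i = 0; i < attempts; i++) {
    try {
      return await fn();
    } catch (err) {
      lastError = err;
      if (i < attempts - 1) {
        await new Promise((resolve) => setTimeout(resolve, baseDelayMs * 2 ** i));
      }
    }
  }
  throw lastError;
}

// Usage: build a fresh pipelined call on each attempt rather than
// reusing a possibly-broken session.
// await withRetries(() => stub.init("user-a").recordEvent("cron-tick"));
```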