Are there any publicly available benchmarks on durable object size vs startup time? I'm storing a lot of data in durable objects transiently (~30 GB, which will be sharded over multiple DOs, obviously), and latency is a concern. Sharding isn't a big deal, but I don't want to pre-emptively overshard for no reason.
This looks awesome. Is the setup for using this in a multi-worker situation the same as with the vitest integration? (Precompile TypeScript to JS and then manually configure the secondary workers with miniflare?)
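For reference, the vitest-integration setup I'm comparing against looks roughly like this (a sketch from memory; the auxiliary worker name, paths, and compat date are placeholders):
```ts
// vitest.config.ts
import { defineWorkersConfig } from "@cloudflare/vitest-pool-workers/config";

export default defineWorkersConfig({
  test: {
    poolOptions: {
      workers: {
        wrangler: { configPath: "./wrangler.toml" },
        miniflare: {
          // Auxiliary workers aren't built for you, hence the precompiled JS.
          workers: [
            {
              name: "secondary-worker",
              modules: true,
              scriptPath: "./dist/secondary-worker/index.js",
              compatibilityDate: "2024-09-23",
            },
          ],
        },
      },
    },
  },
});
```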
That's one of the problems with . The team has traditionally not considered docs (or quality, responsiveness on Discord, etc.) important, although they say they are working on improving them. Also, it must be said that they have always had great API design, even if you have to crawl through the code to discern it.
I have crawled around in their code, and my docs on how the CORS behavior works (it defaults to allowing everything) are accurate for routeAgentRequest, but I strongly recommend against using it if you are using WebSockets. They don't support the whitelist or custom validation function modes that mine does.
Issue: Sandbox SDK returning placeholder mode despite correct configuration: "Agent provisioned in placeholder mode (Cloudflare Sandbox API not configured)."
When my worker calls Sandbox.create it falls back to “placeholder mode” because the binding isn’t populated (runtime logs show Agent provisioned in placeholder mode (Cloudflare Sandbox API not configured)). It looks like Containers access hasn’t been enabled on my account yet.
Please do not post your question in multiple channels/post it multiple times per the rules at #welcome-and-rules. It creates confusion for people trying to help you and doesn't get your issue or question solved any faster.
i noticed that errors thrown from functions called across a workers RPC boundary don't contain the original stack trace (for example when calling a function on a durable object stub or a service binding)
the serialized error that crosses the RPC boundary only contains the error message but not the stack
i'd really like to retain the original stack trace. what's a good pattern for doing that? stuff the stack trace in the message? or could i throw a custom error with the stack trace serialized in a property? would that cross the RPC boundary?
and is there a place where i can intercept any errors thrown out of the RPC boundary from the callee's side?
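to make that concrete, something like this is what i have in mind on the callee side (rethrowWithStack and MyService are just made-up names, and i haven't verified whether custom error properties survive serialization, which is why this sketch folds the stack into the message instead):
```ts
import { WorkerEntrypoint } from "cloudflare:workers";

// Made-up helper: rethrow with the callee-side stack folded into the message,
// since the message is the one field I know makes it across the RPC boundary.
function rethrowWithStack(err: unknown): never {
  const e = err instanceof Error ? err : new Error(String(err));
  throw new Error(`${e.message}\n--- callee stack ---\n${e.stack ?? "<no stack>"}`);
}

export class MyService extends WorkerEntrypoint {
  async doWork(input: string): Promise<string> {
    try {
      return JSON.parse(input).value; // stand-in for real work that might throw
    } catch (err) {
      rethrowWithStack(err); // per-method interception point on the callee side
    }
  }
}
```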
I'm working on an app where I use a DO-per-user model. Given the release of Data Studio, allowing the inspection of a DO's SQLite storage, is there a way to assure users that I CAN'T inspect the state of their DOs? Basically, can I limit access to Data Studio?
Hi, is there any decent approach or documentation on how to make sure (or what guarantees there are) that a DO instance won't hibernate while a logical task is executing (in most cases involving IO)? Some clarity on this topic would help me design the system properly. Currently I'm hacking around it via repeated alarms, which reminds me of the old days of Android keep-awake hacks. Not good.
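For context, the alarm hack I mean is roughly this (class and method names are just illustrative):
```ts
import { DurableObject } from "cloudflare:workers";

export class TaskDO extends DurableObject {
  // Kick off a long IO-bound task and arm a short alarm so the instance
  // keeps being woken up while the task is marked as running.
  async runTask(): Promise<void> {
    await this.ctx.storage.put("taskRunning", true);
    await this.ctx.storage.setAlarm(Date.now() + 30_000);
    this.ctx.waitUntil(this.doWork());
  }

  async alarm(): Promise<void> {
    // Re-arm while the task is still in flight; stop once it's done.
    if (await this.ctx.storage.get("taskRunning")) {
      await this.ctx.storage.setAlarm(Date.now() + 30_000);
    }
  }

  private async doWork(): Promise<void> {
    // ... the actual IO-heavy work ...
    await this.ctx.storage.put("taskRunning", false);
  }
}
```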
It shouldn't hibernate when you are awaiting an external fetch (which is what I assume you mean by "IO"). The limit on CPU usage defaults to 30 seconds, but that clock doesn't run while you are only waiting on an external fetch. Also, it can be bumped up to 5 minutes.
I have a bunch of DO namespaces I want to convert from KV to SQL backend, is the official migration process coming soon or should I just create new namespaces and migrate data?
No worries, appreciate the clarity. I'll use it as an opportunity to rewrite everything while I'm at it; there have been plenty of changes I can take advantage of since I first built it all.
I probably shouldn't have used the word "clock". It's confusing. DOs are billed on wall-clock time. However, their run-length limits are on CPU time, which ends up being an order of magnitude or two more generous when external fetches are involved. It wouldn't surprise me if it were able to wait on a fetch for 20 minutes. You'll pay for the DO the whole time, but it would still use less than 30 seconds of CPU.
If I delete a DO's storage, let it evict, then request the same instance, will it always spawn in the same colo? I've gotten an instance in Perth which I'd really love to move closer to home.
If it's being spawned off of a request, it's usually pretty sticky from my experience, but if you specify a location hint instead I've had more success with it sometimes moving to a different datacenter. Not quite sure what the spawn algorithm is because it's seemingly slightly different each time I try.
For what it's worth, when I want a DO instance near a specific point, this is pretty inefficient but I just have a worker spawn and destroy DO instances with a region hint and slightly different names, then return the name/ID of the one in the POP I want.
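Roughly like this, if it helps (the PLACER_DO binding name, the /colo and /destroy routes on the DO, and the trace-based colo lookup are all assumptions on my part):
```ts
interface Env {
  PLACER_DO: DurableObjectNamespace; // assumed binding name
}

export default {
  async fetch(_req: Request, env: Env): Promise<Response> {
    const targetColo = "MEL"; // IATA code of the POP I want
    for (let i = 0; i < 10; i++) {
      const name = `probe-${targetColo}-${i}`;
      const id = env.PLACER_DO.idFromName(name);
      const stub = env.PLACER_DO.get(id, { locationHint: "oc" }); // Oceania hint
      // Assumes the DO exposes a route that reports its own colo, e.g. by
      // parsing "colo=" out of a fetch to https://cloudflare.com/cdn-cgi/trace.
      const colo = (await (await stub.fetch("https://do/colo")).text()).trim();
      if (colo === targetColo) {
        return new Response(name); // keep using this name from now on
      }
      await stub.fetch("https://do/destroy"); // assumed route that wipes storage
    }
    return new Response(`no probe landed in ${targetColo}`, { status: 503 });
  },
};
```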
The locations I wanna use are within the same location hint; the routing just seems to really suck in this case (going from one side of Australia to the other can be awful in terms of routing). I also don't really wanna have to stray away from the current name I am using for this instance.
Honestly, I'd just spin up a quick test worker to spawn and destroy a DO like 10 times with the same name and a set location hint. YMMV, but usually that's enough to get a couple of different POPs. If that works, then just do that again with the DO class and name you want until it's in the right region, then use it.
np and please let me know how it goes. I haven't experimented with coercing DOs to specific POPs outside of ENAM/WNAM and would love to know if that hack works elsewhere lol
It's never been a problem before; I normally always get them in Melbourne, which is close to home. There's only a 33% chance a DO will spawn in Perth via Melb instead of Sydney, so I just got unlucky with this one. I will test out hitting the same instance and see how I go.
That makes sense. I'm working on a small tool that I'll need to put small DOs in very specific areas (one of which is in Australia) so I've been playing around more and more with forcing the DO placement algo where I want lol. I kinda wish CF could just let me specify the IATA code of where I want it, but I'm guessing there are architecture limitations there
So to confuse myself more, I just requested a new instance (first one was named '2025', this one is '2026') and it ended up in the same colo but with far better response times when hot?? Def need to test this out more
In fact, if I remember correctly, once you create a DO with a specific name it's always in that colo, even if you delete, evict, and recreate it a while later (not 100% sure, but that's what I remember).
I've had this happen before, and I genuinely have no idea what causes it. Maybe different machines or something? But also usually for me when it happens, sometimes when its recreated out of idle later the latency changes again
rn my scheme is usually [IATA of incoming request worker]-[DO type/name]-[some sort of short random number or UUID] so that it shows up a little more nicely in logs, but I think I'm just going to go full IDs now
Yeah, basically if you use idFromName() the runtime has to check (globally) whether the DO exists already, whereas if you use newUniqueId() it's generated in a way that it will always be unique and doesn't need to do that double-check.
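Something like this, with MY_DO standing in for whatever the binding is actually called:
```ts
function getStubs(env: { MY_DO: DurableObjectNamespace }) {
  // Named ID: derived from a string, so the runtime has to do a global check
  // for an existing DO with this name before it can route to it.
  const namedId = env.MY_DO.idFromName("SYD-chat-2026");

  // Unique ID: generated so it can't collide, so no existence check is needed;
  // you just have to store or pass the ID around yourself.
  const uniqueId = env.MY_DO.newUniqueId();

  return { named: env.MY_DO.get(namedId), unique: env.MY_DO.get(uniqueId) };
}
```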