Facets

GitHub
Experimental feature: Durable Object Facets by kentonv · Pull Requ...
This is a new experimental Durable Objects feature. It requires the --experimental CLI flag and experimental compat flag to use. I fully expect that we may end up changing the design once we…
12 Replies
Larry · 2w ago
@kenton , this is really cool! Some questions...
1. Actor programming model: is your vision for this to take DOs one step closer to the Actor model, but in a Cloudflare way? The "broken" behavior is like supervisory control in Erlang/BEAM, right?
2. Does this give us a single-machine way to scale above 10GB?
3. Storage consistency across facet boundaries: you won't have consistency guarantees for commits that hit two facets the way you do within a single DO. A parent facet can wait until a child returns before doing its own storage writes, but it would still have to roll back the child's commit if its own commit failed. Do you have a vision for how to do this? Maybe you could lock the child facets until a parent "transaction" is complete?
4. What is SRS storage?
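The parent-waits-for-child ordering described in question 3 can be modeled as a toy compensation scheme. This is plain JavaScript, not the Durable Objects storage API; `makeStore` and `commitPair` are illustrative names invented for the sketch:

```javascript
// Toy model of cross-facet commit ordering: the parent waits for the
// child's commit, then commits itself, rolling the child back if its
// own commit fails. Illustrative only — not the real DO storage API.
function makeStore() {
  let committed = null;
  return {
    commit(value) { committed = value; },
    rollback() { committed = null; },
    get() { return committed; },
  };
}

async function commitPair(child, parent, childValue, parentValue, parentFails) {
  child.commit(childValue); // child commit becomes visible first
  try {
    if (parentFails) throw new Error("parent commit failed");
    parent.commit(parentValue);
  } catch (err) {
    child.rollback(); // compensate: undo the child's already-visible commit
    throw err;
  }
}
```

Note the window where the child's commit is visible but the parent's isn't — which is exactly why Larry suggests locking the child facets until the parent "transaction" completes.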
kenton · 2w ago
So this is a very experimental idea that sort of enables things like Workers for Platforms and Dynamic Worker Loading to make sense with Durable Objects. You can actually run someone else's DurableObject class as a child of your own, with control over its lifecycle and the ability to sandbox it properly if desired.
1. I actually don't know Erlang, so any similarity is coincidental / convergent evolution.
2. This isn't intended to break the 10GB barrier, which is more about the cost of pulling a DO's data out of cold storage on demand. At present we don't have the ability to pull partial data, so the bigger a DO gets, the longer it takes to cold-start (where "cold start" here isn't about the isolate, but rather about getting the data onto local disk). Our efforts right now are mostly focused on minimizing the frequency of cold starts, rather than trying to make it possible to start execution with partial data.
3. It should indeed be possible to provide consistency guarantees across facet boundaries, since the underlying storage system (at least in prod) actually commits the whole group together. But I haven't been focused on this so far, so at present there are no such guarantees.
4. SRS is Storage Relay Service, the storage system underlying SQLite-backed Durable Objects in production.
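For readers who haven't opened the PR, a pseudocode sketch of the idea as described here — a parent DO running another class as a named child facet, with the parent controlling its lifecycle. The method name `ctx.facets.get` and the option shape are assumptions taken from the PR description of this experimental feature, not a stable API:

```js
// Pseudocode — experimental API, expect the design to change (see the PR).
export class Parent extends DurableObject {
  async fetch(request) {
    // Hypothetical: obtain (starting if needed) a child facet running
    // someone else's class, sandboxed and lifecycle-managed by the parent.
    const child = this.ctx.facets.get("tenant-a", () => ({
      class: TenantCode, // e.g. a dynamically loaded Worker's DO class
    }));
    return child.fetch(request);
  }
}
```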
Larry · 2w ago
Thanks
João Castro (OP) · 2w ago
I've read very little about Dynamic Worker Loading, but at first glance it sounds like it could become a more powerful and flexible version of what Workers for Platforms is today. Do you think this could end up happening, or do they serve different purposes?
kenton · 2w ago
Yes, Dynamic Worker Loading is an alternative to WfP dynamic dispatching. Product-wise we consider worker loaders to be a WfP feature, even though the technical implementation is pretty separate.
darkpool · 4d ago
Not sure if my question is obvious, but what exactly is separate in the implementations (WfP & dynamic loading vs. "native" workers)? I don't know much about V8 isolates, but I thought they were already sandboxed, so one isolate's code would not affect another isolate. Anyway, I can't wait to use them in the public beta; they basically solve what I wanted WfP to solve and will eliminate so much complexity for me.
kenton · 4d ago
Some Workers APIs assume that the code on an account is trusted by the account. For example, the cache API lets you write to the HTTP cache for the zone the worker is running on. Malicious code could poison the cache, which could impact everything else running on the zone. WfP and dynamic isolates both disable this API, because they assume that you do not trust the code you are running.
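The "disable this API" idea is essentially capability filtering: untrusted code gets an environment with the dangerous bindings removed. A minimal plain-JavaScript sketch — the binding names are made up, and this is not how workerd implements it internally:

```javascript
// Capability-style sandboxing sketch: build the env handed to untrusted
// code by omitting trusted capabilities such as the zone-level cache.
// Binding names are illustrative, not real workerd internals.
function envForUntrustedCode(trustedEnv) {
  const { caches, ...rest } = trustedEnv; // drop the cache capability
  return rest; // untrusted code can't poison a cache it never receives
}

const trustedEnv = { caches: { /* zone cache handle */ }, KV: { /* kv handle */ } };
const sandboxedEnv = envForUntrustedCode(trustedEnv);
```

In a capability model like this, sandboxing is a matter of simply not handing over the object, rather than adding permission checks everywhere.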
darkpool · 4d ago
Awesome, thanks!
João Castro (OP) · 2d ago
Apart from storage, are resources shared between all facets, or would each of them have, for example, their own thread? This could make life a lot easier when you need a bit more throughput and don't want the complexity and latency that come with the network layer. I suppose there would be a limit on how many facets you could run on the same machine, but it could still be very useful in some cases.
kenton · 2d ago
All facets run on the same thread. This isn't meant to be a mechanism for parallelism; it's a mechanism for composition and sandboxing. The right way to achieve parallelism is DO replication (but I can't remember if that's been released publicly yet...). In general we aim to provide parallelism that is cross-machine, because we really don't want to be stuck with workloads involving multiple threads that must be colocated on a single machine -- that would make load balancing much harder for us.
João Castro (OP) · 2d ago
Do you think having a group of DOs that always spawn in the same datacenter (with no guarantees as to which machine it would run on) could become a thing? I don't know a lot about how load balancing is managed for DOs/Workers and how much flexibility is needed to make it efficient, so I don't have a very good idea about what could be reasonable here.
kenton · 2d ago
If you have one DO create the others, they should naturally end up in the same datacenter.
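Concretely, "one DO creates the others" just means the parent obtains the child stubs from inside its own handler. A sketch using the standard Durable Objects namespace binding API (`CHILD_NS` is an illustrative binding name; the parent class would extend `DurableObject` from `cloudflare:workers`):

```js
// Inside a parent Durable Object: reach children through a namespace
// binding. newUniqueId() provisions the new object near the caller,
// so the group tends to land in the same datacenter — though with no
// guarantee about which machine each one runs on.
export class Parent extends DurableObject {
  async spawnChild() {
    const id = this.env.CHILD_NS.newUniqueId(); // CHILD_NS: illustrative binding
    await this.ctx.storage.put("childId", id.toString()); // remember it for later
    return this.env.CHILD_NS.get(id); // stub to the child DO
  }
}
```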
