Number of objects is unlimited. Number of namespaces (i.e. classes) is limited. Though I'm a bit surprised the limit is 20 - that seems like something we should increase.
When I list my namespaces using the REST API, I only get id/name/script/class; there's no info about the environment. I just tested using --env and the effect is the same, as each env is also another new script.
I don't know exactly what a reasonable number would be, but if the intention is to have several small DOs, I was expecting something like 300 (i.e. 30 scripts x 10 DOs) ... I don't plan to actually use that many, that was just my expectation.
How do environments affect that number? I'd think if my script has 20 different DOs I could still separate testing from production by deploying them into different environments, right?
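For what it's worth, I think the config side looks roughly like this - a sketch of a wrangler.toml where the same class is bound per environment (worker and binding names are made up), since each env uploads its own script:

```toml
# Hypothetical wrangler.toml: the same DO class bound in two environments.
# Each environment is its own script, so each gets its own namespaces.
name = "my-worker"

[durable_objects]
bindings = [
  { name = "COUNTER", class_name = "Counter" }
]

[env.staging.durable_objects]
bindings = [
  { name = "COUNTER", class_name = "Counter" }
]
```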
I suppose it'd be possible to assume the main module is ES syntax, but then people might be confused why their main module is allowed to be but all other modules have to be ...
Is it possible / wise to do fetch requests in the constructor of a Durable Object? E.g. when a durable object is (re)created, I want to update some information (mainly the location / colo of this durable object) in our database.
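Something like this is what I have in mind - a rough sketch, assuming state.blockConcurrencyWhile() is the right way to run async work kicked off from the constructor (the registry endpoint is made up):

```js
export class LocationReporter {
  constructor(state, env) {
    this.state = state;
    this.env = env;
    // Hold delivery of events until the registration fetch settles;
    // constructors can't be async, so this is the usual workaround.
    this.state.blockConcurrencyWhile(async () => {
      try {
        // Hypothetical endpoint recording that this DO (re)started.
        await fetch("https://example.com/api/do-registry", {
          method: "POST",
          body: JSON.stringify({ id: this.state.id.toString() }),
        });
      } catch (e) {
        // Don't let a failed report wedge the object forever.
        console.log("registration failed:", e);
      }
    });
  }

  async fetch(request) {
    return new Response("ok");
  }
}
```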
Is there a way to easily identify a variable as a DO id or DO name? I think I saw a snippet somewhere but can't find it. Edit: I found something in the chat example actually
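For anyone else looking: the trick in the chat demo is roughly this - a stringified DO id is always 64 hex characters, so you can pattern-match on that (the rooms binding is from the demo):

```js
// A stringified DurableObjectId is 64 hex chars; anything matching
// that pattern is treated as an id, everything else as a name.
let id;
if (name.match(/^[0-9a-f]{64}$/)) {
  id = env.rooms.idFromString(name);
} else {
  id = env.rooms.idFromName(name);
}
```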
ES modules can import CommonJS modules in our system, just like in Node. We felt that it would be too harsh to say that people can't use CommonJS libraries at all if they use ES modules.
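i.e. something like this should work, same as it would in Node (file names are made up):

```js
// legacy-lib.cjs (CommonJS):
//   module.exports = { greet: (name) => `hello, ${name}` };

// Main module (ES syntax): the CJS module.exports shows up as the default export.
import legacy from "./legacy-lib.cjs";

export default {
  async fetch(request) {
    return new Response(legacy.greet("world"));
  },
};
```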
but it definitely can burn you in certain scenarios if you try to keep a bunch of things cached in memory, and expect to be able to instantiate several instances of the DO all at once
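e.g. a minimal sketch of the kind of thing that bites you - each live instance keeps its own copy of the cache, and every instance packed into the same isolate shares that isolate's memory limit:

```js
export class CachedThing {
  constructor(state, env) {
    this.state = state;
    // Per-instance cache: fine for one object, but N live instances
    // in one isolate means N of these competing for the same memory.
    this.cache = new Map();
  }

  async fetch(request) {
    const key = new URL(request.url).pathname;
    if (!this.cache.has(key)) {
      this.cache.set(key, await this.state.storage.get(key));
    }
    return new Response(JSON.stringify(this.cache.get(key) ?? null));
  }
}
```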
Ah okay. While experimenting and working on a chat application I got intrigued by scaling, and by how websocket / pub/sub systems scale (horizontally) to millions of connections; it's interesting that you basically need some kind of Redis or in-memory DB instance (which doesn't really scale horizontally) to get that far. I can imagine that once I have millions of websocket connections open on a DO, it'll probably hit that memory limit.
yea, you'd hit it before millions, and should probably shard them across multiple DOs at that point. It would be awesome to have a runtime API for observing current memory usage - then you could get a rough estimate of how much each additional websocket connection costs memory-wise.
I was actually about to request that today hahaha. I've been thinking about how you would scalably set up a pub/sub system on Workers / Durable Objects. Having some sort of memory indication might help to prevent an object from going over, and tell you when to boot up more DOs to handle that scale.
E.g. you'd have a primary DO handling storage and 'children'; all the children of a DO connect to their parent through WebSockets and can individually spawn 'children' too.
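Roughly what I mean, as a sketch - the CHILD binding, shard count, and relay protocol are all made up, and I'm assuming the usual stub.fetch() + Upgrade: websocket dance for DO-to-DO sockets (the child side, which would answer with a WebSocketPair, is elided):

```js
export class ParentHub {
  constructor(state, env) {
    this.env = env;
    this.children = null; // sockets to child DOs, opened lazily
  }

  async connectChildren() {
    this.children = [];
    for (let shard = 0; shard < 4; shard++) {
      const id = this.env.CHILD.idFromName(`child-${shard}`);
      // DO-to-DO WebSocket handshake via the stub.
      const resp = await this.env.CHILD.get(id).fetch("https://child/connect", {
        headers: { Upgrade: "websocket" },
      });
      const ws = resp.webSocket;
      ws.accept();
      this.children.push(ws);
    }
  }

  async fetch(request) {
    if (!this.children) await this.connectChildren();
    // Relay each published message to every child; each child fans out
    // to the client sockets (or grandchildren) it owns.
    const msg = await request.text();
    for (const ws of this.children) ws.send(msg);
    return new Response("ok");
  }
}
```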
haha, they've stacked 21 of my larger objects into the same isolate - which is kind of a big problem - I wish you could specify the "sharing" level or a target mem usage when making the first stub call; that would give them more info, on a per-instance basis, about when to share or not
it's a tough problem, because some DOs could get away with tons of sharing and there's no way to really tell ahead of time - though they could certainly use the runtime info from prior instantiations. Anyway, a way for us to specify this at either the class or the instance (stub call) level would be awesome.
I get the sense DOs are not really meant to be large and centralized - it would be difficult to build a centralized pub/sub bus on them. It seems like they're going for smaller, mostly user-level-state durable objects.