I doubt that. You will always have less memory than what people need for bulk imports. Streaming is the way to go.
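A minimal sketch of what the streaming approach could look like, assuming the import arrives as a Web Streams ReadableStream body (countLines and the newline-delimited format are hypothetical, just to show memory staying bounded by chunk size rather than payload size):

```typescript
// Process an upload chunk by chunk instead of buffering the whole body.
// Only the current chunk plus a small carry-over string are ever in memory.
async function countLines(body: ReadableStream<Uint8Array>): Promise<number> {
  const decoder = new TextDecoder();
  let carry = ""; // partial line spanning a chunk boundary
  let lines = 0;
  const reader = body.getReader();
  for (;;) {
    const { done, value } = await reader.read();
    if (done) break;
    const text = carry + decoder.decode(value, { stream: true });
    const parts = text.split("\n");
    carry = parts.pop() ?? ""; // keep the unterminated tail for the next chunk
    lines += parts.length;     // only complete lines are counted per chunk
  }
  if (carry.length > 0) lines++; // final line without a trailing newline
  return lines;
}
```

The same shape works for inserting rows as they arrive: replace the line counter with a per-line write and backpressure comes for free from awaiting the reads.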
If you call storage.put() multiple times without awaiting, it will group all the writes into a single round trip.

If you put to the same key multiple times without awaiting, will it write to that key multiple times, or will it only write the last one?

Values you stick onto env won't reach the DO; you need to pass them in your RPC call to the DO.

Rather than the constructor, I might make an init method so I don't have to pass it on every call.
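A toy in-memory model of that coalescing question (CoalescingStorage is entirely hypothetical, not Cloudflare's implementation); it assumes that since un-awaited puts are batched into one round trip, a later put to the same key overwrites the buffered value and only the last one is actually written:

```typescript
// Illustration only: un-awaited puts land in a pending buffer; a repeat put
// to the same key replaces the buffered entry, so the flushed batch carries
// one write per key (last write wins).
class CoalescingStorage {
  private store = new Map<string, unknown>();
  private pending = new Map<string, unknown>(); // buffered, un-flushed writes

  put(key: string, value: unknown): void {
    this.pending.set(key, value); // same key overwrites the pending entry
  }

  // One simulated "round trip": flush every buffered write as a single batch
  // and report how many distinct writes it contained.
  flush(): number {
    const batchSize = this.pending.size;
    for (const [k, v] of this.pending) this.store.set(k, v);
    this.pending.clear();
    return batchSize;
  }

  get(key: string): unknown {
    return this.pending.has(key) ? this.pending.get(key) : this.store.get(key);
  }
}
```

Under this model, three puts to 'a' plus one to 'b' flush as a batch of two entries, with 'a' holding only the final value.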
So I can call init(uid) the first time the user object is created, save the user's id to storage, and this way it will be available in storage whenever I'm calling that DO again, right? Just need to read it back in the constructor.

Internal error in Durable Object storage write caused object to be reset.
Network connection lost.


I would assume acceptWebSocket() does some storage operation, like reading something and updating it with the new socket, since it needs to somehow persist that.

I expected this.ctx.id.name to return me the string used when calling const stub = env.MY_DO.idFromName('foo'). It works fine if I try it on the stub (e.g. stub.name), but it doesn't work inside the DO itself: this.ctx.id.name is undefined. The ID itself is there, and since the type is DurableObjectId I expected to be able to get the name too. Is this not the case?

I wonder why name is available on there at all, really, considering it can only be read right after you retrieved a DO with said string.

app.get('/avatars', async c => {
const user = c.get('user')
c.env.uid = user.uid // is this a bad practice?
const userObjectId = c.env.USER_OBJECT.idFromName(user.uid)
const userObject = c.env.USER_OBJECT.get(userObjectId)
const avatars = await userObject.getAvatars()
return c.json(avatars)
})

ctx.blockConcurrencyWhile(async () => {
  this.uid = await ctx.storage.get('uid')
})
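Putting the pieces together, the init pattern being discussed might look like the sketch below. The Storage and State interfaces are simplified stand-ins for Cloudflare's DurableObjectState (assumed here only so the sketch runs outside a Worker); UserObject and init mirror the names used in the thread:

```typescript
// Minimal stand-ins for the parts of DurableObjectState used below
// (assumptions, not Cloudflare's real types).
interface Storage {
  get(key: string): Promise<unknown>;
  put(key: string, value: unknown): Promise<void>;
}
interface State {
  storage: Storage;
  blockConcurrencyWhile(fn: () => Promise<void>): void;
}

class UserObject {
  uid?: string;

  constructor(private ctx: State) {
    // Rehydrate uid before any request is delivered: blockConcurrencyWhile
    // holds incoming calls until the callback resolves.
    ctx.blockConcurrencyWhile(async () => {
      this.uid = (await ctx.storage.get("uid")) as string | undefined;
    });
  }

  // Called once, the first time the object is created for a user. After this,
  // uid survives evictions via storage, so callers don't need to pass it on
  // every RPC.
  async init(uid: string): Promise<void> {
    this.uid = uid;
    await this.ctx.storage.put("uid", uid);
  }
}
```

A new instance constructed against the same storage then comes up with uid already populated, which is exactly the "set it once, read it in the constructor" behavior described above.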