
Is it possible to get a reference to another Durable Object, e.g. env.MY_DURABLE_OBJECT.idFromName("foo"), once inside of a DO instance? At one point I know the answer used to be "no", but I'm wondering if this has changed. (A sketch of what this could look like follows below, after the quoted excerpt.)

Quoting Cloudflare's write-up on how SRS confirms writes:

To solve this problem, we take advantage of our global network. Every time SQLite commits a transaction, SRS will immediately forward the change log to five "follower" machines across our network. Once at least three of these followers respond that they have received the change, SRS informs the application that the write is confirmed. (As discussed earlier, the write confirmation opens the Durable Object's "output gate", unblocking network communications to the rest of the world.)
When a follower receives a change, it temporarily stores it in a buffer on local disk, and then awaits further instructions. Later on, once SRS has successfully uploaded the change to object storage as part of a batch, it informs each follower that the change has been persisted. At that point, the follower can simply delete the change from its buffer.
[...]
Each of a database's five followers is located in a different physical data center. Cloudflare's network consists of hundreds of data centers around the world, which means it is always easy for us to find four other data centers nearby any Durable Object (in addition to the one it is running in). In order for a confirmed write to be lost, then, at least four different machines in at least three different physical buildings would have to fail simultaneously (three of the five followers, plus the Durable Object's host machine). Of course, anything can happen, but this is exceedingly unlikely.
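To make the quorum step concrete, here is a rough illustrative sketch (my own, not Cloudflare's SRS code; confirmWrite, sendToFollower and the follower IDs are hypothetical names) of confirming a write as soon as three of five acknowledgements have arrived:

// Illustrative sketch only, not SRS itself: a write is treated as "confirmed" as
// soon as a quorum of follower acknowledgements arrives, without waiting for all.
// `sendToFollower` is a hypothetical transport function.
async function confirmWrite(
  changeLog: Uint8Array,
  followers: string[],                                    // e.g. five follower machines
  sendToFollower: (id: string, change: Uint8Array) => Promise<void>,
  quorum = 3,                                             // acks needed before confirming
): Promise<void> {
  await new Promise<void>((resolve, reject) => {
    let acked = 0;
    let failed = 0;
    for (const id of followers) {
      sendToFollower(id, changeLog).then(
        () => { if (++acked === quorum) resolve(); },     // quorum reached: write is durable
        () => {                                           // a follower is unreachable
          if (++failed > followers.length - quorum) {
            reject(new Error("not enough reachable followers for a quorum"));
          }
        },
      );
    }
  });
  // Only now would the application be told the write is confirmed
  // (i.e. the Durable Object's "output gate" could open).
}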
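As for the question above about addressing another Durable Object from inside a DO instance: I can't confirm what the thread concluded, but based on the current Workers types the bindings on a DO's env look the same as in a Worker, so a call would look roughly like the sketch below (binding and class names are placeholders, not from the thread):

// Sketch with placeholder names: inside a Durable Object, `this.env` carries the
// same bindings a Worker sees, so another object can be addressed by name.
import { DurableObject } from "cloudflare:workers";

interface Env {
  MY_DURABLE_OBJECT: DurableObjectNamespace;
}

export class MyObject extends DurableObject<Env> {
  async fetch(request: Request): Promise<Response> {
    const id = this.env.MY_DURABLE_OBJECT.idFromName("foo");  // name-based routing
    const stub = this.env.MY_DURABLE_OBJECT.get(id);          // stub for that object
    return stub.fetch(request);                               // forward the request to it
  }
}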
However, I can't find a clear answer on how to use SQL transactions with SQLite, and whether they may have a significant impact on performance for tables with a few thousand rows.

So, is the performance of transactions your question?
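(For reference while reading the reply below, here is my own sketch, not code from the thread, of how several sql.exec calls can be grouped into one transaction on a SQLite-backed DO using ctx.storage.transactionSync(); the async ctx.storage.transaction() works similarly. Class, table, and method names are made up.)

// Sketch with made-up names: batching inserts inside one SQLite transaction
// via the synchronous transaction helper on a SQLite-backed Durable Object.
import { DurableObject } from "cloudflare:workers";

export class ItemStore extends DurableObject {
  insertBatch(rows: { id: string; value: number }[]): void {
    // Illustrative schema; creates the table used below if it doesn't exist yet.
    this.ctx.storage.sql.exec(
      "CREATE TABLE IF NOT EXISTS items (id TEXT PRIMARY KEY, value INTEGER)"
    );
    // Everything inside the closure runs in a single transaction; if the
    // closure throws, the writes are rolled back.
    this.ctx.storage.transactionSync(() => {
      for (const row of rows) {
        this.ctx.storage.sql.exec(
          "INSERT INTO items (id, value) VALUES (?, ?)",
          row.id,
          row.value,
        );
      }
    });
  }
}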
Note that if you run some of the inserts, then await something asynchronous (e.g. a fetch) and then the rest, your DO could be processing other requests before returning to do the remaining inserts. Any sql.exec operation within the closure of ctx.storage.transaction() / ctx.storage.transactionSync() will run all those queries within the same transaction, and throwing within that closure is similar to rolling back.

Has anyone run into the "Durable Object's isolate exceeded its memory limit and was reset." error?

My wrangler.{env}.json configs both deployed fine in CI (a Worker plus a few Durable Objects), but I'm now getting "Cannot apply new-sqlite-class migration to class 'MyObject' that is already depended on by existing Durable Objects [code: 10074]" with a configuration like this:

/**
* For more details on how to configure Wrangler, refer to:
* https://developers.cloudflare.com/workers/wrangler/configuration/
*/
{
"$schema": "node_modules/wrangler/config-schema.json",
"name": "my-app",
"main": "src/index.ts",
"compatibility_date": "2025-02-24",
"compatibility_flags": [
"nodejs_compat"
],
"observability": {
"enabled": true
},
"durable_objects": {
"bindings": [
{
"name": "Brain",
"class_name": "Brain"
},
{
"name": "Chat",
"class_name": "Chat"
},
]
},
"migrations": [
{
"tag": "v1",
"new_sqlite_classes": [
"Brain"
]
},
{
"tag": "v1",
"new_sqlite_classes": [
"Chat"
]
}
]
}

Note the two migration entries above sharing the tag "v1"; the configuration below instead declares both classes under a single migration entry:

/**
* For more details on how to configure Wrangler, refer to:
* https://developers.cloudflare.com/workers/wrangler/configuration/
*/
{
"$schema": "node_modules/wrangler/config-schema.json",
"name": "my-app",
"main": "src/index.ts",
"compatibility_date": "2025-02-24",
"compatibility_flags": ["nodejs_compat"],
"observability": {
"enabled": true
},
"durable_objects": {
"bindings": [
{
"name": "Brain",
"class_name": "Brain"
},
{
"name": "Chat",
"class_name": "Chat"
}
]
},
"migrations": [
{
"tag": "v1",
"new_sqlite_classes": ["Brain", "Chat"]
}
]
}