Not out of the box yet: you have to write the cross-db query logic yourself.
Also, migrations need to run on all of them.
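That part is basically just a loop over every shard. A minimal sketch, assuming D1 bindings; the shard list and the ALTER TABLE statement are made up for illustration:

```ts
// Apply the same DDL to every shard ("migrations need to run on all").
async function migrateAll(shards: D1Database[]): Promise<void> {
  const ddl = "ALTER TABLE customers ADD COLUMN vip INTEGER DEFAULT 0";
  // Run serially so a failure stops the rollout early and you know
  // exactly which shard to fix before continuing.
  for (const db of shards) {
    await db.exec(ddl);
  }
}
```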
How would you like this to work? Say you could have exactly the system you need: what would it look like? What would it do for you, and what would you like to do?
Thanks very much for asking. I'm still trying to solve the problem of making writes go to the DB nearest the customer. So if my customer is in Australia, it should write to a DB there, even if the worker is running in the USA (most workers will run within the customer's region; the 0.1% of writes that are customer support will come from the USA).
Ideally I still want one DB, as multi-DB is really hard to work with. There are multiple cases where data needs to live in a single DB, like users shared across customers.
I think in my ideal system I specify the region on my reads and writes, and I guarantee my IDs are unique (UUIDs, not incrementing numbers), so it's strongly consistent provided I named the same region, and it becomes consistent everywhere else within X seconds of propagation.
@DangerZone , I suggest you look at the new beta SQLite API inside of DOs. The limit there is currently 1GB, but it will go up to 10GB at the end of the beta. So it won't help you scale up a single DB's size, but it might be more convenient than "thousands" of D1 DBs, because DO code runs in the same thread as the DB. With D1, dynamic binding would be necessary to manage that many, whereas with DOs you wouldn't need it. Also, if you want to aggregate a query result across 1000s of DBs, you can do any needed manipulations and reduction locally, then send those reduced results to a central aggregator.
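A rough sketch of that fan-out/reduce pattern, assuming SQLite-backed DOs with RPC-style stubs; `ShardDO`, the `SHARD` binding, and `countByCountry` are illustrative names, not a real API:

```ts
import { DurableObject } from "cloudflare:workers";

export interface Env {
  SHARD: DurableObjectNamespace<ShardDO>;
}

// One SQLite-backed DO per shard (needs a new_sqlite_classes migration
// while the SQLite storage API is in beta).
export class ShardDO extends DurableObject {
  // Runs in the same thread as the DB and returns a pre-reduced result
  // instead of shipping raw rows over the network.
  countByCountry(): Record<string, number> {
    const counts: Record<string, number> = {};
    const cursor = this.ctx.storage.sql.exec(
      "SELECT country, COUNT(*) AS n FROM customers GROUP BY country"
    );
    for (const row of cursor) {
      counts[row.country as string] = row.n as number;
    }
    return counts;
  }
}

export default {
  async fetch(_req: Request, env: Env): Promise<Response> {
    const SHARD_COUNT = 100; // with thousands, fan out in batches
    const partials = await Promise.all(
      Array.from({ length: SHARD_COUNT }, (_, i) =>
        env.SHARD.get(env.SHARD.idFromName(`shard-${i}`)).countByCountry()
      )
    );
    // Central aggregator: merge the small per-shard results.
    const total: Record<string, number> = {};
    for (const partial of partials) {
      for (const [country, n] of Object.entries(partial)) {
        total[country] = (total[country] ?? 0) + n;
      }
    }
    return Response.json(total);
  },
};
```

Each shard reduces its own rows in-thread, so only tiny partial results cross the network; with thousands of shards you'd batch the fan-out to stay under subrequest limits.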
Further thinking on my ideal system: if no region is specified, then it works as it does today. It writes to the default region and waits until it's strongly replicated across all regions, so it's strongly consistent regardless of region.
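To make that concrete, here's a purely hypothetical API shape for what I'm describing; none of this exists in Cloudflare today:

```ts
// Hypothetical sketch of the region-aware DB described above; nothing here
// is a real Cloudflare API. UUID keys mean no cross-region coordination is
// needed for uniqueness.
interface RegionalDB {
  // With a region: the write lands on that region's primary, and a read
  // naming the same region is immediately strongly consistent; other
  // regions catch up within the propagation bound.
  // Without a region: behaves like today, writing to the default region
  // and waiting for strong replication everywhere.
  write(opts: { region?: string; table: string; row: Record<string, unknown> }): Promise<void>;
  read(opts: { region?: string; sql: string; params?: unknown[] }): Promise<unknown[]>;
}

// e.g. an Australian customer's order, written from a US support worker:
// await db.write({
//   region: "oceania",
//   table: "orders",
//   row: { id: crypto.randomUUID(), customerId, total },
// });
```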
Not sure if this is possible or planned, but this would let my global app work fast globally, as my bottleneck right now is my Postgres DB for writes: a lot of admin work happens in various parts of the world. (Reads are fine; they're replicated and cached.)
That might work; SQLite does support some powerful FTS capabilities. Our use case is to create a full-text-search service on addresses though, so we will have to be creative with our sharding mechanism (I'm still unsure about the sharding algorithm to support that).
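Something like an FTS5 virtual table per shard could work here. A hedged sketch against a DO's SqlStorage handle; the table and column names are made up, and this assumes the underlying SQLite build ships FTS5, as D1's does:

```ts
// Illustrative only: an FTS5-backed address search inside one shard.
function searchAddresses(sql: SqlStorage, query: string) {
  sql.exec(
    "CREATE VIRTUAL TABLE IF NOT EXISTS addresses_fts USING fts5(street, city, postcode, country)"
  );
  // Prefix matching ('pari*') lets partially typed addresses still match.
  return sql
    .exec(
      "SELECT street, city, postcode FROM addresses_fts WHERE addresses_fts MATCH ? LIMIT 10",
      query
    )
    .toArray();
}

// searchAddresses(ctx.storage.sql, "rue pari*");
```

One possible direction for the sharding question: key shards on something address-local like country or postcode prefix, so a typical search touches a single shard; that's just one option, though.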
The D1 team currently wants you to shard your data among many D1 DBs rather than one big one. It could be sharded by whatever makes sense for your app: by users, by company, etc.
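For the routing that sharding implies, a deterministic hash of the shard key keeps each tenant on one DB. A sketch, where the `D1_SHARD_n` binding names and the FNV-1a hash are illustrative choices, not a D1 API:

```ts
interface Env {
  // One binding per shard, e.g. separate [[d1_databases]] entries in wrangler.toml.
  [binding: string]: D1Database;
}

const SHARD_COUNT = 8;

// FNV-1a: a tiny, stable string hash so a tenant always lands on the same shard.
function fnv1a(s: string): number {
  let h = 0x811c9dc5;
  for (let i = 0; i < s.length; i++) {
    h ^= s.charCodeAt(i);
    h = Math.imul(h, 0x01000193);
  }
  return h >>> 0;
}

function shardFor(env: Env, companyId: string): D1Database {
  return env[`D1_SHARD_${fnv1a(companyId) % SHARD_COUNT}`];
}

// Usage: every query for a company goes to its home shard.
// const db = shardFor(env, "acme-inc");
// await db.prepare("SELECT * FROM invoices WHERE company_id = ?").bind("acme-inc").all();
```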
Well, I recreated the DB a couple of times in the previous weeks, so I think it is the current version. Honestly, I don't know how to determine the location of the DB, so this is from the location of the bound worker with Smart Placement. The first DB I created was in Madrid, the second in Paris, and now Hamburg. I tried with location hints and without. In the previous days I did a migration to Neon (https://neon.tech) because we had a launch date and this was very urgent. I used the same worker, just refactored the database part from D1 to Neon's HTTP proxy and WebSocket APIs. This is now blazing fast! Yeah, I will definitely talk with Cloudflare and raise our issues. Thank you!
Potential bug report: DB ID 75ef58ba-b54f-411d-bec1-a184986f143c, logs: {"TimestampMs":1729112916377},{"Level":"error","Message":["Error: D1_ERROR: Error: Internal error while starting up D1 storage caused object to be reset."],"TimestampMs":1729112932597}
No. I think it's common but not normal, and to be expected as they have a threshold of allowed errors; or I'm completely wrong and this is actually a resurfaced bug. I will try to find the message.