Any details to help an investigation? Account/database IDs, time ranges? There isn't any ongoing D1 service incident. There was a transient network blip a few hours ago, but it wasn't consistent and only affected some locations.
where app1 and app2 are two separate Workers with separate wrangler files that share the same DB instance and generally share the same Worker logic implemented in packages/worker. Can I somehow point the local DBs at the same DB, or are they forced to be generated in their own local .wrangler folders?
Hey, yesterday, during our burst load, we received 153 "D1_ERROR: D1 DB is overloaded. Too many requests queued." errors on our D1 DB while serving about 161k queries over ~30 minutes. It's a lot of queries, and it seems like we were right at the edge of a single D1 DB's capacity. Do you at Cloudflare see more statistics about databases, hidden from us users, that might be interesting?
Hi! From an internal computer at our company, we would like to export our production D1 database every 10 minutes (from the CLI: wrangler d1 export). Can someone from the CF staff confirm that this is tolerated and will not be blocked or treated as against CF policies, please?
There is no policy against doing that, but keep in mind that while exporting, your database is blocked from serving any queries. So doing it every 10 minutes, especially if the database is more than a few hundred MBs, is not going to be a nice experience for your users.
Every 10 minutes seems excessive, since you can just read the database directly or use Time Travel for restores between your backup periods. Is there something in the use case that requires that frequency?
Tricky question. To get scale, what many people on here do is switch from using D1 to using Durable Objects, which allow you to split the work and data storage up across many instances. However, you said "highly relational data", and that's more complex when your relationships are on the other side of a network hop. There are strategies for doing that, like the Actor model (and Cloudflare has an Actor base class for Durable Objects now). Even so, I would only recommend that if there is a relatively clean way to split up the work/storage (per user, per tenant, etc.). So for consistently "highly relational data", your best choice might be to use an external scale-up style database solution. Cloudflare uses Postgres for some of its own internal needs, so there is good support for doing that. Check out Cloudflare's Hyperdrive offering if you think that's the way to go for you: https://developers.cloudflare.com/hyperdrive/
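To make the Actor idea a bit more concrete, here's a rough sketch of a per-tenant SQLite Durable Object. The class name, table, and method are made up for illustration, and the class would still need a SQLite-backed migration declared in your wrangler config:

```ts
import { DurableObject } from "cloudflare:workers";

// Each TenantActor instance owns the SQLite data for one tenant, so
// relational queries stay local to that instance instead of crossing the network.
export class TenantActor extends DurableObject {
  async createOrder(customerId: number, total: number): Promise<number> {
    const sql = this.ctx.storage.sql;
    sql.exec(
      "CREATE TABLE IF NOT EXISTS orders (id INTEGER PRIMARY KEY, customer_id INTEGER, total REAL)"
    );
    // Insert into this tenant's local SQLite database and return the new row id.
    const row = sql
      .exec("INSERT INTO orders (customer_id, total) VALUES (?, ?) RETURNING id", customerId, total)
      .one();
    return row.id as number;
  }
}
```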
Hyperdrive is a service that accelerates queries you make to existing databases, making it faster to access your data from across the globe from Cloudflare Workers, irrespective of your users' location.
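For reference, querying Postgres through Hyperdrive from a Worker looks roughly like this. This sketch assumes a Hyperdrive binding named HYPERDRIVE, the node-postgres (pg) driver, and the nodejs_compat flag; the table and column names are placeholders:

```ts
import { Client } from "pg";

interface Env {
  HYPERDRIVE: Hyperdrive;
}

export default {
  async fetch(request: Request, env: Env, ctx: ExecutionContext): Promise<Response> {
    // Hyperdrive gives the Worker a connection string that routes through its pooling/caching layer.
    const client = new Client({ connectionString: env.HYPERDRIVE.connectionString });
    await client.connect();
    try {
      const result = await client.query("SELECT id, name FROM customers LIMIT 10");
      return Response.json(result.rows);
    } finally {
      // Close the connection once the response has been sent.
      ctx.waitUntil(client.end());
    }
  },
};
```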
Thank you very much, this makes a lot of sense. I was actually looking at the exact same page you linked before seeing your message, and indeed it perfectly fits our needs, as we already have a Postgres database and it is almost impossible to break the data model up into an actor model since it is tightly coupled. However, I have two questions:
1. Will Cloudflare support higher DO or D1 limits in a pay-per-usage model, similar to something like R2, in the future? 2. Does Hyperdrive cache queries inside transactions (SELECT statements within a transaction), or does it only cache SELECT statements outside transactions?
1. I don't know of any plans for that, but I don't work for Cloudflare and I never say never. 2. My knowledge of Hyperdrive consists of being aware that it exists and skimming that page a few times in the past. Sorry. Others on here may have hands-on experience.
How should I plan the database split for an application with heavy writes and reads, given that the hard limit for a DB is just 10 GB? 1. Create databases each holding a limited number of related tables, or 2. replicate the whole DB structure, scale horizontally, and use the Workers to route queries to the right DB? Please answer.
Which is the best practice? What does Cloudflare recommend? What's the next best alternative for this?
If you need a handful of DBs, D1 works fine: pre-create all the DBs upfront, attach them to your Worker, and use them as you wish.
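For that case, the Worker just picks among its D1 bindings, something like the sketch below (the binding names and the routing rule are illustrative):

```ts
interface Env {
  DB_EU: D1Database;
  DB_US: D1Database;
}

export default {
  async fetch(request: Request, env: Env): Promise<Response> {
    // Route to one of the pre-created D1 databases based on a request attribute.
    const region = new URL(request.url).searchParams.get("region");
    const db = region === "eu" ? env.DB_EU : env.DB_US;

    const { results } = await db
      .prepare("SELECT id, email FROM users WHERE active = ?")
      .bind(1)
      .all();
    return Response.json(results);
  },
};
```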
If you need hundreds/thousands/millions of these DBs, then it's much better to use SQLite Durable Objects. It's the same underlying storage and SQLite engine, but you get a much easier dev experience for using as many DBs as you wish.
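On the Worker side, "one DB per tenant" then just means deriving a Durable Object id from a name. A rough sketch, assuming a binding named TENANT_ACTOR pointing at a class like the TenantActor sketched a few messages up (the import path is illustrative):

```ts
import { TenantActor } from "./tenant-actor"; // wherever the class from the earlier sketch lives

interface Env {
  TENANT_ACTOR: DurableObjectNamespace<TenantActor>;
}

export default {
  async fetch(request: Request, env: Env): Promise<Response> {
    // Every distinct tenant name maps to its own Durable Object instance,
    // and therefore its own SQLite database.
    const tenant = new URL(request.url).searchParams.get("tenant") ?? "default";
    const stub = env.TENANT_ACTOR.get(env.TENANT_ACTOR.idFromName(tenant));

    // RPC call into the per-tenant instance.
    const orderId = await stub.createOrder(42, 19.99);
    return Response.json({ tenant, orderId });
  },
};
```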
Cloudflare also offers other storage solutions such as Workers KV, Durable Objects, and R2. Each product has different advantages and limits. Refer to Choose a data or storage product to review which storage option is right for your use case.
D1 allows you to capture exceptions and log errors returned when querying a database. To debug D1, you will use the same tools available when debugging Workers.
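For example, catching and logging a D1 error from a Worker looks roughly like this (the binding and table names are placeholders):

```ts
interface Env {
  DB: D1Database;
}

export default {
  async fetch(request: Request, env: Env): Promise<Response> {
    try {
      const { results } = await env.DB
        .prepare("SELECT * FROM users WHERE id = ?")
        .bind(1)
        .all();
      return Response.json(results);
    } catch (e) {
      // D1 surfaces failures as exceptions; the message typically starts with "D1_ERROR".
      const message = e instanceof Error ? e.message : String(e);
      console.error("D1 query failed:", message);
      return new Response("Database error", { status: 500 });
    }
  },
};
```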
Can you give an example of what you are doing at the moment and what the error/problem is? I don't know much about Python Workers, but seeing what you are doing will help us troubleshoot.
I have a repo that uses D1 with a very standard setup: local D1 for development, remote D1 for production. Does anyone know of an example repo that demonstrates good test practices for D1? Test isolation, etc.?
I'm sure you get this feedback a lot, but with Workers now supporting preview deploys it would be good to have proper branch-based D1 environments that are linked to a preview build. Today, as far as I'm aware, you can only have a single preview D1 instance used across all branches. This is quite limiting compared to other setups like Neon or Supabase with, say, Vercel.
Yeah, this would be cool. We've discussed it a few times, and once we implement fork/clone this will come next. No defined ETA unfortunately, though, as this work hasn't started yet.
Thanks for the insight. It did seem like it would need a bunch of additional features, which might build on Time Travel, which I think was added more recently.