Production ready: yes, D1 is Generally Available. Maximum limit: no, it cannot be raised. The intention is that you use multiple smaller DBs (i.e. one per customer) instead of one large one, since the infrastructure is designed for many smaller individual DBs.
You can use either a Worker binding or the Cloudflare REST API (which is rate-limited and slower than a Worker binding). There's no "connection string" or the like, though, because SQLite has no standardized connection protocol, so anything that connects to D1 from outside a Worker you'll have to write yourself.
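For reference, a minimal sketch of the Worker-binding route. It assumes wrangler.toml declares a d1_databases binding named "DB"; the `users` table is illustrative:

```ts
// Minimal sketch: querying D1 from a Worker through a binding.
// Assumes a d1_databases binding named "DB" in wrangler.toml;
// the `users` table is made up for the example.
export interface Env {
  DB: D1Database;
}

export default {
  async fetch(request: Request, env: Env): Promise<Response> {
    const { results } = await env.DB
      .prepare("SELECT id, name FROM users WHERE id = ?")
      .bind(1)
      .all();
    return Response.json(results);
  },
};
```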
the reason there's a limit, and why you'd be forced to use dynamic bindings, is just file size. it's not practical to load a 10 GB database file for every request
Hi Isaac, since you're recommending multiple smaller DBs: is there any guidance on running migrations across multiple DBs? Also, can we bind D1 databases dynamically now?
Hey, quick question: is there an easy way to find the most "expensive" (most reads/writes) queries in my code (e.g. a list in the Cloudflare dashboard) without manually checking the metadata of every single query?
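One way to surface this without checking each call by hand: D1 returns rows_read and rows_written on every result's meta, so a thin wrapper can log them and the costly queries show up in wrangler tail or Workers Logs for sorting. A hedged sketch (the helper name is made up):

```ts
// Hypothetical helper: run a query and log its cost from result.meta,
// so expensive statements surface in `wrangler tail` / Workers Logs
// instead of being inspected one by one.
async function runLogged<T = unknown>(
  db: D1Database,
  query: string,
  ...params: unknown[]
): Promise<D1Result<T>> {
  const result = await db.prepare(query).bind(...params).all<T>();
  console.log(
    JSON.stringify({
      query,
      rows_read: result.meta.rows_read,
      rows_written: result.meta.rows_written,
      duration_ms: result.meta.duration,
    })
  );
  return result;
}
```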
Hi all, I'm using Astro.build with the Cloudflare integration, and I'm trying to add a staging and a prod database beside the local dev one. Anyone have an idea how to achieve that, so the API endpoints pick the local DB in dev and the staging or prod DB in staging or prod respectively? I've added a different DB binding under each environment in wrangler.toml, but I have no idea and can't find any info on how to make the API endpoint pick the right DB in each environment (using
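Not Astro-specific, but a hedged sketch of the usual pattern: keep the binding name identical across environments so endpoint code always reads env.DB, and let `wrangler dev` / `wrangler deploy --env staging` / `--env production` decide which database that name resolves to. Database names and IDs below are placeholders:

```toml
# Same binding name ("DB") in every environment; --env picks the database.
[[d1_databases]]
binding = "DB"
database_name = "my-app-dev"
database_id = "<dev-id>"

[[env.staging.d1_databases]]
binding = "DB"
database_name = "my-app-staging"
database_id = "<staging-id>"

[[env.production.d1_databases]]
binding = "DB"
database_name = "my-app-prod"
database_id = "<prod-id>"
```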
Howdy y'all! I have a D1 DB with several tables, which have some foreign keys to each other (never making a cycle). Sometimes I need to recreate the DB (for testing purposes), but I don't want to regenerate the db_id (because it's in wrangler.toml), so I want to drop all tables and recreate them. However, querying all table names and generating/executing a script to drop them one by one always fails on an FK-related error. Even when there's no data in the DB. Even when using PRAGMA defer_foreign_keys=true. Looks like the only way is to construct a dependency graph and drop tables in the correct order, so there's never a FK pointing to an already-dropped table. Does anyone know if there is a way to work around this and drop all tables at once without the FK check? Thanks!
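One workaround to sketch (not an official recipe): derive the drop order from PRAGMA foreign_key_list, repeatedly dropping whatever no other remaining table references. This assumes D1 lets you run that PRAGMA through a prepared statement and that table names contain no quotes:

```ts
// Hypothetical sketch: drop all tables in FK-dependency order so no
// remaining table still references an already-dropped one.
async function dropAllTables(db: D1Database): Promise<void> {
  // List user tables, skipping SQLite internals and D1's bookkeeping tables.
  const { results } = await db
    .prepare(
      "SELECT name FROM sqlite_master WHERE type = 'table' " +
        "AND name NOT LIKE 'sqlite_%' AND name NOT LIKE '_cf_%'"
    )
    .all<{ name: string }>();
  let remaining = results.map((r) => r.name);

  while (remaining.length > 0) {
    // Find every table that some *other* remaining table still points at.
    const referenced = new Set<string>();
    for (const name of remaining) {
      const fks = await db
        .prepare(`PRAGMA foreign_key_list("${name}")`)
        .all<{ table: string }>();
      for (const fk of fks.results) {
        if (fk.table !== name) referenced.add(fk.table); // self-refs drop fine
      }
    }
    // Tables nobody else references are safe to drop this round.
    const droppable = remaining.filter((n) => !referenced.has(n));
    if (droppable.length === 0) {
      throw new Error("FK cycle detected; cannot order the drops");
    }
    for (const name of droppable) {
      await db.prepare(`DROP TABLE "${name}"`).run();
    }
    remaining = remaining.filter((n) => referenced.has(n));
  }
}
```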
does anyone know if the 50k-databases limit is a hard limit per account? or once we hit 50k databases (users), can we upgrade to an enterprise account for an unlimited number of databases?
d1-database — also, dynamic bindings are coming soon, and they probably correlate with a higher number of DBs natively, without having to contact support for a higher limit (but that's just my speculation)
From what I've seen, that's likely coming out this week: there will be a 1 GB database size limit (10 GB after beta), and you can create an unlimited number of them. The idea is that a Durable Object can have a SQL database for storage, and you'll be able to choose whether a DO class uses KV storage or SQL.
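For the curious, a hedged sketch of what that looks like in code, based on how SQLite-backed DOs eventually shipped. The class and table names are illustrative, and the class must be opted in via a new_sqlite_classes migration in wrangler.toml:

```ts
import { DurableObject } from "cloudflare:workers";

// Sketch of a SQLite-backed Durable Object: a class opted in via a
// `new_sqlite_classes` migration gets a per-object SQL database on
// ctx.storage.sql. `Counter` and the `hits` table are made up.
export class Counter extends DurableObject {
  recordHit(path: string): number {
    const sql = this.ctx.storage.sql;
    sql.exec("CREATE TABLE IF NOT EXISTS hits (path TEXT NOT NULL)");
    sql.exec("INSERT INTO hits (path) VALUES (?)", path);
    // exec() returns a cursor; one() asserts exactly one result row.
    return sql.exec("SELECT count(*) AS n FROM hits").one().n as number;
  }
}
```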
Ah yeah, that makes sense. I'm sure the DO thing will solve most of our use cases in the meantime. I imagine dynamic D1 would work similarly to how Workers for Platforms is essentially just dynamic service bindings.
@qo7ems @Acе @Isaac | AS213339 thanks everyone! I have a unique use case for D1: I ship both a desktop app and a web/SaaS product. For the cases where desktop/local users upgrade to paid accounts, or web users want to download all of their data, or data compliance requires their data to be physically located in one geo region, I initially wanted to keep all of a user's data in a single SQLite DB, whether local/desktop or web....
but since that's not really a viable option with the current per-DB size limit (glad to hear it's increasing), I had to re-architect this whole "1 db : 1 user" strategy. But for me that's still the goal, and I'll just stay on top of D1 and see what upgrades happen over time.
yeah, I'm going with Turso for SQLite for now because it's easier to move DBs to the edge/a geo region for customers that need them in the EU, for instance. And it's the same kind of deal with them, though: the per-DB size limits / total GB per account limits make my goal of giving each user up to 1 GB in a SQLite DB a non-starter....