Hi Isaac, since you're recommending multiple smaller DBs, is there any guidance on running migrations across multiple DBs? Also, can we bind a D1 database dynamically now?
Hey, quick question: is there an easy way to find the most "expensive" (most reads/writes) queries in my code (e.g. a list in the Cloudflare dashboard) without manually checking the metadata of every single query?
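Not in the dashboard as far as I know, but D1 results do expose `meta.rows_read` / `meta.rows_written`, so you could tag each query's meta yourself and rank them. A minimal sketch (the `QueryMeta` shape and the sample data are my own assumptions, not a D1 API):

```typescript
// Hypothetical shape of what you'd collect per query: the `meta` object
// that D1's .all()/.run() results include, tagged with the query text.
type QueryMeta = { query: string; rows_read: number; rows_written: number };

// Rank queries by total rows touched (reads + writes), most expensive first.
function rankQueries(samples: QueryMeta[]): { query: string; cost: number }[] {
  const totals = new Map<string, number>();
  for (const s of samples) {
    totals.set(s.query, (totals.get(s.query) ?? 0) + s.rows_read + s.rows_written);
  }
  return [...totals.entries()]
    .map(([query, cost]) => ({ query, cost }))
    .sort((a, b) => b.cost - a.cost);
}
```

You'd feed it the meta from each call site (or log them to Analytics Engine and aggregate there).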
Hi all, I'm using Astro.build with the Cloudflare integration and I'm trying to add a staging and a prod database alongside the local dev one. Does anyone have an idea how to achieve that so the API endpoints pick the local DB in the dev environment, and the staging or prod DB in the staging or prod environment respectively? I've added a different DB binding under each environment in wrangler.toml, but I can't find any info on how to make the API endpoint pick the right DB in each environment (using locals.runtime.env.DB).
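One approach that works with Wrangler environments, if I'm not mistaken: keep the binding name identical (`DB`) in every environment and only vary the `database_id`, so `locals.runtime.env.DB` resolves to whichever database belongs to the environment you deployed with (`wrangler deploy --env staging`). The names/ids below are placeholders:

```toml
# Same binding name ("DB") everywhere, different database per environment.
[[d1_databases]]
binding = "DB"
database_name = "myapp-dev"
database_id = "<dev-db-id>"

[env.staging]
[[env.staging.d1_databases]]
binding = "DB"
database_name = "myapp-staging"
database_id = "<staging-db-id>"

[env.production]
[[env.production.d1_databases]]
binding = "DB"
database_name = "myapp-prod"
database_id = "<prod-db-id>"
```

That way the endpoint code never has to branch on the environment itself.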
Howdy y'all! I have a D1 DB with several tables, which have some foreign keys to each other (never forming a cycle). Sometimes I need to recreate the DB (for testing purposes), but I don't want to regenerate the db_id (because it's in wrangler.toml), so I want to drop all tables and recreate them. However, querying all table names and generating/executing a script to drop them one by one always fails with an FK-related error, even when there's no data in the DB, and even when using PRAGMA defer_foreign_keys=true. It looks like the only way is to construct a dependency graph and drop tables in the correct order, so there's never an FK pointing to an already-dropped table. Does anyone know a way to work around this and drop all tables at once without the FK check? Thanks!
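One thing worth checking: I believe `defer_foreign_keys` resets at the end of the current transaction, so if the PRAGMA and the DROPs run as separate statements (each in its own implicit transaction) it won't help. If you do end up building the dependency graph, the ordering part is just a topological sort. A sketch (table names here are made up; in practice you'd derive the edges from `PRAGMA foreign_key_list(<table>)`):

```typescript
// Map each table to the tables its foreign keys reference (child -> parents).
// Produces a DROP order in which every referencing (child) table is dropped
// before any table it points at, so no FK ever points at a missing table.
function dropOrder(deps: Map<string, string[]>): string[] {
  const parentsFirst: string[] = [];
  const state = new Map<string, "visiting" | "done">();
  const visit = (t: string) => {
    if (state.get(t) === "done") return;
    if (state.get(t) === "visiting") throw new Error(`FK cycle involving ${t}`);
    state.set(t, "visiting");
    for (const parent of deps.get(t) ?? []) visit(parent);
    state.set(t, "done");
    parentsFirst.push(t);
  };
  for (const t of deps.keys()) visit(t);
  return parentsFirst.reverse(); // children (referencing tables) first
}
```

Since your graph never has cycles, this always yields a valid order, and you can then batch the `DROP TABLE` statements in that sequence.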
does anyone know if the 50k databases limit is a hard limit per account? or when we hit 50k databases (users) can we then upgrade to enterprise account for unlimited number of databases?
d1-database also dynamic bindings are coming soon, and they probably correlate with a higher number of DBs (natively, without having to contact support for a higher limit; but that's just my speculation)
From what I’ve seen, that’s likely coming out this week: there will be a 1GB database size limit (10GB after beta), and you can make an unlimited number of them. It will be that a Durable Object can have a SQL database for storage. You’ll be able to choose whether a DO class uses KV storage or SQL.
Ah yeah that makes sense, I’m sure the DO thing will solve most of our use cases in the meantime. I imagine dynamic D1 would work similar to how Workers for Platforms is essentially just dynamic service bindings
@qo7ems @Acе @Isaac | AS213339 thanks everyone! i have a unique use case for D1. i ship both a desktop app and a web/SaaS product. for desktop/local users who then upgrade to paid accounts, for web users who want to download all of their data, or when data compliance requires their data be physically located in one geo region, i initially want to keep all of a user's data in a single sqlite db, whether local/desktop or web....
but since that's not really a viable option with the current per-DB size limit (glad to hear it's increasing), i had to re-architect this whole "1 db : 1 user" strategy. but, for me, that is the goal and i'll just stay on top of D1 and see what upgrades happen over time
yeah, i'm going with Turso for now for sqlite bc it's easier to move DBs to edge/geo for customers that need it in the EU for instance. and, same kinda deal with them though: the DB size limits / total GB per account limits make my goal of giving up to 1GB per user in a sqlite DB a non-starter....
i may actually just do it myself since i already have edge servers i'm setting up with GPUs around all of the US and then into EU... wouldn't be that hard to then throw a storage server or two into that configuration and just store my own SQLite db's..... but for now, i'm rolling with Turso + CF (KV + R2 for binary/files). kinda a pain in reality but i'll make it work.
Hello, I don't know if anyone can help me or if this is the right channel, but I have a problem with my Linux server (Ubuntu 22.04) when trying to route MariaDB. Could someone help me?
And I see I can make reads go to the nearest region with sessions. However, the reason I'm looking to move from Postgres is that I need writes to go to the nearest region. Is it possible for me to associate a region hint with a user ID? Because 99% of the time, no one other than that user will ever need to see those writes. The only exceptions are customer support from different parts of the world, and other people around the world curious enough to visit their profile page, in which case it's OK to give them a slow response when they're hitting the region closest to that profile's user ID.
D1, Cloudflare’s SQL database, is now generally available. With new support for 10GB databases, data export, and enhanced query debugging, we empower developers to build production-ready applications with D1 to meet all their relational SQL needs.
Read replicas aren't a thing yet. Right now when you create the database, whatever region you specify (or if you didn't specify a region, it will be made in the nearest region to the create request) is where the writes will be sent to, as well as where all the reads will come from
Darn. Thanks very much for fast reply. I ran into issues because I'm in San Fran but over time I got a lot of users in UK, South Africa, and Australia. And they all are hitting my SF database leading to major latency.
Once read replicas are available, then this should sort out performance issues with reads (except us South Africans, who don't have a low-latency DO location). Writes will still go back to the main database location though
Is there a solution D1 has planned for writes in the future? Cuz the latency is felt in the admin panel, which is where users spend most of their time. Outside of the admin panel I serve state cached using Next.js dynamic revalidation.
Since D1 is built on Durable Objects, until they implement a way for Durable Objects to move locations dynamically, this won't happen. When that feature does become available, the database will be dynamically moved based on where most of your traffic originates.
Relocation isn't what I would want, though, as I have users everywhere. The primary should just be considered different depending on the user ID. Is this a possibility with D1 in the future?
That's the dynamic bindings feature several folks mentioned already above, where you can have different DBs for each of your users. D1 didn't announce anything about it yet.
As @lambrospetrou said, when you create the DB for your user you can set the primary location of that database. Once it's set, you cannot change it
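For reference, a sketch of what choosing that per-user primary location at creation time could look like. The country-to-hint table and the fallback are my own guesses for illustration; the hint values themselves (`wnam`, `enam`, `weur`, `eeur`, `apac`, `oc`) are D1's location hints:

```typescript
// D1 location hints: western/eastern North America, western/eastern
// Europe, Asia-Pacific, Oceania.
type LocationHint = "wnam" | "enam" | "weur" | "eeur" | "apac" | "oc";

// Hypothetical mapping from a user's country code to the nearest hint,
// decided once, when that user's database is created.
function hintForCountry(country: string): LocationHint {
  const table: Record<string, LocationHint> = {
    US: "wnam", CA: "enam", GB: "weur", DE: "weur",
    PL: "eeur", JP: "apac", AU: "oc", ZA: "weur",
  };
  return table[country] ?? "enam"; // arbitrary default for unmapped countries
}
```

You'd pass the resulting hint when creating the database (e.g. `wrangler d1 create <name> --location=<hint>`), and after that the primary is fixed.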
Dynamic bindings are how you'll interact with the database, and you won't be able to supply any location hint at query time, since the database has already been created with your specified parameters. Your write query will travel back to the primary, and your read queries will come from any replicas (once this feature is available) if the data is hot
If it operates similarly to how DOs operate, then you won't even have to think about supplying location hints, since the database will be created in the region closest to your user's request anyway
Thanks guys for this brainstorm. Multiple independent DBs isn't something I can do. I need one DB with read and write replicas, where writes also go to the nearest DB but eventually propagate to all the others.
I apologize for the interruption, but I am facing a problem and hope to find some solutions. The issue is "D1_ERROR: Network connection lost". I have searched the internet and discord and found that many people have reported this issue but have not found any effective solutions. This problem does not always occur, but sporadically. In the past 24 hours, my worker has only handled 4.5K requests, which is relatively low. So I want to understand whether this is because of my incorrect usage, or due to the instability of the D1 system itself?