Are you able to use the limit request form to get increased storage per database now? I'm happy with no replicas as a tradeoff, I just want to be future-proof with more storage.
We don’t currently increase that limit - it’s not that it’s hard coded, it’s that we have a very (very) high bar for performance & cold starts.
We're continuing to work on raising it over time - and as we do, it'll just work. But it's also unlikely a single D1 DB will be 100GB (which is a large transactional DB by any standard)
That's fair yeah, we're considering a serverless migration at work and our current DB is 108GB (we can't split it by user for our use case), so size is the issue
Hi guys, is there any way to turn off foreign key checks while migrating the database? Some table-altering tasks need the table to be dropped, which cascades deletes to the referencing tables.
Some table-altering tasks in SQLite cannot be achieved without replacing the table with a new one (dropping and recreating it), like adding foreign keys, changing primary keys, or updating column types.
This works for creating or importing tables, but not for dropping them. Matt also suggested bracketing the migration with that pragma, but sadly it didn't work
Got another question: there's still no way to dynamically bind to databases, correct? I.e. bind to all 50k DBs from a Worker instead of the ~5k that'll fit in wrangler.toml? I believe you can use the API, but that kind of defeats the purpose.
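To spell out what I mean by "the API": something roughly like this, hitting the D1 HTTP query endpoint with a database ID picked at runtime. ACCOUNT_ID and API_TOKEN are hypothetical env vars/secrets, and this is just a sketch of my understanding of the endpoint, not verified code - it works, but every query becomes an authenticated round trip to the REST API instead of a direct binding.

```ts
// Sketch: querying a D1 database chosen at runtime through the HTTP API
// instead of a static binding. ACCOUNT_ID and API_TOKEN are hypothetical
// environment variables/secrets on the Worker.
interface Env {
  ACCOUNT_ID: string;
  API_TOKEN: string;
}

async function queryD1(env: Env, databaseId: string, sql: string, params: unknown[] = []) {
  const url = `https://api.cloudflare.com/client/v4/accounts/${env.ACCOUNT_ID}/d1/database/${databaseId}/query`;
  const res = await fetch(url, {
    method: "POST",
    headers: {
      Authorization: `Bearer ${env.API_TOKEN}`,
      "Content-Type": "application/json",
    },
    body: JSON.stringify({ sql, params }),
  });
  if (!res.ok) throw new Error(`D1 API query failed with status ${res.status}`);
  return res.json(); // expected shape: { success, result, errors, messages }
}
```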
Yeah, I think it's the only way to add a foreign key or change a column type, but whenever I drop the table, all records in the tables that reference the dropped table are gone too because of the ON DELETE CASCADE.
My current approach is to store the child table data in a temp table and then insert it again after migrating the parent table. But when the child table has children of its own, the task turns out to be complex and wasteful
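For anyone following along, this is roughly what "bracketing the migration with that pragma" looks like as a single D1 batch (a batch runs as one transaction, so defer_foreign_keys should hold until commit). The table and column names are made up and I haven't verified this against D1, so treat it as a sketch. As far as I understand it, deferral only postpones constraint violation errors until commit; it doesn't suppress the ON DELETE CASCADE actions fired by the implicit delete that DROP TABLE performs, which would explain why the child rows still disappear.

```ts
// Sketch: rebuilding a `parent` table inside one D1 batch, with foreign key
// enforcement deferred until the batch's transaction commits.
// Table/column names are invented; types come from @cloudflare/workers-types.
export default {
  async fetch(request: Request, env: { DB: D1Database }): Promise<Response> {
    await env.DB.batch([
      // Defers FK violation checks until commit. This does not appear to
      // suppress ON DELETE CASCADE actions triggered by dropping the old table.
      env.DB.prepare("PRAGMA defer_foreign_keys = true"),
      env.DB.prepare(
        "CREATE TABLE parent_new (id INTEGER PRIMARY KEY, name TEXT NOT NULL)"
      ),
      env.DB.prepare("INSERT INTO parent_new (id, name) SELECT id, name FROM parent"),
      env.DB.prepare("DROP TABLE parent"),
      env.DB.prepare("ALTER TABLE parent_new RENAME TO parent"),
    ]);
    return new Response("migration applied");
  },
};
```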
I am trying to understand how session-based consistency would work in an application such as SvelteKit, with its data invalidation methods. I understand the example of consistency when executing multiple queries in the same function, but how can I guarantee that the next reload of the page will be up to date? Does anyone have any examples of this?
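Not a full SvelteKit example, but the pattern I've seen for carrying consistency across page loads is to round-trip the session bookmark through a cookie (or header), so the next request's session starts at least as up to date as the last one. Rough Worker-style sketch below, with an invented cookie name and query, based on my understanding of the Sessions API (withSession / getBookmark) - in SvelteKit you'd do the same thing in a server hook or load function via platform.env.

```ts
// Sketch: persisting the D1 session bookmark in a cookie so the next page load
// reads data at least as fresh as what the previous request saw or wrote.
// The cookie name and query are invented for illustration.
export default {
  async fetch(request: Request, env: { DB: D1Database }): Promise<Response> {
    // Resume from the bookmark the browser sent back, if any; otherwise start
    // an unconstrained session (any replica is acceptable for the first read).
    const cookie = request.headers.get("Cookie") ?? "";
    const bookmark = /d1_bookmark=([^;]+)/.exec(cookie)?.[1] ?? "first-unconstrained";

    const session = env.DB.withSession(bookmark);
    const { results } = await session
      .prepare("SELECT id, title FROM posts ORDER BY id DESC LIMIT 10")
      .all();

    // Hand the latest bookmark back so the next request can't read older data.
    const headers = new Headers({ "Content-Type": "application/json" });
    const next = session.getBookmark();
    if (next) headers.append("Set-Cookie", `d1_bookmark=${next}; Path=/; HttpOnly`);

    return new Response(JSON.stringify(results), { headers });
  },
};
```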
It doesn't break compatibility with the old API; you can still use the old API just fine. It's just that there is no change to the behaviour of the "old" API when the new API is introduced
I can't confirm whether you'll be able to use them simultaneously, but D1 is a GA product, so I would not expect any of the current APIs to change in a backwards-incompatible way.