okay, I have a table containing large text, I guess I have to move it now
Put DROP TABLE IF EXISTS at the start of your schema.sql; you can then add a package.json script that runs wrangler d1 execute dbname --local --file=schema.sql to reinitialize the db as needed.

Why have D1PreparedStatement at all if we could just use D1Database->exec()? I get it can be "faster" to prepare a statement and bind different values, but since Cloudflare Workers are temporary, then what's the point? If we know that our worker is always gonna execute the same single query over and over (not multiple queries), does it make a difference in performance?

The query text (e.g. SELECT ? FROM foo;) is not what SQLite runs. SQLite has to compile that and run it. If you prepare the statement once, it doesn't have to be compiled again; the values for the placeholders (? or ?1) are bound at execution time. So yes, even for a single repeated query it does make a difference.

Running

WITH all_tables AS (SELECT name FROM sqlite_master WHERE type = 'table') SELECT at.name table_name, pti.* FROM all_tables at INNER JOIN pragma_table_info(at.name) pti ORDER BY table_name

results in "not authorized 7500" - which permission should be assigned to the API key?

That's probably pragma_table_info being called on protected tables returned by sqlite_master. The runtime only allows table names that pass this check:

bool SqlStorage::isAllowedName(kj::StringPtr name) {
  return !name.startsWith("_cf_");
}

db.batch([
db.insert(users).values(),
db.insert(posts).values({ userId: sql`last_insert_rowid()` })
])
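The batch pattern above works because the statements run sequentially on the same connection, so last_insert_rowid() in the second INSERT resolves to the id generated by the first. A minimal sketch of the same idea using Python's stdlib sqlite3 (the users/posts tables are illustrative, not from the thread):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE users (id INTEGER PRIMARY KEY, name TEXT);
    CREATE TABLE posts (id INTEGER PRIMARY KEY, user_id INTEGER, title TEXT);
""")

# Both statements run on the same connection, so last_insert_rowid()
# in the second INSERT sees the rowid produced by the first.
conn.execute("INSERT INTO users (name) VALUES (?)", ("alice",))
conn.execute(
    "INSERT INTO posts (user_id, title) VALUES (last_insert_rowid(), ?)",
    ("hello",),
)

user_id = conn.execute("SELECT user_id FROM posts").fetchone()[0]
print(user_id)  # 1
```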
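On the prepared-statement question earlier in the thread: SQL text has to be compiled into a bytecode program before SQLite can run it, and preparing once means re-binding values without recompiling. A sketch with Python's sqlite3, which keeps a per-connection statement cache so repeated identical SQL text reuses the compiled statement:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE foo (id INTEGER PRIMARY KEY, name TEXT)")

# One compiled statement, many executions: only the bound value changes.
insert = "INSERT INTO foo (name) VALUES (?)"
for name in ("a", "b", "c"):
    conn.execute(insert, (name,))  # same SQL text -> cached compiled statement

# ? and ?1 are both positional placeholders, bound at execution time.
row = conn.execute("SELECT name FROM foo WHERE id = ?1", (2,)).fetchone()
print(row[0])  # b
```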
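And on the "not authorized" error: the WITH query in the thread joins pragma_table_info against every name in sqlite_master, including protected internal tables. A hedged sketch of the workaround, filtering out the reserved _cf_ prefix from the isAllowedName check, demonstrated here against plain SQLite (the _cf_internal table is a stand-in for a protected table, not a real D1 name):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE widgets (id INTEGER PRIMARY KEY, label TEXT)")
conn.execute("CREATE TABLE _cf_internal (k TEXT)")  # stand-in for a protected table

# Skip reserved names, mirroring isAllowedName's !name.startsWith("_cf_").
# GLOB is used instead of LIKE because _ is a wildcard in LIKE patterns.
rows = conn.execute("""
    WITH all_tables AS (
        SELECT name FROM sqlite_master
        WHERE type = 'table' AND name NOT GLOB '_cf_*'
    )
    SELECT at.name AS table_name, pti.name AS column_name
    FROM all_tables AS at
    INNER JOIN pragma_table_info(at.name) AS pti
    ORDER BY table_name, pti.cid
""").fetchall()
print(rows)  # [('widgets', 'id'), ('widgets', 'label')]
```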