There’s likely an edge case with how our storage layer & authorizer interact with temp tables during a DROP COLUMN op.

Q: The repo is public, so do I make the database id in `wrangler.toml` public, or is it supposed to stay secret? I assume the latter, so currently I'm manually adding it to `wrangler.toml` every time before I deploy.

Q: I can see my tables at `/tables` in the console, but does that work in the JS API?

A: Yes, you can run the equivalent query yourself: `SELECT * FROM sqlite_master WHERE type = 'table' AND name NOT LIKE 'd1_%' AND name != '_cf_KV'`. The `_cf_KV` table isn't really usable and `/tables` doesn't let you see it, though `/tables` literally just runs its query against the execute endpoint; `SELECT name FROM sqlite_schema WHERE type='table' ORDER BY name;` is what it runs. For deeper introspection, SQLite also offers `PRAGMA table_list`, `PRAGMA table_info(table_name)`, `PRAGMA index_list(table)`, and `PRAGMA index_info(index)`.

Q: For batch operations, is the maximum SQL statement length of 100,000 bytes (100 KB) applied to the entire batch, or to the individual statements within the batch? Note that `DB.batch(statements)` where `statements.length` is 400 counts as 400 statements.

Note: `writeDataPoint` can be called up to 25 times per invocation (each write can include multiple metrics!), but you can have hundreds of thousands of client requests.

Tip: `wrangler deploy --dry-run --outdir ./dist` builds the Worker bundle into `./dist` without deploying it.
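For reference, the database id being discussed lives in the D1 binding section of `wrangler.toml`. A sketch of that shape, with purely illustrative names and a placeholder where the id would go:

```toml
# Sketch of a D1 binding in wrangler.toml; the binding and database names
# are hypothetical, and the id is a placeholder, not a real value.
[[d1_databases]]
binding = "DB"                      # exposed as env.DB in the Worker
database_name = "my-database"       # hypothetical database name
database_id = "<your-database-id>"  # the id whose secrecy is in question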
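The filtered `sqlite_master` query above can be run through the D1 JS API as well. A minimal sketch, using trimmed-down stand-ins for the real D1 types (which live in `@cloudflare/workers-types`); the helper name `listUserTables` is hypothetical:

```typescript
// Trimmed-down shapes of the D1 client types, enough for this sketch.
interface D1Result<T> { results: T[]; }
interface D1PreparedStatement { all<T = unknown>(): Promise<D1Result<T>>; }
interface D1Database { prepare(sql: string): D1PreparedStatement; }

// Same filter as in the notes above: skip D1-internal d1_* tables and _cf_KV.
export const LIST_TABLES_SQL =
  "SELECT name FROM sqlite_master WHERE type = 'table' " +
  "AND name NOT LIKE 'd1_%' AND name != '_cf_KV' ORDER BY name";

// Pass in the D1 binding (e.g. env.DB, if the binding is named DB).
export async function listUserTables(db: D1Database): Promise<string[]> {
  const { results } = await db.prepare(LIST_TABLES_SQL).all<{ name: string }>();
  return results.map((row) => row.name);
}
```

Inside a Worker this would be called as `await listUserTables(env.DB)`.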
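While the batch-limit question above is open, one defensive option is to chunk statements client-side before calling `DB.batch`, so no single batch approaches the limit either way. A sketch; the chunk size of 100 is an arbitrary safety margin, not a documented D1 limit:

```typescript
// Split an array of prepared statements into fixed-size groups so each
// DB.batch() call stays well under any per-batch limit. The chunk size
// used by the caller is an arbitrary safety margin, not a documented limit.
export function chunk<T>(items: T[], size: number): T[][] {
  const out: T[][] = [];
  for (let i = 0; i < items.length; i += size) {
    out.push(items.slice(i, i + size));
  }
  return out;
}

// Usage sketch inside a Worker with a D1 binding named DB (hypothetical):
//   for (const group of chunk(statements, 100)) {
//     await env.DB.batch(group); // 400 statements -> 4 batches of 100
//   }
```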
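Since each `writeDataPoint` call can carry multiple metrics, one way to stay under the 25-writes-per-invocation cap is to pack related metrics into a single data point rather than writing each one separately. A sketch; the metric names and field layout here are illustrative assumptions, not a fixed schema:

```typescript
// Trimmed-down shape of the Analytics Engine dataset binding (the real
// type lives in @cloudflare/workers-types).
interface AnalyticsEngineDataset {
  writeDataPoint(point: {
    blobs?: string[];
    doubles?: number[];
    indexes?: string[];
  }): void;
}

// Pack several related metrics into one data point so a single
// writeDataPoint() call covers them all. Field choices are illustrative.
export function packMetrics(
  route: string,
  metrics: { status: number; durationMs: number; bytes: number },
) {
  return {
    indexes: [route],                             // sampling key
    blobs: [String(metrics.status)],              // low-cardinality label
    doubles: [metrics.durationMs, metrics.bytes], // numeric metrics
  };
}

// Usage sketch, assuming a dataset binding named ANALYTICS:
//   env.ANALYTICS.writeDataPoint(packMetrics("/api/list", {
//     status: 200, durationMs: 12, bytes: 340,
//   }));
```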