While we wait for Time Travel, could we please get a wrangler option or something to just download the DB as an SQLite file?
What does db.dump() do, if not dump the DB content into an SQLite file?

workerd@a2fbc3dfe25 (which maps to https://github.com/cloudflare/workerd/commits/main?before=a2fbc3dfe254ce6e94ceb183f7b1d6476b6c2b29+35&branch=main&qualified_name=refs%2Fheads%2Fmain) -> "ALTER TABLE alterme RENAME COLUMN metadata TO somethingelse" passes, as does the DROP COLUMN op.

What about the database id in wrangler.toml? The repo is public, so do I make the id public or is it supposed to stay secret? I assume the latter, so currently I'm manually adding it to wrangler.toml every time before I deploy.

You can list tables with /tables in the console, but does that work in the JS API?

SELECT * FROM sqlite_master WHERE type = 'table' AND name NOT LIKE 'd1_%' AND name != '_cf_KV'

The _cf_KV table isn't really usable; /tables doesn't let you see it, though /tables literally just runs that query against the execute endpoint.

SELECT name FROM sqlite_schema WHERE type='table' ORDER BY name; is what it runs.

There's also PRAGMA table_list, PRAGMA table_info(table_name), PRAGMA index_list(table), and PRAGMA index_info(index).

For batch operations, is the maximum SQL statement length of 100,000 (100 KB) applied to the entire batch, or to the individual statements within the batch? I.e. DB.batch(statements) where statements.length is 400 = 400 statements.

You can call writeDataPoint up to 25 times per invocation (each write can include multiple metrics!) -> but you can have hundreds of thousands of client requests.

wrangler deploy --dry-run --outdir ./dist
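On the db.dump() question above: since it resolves to the raw database bytes as an ArrayBuffer, a Worker can already serve it as a downloadable SQLite file. A minimal sketch, assuming a binding that exposes dump() (the interface and handler names here are illustrative, not the real D1 types):

```typescript
// Sketch: serve the result of db.dump() as a downloadable .sqlite file.
// Only the dump() method is modeled; the type and function names are
// assumptions for illustration.
interface D1Dumpable {
  dump(): Promise<ArrayBuffer>;
}

async function dumpToResponse(db: D1Dumpable): Promise<Response> {
  const bytes = await db.dump();
  return new Response(bytes, {
    headers: {
      "Content-Type": "application/octet-stream",
      "Content-Disposition": 'attachment; filename="db.sqlite"',
    },
  });
}
```

In a Worker you would call this from fetch() with the D1 binding, e.g. `return dumpToResponse(env.DB);`.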
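The table-listing query above (filtering out d1_* internals and _cf_KV) can be sketched as a plain filter over sqlite_master rows; the row shape and helper name are assumptions, not the actual console code:

```typescript
// Sketch: filter sqlite_master rows the way the query above does,
// dropping D1-internal tables (d1_*) and the reserved _cf_KV table.
// SchemaRow and listUserTables are hypothetical names.
interface SchemaRow {
  type: string;
  name: string;
}

function listUserTables(rows: SchemaRow[]): string[] {
  return rows
    .filter((r) => r.type === "table")
    .filter((r) => !r.name.startsWith("d1_") && r.name !== "_cf_KV")
    .map((r) => r.name)
    .sort();
}
```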
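On the batch-limit question: if the 100 KB cap turns out to apply per statement rather than to the whole batch (an assumption — the docs only say "maximum SQL statement length"), a pre-flight check before calling DB.batch() could look like this; checkBatchLengths is a hypothetical helper:

```typescript
// Sketch: pre-flight check for a DB.batch() call, assuming the
// 100,000-byte limit applies to each individual statement rather
// than to the batch as a whole (unverified assumption).
const MAX_STATEMENT_BYTES = 100_000;

function checkBatchLengths(statements: string[]): string[] {
  const tooLong: string[] = [];
  for (const sql of statements) {
    // Measure bytes, not UTF-16 code units, to match a byte-based limit.
    if (new TextEncoder().encode(sql).length > MAX_STATEMENT_BYTES) {
      tooLong.push(sql.slice(0, 40) + "...");
    }
  }
  return tooLong;
}
```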
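And on the writeDataPoint limit: since each invocation can write at most 25 data points but may be one of hundreds of thousands of requests, one option is a per-invocation counter that drops writes past the cap. The wrapper below is a sketch, not part of the Analytics Engine API; the dataset interface is reduced to the one method used:

```typescript
// Sketch: cap writeDataPoint calls at 25 per invocation, dropping
// the rest. DataPoint mirrors the blobs/doubles/indexes shape;
// makeCappedWriter is a hypothetical helper.
interface DataPoint {
  blobs?: string[];
  doubles?: number[];
  indexes?: string[];
}

interface AnalyticsDataset {
  writeDataPoint(point: DataPoint): void;
}

const MAX_WRITES_PER_INVOCATION = 25;

function makeCappedWriter(dataset: AnalyticsDataset) {
  let writes = 0;
  return (point: DataPoint): boolean => {
    if (writes >= MAX_WRITES_PER_INVOCATION) return false; // dropped
    dataset.writeDataPoint(point);
    writes++;
    return true;
  };
}
```

Packing several metrics into one data point's doubles array, as noted above, stretches the 25-write budget much further than one metric per write.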