Re: https://developers.cloudflare.com/d1/platform/pricing/ — I still cannot understand how rows read are measured. The pricing docs only show an example of a full table scan, not other cases such as partial reads (filtered queries, joins, etc.).

For example, if the `users` table has an index on a timestamp column `created_at`, the query `SELECT * FROM users WHERE created_at > ?1` would only need to read a subset of the table.
Is it a supported use case, at this Beta stage of D1, to have `wrangler pages dev` bound to a local instance of D1 started with something like `pnpx wrangler -c wrangler.toml d1 execute DB --local --file=migrations/001.sql`? I already have a working D1 setup with Pages on Cloudflare; I'm interested in running locally, so my question is essentially whether this is already supported or not.

I ran `pnpx wrangler d1 execute db_name --local --file=./src/dump/dump.sql`, but it stayed the same for the past hour. I did get a `Request entity is too large [code: 7011]` error from Wrangler, though.

Commands such as `CLOUDFLARE_ACCOUNT_ID=1234 CLOUDFLARE_API_TOKEN=4321 wrangler whoami` and `CLOUDFLARE_ACCOUNT_ID=1234 CLOUDFLARE_API_TOKEN=4321 wrangler d1 execute test-database --command="SELECT * FROM Customers"` need to be run from a Linux-like environment; on Windows, add `CLOUDFLARE_ACCOUNT_ID` and `CLOUDFLARE_API_TOKEN` to your system environment variables instead.

The multi-row statement `INSERT INTO test (id) VALUES (1), (2), (3), (4), (5)` can instead be expressed with bound parameters:

```json
{
  "params": [1, 2, 3, 4, 5],
  "sql": "INSERT INTO test (id) VALUES (?), (?), (?), (?), (?)"
}
```
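Building that placeholder SQL and flat parameter list by hand is error-prone when the row count varies, so it's usually generated. A minimal sketch (the helper name is mine; the table/column names are interpolated, so they must come from trusted code, while the values stay safely bound):

```python
def build_bulk_insert(table: str, column: str, values: list) -> tuple[str, list]:
    """Return (sql, params) for a single-column multi-row insert,
    with one '(?)' placeholder group per value."""
    placeholders = ", ".join("(?)" for _ in values)
    sql = f"INSERT INTO {table} ({column}) VALUES {placeholders}"
    return sql, list(values)

sql, params = build_bulk_insert("test", "id", [1, 2, 3, 4, 5])
print(sql)     # INSERT INTO test (id) VALUES (?), (?), (?), (?), (?)
print(params)  # [1, 2, 3, 4, 5]
```

The output matches the JSON shape above: one statement string plus an ordered params array.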