Oh, so an index is like a Hash Table, rather than a standard table… That makes sense.

SELECT * FROM table LIMIT N - yes. But if you have a filter like a WHERE or HAVING clause, then you have to scan more rows to match the filter, since your table isn't necessarily ordered.
There's no ORDER in that query, but it has to scan the table somehow - usually a binary search. It's going to keep searching (scanning) until it has matched all possible records, or hit the LIMIT.
Same for LIMIT 10 ORDER BY <some_date_column> queries. I probably need to add this to the docs.
SQL LIKE?
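One quick way to see the filter effect is to compare rows_read for an unfiltered LIMIT against the filtered one - a sketch against the same db-wnam / [Order] table used in the worked example at the end of this thread (the 10 shown as output is what you'd expect, not a captured result):

➜ wrangler d1 execute db-wnam --command "SELECT * FROM [Order] LIMIT 10" --json | jq '.[].meta.rows_read'
10 <-- no filter, so the scan can stop as soon as 10 rows have been produced

With the LIKE '%Western%' filter added (see the example below), the same LIMIT 10 reads 18 rows, because non-matching rows still have to be read and discarded.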
rows_written and rows_read - and I'm kinda surprised to find written rows in a SELECT query.
With rows_written and rows_read in every result, could we imagine some total_rows_written and total_rows_read in a "status" request?
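Until something like that exists, a rough workaround is to sum the per-statement meta yourself - a sketch reusing the --json / jq pattern from the example below, where ./queries.sql is a made-up placeholder for whatever batch of statements you actually run:

# sums rows_read across every result object in the JSON output
➜ wrangler d1 execute db-wnam --file ./queries.sql --json | jq '[.[].meta.rows_read] | add'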
Back from pushing the limits over 500MB?
With .batch() it gets an auto transaction.
The EXPLAIN statement - the column name is messed up?
When the EXPLAIN keyword appears by itself it causes the statement to behave as a query that returns the sequence of virtual machine instructions it would have used to execute the command had the EXPLAIN keyword not been present. When the EXPLAIN QUERY PLAN phrase appears, the statement returns high-level information regarding the query plan that would have been used.
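So for a quick "will this scan or use an index?" check, EXPLAIN QUERY PLAN is the friendlier of the two. A sketch against the same db-wnam / [Order] example shown below (D1's exact output shape may differ, which might be the column-name weirdness mentioned above):

➜ wrangler d1 execute db-wnam --command "EXPLAIN QUERY PLAN SELECT * FROM [Order] WHERE ShipRegion LIKE '%Western%' LIMIT 10" --json
# a full table walk shows up as a SCAN row; an index lookup would show up as SEARCH ... USING INDEX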
Many of these limits will increase during D1’s public alpha. Join the #d1-database channel in the Cloudflare Developer Discord to keep up to date with changes.
From the Docs

➜ wrangler d1 execute db-wnam --command "SELECT * FROM [Order] WHERE ShipRegion LIKE '%Western%' LIMIT 10" --json | jq '.[].meta.rows_read'
18 <-- we had to scan 18 rows to find 10 that matched our filter
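And to close the loop on the index-as-a-separate-lookup-structure idea from the top of the thread, a sketch (idx_order_ship_region is a made-up name, and the equality match is deliberate, since a LIKE with a leading % can't use an ordinary index):

➜ wrangler d1 execute db-wnam --command "CREATE INDEX idx_order_ship_region ON [Order](ShipRegion)"
➜ wrangler d1 execute db-wnam --command "SELECT * FROM [Order] WHERE ShipRegion = 'Western Europe' LIMIT 10" --json | jq '.[].meta.rows_read'
# with the index, rows_read should stay close to the number of matching rows instead of growing with the size of the scan

'Western Europe' here is just a guess at a plausible ShipRegion value, going by the LIKE '%Western%' filter above.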