By that you mean that D1 aims for an even better consistency model? So like less than 60 seconds?
Because Workers KV stores data centrally and uses pull-based replication to fill its edge caches...
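To make the consequence of that concrete, here is a minimal sketch from a Worker, assuming a KV namespace bound as KV (the binding name is my placeholder, not from the thread): the write lands in the central store, and a read from another location can keep serving the old cached value until that cache pulls the update, for up to roughly 60 seconds.

// Sketch: KV's pull-based replication in practice (binding name `KV` is hypothetical).
export default {
  async fetch(request: Request, env: { KV: KVNamespace }): Promise<Response> {
    // The write goes to the central store, not to every edge cache.
    await env.KV.put("greeting", "hello");

    // A read in another location may still serve the previously cached value
    // for up to ~60 seconds, until that cache pulls the new one.
    const value = await env.KV.get("greeting");
    return new Response(value ?? "no value cached here yet");
  },
};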
select user.id as userid, data.id as dataid from user, data
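My reading of why those aliases are there (not stated outright in the thread): .all() returns each row as an object keyed by column name, so a join with two columns both named id would have one overwrite the other. A sketch:

// Sketch: aliasing keeps both `id` columns distinct in .all()'s row objects.
const { results } = await env.DB.prepare(
  "select user.id as userid, data.id as dataid from user, data"
).all();
// results -> [{ userid: 1, dataid: 7 }, ...] rather than a single ambiguous `id`.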
They're not .sql dumps, but I can use this and recreate it.
(I know I could just use .toISOString(), but it was multiple changes that led to this and I'm not bothered enough to redo it just yet ahah)
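As for the "recreate it" part, here is a hypothetical restore sketch (the object key and table name are illustrative, and it assumes the table already exists with a matching schema):

// Hypothetical restore: read one table's JSON backup from R2 and re-insert the rows.
const object = await env.R2.get("backups/raw-tables/2023/07/01/2023-07-01T03:00:00.000Z-user.json"); // illustrative key
if (object) {
  const rows = await object.json<Record<string, unknown>[]>();
  if (rows.length > 0) {
    const columns = Object.keys(rows[0]);
    const placeholders = columns.map(() => "?").join(", ");
    const stmt = env.DB.prepare(`INSERT INTO user (${columns.join(", ")}) VALUES (${placeholders})`);
    // Bind each row's values and run every insert in one batch.
    await env.DB.batch(rows.map(row => stmt.bind(...columns.map(c => row[c]))));
  }
}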
You could use .raw() and not .all() as batch does. It'd be more space efficient than a JSON object. But doing the processing on tens and tens of thousands of rows doesn't seem useful.
There's also a .values() API that just returns an array of values, without column names.
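To show the difference being discussed (a sketch with a made-up table): .all() keys every row by column name, while .raw() returns plain arrays, which serialize smaller since no key strings are repeated.

// Sketch: the same query through .all() and .raw().
const all = await env.DB.prepare("SELECT id, name FROM user").all();
// all.results -> [{ id: 1, name: "a" }, { id: 2, name: "b" }]

const raw = await env.DB.prepare("SELECT id, name FROM user").raw();
// raw -> [[1, "a"], [2, "b"]] (smaller once JSON.stringify'd)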
% wrangler -v
⛅️ wrangler 3.1.0 (update available 3.1.1)
-----------------------------------------------------

// List the user tables, skipping D1's internal bookkeeping tables.
const tables = await env.DB.prepare("SELECT name FROM sqlite_master WHERE type = 'table' AND name NOT LIKE 'd1_%' AND name != '_cf_KV'").all();
// Build a UTC timestamp for the backup filenames, zero-padding each part.
const date = new Date();
const year = date.getUTCFullYear();
const month = `00${date.getUTCMonth() + 1}`.slice(-2);
const day = `00${date.getUTCDate()}`.slice(-2);
const hour = `00${date.getUTCHours()}`.slice(-2);
const minutes = `00${date.getUTCMinutes()}`.slice(-2);
const seconds = `00${date.getUTCSeconds()}`.slice(-2);
// Dump every table's rows in a single batched round trip; each sqlite_master row has a `name` column.
const tableValues = await env.DB.batch(tables.results!.map(table => env.DB.prepare(`SELECT * FROM ${table.name}`)));
// One R2 object key per table, namespaced by date.
const tableNames = tables.results!.map(table => `backups/raw-tables/${year}/${month}/${day}/${year}-${month}-${day}T${hour}:${minutes}:${seconds}.000Z-${table.name}.json`);
// Upload all table dumps to R2 in parallel.
await Promise.all(tableValues.map((tableValue, index) => env.R2.put(tableNames[index], JSON.stringify(tableValue.results), { httpMetadata: { contentType: 'application/json' } })));
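If you'd rather run this on a schedule than behind a fetch handler, a hypothetical wrapper (it assumes the snippet above is extracted into a backupTables(env) helper, and that wrangler.toml has a cron trigger along the lines of crons = ["0 3 * * *"]):

interface Env {
  DB: D1Database;
  R2: R2Bucket;
}

// Hypothetical helper wrapping the backup snippet above.
declare function backupTables(env: Env): Promise<void>;

export default {
  async scheduled(controller: ScheduledController, env: Env, ctx: ExecutionContext) {
    // Keep the Worker alive until every R2 upload has finished.
    ctx.waitUntil(backupTables(env));
  },
};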