.sql dumps, but I can use this and recreate it.
(I could have used .toISOString(), but it was multiple changes that led to this and I'm not bothered enough to redo it just yet ahah)
I could have used .raw() and not .all() as batch does. It'd be more space efficient than a JSON object. But doing that processing on tens and tens of thousands of rows doesn't seem useful.
There's also a .values() API that just returns an array of values, without column names.
Error.cause: https://developers.cloudflare.com/d1/changelog/#deprecating-errorcause
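For reference, a rough sketch of what the .raw() variant could look like (the D1Database type comes from @cloudflare/workers-types; the dumpTableRaw helper name is just a placeholder, not what the backup code below uses):

import type { D1Database } from '@cloudflare/workers-types';

// Sketch only: .raw() returns each row as an array of values (no column names),
// so the serialized output is smaller than the keyed objects from .all()/.batch().
async function dumpTableRaw(db: D1Database, tableName: string): Promise<unknown[][]> {
  return db.prepare(`SELECT * FROM ${tableName}`).raw();
}

You'd call something like dumpTableRaw(env.DB, table.name) per table instead of one env.DB.batch of .all()-style statements.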
6/27/23, 6:02 PM
user_version and I just read the docs on PRAGMA so I'll come up with a workaround - ty
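If it helps anyone later, one possible workaround is a plain single-row table standing in for PRAGMA user_version. This is only a sketch; the schema_meta table and helper names are made up here:

import type { D1Database } from '@cloudflare/workers-types';

// Sketch of a user_version workaround: keep the schema version in a normal table.
// The `schema_meta` table name and these helpers are hypothetical.
async function getSchemaVersion(db: D1Database): Promise<number> {
  await db.exec("CREATE TABLE IF NOT EXISTS schema_meta (id INTEGER PRIMARY KEY CHECK (id = 1), user_version INTEGER NOT NULL DEFAULT 0)");
  await db.prepare("INSERT OR IGNORE INTO schema_meta (id, user_version) VALUES (1, 0)").run();
  const row = await db.prepare("SELECT user_version FROM schema_meta WHERE id = 1").first<{ user_version: number }>();
  return row?.user_version ?? 0;
}

async function setSchemaVersion(db: D1Database, version: number): Promise<void> {
  await db.prepare("UPDATE schema_meta SET user_version = ?1 WHERE id = 1").bind(version).run();
}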
no problem ahah
"exceptions": [
  {
    "name": "Error",
    "message": "D1_ERROR: Error: not authorized",
    "timestamp": 1687918209429
  }
],
const tables = await env.DB.prepare("SELECT name FROM sqlite_master WHERE type = 'table' AND name NOT LIKE 'd1_%' AND name != '_cf_KV'").all();
const date = new Date();
const year = date.getUTCFullYear();
const month = `00${date.getUTCMonth() + 1}`.slice(-2);
const day = `00${date.getUTCDate()}`.slice(-2);
const hour = `00${date.getUTCHours()}`.slice(-2);
const minutes = `00${date.getUTCMinutes()}`.slice(-2);
const seconds = `00${date.getUTCSeconds()}`.slice(-2);
// one SELECT per table, run as a single batch; interpolate the table name from the sqlite_master row
const tableValues = await env.DB.batch(tables.results!.map(table => env.DB.prepare(`SELECT * FROM ${table.name}`)));
const tableNames = tables.results!.map(table => `backups/raw-tables/${year}/${month}/${day}/${year}-${month}-${day}T${hour}:${minutes}:${seconds}.000Z-${table.name}.json`);
await Promise.all(tableValues.map((tableValue, index) => env.R2.put(tableNames[index], JSON.stringify(tableValue.results), { httpMetadata: { contentType: 'application/json' } })));
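And the .toISOString() version mentioned above, as a sketch of building the same object key without the manual padding (same shape, though the milliseconds would be real instead of the hard-coded .000):

// Sketch: same backup key, built from toISOString() instead of manual padding.
const now = new Date();
const iso = now.toISOString();            // e.g. "2023-06-27T18:02:41.123Z", always UTC
const [y, m, d] = iso.slice(0, 10).split('-');
const backupKey = (tableName: string) =>
  `backups/raw-tables/${y}/${m}/${d}/${iso}-${tableName}.json`;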