Any chance there will be togglable support for local/remote DBs?
… .sqlite file in .wrangler, but that seems kind of jank.

… User, then run DROP TABLE User; - is it expected that running the migration command again returns :white_check_mark: No migrations to apply!? … id, but running the migration again gave me the "no migrations to apply" message, so I figured dropping the table and running the migration again would work. Am I misunderstanding how you update tables?

Error: D1_ERROR: D1 storage operation exceeded timeout which caused object to be reset. Anything going on?


EXPLAIN QUERY PLAN result -

Error: Too many API requests by single worker invocation

I know Workers has a 1,000-subrequest limit per invocation, but I was wondering if there is a way to insert large amounts of rows with a single request, like Python's executemany. I'm currently inserting a single row per request (see the loop below), or calling exec() on a giant SQL insert statement string.

You can batch the bound statements instead:

const prepared = db.prepare("INSERT OR IGNORE INTO r2_objects_metadata (object_key, eTag) VALUES (?1, ?2)");
const queriesToRun = newBucketObjects.map(bucketObject => prepared.bind(bucketObject.Key, bucketObject.ETag));
await db.batch(queriesToRun);

[[d1_databases]]
binding = "DB" # available in your Worker on env.DB
database_name = "prod-d1-tutorial"
database_id = "<unique-ID-for-your-database>"
[[d1_databases]]
binding = "DB2" # available in your Worker on env.DB2
database_name = "prod-d1-tutorial-v2"
database_id = "<unique-ID-for-your-database>"

for (let i = 0; i < newBucketObjects.length; i++) {
  const Key = newBucketObjects[i].Key;
  const ETag = newBucketObjects[i].ETag;
  await db.prepare(
    `INSERT OR IGNORE INTO r2_objects_metadata
     (object_key, eTag) VALUES (?, ?)`
  ).bind(Key, ETag).run();
}

let massiveInsert = "";
for (let i = 0; i < newBucketObjects.length; i++) {
  const Key = newBucketObjects[i].Key;
  const ETag = newBucketObjects[i].ETag;
  // Note: interpolating values into the SQL string is vulnerable to injection;
  // bound parameters (as in the prepare/bind examples above) are safer.
  const insert = `INSERT OR IGNORE INTO r2_objects_metadata (object_key, eTag) VALUES ('${Key}', '${ETag}');`;
  massiveInsert += '\n';
  massiveInsert += insert;
}
await db.exec(massiveInsert);
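One way to combine the approaches above is to keep bound parameters but split the statements into db.batch() calls of bounded size, so a large import costs a handful of round trips instead of one per row. This is only a sketch, not an official D1 recipe: the chunk helper and insertMetadata are hypothetical names, db is assumed to be a D1 binding such as env.DB from the config above, and CHUNK_SIZE = 50 is an arbitrary guess you would tune against your own limits.

```javascript
const CHUNK_SIZE = 50; // arbitrary; tune against your workload and limits

// Split an array into consecutive groups of at most `size` items.
function chunk(items, size) {
  const chunks = [];
  for (let i = 0; i < items.length; i += size) {
    chunks.push(items.slice(i, i + size));
  }
  return chunks;
}

// Insert all rows using one prepared statement, batched in groups.
async function insertMetadata(db, newBucketObjects) {
  const prepared = db.prepare(
    "INSERT OR IGNORE INTO r2_objects_metadata (object_key, eTag) VALUES (?1, ?2)"
  );
  for (const group of chunk(newBucketObjects, CHUNK_SIZE)) {
    // Each batch() call is a single round trip from the Worker's point of view.
    await db.batch(group.map(o => prepared.bind(o.Key, o.ETag)));
  }
}
```

Compared with the giant concatenated string passed to exec(), this keeps the values parameterized, which avoids both quoting bugs and SQL injection.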