Error: D1_ERROR: D1 storage operation exceeded timeout which caused object to be reset. Anything going on?

This typically means your query is running for too long. Do you have an index on `user_id`?

EXPLAIN QUERY PLAN result -
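A minimal sketch of adding such an index and verifying it with EXPLAIN QUERY PLAN (the `items` table name is an assumption; the thread only mentions the `user_id` column):

```sql
-- Hypothetical table name; only the user_id column is mentioned above.
CREATE INDEX IF NOT EXISTS idx_items_user_id ON items (user_id);

-- With the index in place, the plan should show an index search,
-- e.g. "SEARCH items USING INDEX idx_items_user_id (user_id=?)",
-- instead of "SCAN items".
EXPLAIN QUERY PLAN
SELECT * FROM items WHERE user_id = ?1;
```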
When I try to create a trigger with `execute` (via `--file`) or with migrations, it outputs an error, and the same happens as a command (`--command "CREATE TRIGGER delete_item_tags after delete on list_item begin DELETE FROM list_item_tags WHERE item_id = OLD.id; END;"`).

Try making `begin` fully caps, i.e. `BEGIN`. D1 checks for `BEGIN` (case-sensitive) to see if it's a compound statement.
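For reference, the same trigger with the keyword capitalized, which is what the advice above suggests:

```sql
CREATE TRIGGER delete_item_tags
AFTER DELETE ON list_item
BEGIN
  DELETE FROM list_item_tags
  WHERE item_id = OLD.id;
END;
```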
Say you create a table `User` with a migration, then run `DROP TABLE User;`. Is it expected that running the migration command again returns :white_check_mark: No migrations to apply!? I had changed `id`, but running the migration again gave me the no-migrations-to-apply message, so I figured dropping the table and running the migration again would work. Am I misunderstanding how you update tables? I know I could edit the .sqlite file in .wrangler directly, but that seems kind of jank.
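For what it's worth, wrangler records which migrations have been applied (in a `d1_migrations` table by default), so dropping your own table doesn't reset that bookkeeping, and `migrations apply` reports nothing left to run. The usual way to change an already-applied table is a new migration file; a minimal sketch, with illustrative file and column names:

```sql
-- migrations/0002_alter_user.sql, created with something like:
--   wrangler d1 migrations create <DATABASE_NAME> alter_user
-- The email column is purely illustrative.
ALTER TABLE User ADD COLUMN email TEXT;
```

Then `wrangler d1 migrations apply <DATABASE_NAME>` picks up only the new file.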


Error: Too many API requests by single worker invocation

I know Workers has a 1,000-request-per-invocation limit, but I was wondering if there was a way to insert large amounts of rows with a single request, like Python's `executemany` command. I'm currently inserting a single row per request like this:

```js
for (let i = 0; i < newBucketObjects.length; i++) {
  const Key = newBucketObjects[i].Key;
  const ETag = newBucketObjects[i].ETag;
  await db.prepare(
    `INSERT OR IGNORE INTO r2_objects_metadata
     (object_key, eTag) VALUES (?, ?)`
  ).bind(Key, ETag).run();
}
```

or calling `exec()` on a giant SQL insert statement string:

```js
let massiveInsert = "";
for (let i = 0; i < newBucketObjects.length; i++) {
  const Key = newBucketObjects[i].Key;
  const ETag = newBucketObjects[i].ETag;
  // Note: interpolating values into the SQL string like this is
  // vulnerable to SQL injection; bound parameters are safer.
  const insert = `INSERT OR IGNORE INTO r2_objects_metadata (object_key, eTag) VALUES ('${Key}', '${ETag}');`;
  massiveInsert += '\n';
  massiveInsert += insert;
}
await db.exec(massiveInsert);
```

The way to do this in one request is `db.batch()`: bind a prepared statement once per row and send the whole array at once:

```js
const prepared = db.prepare("INSERT OR IGNORE INTO r2_objects_metadata (object_key, eTag) VALUES (?1, ?2)");
const queriesToRun = newBucketObjects.map(bucketObject => prepared.bind(bucketObject.Key, bucketObject.ETag));
await db.batch(queriesToRun);
```
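If `newBucketObjects` can be very large, it may be worth chunking the batch rather than binding everything at once; a minimal sketch, where the chunk size of 100 is an arbitrary assumption rather than a documented limit:

```js
const prepared = db.prepare(
  "INSERT OR IGNORE INTO r2_objects_metadata (object_key, eTag) VALUES (?1, ?2)"
);

const CHUNK_SIZE = 100; // arbitrary; tune for your row size
for (let start = 0; start < newBucketObjects.length; start += CHUNK_SIZE) {
  const chunk = newBucketObjects.slice(start, start + CHUNK_SIZE);
  // Each batch() call is one request to D1 and runs as a single transaction.
  await db.batch(chunk.map(obj => prepared.bind(obj.Key, obj.ETag)));
}
```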