Hi, I am seeing strange behaviour in D1: inserting two rows of data, and the API reporting back 34 rows inserted (and billed).
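In case it helps compare against the billed figure, the per-query counts D1 itself reports can be logged from the Worker; a minimal sketch, assuming a D1 binding named DB and placeholder values:

// Sketch: log what D1 reports for a single insert.
// The binding name DB and the bound values are assumptions for illustration.
const result = await env.DB
  .prepare("INSERT INTO services (id, name, type, created_at) VALUES (?1, ?2, ?3, ?4)")
  .bind("1", "example", "example-type", Date.now())
  .run();
// meta.rows_written / meta.rows_read are the per-query figures D1 reports back
console.log("success:", result.success);
console.log("rows_written:", result.meta.rows_written, "rows_read:", result.meta.rows_read);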
The Edit Cloudflare Workers API token template doesn't include D1. This caught me out a little running D1 commands in CI: the template does include KV and R2, so it seems a little inconsistent that it doesn't include D1, but perhaps there is a reason for that.
Does PRAGMA defer_foreign_keys=ON not disable ON UPDATE and ON DELETE actions? Kinda weird when SQLite so often requires recreating tables (and thus triggering those actions). I've been looking at the schema with:

select * from pragma_table_info('TABLE_NAME_HERE');

(the same check from a Worker is sketched below). Should I avoid the ON DELETE and ON UPDATE features because of this? Or should I dump my current db, create a new one, and upload the data?

Error: D1_ERROR: The destination execution context for this RPC was canceled while the call was still running. The docs just define it as "Generic error." I'm getting it while using the db.batch() command, and my code had been working fine until today. Is there a limit to how many prepared statements db.batch can run?
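For reference, the pragma_table_info check above can also be run from inside a Worker before deciding whether to rebuild a table; a minimal sketch, assuming a D1 binding named DB and 'services' as an example table name:

// Sketch: list column definitions (cid, name, type, notnull, dflt_value, pk) for a table.
// Binding name DB and table name 'services' are assumptions for illustration.
const schema = await env.DB
  .prepare("SELECT * FROM pragma_table_info('services')")
  .all();
console.log(JSON.stringify(schema.results, null, 2));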
This upsert does not work:

INSERT INTO services (id,name,type,created_at) VALUES ("1", "2", "3", "4") ON CONFLICT (id) UPDATE SET name = excluded.name, type = excluded.type WHERE name != excluded.name OR type != excluded.type

The ON CONFLICT DO NOTHING form of the same insert:

INSERT INTO services (id,name,type,created_at) VALUES ("1", "2", "3", "4") ON CONFLICT DO NOTHING

SELECT field1, field2, field3, field4, field5 FROM table WHERE field9 = "k" OR field9 IS NULL

"Network connection lost." in particular is the error a DO client throws when it disconnects from the DO; as long as it's a SELECT, it's safe to retry (a retry sketch follows after the stack trace below).

If you define an INTEGER PRIMARY KEY column, that's the auto-incrementing ID. If you don't, you'll get a default one called ROWID that you can use for queries. You can opt out by defining the table as WITHOUT ROWID, but that comes with some additional constraints - https://sqlite.org/withoutrowid.html - and you should only do that if you know what you're doing.

Also seeing this error from a D1PreparedStatement.raw() call:

[cause]: Error: parser stack overflow
    at D1Database._sendOrThrow (cloudflare-internal:d1-api:67:24)
    at async D1PreparedStatement.raw (cloudflare-internal:d1-api:184:32) {
  [cause]: undefined
}
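Building on the "safe to retry a SELECT" note above, a minimal retry wrapper; matching on the "Network connection lost." message text is an assumption, and db stands for whatever D1 binding is in scope:

// Sketch: retry a read-only statement when the connection to D1 drops.
// Only use this for SELECTs; writes may or may not have been applied.
async function selectWithRetry(stmt: D1PreparedStatement, attempts = 3): Promise<D1Result> {
  let lastErr: unknown;
  for (let i = 0; i < attempts; i++) {
    try {
      return await stmt.all();
    } catch (e: any) {
      if (!String(e?.message).includes("Network connection lost")) throw e; // not retryable
      lastErr = e;
    }
  }
  throw lastErr;
}
// e.g. await selectWithRetry(db.prepare("SELECT field1 FROM t WHERE field9 = ?1 OR field9 IS NULL").bind("k"));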
The full trace of the db.batch() error mentioned above:

Error: D1_ERROR: The destination execution context for this RPC was canceled while the call was still running.
    at D1Database._sendOrThrow (cloudflare-internal:d1-api:66:19)
    at async D1Database.batch (cloudflare-internal:d1-api:36:23)
    at async updateTranscripts (index.js:15762:9)
    at async Object.fetch (index.js:16415:16)

The upsert as currently written:

const query = "INSERT INTO services(id,name,type,created_at)"
    + " VALUES (?,?,?,?)"
    + " ON CONFLICT (id) UPDATE"
    + " SET name = excluded.name, type = excluded.type"
    + " WHERE name != excluded.name OR type != excluded.type";
let r : D1Response = await DATABASE.prepare(query).bind(id,name,type,created_at).run();
console.log("Response success: " + r.success);
console.log("Response error: " + r.error);
+ " WHERE name != excluded.name OR type != excluded.type";INSERT INTO services (id,name,type,created_at) VALUES ("1", "2", "3", "4") ON CONFLICT (id) UPDATE SET name = excluded.name, type = excluded.type WHERE name != excluded.name OR type != excluded.typeINSERT INTO services (id,name,type,created_at) VALUES ("1", "2", "3", "4") ON CONFLICT DO NOTHINGSELECT field1, field2, field3, field4, field5 FROM table WHERE field9 = "k" OR field9 IS NULLNetwork connection lost.INTEGER PRIMARY KEYWITHOUT ROWIDconst transcriptPrepared = db.prepare(`
INSERT INTO transcripts
(
feed_id, archive_date, archive_date_time,
object_key, segment_start, segment_end,
segment
)
VALUES (?1, ?2, ?3, ?4, ?5, ?6, ?7)
`);
const transcriptQueriesToRun = allSegments.map(segmentObj => transcriptPrepared.bind(
segmentObj.feed_id,
segmentObj.archive_date,
segmentObj.archive_date_time,
segmentObj.object_key,
segmentObj.segment_start,
segmentObj.segment_end,
segmentObj.segment
));
console.log("transcriptQueriesToRun.length: ", transcriptQueriesToRun.length);
console.log("transcriptQueriesToRun[0] ", JSON.stringify(transcriptQueriesToRun[0]));
if (transcriptQueriesToRun.length > 0) {
try {
await db.batch(transcriptQueriesToRun);
} catch (e: any) {
console.error({
message: e.message
});
if (e instanceof Error) {
return new Response(JSON.stringify({
"error": e.message,
"traceback": e.stack
}), {
status: 500,
headers: getCorsHeaders()
});
}
}
}
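On the "is there a limit to db.batch" question: I don't know the exact cap, but splitting the statements into smaller batches is one way to test whether size is the trigger. A sketch; CHUNK_SIZE is an arbitrary guess, and note that each db.batch() call commits on its own, so atomicity across chunks is lost:

// Sketch: run the same prepared statements in smaller batches.
const CHUNK_SIZE = 50;
for (let i = 0; i < transcriptQueriesToRun.length; i += CHUNK_SIZE) {
  const chunk = transcriptQueriesToRun.slice(i, i + CHUNK_SIZE);
  await db.batch(chunk);
  console.log(`batch ${i / CHUNK_SIZE + 1} ok (${chunk.length} statements)`);
}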