So I have a question around D1 with regard to dropping large tables. We have tables that come to ~3.5 million rows and ~2.5 GB each. Generally we only have one of these at a time, but when we need to update the data stored in it (it's used as a lookup of sorts), we create a brand new table, switch over to it, then drop the old one. I've put a rough sketch of this flow at the bottom of the post.

That last part is causing us significant issues and I'm not sure how to work around it. While the `DROP TABLE {name}` query is running, the database becomes completely unresponsive and can't serve other queries. The query always returns a failure, whether we issue it via Worker bindings or the API, yet sometimes the drop actually goes through and sometimes it doesn't (presumably the operation times out and a rollback is initiated).

I wanted to know if this is something you guys are aware of, and whether there are plans to improve it, because right now we can't delete our old data without causing a temporary outage, and we may even have to retry the drop multiple times. Cheers!
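For reference, here's a minimal sketch of the swap-and-drop flow described above, using D1 Worker bindings. The table names (`lookup_v1`, `lookup_v2`), the schema, and the `DB` binding name are placeholders, not our actual setup:

```ts
// Sketch of the swap-and-drop flow, assuming a D1 binding named `DB`
// (types from @cloudflare/workers-types).
interface Env {
  DB: D1Database;
}

export default {
  async fetch(request: Request, env: Env): Promise<Response> {
    // 1. Build the replacement lookup table alongside the old one.
    await env.DB.exec(
      "CREATE TABLE IF NOT EXISTS lookup_v2 (key TEXT PRIMARY KEY, value TEXT)"
    );
    // ... bulk-load the ~3.5M rows into lookup_v2 here ...

    // 2. Switch reads over to lookup_v2 (application-side config flip).

    // 3. Drop the old table. On tables this size, this is the step where
    //    the database stops serving queries and the request fails,
    //    even though the drop sometimes succeeds anyway.
    const result = await env.DB.prepare("DROP TABLE lookup_v1").run();
    return Response.json(result);
  },
};
```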