How often do you get 10001 errors, and if you retry, does it eventually work? And you say this is on a new bucket... do you have an older bucket where operations work fine? @RandomNick
Ah ok, if you are talking about the dashboard... yeah, the dashboard has some issues. But actually running operations through the S3 API or the Workers binding API should still be fine.
Note that custom domains are recommended for anything production. The r2.dev domain is only meant for development or testing: it does not have caching and it has a rate limit in place.
I would just have the client send the content length of the file it wants to upload, increase its usage in the DB by that length, and return a signed URL that specifies the provided content length, so we can ensure the uploaded file is actually that size.
so like, the client determines the length of the file, asks your worker "can I upload this size?", and the worker decides it's okay and returns a signed URL that only accepts that length
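A minimal sketch of the quota check the worker would run before handing out a signed URL. All names and numbers here (`authorizeUpload`, the quota figures) are illustrative, not from the thread; the actual signing step (e.g. signing an S3 PUT whose signature covers the Content-Length header) is left as a comment.

```typescript
// Hypothetical shape of an upload authorization request.
interface UploadRequest {
  userId: string;
  contentLength: number; // client-declared size in bytes
}

// Decide whether the declared size fits in the user's remaining quota.
// currentUsage/quota would come from your DB; this is just the pure logic.
function authorizeUpload(
  req: UploadRequest,
  currentUsage: number, // bytes already counted against the user
  quota: number         // the user's total allowance in bytes
): { allowed: boolean; newUsage: number } {
  if (req.contentLength <= 0 || currentUsage + req.contentLength > quota) {
    return { allowed: false, newUsage: currentUsage };
  }
  // On success the worker would persist newUsage, then return a presigned
  // PUT URL whose signature includes the Content-Length, so an upload of
  // any other size fails signature validation at R2.
  return { allowed: true, newUsage: currentUsage + req.contentLength };
}
```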
I would use a DO alarm rather than a cron trigger, personally. You can basically create a DO per user that tracks their uploads, run an alarm on a per-user basis for when their next upload link would have expired, and then check whether the file was actually uploaded or not.
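A sketch of the bookkeeping that per-user alarm could run. The types and names are hypothetical; in a real Durable Object the `alarm()` handler would call something like `sweepExpired` over state loaded from storage, release the reserved quota for anything unconfirmed, and reschedule with `storage.setAlarm(nextAlarm)` if pending uploads remain.

```typescript
// Hypothetical record for each signed URL the worker handed out.
interface PendingUpload {
  key: string;       // R2 object key the signed URL was issued for
  expiresAt: number; // ms timestamp when the signed URL stops working
  uploaded: boolean; // flipped once the upload is confirmed
}

// Returns which pending uploads are past expiry and still unconfirmed
// (so their reserved quota can be released), plus when the next alarm
// should fire, or null if nothing is left to watch.
function sweepExpired(
  pending: PendingUpload[],
  now: number
): { toRelease: PendingUpload[]; nextAlarm: number | null } {
  const toRelease = pending.filter(p => !p.uploaded && p.expiresAt <= now);
  const remaining = pending.filter(p => !p.uploaded && p.expiresAt > now);
  const nextAlarm = remaining.length
    ? Math.min(...remaining.map(p => p.expiresAt))
    : null;
  return { toRelease, nextAlarm };
}
```

The nice property over a cron trigger is that each user's DO only wakes exactly when that user's next link could have expired, instead of polling everyone on a fixed schedule.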
as opposed to KV, which is just a flat key/value store; R2, which is also a key/value store but scales to more and bigger data; and D1, which is just a database
My personal path for choosing is pretty much: can I use a DO for this? Use a DO. Am I fine with higher cost / do I need or want faster responses, and are objects not super large (KV caps values at 25 MiB)? Use KV. Else: use R2.
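That decision path, written out as a function. The option names are made up for illustration; the only figure taken from real limits is the 25 MiB, which is KV's maximum value size.

```typescript
type Store = "DO" | "KV" | "R2";

// Mirrors the "DO first, then KV if small and latency-sensitive, else R2"
// heuristic described above. Inputs are judgment calls, not measurable facts.
function chooseStore(opts: {
  fitsInDurableObject: boolean; // can a DO reasonably own this state?
  wantFastReads: boolean;       // okay paying KV's higher cost for speed?
  maxObjectBytes: number;       // largest value you expect to store
}): Store {
  const KV_VALUE_LIMIT = 25 * 1024 * 1024; // 25 MiB, KV's value size cap
  if (opts.fitsInDurableObject) return "DO";
  if (opts.wantFastReads && opts.maxObjectBytes <= KV_VALUE_LIMIT) return "KV";
  return "R2";
}
```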