So I managed to delete my 1 TB of data from R2... except for a few "Ongoing Multipart Upload" entries, which weirdly appear to be 3.94 GB in size! I can't delete these using the GUI and I can't delete them using rclone... any ideas?
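One thing that has worked for me (a rough sketch, not a verified fix): talk to the bucket over R2's S3-compatible API and abort the unfinished multipart uploads directly. R2 supports AbortMultipartUpload, and I believe ListMultipartUploads as well, so something like the following should enumerate and abort them. The account ID, bucket name, and env var names are placeholders.

```ts
// Sketch: abort all unfinished multipart uploads in an R2 bucket
// via the S3-compatible API using @aws-sdk/client-s3.
import {
  S3Client,
  ListMultipartUploadsCommand,
  AbortMultipartUploadCommand,
} from "@aws-sdk/client-s3";

const s3 = new S3Client({
  region: "auto",
  endpoint: "https://<ACCOUNT_ID>.r2.cloudflarestorage.com", // placeholder
  credentials: {
    accessKeyId: process.env.R2_ACCESS_KEY_ID!,       // placeholder env vars
    secretAccessKey: process.env.R2_SECRET_ACCESS_KEY!,
  },
});

const Bucket = "<BUCKET_NAME>"; // placeholder

async function abortAllMultipartUploads(): Promise<void> {
  // List whatever in-progress multipart uploads the bucket still has.
  const { Uploads = [] } = await s3.send(new ListMultipartUploadsCommand({ Bucket }));
  for (const { Key, UploadId } of Uploads) {
    console.log(`aborting ${Key} (${UploadId})`);
    // Aborting discards the already-uploaded parts and frees the space.
    await s3.send(new AbortMultipartUploadCommand({ Bucket, Key, UploadId }));
  }
}

abortAllMultipartUploads().catch(console.error);
```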
Hey all. I want to upload large files from the client's file system to R2 through my Worker. I am using node:fs to read the file; the client has access to the file system. I understand Workers have memory limits that make this flow slightly complex. By large files I'm talking about 5 GB, 10 GB, or even more.
How should I approach this? I have considered streaming directly from the source to my Worker. I have also considered sending multipart/form-data with the ArrayBuffer. Would this be a case for MPU, or can I upload large files in one request without having to worry about the memory limit? Any insight would be greatly appreciated.
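For files this size, MPU is the usual route, since a Worker can't hold 5-10 GB in memory and request bodies are capped well below that. A minimal sketch of what that could look like, using the R2 multipart bindings and one part per request so the Worker only ever streams a single chunk (the routes, binding name `MY_BUCKET`, and query params are my own assumptions):

```ts
// Sketch: expose R2 multipart uploads through a Worker so a client
// can send a large file as a series of smaller part uploads.
interface Env {
  MY_BUCKET: R2Bucket;
}

export default {
  async fetch(request: Request, env: Env): Promise<Response> {
    const url = new URL(request.url);
    const key = url.searchParams.get("key");
    if (!key) return new Response("missing ?key=", { status: 400 });

    switch (url.pathname) {
      case "/mpu/create": {
        // Start the multipart upload and hand the uploadId back to the client.
        const upload = await env.MY_BUCKET.createMultipartUpload(key);
        return Response.json({ uploadId: upload.uploadId });
      }
      case "/mpu/part": {
        const uploadId = url.searchParams.get("uploadId")!;
        const partNumber = Number(url.searchParams.get("partNumber"));
        const upload = env.MY_BUCKET.resumeMultipartUpload(key, uploadId);
        // The request body is streamed into the part, so the Worker never
        // buffers the whole chunk in memory.
        const part = await upload.uploadPart(partNumber, request.body!);
        return Response.json(part); // { partNumber, etag } for the client to keep
      }
      case "/mpu/complete": {
        const uploadId = url.searchParams.get("uploadId")!;
        const parts: R2UploadedPart[] = await request.json();
        const upload = env.MY_BUCKET.resumeMultipartUpload(key, uploadId);
        const object = await upload.complete(parts);
        return Response.json({ etag: object.httpEtag });
      }
      default:
        return new Response("not found", { status: 404 });
    }
  },
};
```

On the client side the Node process could read fixed-size slices with `fs.createReadStream(path, { start, end })` and PUT each one to `/mpu/part`, then POST the collected `{ partNumber, etag }` list to `/mpu/complete`. If I remember right, R2 also wants all parts except the last to be the same size, so pick one chunk size and stick with it, but double-check that against the docs.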
Anyone had success getting jurisdictional buckets to work with the Terraform provider? Tested again just now with the latest 4.26.0, and it just says "bucket not found" if you try to reference the bucket (there's no issue on GitHub for this; I'm putting together a small repro to post there).
I installed rclone and uploaded a file, and it wiped out all the old files. Is there any way to recover them? I went to R2 in the Cloudflare dashboard and only saw the single file I had just uploaded through rclone; the previous files are gone, but the bucket size is still displayed as 10 GB.