I see, so the main benefit here is no egress fees. Is there a FUP (fair use policy) for egress? It's all legal documents, just want to know before we migrate from other clouds.
@Skye I would assume that if the request is made far away, the object would pass over Cloudflare's internal network anyway to get there, no? In other words, transfer acceleration (Argo?) is baked into Cloudflare. Or would it go over public networks? Thinking of moving up to 20 TB.
I make a lot of requests to R2 from every single CF region, but getting a definitive answer on which data center is hosting the bucket isn't really possible. Which DC gives the lowest latency varies all the time; all I could say is that east is in the east and west is in the west for NA, for example lol
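(Not that it tells you where the bucket actually lives, but if you want to see which colo served a given request, the CF-Ray response header ends in the data center's IATA code. A minimal sketch, assuming the standard R2 S3 endpoint; the URL is a placeholder:)

```python
# Minimal sketch: inspect the CF-Ray header (and latency) of a request to R2.
# The trailing IATA code is the Cloudflare colo that handled the request, which
# can vary per request; it does not reveal where the bucket's data is stored.
# The URL below is a placeholder.
import requests

resp = requests.head("https://<account-id>.r2.cloudflarestorage.com/<bucket>/<key>")
print(resp.headers.get("cf-ray"), f"{resp.elapsed.total_seconds() * 1000:.0f} ms")
```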
Please do not post your question in multiple channels/post it multiple times per the rules at #welcome-and-rules. It creates confusion for people trying to help you and doesn't get your issue or question solved any faster.
Could my 502s be caused by me throttling my network? Is Cloudflare using Workers internally for put_object, and the Worker is getting killed?
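(For what it's worth, a client-side workaround for intermittent 502s is just letting the SDK retry harder. A minimal boto3 sketch, with the R2 endpoint and credentials as placeholders:)

```python
# Minimal sketch: raise botocore's retry limit so transient 502s on put_object
# are retried client-side. Endpoint and credentials are placeholders; this only
# papers over the errors, it doesn't explain them.
import boto3
from botocore.config import Config

s3 = boto3.client(
    "s3",
    endpoint_url="https://<account-id>.r2.cloudflarestorage.com",
    aws_access_key_id="<r2-access-key-id>",
    aws_secret_access_key="<r2-secret-access-key>",
    config=Config(retries={"max_attempts": 10, "mode": "standard"}),
)

s3.put_object(Bucket="my-bucket", Key="example.bin", Body=b"hello")
```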
Is there an overview available of which cities/locations are mapped to which location hint? I need to figure out the difference between Western Europe and Eastern Europe.
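(Related, in case it's useful: I believe you can set the hint yourself at bucket creation by passing it as the S3 LocationConstraint, though that's worth double-checking against the current docs. Rough boto3 sketch, with "weur" assumed to be the Western Europe hint and credentials coming from the environment:)

```python
# Rough sketch, assuming R2 accepts a location hint via the S3 CreateBucket
# LocationConstraint parameter (verify against current R2 docs). "weur" is
# assumed to be the Western Europe hint; endpoint is a placeholder and
# credentials come from the environment.
import boto3

s3 = boto3.client("s3", endpoint_url="https://<account-id>.r2.cloudflarestorage.com")

s3.create_bucket(
    Bucket="my-weur-bucket",
    CreateBucketConfiguration={"LocationConstraint": "weur"},
)
```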
Thank you so much! Then, if I set the time-to-live (TTL) to a really long duration (e.g., 3 months) with Cache Rules, does that mean Class B operations won't be counted during that period?
Haven't tried using it with cache, but you would probably need the "ignore query string" setting for the cache level. But also, if you did that, it kind of defeats the purpose of the HMAC validation, since hitting the cache wouldn't actually be doing the validation.
So I'm not really sure you could actually get both (HMAC validation and a cache hit).
That reminds me... I'm using is_timed_hmac_valid_v0() in some places instead of presigned URLs because presigned URLs don't support HTTP/2 or HTTP/3.
If someone is reading this: can we finally get presigned URLs on a protocol that came out more recently than almost 30 years ago?
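(For context, generating a presigned URL against R2's S3-compatible endpoint looks roughly like this with boto3; account ID, bucket, and key are placeholders. The protocol complaint above is about how the endpoint then serves the signed URL, not about generating it.)

```python
# Minimal sketch: generate a presigned GET URL for an R2 object via the
# S3-compatible API with boto3. Account ID, bucket, and key are placeholders;
# credentials are assumed to come from the environment.
import boto3

s3 = boto3.client(
    "s3",
    endpoint_url="https://<account-id>.r2.cloudflarestorage.com",
    region_name="auto",
)

url = s3.generate_presigned_url(
    "get_object",
    Params={"Bucket": "my-bucket", "Key": "path/to/file"},
    ExpiresIn=3600,  # URL valid for one hour
)
print(url)
```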
Hi, I'm trying to upload a file to my R2 bucket with rclone but I get this error: "S3: PutObject, exceeded maximum number of attempts". It tries twice and uploads successfully on the 3rd attempt, but it costs me time. I upload a maximum of 10 new files per day, so there's no way I can exceed any limit.
This is the command I use: "rclone sync /remote r2:bucket --progress --transfers=8 --s3-upload-concurrency=8 --s3-upload-cutoff=512M --s3-chunk-size=512M --fast-list --size-only". There was no problem for a week, but today it started giving this error.
When I looked at the log file in more detail, I saw this: "can't copy - source file is being updated". I'm connected to the remote server with Samba/CIFS, so I guess it's a problem related to that. The file was actually edited an hour ago, but rclone still sees it as being edited. It doesn't matter, though: adding --local-no-check-updated solved the problem. Sorry for wasting your time, thanks.
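(For anyone who lands here with the same "source file is being updated" error over SMB/CIFS: the fix was just appending the flag to the original command, e.g. "rclone sync /remote r2:bucket --progress --transfers=8 --s3-upload-concurrency=8 --s3-upload-cutoff=512M --s3-chunk-size=512M --fast-list --size-only --local-no-check-updated", which stops rclone from re-checking whether the mounted source files changed during the upload.)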