Best practices for large-scale R2 backup (≈250 TB, long-running)
Hey all! (I'm using a translator.)
I’m planning a one-time large backup operation from Cloudflare R2 to another object storage provider (Backblaze B2), and I’d really appreciate some guidance to make sure I handle it as safely as possible.
Context:
- Around 250 TB total
- Multimedia content uploaded by my customers, mostly delivered using HLS (m3u8 playlists with many small .ts segments)
- The R2 bucket is strictly read-only during the operation (no deletes, no sync)
- Using rclone in copy mode only (remote setup sketched just below)
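For context, the two remotes are configured along these lines; the account ID and all credentials below are placeholders, so treat this as a sketch rather than my exact config:

```bash
# Remote setup sketch (ACCOUNT_ID and all keys are placeholders).
# R2 is reached through rclone's S3 backend with provider=Cloudflare;
# B2 uses rclone's native b2 backend.
rclone config create r2 s3 \
  provider=Cloudflare \
  region=auto \
  access_key_id=PLACEHOLDER_ACCESS_KEY_ID \
  secret_access_key=PLACEHOLDER_SECRET_KEY \
  endpoint=https://ACCOUNT_ID.r2.cloudflarestorage.com

rclone config create b2 b2 \
  account=PLACEHOLDER_B2_KEY_ID \
  key=PLACEHOLDER_B2_APPLICATION_KEY
```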
Planned approach:
- Stable and continuous bandwidth (roughly 60–120 MB/s total, no bursts)
- Limited concurrency (2–3 parallel rclone processes, low transfers/checkers)
- Folder-by-folder copy (per logical content unit, no global bucket scans; see the command sketch after this list)
- No --fast-list, no aggressive retries
- Keep traffic predictable and non-abusive
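Concretely, each worker would look roughly like this; the bucket names, the per-customer prefix, and the exact flag values are illustrative, not a final recipe:

```bash
# One rclone worker per logical content unit (prefix); 2-3 of these run
# against different prefixes at a time. Not passing --fast-list keeps
# the listing paged, so memory stays flat even with many small segments.
rclone copy r2:media/customer-a b2:media-backup/customer-a \
  --transfers 4 \
  --checkers 4 \
  --bwlimit 40M \
  --retries 3 \
  --low-level-retries 10 \
  --stats 5m \
  --log-level INFO \
  --log-file copy-customer-a.log
```

With --bwlimit 40M per process, 2–3 processes land in the 80–120 MB/s range, which keeps total throughput inside the planned window without bursts.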
Goals:
- Safely complete a full backup without putting the source data at risk (see the verification sketch after this list)
- Minimize the chance of interruptions during a long-running transfer (~2–3 weeks)
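For the verification pass at the end, I'm considering something along these lines (bucket names are placeholders again):

```bash
# Post-copy verification sketch.
# --one-way: only confirm that every source object exists in the backup;
#   extra files on the B2 side are not treated as errors.
# --size-only: R2 exposes MD5 and B2 exposes SHA-1, so there is no hash
#   type in common; size is the cheap cross-cloud comparison. Adding
#   --download would re-read and compare actual bytes for a stronger check.
rclone check r2:media b2:media-backup \
  --one-way \
  --size-only \
  --log-level INFO \
  --log-file verify.log
```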
Questions:
- Is this approach aligned with R2 best practices for large-scale backup operations?
- Are there specific limits or patterns I should be extra careful about when doing a long-running backup like this?