Hello, I'd like to migrate an app's videos (about 12 of them) from Vimeo to R2 to keep costs as low as possible.
My app has around 50k users who need to watch these videos. From the research I've done, R2 should cost very little for my use case compared to solutions that bill based on data traffic. Is that true?
Sorry for asking, but it seems too good to be true.
The total size of the videos is under 10 GB, and they are viewed by 50k people per month, for roughly 25–50 TB of bandwidth per month. With alternative solutions, that would cost me more than $10–15k per year.
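For reference, my back-of-the-envelope comparison. Every price in here is an illustrative assumption on my part, not a quoted rate:

```python
# Rough cost comparison: egress-billed hosting vs. R2, which charges no egress fees.
# All prices are illustrative assumptions, not quotes from any provider.
monthly_egress_tb = 40            # midpoint of my 25-50 TB/month estimate
egress_price_per_gb = 0.03        # hypothetical per-GB bandwidth rate elsewhere
storage_gb = 10                   # total size of the videos
r2_storage_price_per_gb = 0.015   # assumed R2 storage rate per GB-month

egress_cost = monthly_egress_tb * 1000 * egress_price_per_gb   # bandwidth-billed
r2_cost = storage_gb * r2_storage_price_per_gb                 # egress from R2 is free
print(f"egress-billed: ${egress_cost:,.0f}/month vs. R2 storage: ${r2_cost:.2f}/month")
```

Even with generous assumptions, the storage-only bill is orders of magnitude smaller once egress is free, which is why it looked "too good to be true" to me.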
Let's say a user on my platform wants to watch a 20-minute video stored on R2. To do that, they need to get a presigned URL from my backend. How many Class A / B operations (like GetObject) will this consume?
Hi, I've just started running the migration script to move all images from Cloudflare Images to R2, but it looks like I've hit rate limits, and there are no clear docs about what I can do about it. We need to migrate around 3 million images, and I'm roughly following https://github.com/cloudflare/cloudflare-docs/pull/12615. Starting the migration also took our prod down, because the rate limit applies across all R2 and Cloudflare Images calls.
response: {
  status: 429,
  statusText: 'Too Many Requests',
}
I can't see more information in the response.
I added a delay and even started pulling images in groups of 10, but at some point it still hits the rate limit, and our production gets affected along with it.
It seems impossible to pull 3 million images without affecting production. Is there something we can configure temporarily, or another way to do this?
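What I ended up wrapping around each API call is a retry with exponential backoff and jitter on 429 responses, roughly like this (the helper name and the retry limits are my own choices, not anything Cloudflare documents):

```python
# Sketch: retry wrapper with exponential backoff for rate-limited API calls.
# `fetch` is any zero-argument callable returning an object with a `.status`
# attribute; max_retries and base_delay are assumed values, tune as needed.
import random
import time

def with_backoff(fetch, max_retries=8, base_delay=1.0):
    for attempt in range(max_retries):
        resp = fetch()
        if resp.status != 429:
            return resp
        # Exponential backoff plus jitter so parallel workers desynchronize
        # instead of hammering the limit again at the same instant.
        delay = base_delay * (2 ** attempt) + random.uniform(0, 0.5)
        time.sleep(delay)
    raise RuntimeError("still rate limited after retries")
```

Backing off instead of retrying immediately also leaves headroom in the shared limit for production traffic, which was the real problem for us.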
Question about R2 Sippy — I know it’s in beta and not recommended in production:
Currently I have an S3 bucket where content gets uploaded via Lambda, and this needs to stay that way.
When it goes GA, will this feature handle a few hundred requests in a short period of time? For example, 1k objects are requested in a short window and none of them exist in R2 yet.
If I request a non-existent object in a Worker through the bucket binding on the `env` variable, will Sippy also fetch it from Amazon? Or does that only apply to HTTP requests to my R2 subdomain/custom domain? (Already got an answer: it should work in the Worker.)
@nora @Space Looks like using the image URLs directly works fine! I've already migrated more than 5k images without any errors, and it isn't affecting the prod APIs. This seems to be the way to go!
One more quick question: we're using a continuation token to retrieve the next page of images. If the migration somehow stops in the middle, can I use the latest continuation token to resume where it left off?
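In case it helps someone, the resume logic I have in mind, sketched under the assumption that each list call takes the previous page's continuation token and returns the next one (the checkpoint file name, `list_page`, and the page shape are my own placeholders):

```python
# Sketch: resumable pagination by persisting the continuation token.
# `list_page(token)` stands in for the paginated list call and must return
# {"images": [...], "continuation_token": <next token or None>}.
import json
import os

CHECKPOINT = "migration_checkpoint.json"  # assumed checkpoint location

def load_token():
    if os.path.exists(CHECKPOINT):
        with open(CHECKPOINT) as f:
            return json.load(f).get("continuation_token")
    return None

def save_token(token):
    with open(CHECKPOINT, "w") as f:
        json.dump({"continuation_token": token}, f)

def migrate_all(list_page, migrate_image):
    token = load_token()  # resume from the last checkpoint if one exists
    while True:
        page = list_page(token)
        for image in page["images"]:
            migrate_image(image)
        token = page.get("continuation_token")
        if not token:
            break
        save_token(token)  # persist only after the whole page is done
```

Saving the token only after a page fully completes means a restart re-processes at most one page, which is harmless if the per-image copy is idempotent.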