Hello, my website currently plays videos stored in HLS format in R2. The new thing I want to do is create clips and download them. My idea is to take the segments covered by the clip and, with an offset and duration, turn them into a single .mp4 video with ffmpeg, then store that new .mp4 in R2. Where could I run this ffmpeg command? In a Lambda function? Do Lambdas have good integration with R2?
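For context, here's roughly the shape of what I had in mind (just a sketch: the bucket name, env vars, and output path are placeholders, and I'm assuming ffmpeg can read the .m3u8 playlist directly):

```ts
import { execFileSync } from "node:child_process";
import { readFileSync } from "node:fs";
import { S3Client, PutObjectCommand } from "@aws-sdk/client-s3";

// R2 is S3-compatible, so the regular AWS SDK works against the R2 endpoint.
const r2 = new S3Client({
  region: "auto",
  endpoint: `https://${process.env.R2_ACCOUNT_ID}.r2.cloudflarestorage.com`,
  credentials: {
    accessKeyId: process.env.R2_ACCESS_KEY_ID!,
    secretAccessKey: process.env.R2_SECRET_ACCESS_KEY!,
  },
});

async function makeClip(playlistUrl: string, offsetSec: number, durationSec: number, outKey: string) {
  // -ss/-t apply the offset and duration; -c copy avoids re-encoding but only
  // cuts cleanly on keyframes, so frame-accurate clips may need a re-encode.
  execFileSync("ffmpeg", [
    "-y",
    "-ss", String(offsetSec),
    "-t", String(durationSec),
    "-i", playlistUrl, // the HLS .m3u8 playlist
    "-c", "copy",
    "/tmp/clip.mp4",
  ]);
  await r2.send(new PutObjectCommand({
    Bucket: "my-videos", // placeholder bucket name
    Key: outKey,
    Body: readFileSync("/tmp/clip.mp4"),
    ContentType: "video/mp4",
  }));
}
```

On Lambda specifically, I assume I'd have to bundle ffmpeg myself (e.g. as a layer), since the runtime doesn't ship with it.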
Hey! I'm trying to see if R2 is a good fit for my use case, but I have a question. I know you can serve a video file from R2 through a custom domain, which lets it make use of the Cloudflare cache, but is there a file size limit? For example, if the file is a 50 MB .mp4, will it be served through the cache, but if it's a 3 GB .mp4, will it be loaded directly from R2 every time? Or is there no limit?
Ah I see, thanks! But let's say I'm using HLS/DASH to serve the MP4, which means I'm technically serving smaller files. Will that affect the caching in any way? Like, say I break each part down to be smaller than 512 MB?
Improve performance in the sense that technically I'll be serving my 5 GB file through the CDN rather than from R2 directly? Or will it still be served from R2 even though I'm only sending small parts of the large 5 GB file?
Amazing, thanks! I have a few more questions. Basically, I'm working on a small project and currently just using presigned URLs to serve the file when I click download. From my understanding, doing it this way won't get the file cached; for that I'd need to attach my own domain. Once I do that, won't I still be generating presigned URLs the same way, or do I have to change my code? And will the file be cached this way? ("This way" meaning I'm generating a presigned URL but I have a domain set up.)
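For reference, this is roughly what my download handler does today (a sketch using the AWS SDK v3; the bucket name and env vars are placeholders):

```ts
import { S3Client, GetObjectCommand } from "@aws-sdk/client-s3";
import { getSignedUrl } from "@aws-sdk/s3-request-presigner";

const r2 = new S3Client({
  region: "auto",
  endpoint: `https://${process.env.R2_ACCOUNT_ID}.r2.cloudflarestorage.com`,
  credentials: {
    accessKeyId: process.env.R2_ACCESS_KEY_ID!,
    secretAccessKey: process.env.R2_SECRET_ACCESS_KEY!,
  },
});

// Generates a time-limited GET URL pointing at <account>.r2.cloudflarestorage.com.
export async function downloadUrl(key: string): Promise<string> {
  return getSignedUrl(
    r2,
    new GetObjectCommand({ Bucket: "my-files", Key: key }), // placeholder bucket
    { expiresIn: 3600 } // URL valid for 1 hour
  );
}
```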
Oh interesting, so if you set up a custom domain, all files in the bucket are public by default? The end goal is to be able to make a link either publicly accessible or private. My initial brute-force idea was to create two buckets, one public and one private, and move the file depending on the setting the user chooses, but that seems excessive and there's got to be an easier way.
Ohh gotcha, so the way I'm doing it right now is exactly how you said: when a client requests the file, I generate a GET presigned URL, which is served through r2.cloudflarestorage.com. When I buy a domain and attach it to R2, can I keep my code the same, so that when a user requests the file it'll still generate the GET presigned URL and send that to the client, but this time it'll go through my custom domain and thus be cached? (I guess I could test all this by buying a domain and attaching it, but I just want to be sure lol.)
Yup, will do! So when using a custom domain, all this will be handled through HMAC auth, not through presigned URLs? Is the HMAC generated the same way code-wise? Basically I'm wondering whether I have to change my code from how it is right now, where I just generate presigned URLs and they're served through r2.cloudflarestorage.com, to something else when I attach a custom domain and want to use HMAC.
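Like, is the idea something along these lines? (Just a sketch of generic HMAC-signed links that something like a Worker in front of the custom domain would verify; the domain, secret, and query-param names are conventions I made up.)

```ts
import { createHmac } from "node:crypto";

const SECRET = process.env.LINK_SIGNING_SECRET!; // shared with the verifier

// Sign path + expiry; the verifier recomputes the MAC and rejects mismatches
// or expired links.
export function signedLink(path: string, ttlSeconds = 3600): string {
  const expires = Math.floor(Date.now() / 1000) + ttlSeconds;
  const mac = createHmac("sha256", SECRET)
    .update(`${path}:${expires}`)
    .digest("hex");
  return `https://files.example.com${path}?expires=${expires}&sig=${mac}`;
}
```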
We are writing to report an issue that has been brought to our attention. Our users in Russia and Kazakhstan are experiencing difficulties accessing images hosted on our R2 storage.
Could you please advise us on how we might resolve this issue to ensure that our content is accessible to these users?
I have read an article suggesting that access issues for our users in Russia and Kazakhstan may be due to Cloudflare's use of TLS Encrypted ClientHello (ECH). Could you please advise if there is a way to disable TLS ECH on our end to resolve this issue?
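The article also suggested that the setting can be toggled through the API, reportedly a zone setting named `ech`; something like the sketch below (the zone ID and token are placeholders, and we have not verified this endpoint ourselves):

```ts
// Unverified, based on the article: PATCH the "ech" zone setting to "off".
const ZONE_ID = process.env.CF_ZONE_ID!;     // placeholder
const API_TOKEN = process.env.CF_API_TOKEN!; // placeholder

const res = await fetch(
  `https://api.cloudflare.com/client/v4/zones/${ZONE_ID}/settings/ech`,
  {
    method: "PATCH",
    headers: {
      Authorization: `Bearer ${API_TOKEN}`,
      "Content-Type": "application/json",
    },
    body: JSON.stringify({ value: "off" }),
  }
);
console.log(await res.json());
```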
Could you please let me know if the Pro plan is sufficient for our needs, or do we require the Business plan to resolve our current issues with user access?
I've noticed that DNS proxying is enabled for my domain, but I'm unable to disable it because the DNS record is marked as read-only. The message states that "This record was generated by Cloudflare R2." As a result, I cannot directly modify the DNS settings to disable proxying.
I wanted to let you know that I've resolved the issue with the images not displaying. I removed the Custom Domains and switched to using the R2.dev URL. Now users can see the images without any problems.
Managed public bucket access through an r2.dev subdomain is not intended for production usage and has a rate limit applied to it. If you exceed the rate limit, requests through your r2.dev subdomain will be temporarily throttled and you will receive a 429 Too Many Requests response. For production use cases, consider linking a custom domain to your bucket.
Now I'm working with Postman to test the R2 API. When I pass the 'Date' header as 'Fri, 08 11 2024 12:42:51 GMT' or 'Fri, 08 Nov 2024 20:42:51 GMT', the server returns: "Date provided in 'date' header (Fri, 08 11 2024 12:38:00 GMT) didn't parse successfully". So, what's the right Date format?
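For context, I think the expected shape is the RFC 7231 "IMF-fixdate" (my second attempt matches it); in JS I can generate it like this:

```ts
// RFC 7231 IMF-fixdate, e.g. "Fri, 08 Nov 2024 20:42:51 GMT".
// Date.prototype.toUTCString() produces exactly this shape.
const dateHeader = new Date().toUTCString();
console.log(dateHeader);
```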
Hi there! We are facing issues with end users uploading files to Cloudflare R2. Our users upload files from their browsers using presigned URLs. Sometimes they get 502 errors, usually when they upload 1000+ files in one go. Under the hood, not all uploads are started simultaneously; usually there are 10-100 file uploads running at the same time. I can't reproduce the errors, but they keep happening. Are there any limits on the R2 side that stop uploads, or does anyone have an idea what could cause this error? It's unlikely to be related to the underlying machine or internet connection, since this is happening for multiple of our users.
Alright, thanks for the help. I'll have a look at how I can reduce the concurrent load or otherwise spread it out. I don't know if I didn't search hard enough, but I couldn't find this answer with Google. Maybe you can add it somewhere in the docs for others who might face the same issue?
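For anyone else who finds this later, here's the kind of cap I'm planning on our side (a sketch; `presigned` is a list of PUT URLs we already fetched from our backend, and the limit of 10 is a guess, not a recommended value):

```ts
// Upload files over presigned PUT URLs with at most `limit` requests in flight.
async function uploadAll(
  files: File[],
  presigned: string[], // one presigned PUT URL per file, from our backend
  limit = 10
): Promise<void> {
  let next = 0;
  async function worker(): Promise<void> {
    while (next < files.length) {
      const i = next++; // safe: JS is single-threaded between awaits
      const res = await fetch(presigned[i], { method: "PUT", body: files[i] });
      if (!res.ok) throw new Error(`upload ${i} failed: ${res.status}`);
    }
  }
  // Spin up `limit` workers that drain the shared queue.
  await Promise.all(Array.from({ length: Math.min(limit, files.length) }, worker));
}
```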