Downloading a file from R2, but unsure what the best method is

Hey there, newbie here. I've read several Cloudflare docs on how to give users temporary access to a file using signed URLs, but I want the URL to be accessible only once, not just for a short period. The other method would be using the object.httpEtag attribute, but I really don't know how I'd use ETags in my application. All I could find about ETags is that browsers store them to decide whether to serve the file from cache or download it again, but I don't need any caching at all; I always want the file downloaded no matter what (I understand the concept of ETags, I just don't need it). My last idea would be to just get the file data within my Worker and then pass the data back in the response, but this kills the CPU time and isn't great as a free user. Any ideas or examples of a simple implementation to fix my issue? I don't want to rely on AI prompts, but I didn't find many resources online (probably missed some good ones). A one-time-use link would be perfect.
14 Replies
Chaika · 3mo ago
> My last idea would be to just get the file data within my Worker and then pass the data back in the response, but this kills the CPU time and isn't great as a free user.
As long as you return the body/ReadableStream, the Worker is done and it doesn't consume any CPU time while the user downloads the file: https://blog.cloudflare.com/workers-optimization-reduces-your-bill (That wouldn't be CPU time either way, which is what Workers are limited/billed on.) But yeah, I'd just put it behind a Worker. You may need Workers Paid if you're getting more than 100k requests/day, but otherwise it's pretty cheap. S3 presigned links just won't work, nor would R2 custom domains with HMAC or anything like that: you need something to statefully store the fact that the file has been downloaded. A Worker can do that conditionally, e.g. by calling an internal API, or using KV, or whatever.
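A minimal sketch of the KV approach, under assumptions: the binding names `env.TOKENS` (a KV namespace mapping token → object key) and `env.BUCKET` (the R2 bucket) are made up, and the token is assumed to have been created and stored elsewhere. The Worker looks the token up, deletes it, then streams the object back:

```javascript
// One-time download Worker sketch. Binding names TOKENS and BUCKET are
// hypothetical. In a real Worker you'd `export default worker;`.
const worker = {
  async fetch(request, env) {
    const token = new URL(request.url).searchParams.get("token");
    if (!token) return new Response("Missing token", { status: 400 });

    // The token's KV value is the R2 object key it grants access to.
    const objectKey = await env.TOKENS.get(token);
    if (objectKey === null) {
      return new Response("Link already used or invalid", { status: 403 });
    }

    // Invalidate the token before serving, so the link works exactly once.
    await env.TOKENS.delete(token);

    const object = await env.BUCKET.get(objectKey);
    if (object === null) return new Response("Not found", { status: 404 });

    // Returning object.body hands R2's ReadableStream straight through to
    // the client; the Worker doesn't buffer the file or burn CPU time on it.
    return new Response(object.body, {
      headers: { "Content-Type": "application/octet-stream" },
    });
  },
};
```

One caveat: KV is eventually consistent, so two near-simultaneous requests could both see the token before the delete lands. If you need a strict exactly-once guarantee, a Durable Object or a D1 row would be the stateful store instead.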
Spuckwaffel · 3mo ago
Thanks for the info. However, I think the way I'm currently doing it will still bill me, or is this way right?
(screenshot attached, no description)
Chaika · 3mo ago
It still wouldn't be CPU time used while you wait for the file to be buffered in, but yeah, that's a silly way of doing it either way and would explode if the file were too large. All you need to do is return fileBytes.body in the Response constructor and it'll be streamed right through.
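To illustrate the difference outside the Workers runtime (this uses only globals that plain Node 18+ also has, nothing Workers-specific): passing the ReadableStream itself to the Response constructor lets the client pull chunks as they arrive, instead of the producer buffering the whole file first.

```javascript
// Buffered (avoid): `const data = await object.arrayBuffer();` reads the
// whole file into the Worker's 128 MB of memory before responding.
// Streamed (preferred): hand the ReadableStream to Response, as below.
const stream = new ReadableStream({
  start(controller) {
    const enc = new TextEncoder();
    controller.enqueue(enc.encode("chunk-1 "));
    controller.enqueue(enc.encode("chunk-2"));
    controller.close();
  },
});

// In the Worker this is `return new Response(object.body, { headers })`.
const response = new Response(stream);
```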
Spuckwaffel · 3mo ago
Alright, thank you! Also, is there any limit on the maximum file size?
Chaika · 3mo ago
R2 has limits like 5 TiB per object
Spuckwaffel · 3mo ago
I meant in the Worker, sorry.
Chaika · 3mo ago
Streaming through a Worker has no exact limit. Workers only have 128 MB of memory, but since you're streaming it through, the file can be whatever size. I've served multi-GB files from R2 via Workers before.
Spuckwaffel · 3mo ago
ah thank you very much
Chaika · 3mo ago
Uploading to a Worker is a different story though
Spuckwaffel · 3mo ago
Is there an API for R2 to do it directly without using a Worker, e.g. using account credentials?
Chaika · 3mo ago
You'd have your zone/website upload limit (100 MB on Free and Pro, 200 MB on Business, and 500 MB on Enterprise by default). R2 supports the same S3 API that AWS S3 uses via its S3 compatibility layer, so you can use presigned links to let people upload directly and bypass the upload limit you'd hit going through the Worker.
Chaika · 3mo ago
Cloudflare Docs
Presigned URLs · Cloudflare R2 docs
Presigned URLs are an S3 concept for sharing direct access to your bucket without revealing your token secret. A presigned URL authorizes anyone with …
Spuckwaffel · 3mo ago
alright thank you very much
Chaika · 3mo ago
(If you're not familiar with S3/object storage: S3 is kind of the generally supported API for object storage in general. There's a ton of existing tooling and applications built around it, and R2 is mostly plug-and-play with them.)