Hello. I have a frontend app that uses a PHP backend. I have image files that only authenticated users should be able to view. Authentication is done via cookies and headers against the PHP server. The frontend app gets data from PHP, including the URLs of said images, and these URLs are embedded using <img> tags. Currently I have an .htaccess file that re-routes requests to these images to a PHP function that checks whether the user is authenticated and authorized and, if so, serves the image from disk. I would like to move all the images to an R2 instance. I would like to issue a cookie for the user when they log in, and I want users to use that cookie to get the images from R2 directly. Is that at all possible?
@Space - Ping in replies I read about signed URLs. I would like to avoid re-calculating (pre-signing) the image URLs each time they are served. Is there a way I can issue a cookie or an access token that is good for, say, 24 hours, and then the frontend app can use this cookie or token on all of its requests to R2?
Keep in mind that pre-signing doesn't do any network I/O; it simply computes a URL from input parameters, so it's fairly lightweight.
Presigned URLs are an S3 concept for sharing direct access to your bucket without revealing your token secret. A presigned URL authorizes anyone with …
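To illustrate how cheap this is, here's a minimal sketch of generating a 24-hour presigned GET URL against R2 with the AWS SDK for Java v2 (your PHP backend can do the equivalent with the PHP SDK's createPresignedRequest). The account ID, bucket name, and object key below are placeholders:

```java
import java.net.URI;
import java.time.Duration;
import software.amazon.awssdk.auth.credentials.AwsBasicCredentials;
import software.amazon.awssdk.auth.credentials.StaticCredentialsProvider;
import software.amazon.awssdk.regions.Region;
import software.amazon.awssdk.services.s3.model.GetObjectRequest;
import software.amazon.awssdk.services.s3.presigner.S3Presigner;
import software.amazon.awssdk.services.s3.presigner.model.GetObjectPresignRequest;

public class PresignExample {
    public static void main(String[] args) {
        // R2 speaks the S3 protocol; point the presigner at your account's R2 endpoint.
        S3Presigner presigner = S3Presigner.builder()
                .region(Region.of("auto")) // R2 uses the literal region "auto"
                .endpointOverride(URI.create("https://<ACCOUNT_ID>.r2.cloudflarestorage.com"))
                .credentialsProvider(StaticCredentialsProvider.create(AwsBasicCredentials.create(
                        System.getenv("R2_ACCESS_KEY_ID"),
                        System.getenv("R2_SECRET_ACCESS_KEY"))))
                .build();

        GetObjectPresignRequest presignRequest = GetObjectPresignRequest.builder()
                .signatureDuration(Duration.ofHours(24)) // URL stays valid for 24h
                .getObjectRequest(GetObjectRequest.builder()
                        .bucket("my-bucket")        // placeholder
                        .key("images/photo.jpg")    // placeholder
                        .build())
                .build();

        // No network call happens here: the SDK just computes an HMAC signature locally.
        String url = presigner.presignGetObject(presignRequest).url().toString();
        System.out.println(url);
    }
}
```

So you could compute these URLs at the moment your PHP backend renders the image list, rather than running a signing service per request.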
Hi everyone, I'm new to R2. I want to know whether a domain connection is required to make a bucket public. Do I need to buy a new domain for this? (I don't want to use the R2.dev subdomain as it is rate-limited.) Thank you!
I already have a domain, and apparently I can use a subdomain to make my bucket public. Is there any resource that could help me set this up? I accidentally connected my main domain, and one minute later I received an email complaining that my app had crashed :blob_sweat:. I don't want to repeat the same mistake, so: is creating a subdomain and connecting it to a bucket documented somewhere?
You should not be letting users name the file keys you store in S3-like services; ideally, track that filename separately somehow and store the file under a predictable key, as in the sketch below.
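A rough sketch of that approach with the AWS SDK for Java v2 (the client, bucket name, metadata key, and helper are all made up for illustration, and S3 metadata values should stay ASCII, so for arbitrary filenames a key-to-name mapping in your own database is safer):

```java
import java.util.Map;
import java.util.UUID;
import software.amazon.awssdk.core.sync.RequestBody;
import software.amazon.awssdk.services.s3.S3Client;
import software.amazon.awssdk.services.s3.model.PutObjectRequest;

class UploadHelper {
    /**
     * Stores the file under a server-generated key; the user's filename only
     * ever appears as metadata, so it never has to be URL-safe.
     */
    static String upload(S3Client s3, String bucket, byte[] bytes, String userFilename) {
        String key = "uploads/" + UUID.randomUUID(); // predictable, URL-safe key
        s3.putObject(PutObjectRequest.builder()
                        .bucket(bucket)
                        .key(key)
                        .metadata(Map.of("original-filename", userFilename))
                        .build(),
                RequestBody.fromBytes(bytes));
        return key; // track key -> userFilename in your database
    }
}
```

If you want downloads served straight from the bucket to carry the original name without a separate lookup, you can also set a Content-Disposition header on the object at upload time.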
However, this is a bug. Files with such names are possible, don't violate any standards, and work when served directly from other S3-like services. Besides, I want to serve files directly from R2/S3, not waste resources on retrieving a separately stored name.
This is not about how to “correctly” host user files. The point is that Sippy is expected to correctly migrate a file with URL-encoded characters in its name.
Hi, I'm new to R2, have read the docs, and tried the Postman collection. I am not using S3 but simply want to push files to R2 via an API. I can't find proper API documentation on how to send correct requests. Can anybody point me to what I might be missing, please?
There's an AWS SDK for Java which you can use. The point of using the S3 protocol rather than reinventing the wheel is that there are tons of existing libraries which work.
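As a minimal sketch (not official docs): with the AWS SDK for Java v2 you point the standard S3 client at your R2 endpoint and push a file with a plain PutObject. The account ID, bucket, key, and file path below are placeholders:

```java
import java.net.URI;
import java.nio.file.Path;
import software.amazon.awssdk.auth.credentials.AwsBasicCredentials;
import software.amazon.awssdk.auth.credentials.StaticCredentialsProvider;
import software.amazon.awssdk.core.sync.RequestBody;
import software.amazon.awssdk.regions.Region;
import software.amazon.awssdk.services.s3.S3Client;
import software.amazon.awssdk.services.s3.model.PutObjectRequest;

public class R2UploadExample {
    public static void main(String[] args) {
        // R2 is S3-compatible: only the endpoint and credentials differ from AWS.
        S3Client s3 = S3Client.builder()
                .region(Region.of("auto"))
                .endpointOverride(URI.create("https://<ACCOUNT_ID>.r2.cloudflarestorage.com"))
                .credentialsProvider(StaticCredentialsProvider.create(AwsBasicCredentials.create(
                        System.getenv("R2_ACCESS_KEY_ID"),
                        System.getenv("R2_SECRET_ACCESS_KEY"))))
                .build();

        // Upload one file; the SDK signs the request for you.
        s3.putObject(PutObjectRequest.builder()
                        .bucket("my-bucket")      // placeholder
                        .key("docs/report.pdf")   // placeholder
                        .build(),
                RequestBody.fromFile(Path.of("report.pdf")));
    }
}
```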
Hi guys. I'm having a problem with R2 usage metrics: when I fetch an object just once via the S3 API, the metrics record a Class A operation. Why is this happening?