Cloudflare Developers

Welcome to the official Cloudflare Developers server. Here you can ask for help and stay updated with the latest news

I'm getting really slow download speeds

I'm getting really slow download speeds from R2 directly from the S3 endpoint. It's taking 2000 ms to pull two 1 KB Parquet files, which should take more like 50 ms. Is there anything I can check? For comparison, testing with MinIO I get them in 50 ms.

I'm hoping someone who works for

I'm hoping someone who works for Cloudflare will see this: I'm seeing consistent 500 server errors when trying to use the R2 S3 API. This started at the beginning of the month and hasn't changed. It was working for me before with no issues, and I didn't change anything; it just started giving me 500 errors. I can see the API hits in the Metrics tab of the web interface, but nothing gets stored or retrieved....

With `?response-content-encoding=`

With `?response-content-encoding=`:
```sh
➜ ~ file test.gz
test.gz: gzip compressed data, from Unix, original size modulo 2^32 1684...
```

The docs page you linked has an example

The docs page you linked has an example of that

@sdnts no, I'm using the module helper for

@sdnts no, I'm using the module helper for S3 in Elixir:
```elixir
config = %{
  region: @s3_region,
  access_key_id: System.fetch_env!("AWS_ACCESS_KEY_ID"),
  secret_access_key: System.fetch_env!("AWS_SECRET_ACCESS_KEY")
}
...
```

Is it better to create a “general help”

S3 upload speeds (slow). I've also been waiting for two weeks on my Cloudflare tickets. Wait times are insane, even when we're paying over $100 per month (and increasing) for Cloudflare products...

this is the part you use dev tools and

this is the part you use dev tools and steal from the dashboard for

R2 error "net::ERR_CONTENT_DECODING_FAILED 200 (OK)"

Have you created a ticket already? If not, can you do so, so it can be escalated? That broken file might be of help too!

Cross-regional

1. Location Hints are just hints, not guarantees.
2. If you're worried about the latency differences between Eastern and Western Europe, it will be negligible enough to ignore. The way the CF network splits Europe is not as you'd expect, so don't focus on this too much.
3. This is more "big picture" stuff, but R2 is different from S3 in that we don't really want you to think in "regions". The endgame here is to have your data available wherever your users access it from, without you having to guess. Smart Placement in Workers is a step in that direction, but this is a much harder problem for blob storage, so it'll take time. I understand that this doesn't mean much to you right now though.
4. In the meantime, you can use the CF cache to get around cross-regional data access. Caches are datacenter-local, so in most cases you'll be able to either automatically or manually cache your files so they get served quickly (see the sketch after this list). If you absolutely must have your data replicated in multiple places, the best way to do that right now is to have multiple buckets and keep copies of your data. I know this isn't ideal, but it is very rare for someone to want to do this, so you should really consider what you might be getting into. I would naively expect this to only be an enterprise-level concern. If so, you should definitely be talking to a specialist at CF before you decide to do this, at least in my opinion.
5. Why are you trying to test latencies of different regions? If this is about Europe, see 2. Otherwise, geographical distance is a pretty good estimate right now because there are so few regions available. ...
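To make point 4 concrete, here's a minimal sketch of manually caching R2 reads in a Worker using the datacenter-local cache. The binding name `MY_BUCKET`, deriving the key from the URL path, and the one-hour TTL are assumptions for illustration, not anything prescribed above:
```ts
interface Env {
  MY_BUCKET: R2Bucket;
}

export default {
  async fetch(req: Request, env: Env, ctx: ExecutionContext): Promise<Response> {
    if (req.method !== "GET") return new Response("Method not allowed", { status: 405 });

    // Caches are datacenter-local: a hit here never leaves the colo serving the request.
    const cache = caches.default;
    const cached = await cache.match(req);
    if (cached) return cached;

    // Cache miss: read from the R2 binding (the cross-regional hop you pay once per colo).
    const key = new URL(req.url).pathname.slice(1);
    const object = await env.MY_BUCKET.get(key);
    if (object === null) return new Response("Not found", { status: 404 });

    const headers = new Headers();
    object.writeHttpMetadata(headers);
    headers.set("etag", object.httpEtag);
    headers.set("Cache-Control", "public, max-age=3600"); // assumed 1-hour TTL

    const res = new Response(object.body, { headers });
    ctx.waitUntil(cache.put(req, res.clone()));
    return res;
  },
};
```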

You can either use the S3 compat API or

You can either use the S3-compat API or stream it to R2 via a Worker binding
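For the Worker-binding route, a minimal sketch of streaming an upload straight into R2 might look like the following; the binding name `MY_BUCKET`, taking the key from the URL path, and handling only PUT are assumptions:
```ts
interface Env {
  MY_BUCKET: R2Bucket;
}

export default {
  async fetch(req: Request, env: Env): Promise<Response> {
    if (req.method !== "PUT") return new Response("Method not allowed", { status: 405 });

    // Stream the request body directly into the bucket without buffering it in the Worker.
    // Streamed puts need a known length, which a normal PUT with Content-Length provides.
    const key = new URL(req.url).pathname.slice(1);
    await env.MY_BUCKET.put(key, req.body);
    return new Response(`Stored ${key}`, { status: 201 });
  },
};
```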

but it's just the files that are

but it's just the files that are actually on the bucket. In my local wrangler setup, the files will never be uploaded there, because in that case I have them locally in .wrangler/state/r2. But someone in the #wrangler channel suggested just serving those files with any file server, which I'll try now

CORS from Webworker?

Actually it works using the direct URL, but here I'm preloading a bunch of videos in a WebWorker:
```ts
const preloadVideo = async (url: string) => {
  const res = await fetch(url)
  const blob = await res.blob()
...
```

Removing the slashes does not help

Removing the slashes does not help

FileZilla

we did try this (with FileZilla Pro) but it is not working, hence I wanted to see if any specific documentation is present. I will review our settings again

minio .net

personally I've used MinIO for .NET without any issue; potentially another option: https://min.io/docs/minio/linux/developers/dotnet/minio-dotnet.html

I mean the code is working it returns a

I mean, the code "is working": it returns a signed URL which seems to be invalid
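For comparison, here's a rough sketch of generating an R2 presigned URL with the AWS SDK v3 presigner against the S3-compat endpoint. The environment variable names, bucket/key, and one-hour expiry are assumptions:
```ts
import { S3Client, GetObjectCommand } from "@aws-sdk/client-s3";
import { getSignedUrl } from "@aws-sdk/s3-request-presigner";

// R2's S3-compatible endpoint; region is always "auto".
const client = new S3Client({
  region: "auto",
  endpoint: `https://${process.env.R2_ACCOUNT_ID}.r2.cloudflarestorage.com`,
  credentials: {
    accessKeyId: process.env.R2_ACCESS_KEY_ID!,
    secretAccessKey: process.env.R2_SECRET_ACCESS_KEY!,
  },
});

// Presign a GET for one hour. If the resulting URL 403s, the usual suspects are
// clock skew, a mismatched bucket name, or credentials scoped to a different bucket.
const url = await getSignedUrl(
  client,
  new GetObjectCommand({ Bucket: "my-bucket", Key: "path/to/object" }),
  { expiresIn: 3600 },
);
console.log(url);
```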

Please help out here

Please help out here. I have moved a lot of our data from S3 to R2. Everything is working: in Django I have made a custom storage class, and some models are using it. There is one small issue, though. When I try to get the file size, I get the 400 HeadObject error, and it happens on the endpoint URL and not on the custom domain. How do I force botocore to use the custom domain? Everywhere else it uses the custom domain as it is... so why here....

You can import render as a library do

You can import render as a library, do your own auth and then pass it off to R2 as render.fetch(req, env, ctx);
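A rough sketch of that shape, assuming `render` can be imported as a module exposing the fetch-style handler mentioned above; the import path, the auth header name, and the `Env` fields are all hypothetical:
```ts
// Hypothetical import path; adjust to however you vendor the render package.
import render from "render";

interface Env {
  AUTH_TOKEN: string;
  // ...plus whatever bindings render itself expects (e.g. the R2 bucket)
}

export default {
  async fetch(req: Request, env: Env, ctx: ExecutionContext): Promise<Response> {
    // Do your own auth first; this header check is just an illustration.
    if (req.headers.get("x-auth-token") !== env.AUTH_TOKEN) {
      return new Response("Unauthorized", { status: 401 });
    }
    // Then hand the request off so render serves it from R2.
    return render.fetch(req, env, ctx);
  },
};
```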

Is there any ETA for per bucket tokens

Is there any ETA for per-bucket tokens? We want our data suppliers to upload directly to R2 (can't use browsers because it's many TB per dataset), but for that to work we need tokens that only have write access to a specific bucket

If it returns it deleted if it throws an

If it returns, it deleted; if it throws an error, it didn't
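In code, assuming this is about the R2 Worker binding's `delete()`, that amounts to a sketch like this (the wrapper function is just for illustration):
```ts
async function deleteObject(bucket: R2Bucket, key: string): Promise<boolean> {
  try {
    // delete() resolves with no value on success...
    await bucket.delete(key);
    return true; // deleted
  } catch (err) {
    // ...and only rejects if the delete actually failed.
    console.error("delete failed", err);
    return false; // not deleted
  }
}
```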