Cloudflare Developers


Welcome to the official Cloudflare Developers server. Here you can ask for help and stay updated with the latest news


sid 3835 no im using module helper for

@sdnts no, I'm using a module helper for S3 in Elixir
```elixir
config = %{
  region: @s3_region,
  access_key_id: System.fetch_env!("AWS_ACCESS_KEY_ID"),
  secret_access_key: System.fetch_env!("AWS_SECRET_ACCESS_KEY")
}
```
...

Is it better to create a “general help”

S3 upload speeds (slow). I've also been waiting two weeks on my Cloudflare tickets. Wait times are insane, even when we're paying over $100 per month (and increasing) for Cloudflare products...

this is the part you use dev tools and

this is the part you use dev tools and steal from the dashboard for

R2 error "net::ERR_CONTENT_DECODING_FAILED 200 (OK)"

Have you created a ticket already? If not, can you do so, so that it can be escalated? That broken file might be of help too!

Cross-regional

1. Location Hints are just hints, not guarantees.
2. If you're worried about the latency differences between Eastern and Western Europe, it will be negligible enough to ignore. The way the CF network splits Europe is not as you'd expect, so don't focus on this too much.
3. This is more "big picture" stuff, but R2 is different from S3 in that we don't really want you to think in "regions". The endgame here is to have your data available wherever your users access it from, without you having to guess. Smart Placement in Workers is a step in that direction, but this is a much harder problem for blob storage, so it'll take time. I understand that this doesn't mean much to you right now though.
4. In the meantime, you can use the CF cache to get around cross-regional data access. Caches are datacenter-local, so in most cases you'll be able to either automatically or manually cache your files so they get served quickly (see the sketch after this list). If you absolutely must have your data replicated in multiple places, the best way to do that right now is to have multiple buckets and keep copies of your data. I know this isn't ideal, but it is very rare for someone to want to do this, so you should really consider what you might be getting into. I would naively expect this to only be an enterprise-level concern. If so, you should definitely be talking to a specialist at CF before you decide to do this, at least in my opinion.
5. Why are you trying to test latencies of different regions? If this is about Europe, see 2. Otherwise, geographical distance is a pretty good estimate right now because there are so few regions available.
...
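
A minimal sketch of the manual-caching approach from point 4, assuming a Worker with an R2 bucket bound as MY_BUCKET (the binding name, cache TTL and key derivation are all assumptions):

```js
// Serve R2 objects through the datacenter-local cache so repeat reads are fast everywhere.
export default {
  async fetch(request, env, ctx) {
    const cache = caches.default;

    // 1. Try the local datacenter cache first
    let response = await cache.match(request);
    if (response) return response;

    // 2. On a miss, fall back to the bucket
    const key = new URL(request.url).pathname.slice(1);
    const object = await env.MY_BUCKET.get(key);
    if (!object) return new Response("Not found", { status: 404 });

    const headers = new Headers();
    object.writeHttpMetadata(headers);
    headers.set("etag", object.httpEtag);
    headers.set("Cache-Control", "public, max-age=3600"); // assumed TTL

    response = new Response(object.body, { headers });
    // 3. Populate the cache without delaying the response
    ctx.waitUntil(cache.put(request, response.clone()));
    return response;
  },
};
```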

You can either use the S3 compat API or

You can either use the S3-compat API or stream it to R2 via a Worker binding
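
A minimal sketch of the Worker-binding route, assuming a bucket bound as MY_BUCKET and the object key taken from the URL path (both assumptions):

```js
// Stream an upload body straight into R2 instead of buffering it in the Worker.
export default {
  async fetch(request, env) {
    if (request.method !== "PUT") {
      return new Response("Method not allowed", { status: 405 });
    }
    const key = new URL(request.url).pathname.slice(1);
    // request.body is a ReadableStream; R2 needs a known length, which the
    // incoming request's Content-Length header provides for fixed-length uploads.
    const object = await env.MY_BUCKET.put(key, request.body, {
      httpMetadata: { contentType: request.headers.get("content-type") ?? undefined },
    });
    return new Response(`Stored ${object.key}`, { status: 200 });
  },
};
```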

but it s just the files that are

but it's just the files that are actually on the bucket. In my local wrangler setup, the files will never be uploaded there because I have them locally in .wrangler/state/r2 instead. But someone in the #wrangler channel suggested just serving those files with any file server, which I'll try now

CORS from Webworker?

Actually it works using the direct URL, but here I'm preloading a bunch of videos in a WebWorker:
```js
const preloadVideo = async (url: string) => {
  const res = await fetch(url)
  const blob = await res.blob()
  ...
```
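
fetch() from a Web Worker is subject to the same CORS rules as the page itself, so the bucket (or the domain serving it) usually needs a CORS policy allowing that origin. A hedged sketch of such a rule, expressed as a JS object; the origin is an assumption, and the exact field names should be checked against the R2 CORS documentation:

```js
// S3-style CORS rule to attach to the bucket (via the dashboard CORS policy editor
// or the S3-compatible PutBucketCors API) so cross-origin GETs of the videos succeed.
const corsRules = [
  {
    AllowedOrigins: ["https://app.example.com"], // assumed site doing the preloading
    AllowedMethods: ["GET", "HEAD"],
    AllowedHeaders: ["*"],
    MaxAgeSeconds: 3600, // cache the preflight result for an hour
  },
];
```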

Removing the slashes does not help

Removing the slashes does not help

FileZilla

we did try this (with FileZilla Pro) but it is not working, hence wanted to see if any specific documentation is available. I will review our settings again

minio .net

personally I've used MinIO for .NET without any issue, potentially another option: https://min.io/docs/minio/linux/developers/dotnet/minio-dotnet.html

I mean the code is working it returns a

I mean, the code "is working": it returns a signed URL, which seems to be invalid

Please help out here

Please help out here. So I have moved a lot of our data from S3 to R2. Now everything is working: in Django I have made a custom storage class, and some models are using it. While everything is working, there is one small issue. I am trying to get the file size, but when I do, I get a 400 HeadObject error. And this happens on the endpoint URL and not on the custom domain. How do I force botocore to use the custom domain? Everywhere else it's using the custom domain as is... so why here?

You can import render as a library do

You can import render as a library, do your own auth and then pass it off to R2 as render.fetch(req, env, ctx);
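
A hedged sketch of that suggestion; the "render" import path and its fetch signature are taken from the message above, and the bearer-token check is just a placeholder for whatever auth the app actually uses:

```js
import render from "render"; // assumed import path for the render Worker used as a library

export default {
  async fetch(req, env, ctx) {
    // Hypothetical auth gate; swap in your real authentication
    if (req.headers.get("Authorization") !== `Bearer ${env.API_TOKEN}`) {
      return new Response("Unauthorized", { status: 401 });
    }
    // Hand the request off to render, which serves objects from the bound R2 bucket
    return render.fetch(req, env, ctx);
  },
};
```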

Is there any ETA for per bucket tokens

Is there any ETA for per-bucket tokens? We want our data suppliers to upload directly to R2 (can't use browsers because it's many TB per dataset), but for that to work we need tokens that only have write access to a specific bucket

If it returns it deleted if it throws an

If it returns, it deleted; if it throws an error, it didn't
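
A tiny sketch of that behaviour inside a Worker, assuming a bucket bound as MY_BUCKET:

```js
// delete() resolving means the object is gone; a thrown error means it isn't.
async function deleteObject(env, key) {
  try {
    await env.MY_BUCKET.delete(key);
    return true;  // resolved: deleted
  } catch (err) {
    console.error("delete failed", err);
    return false; // threw: not deleted
  }
}
```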

Hi all am currently trying to perform a

Hi all, am currently trying to perform a multi-file upload using Cloudflare Workers and R2, but I keep getting this error:
EntityTooSmall: Your proposed upload is smaller than the minimum allowed object size.
...
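
EntityTooSmall usually means a multipart part is below the 5 MiB minimum that applies to every part except the last. A minimal sketch of a compliant multipart upload with the Workers R2 binding; the MY_BUCKET binding and the 10 MiB part size are assumptions:

```js
const PART_SIZE = 10 * 1024 * 1024; // comfortably above the 5 MiB per-part minimum

// Upload `data` (an ArrayBuffer) to `key` in parts that all satisfy the minimum size.
async function uploadInParts(env, key, data) {
  const upload = await env.MY_BUCKET.createMultipartUpload(key);
  const parts = [];
  for (let offset = 0, partNumber = 1; offset < data.byteLength; offset += PART_SIZE, partNumber++) {
    const chunk = data.slice(offset, offset + PART_SIZE);
    parts.push(await upload.uploadPart(partNumber, chunk));
  }
  return upload.complete(parts); // returns the finished R2Object
}
```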

I m getting this error for some requests

I'm getting this error for some requests in my Worker that handles a simple file upload: `put: We encountered an internal error. Please try again. (10001)`. Apparently the R2 put fails for some reason. It's only happening for ~0.02% of requests. Any idea what could be causing that?...
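
Since the 10001 message literally says "Please try again", one common mitigation (not a fix for the underlying cause) is a small retry loop around the put. The binding name, attempt count and backoff here are assumptions, and the body must be reusable (e.g. an ArrayBuffer), not a one-shot stream:

```js
async function putWithRetry(env, key, body, attempts = 3) {
  let lastErr;
  for (let i = 0; i < attempts; i++) {
    try {
      return await env.MY_BUCKET.put(key, body);
    } catch (err) {
      lastErr = err;
      // brief linear backoff before the next attempt
      await new Promise((resolve) => setTimeout(resolve, 100 * (i + 1)));
    }
  }
  throw lastErr; // still failing after all attempts
}
```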

That’s mostly an API for the dashboard

That’s mostly an API for the dashboard, and is undocumented right now because I plan on changing it slightly. If you’re looking for a way to get bucket usage statistics, you might want to look into the GraphQL API instead.
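
A heavily hedged sketch of pulling R2 usage from the GraphQL Analytics API; the endpoint is real, but the dataset and field names (r2StorageAdaptiveGroups, objectCount, payloadSize) are from memory, so verify them against the published GraphQL schema before relying on this:

```js
const ACCOUNT_ID = "<account id>";     // assumption: your account tag
const API_TOKEN = "<analytics token>"; // assumption: token with Analytics read access

const query = `{
  viewer {
    accounts(filter: { accountTag: "${ACCOUNT_ID}" }) {
      r2StorageAdaptiveGroups(limit: 10, filter: { date_geq: "2024-01-01" }) {
        dimensions { bucketName }
        max { objectCount payloadSize }
      }
    }
  }
}`;

const res = await fetch("https://api.cloudflare.com/client/v4/graphql", {
  method: "POST",
  headers: {
    Authorization: `Bearer ${API_TOKEN}`,
    "Content-Type": "application/json",
  },
  body: JSON.stringify({ query }),
});
console.log(JSON.stringify(await res.json(), null, 2));
```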

Multipart uploads

Hi, I did a search but was not able to find a proper answer. What is the proper programmatic way to upload large (multi-gig) files? I need large video calls uploaded to R2 for AI purposes. I am guessing that if I use Workers, they will time out (right?)
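
One way to sidestep Worker limits entirely is to upload straight to R2 over its S3-compatible API from a regular script, letting the AWS SDK's multipart helper do the chunking. A hedged sketch; the account ID, bucket, key, file path and credential env vars are all assumptions:

```js
import { S3Client } from "@aws-sdk/client-s3";
import { Upload } from "@aws-sdk/lib-storage";
import { createReadStream } from "node:fs";

const client = new S3Client({
  region: "auto",
  endpoint: "https://<account_id>.r2.cloudflarestorage.com", // R2 S3-compatible endpoint
  credentials: {
    accessKeyId: process.env.R2_ACCESS_KEY_ID,
    secretAccessKey: process.env.R2_SECRET_ACCESS_KEY,
  },
});

// Upload streams the file in parts, so multi-gigabyte files never sit in memory.
const upload = new Upload({
  client,
  params: {
    Bucket: "my-bucket",              // assumed bucket name
    Key: "videos/call-recording.mp4", // assumed object key
    Body: createReadStream("./call-recording.mp4"),
  },
  partSize: 10 * 1024 * 1024, // every part except the last must be at least 5 MiB
  queueSize: 4,               // number of parts uploaded in parallel
});

upload.on("httpUploadProgress", (p) => console.log(`${p.loaded}/${p.total} bytes`));
await upload.done();
```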