128 MB Workers memory limit when using R2

I'm trying to understand the 128 MB memory limit for Workers and what it means for my use-case. The documentation says: "Only one Workers instance runs on each of the many global Cloudflare network edge servers. Each Workers instance can consume up to 128 MB of memory." Is there any way to monitor how much memory is actually used while my Worker is handling a request?
Also, I have a very simple use-case: a Worker that supports GET and PUT to write files directly into and read files directly out of an R2 bucket. Those files might be big. Is any significant memory allocated in the Worker for those files during that process? For example, if 1000 concurrent requests are each downloading a 100 MB file, could that cause issues? Would I likely run into any limits?
6 Replies
kian · 16mo ago
Downloading the files, assuming you just pass the object body through as the response, doesn't buffer the body, so it's a non-issue as far as memory is concerned.
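For reference, a minimal sketch of that kind of streaming GET handler, assuming an R2 binding named `MY_BUCKET` and the object key taken from the URL path:

```ts
interface Env {
  MY_BUCKET: R2Bucket; // assumed binding name
}

export default {
  async fetch(request: Request, env: Env): Promise<Response> {
    const key = new URL(request.url).pathname.slice(1);

    const object = await env.MY_BUCKET.get(key);
    if (object === null) {
      return new Response("Object Not Found", { status: 404 });
    }

    const headers = new Headers();
    object.writeHttpMetadata(headers);
    headers.set("etag", object.httpEtag);

    // object.body is a ReadableStream; returning it as-is streams the file
    // from R2 to the client chunk by chunk instead of buffering it in memory.
    return new Response(object.body, { headers });
  },
};
```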
denchi · 16mo ago
Perfect, that's what I was hoping.
kian · 16mo ago
Ditto with PUT - as long as you’re sending the binary data as the request body rather than using FormData, it won’t buffer
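A matching sketch for the PUT side, again assuming the `MY_BUCKET` binding; passing `request.body` (a ReadableStream) straight to `put()` is what keeps the upload streaming rather than buffered:

```ts
interface Env {
  MY_BUCKET: R2Bucket; // assumed binding name
}

export default {
  async fetch(request: Request, env: Env): Promise<Response> {
    if (request.method !== "PUT") {
      return new Response("Method Not Allowed", { status: 405 });
    }

    const key = new URL(request.url).pathname.slice(1);

    // request.body is a ReadableStream of the raw upload; handing it to put()
    // streams the data into R2 without materializing the whole file in the Worker.
    await env.MY_BUCKET.put(key, request.body);

    return new Response(`Put ${key} successfully`, { status: 200 });
  },
};
```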
denchi · 16mo ago
Got it. Let's say I wanted to check the first few bytes of a file before putting it into R2. Would that be possible without allocating memory for the whole file?
kian · 16mo ago
Not that I'm aware of - you can only read a body once, and if you tee/clone it so that you have two copies, they must be read in parallel or one of them will be buffered into memory.
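To make that concrete, here is a rough fragment (it would sit inside the PUT handler sketch above, with `request`, `env`, and `key` already in scope) showing where the buffering comes from when you tee the body:

```ts
// Sketch only: split the incoming body into two branches over the same
// underlying stream.
const [inspectBranch, uploadBranch] = request.body!.tee();

// Read just the first chunk from one branch to look at the leading bytes.
const reader = inspectBranch.getReader();
const { value: firstChunk } = await reader.read();
const magicBytes = firstChunk ? firstChunk.slice(0, 4) : new Uint8Array(0);
// ...validate magicBytes here...

// The other branch still has to be consumed in full, e.g.
//   await env.MY_BUCKET.put(key, uploadBranch);
// and this is where the caveat bites: every chunk the upload side has read
// that inspectBranch has not read yet is held in memory, so unless both
// branches are consumed in parallel the backlog can grow toward the full
// file size and the 128 MB limit.
```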
denchi · 16mo ago
Okay. So if I wanted to do that, I would likely run into the memory limit. Thanks for clearing that up 🙏