Hitting memory limits again?

Hey, so a couple of months ago, when we started using Workers to deliver game content (with R2 as a cache layer), we ran into an issue where files larger than 128MB caused the request to fail at 127.99MB. We solved that by making better use of response.body.tee(), but the issue is back again as of this morning.

Nothing has changed in our Worker code; we essentially left it as-is once we had it working the way we wanted, and it has been happily serving files in the 300MB-800MB range.

This is the code we currently use to deliver the response, and it's the same code that is now supposedly failing:
// get two streams for the body (one for R2, one for responding)
const tee = response.body?.tee()

// if the response status isn't 200, respond with the status code given by the upstream
if (response.status !== 200) {
    console.log(`Got HTTP ${response.status} from upstream: ${url}`)
    return new Response(null, { status: response.status })
}

// attempt to put the upstream data into R2
try {
    await R2_BUCKET.put(key, tee[0])
    console.log(`Successfully put object ${key} into R2`)
} catch (err) {
    console.log(`Error while putting ${key} into R2:`, err)
}

// return the response from upstream
return new Response(tee[1])
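
For context, that snippet sits inside a fetch handler that looks roughly like this. This is a heavily simplified outline, not our exact code: UPSTREAM_ORIGIN and the key logic are paraphrased placeholders.

// simplified outline of the handler around the snippet above
addEventListener('fetch', (event) => {
    event.respondWith(handleRequest(event))
})

async function handleRequest(event) {
    const url = new URL(event.request.url)
    const key = url.pathname.slice(1)

    // cache hit: serve the object straight out of R2
    const cached = await R2_BUCKET.get(key)
    if (cached) {
        return new Response(cached.body)
    }

    // cache miss: pull from the upstream origin, then the snippet above
    // tees the body into R2 and back out to the client
    const response = await fetch(`${UPSTREAM_ORIGIN}${url.pathname}`)
    // ...the tee / R2 put / return new Response(tee[1]) code from above runs here...
}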

It's worth noting that I can't tail the Worker because it gets too much traffic (250+ invocations per second), so getting logs is out of the question.

If there's anything here that can be improved, I'd really appreciate it if you could point it out. I'm not very familiar with streams, so it's possible I've made a mistake here.
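
One thing I did wonder about, though I honestly don't know if it's relevant: because we await the R2 put before returning, the client-facing branch of the tee isn't consumed until the upload finishes, and I'm not sure whether that forces the runtime to buffer it. Would something like this rough, untested sketch change anything? It assumes the surrounding FetchEvent is in scope as event, and that the status check stays the same as above.

// untested sketch: after the same status check as above, run the R2 put
// in the background so the client-facing branch is consumed straight away
// (assumes `event` is the FetchEvent in scope)
const [toR2, toClient] = response.body.tee()

event.waitUntil(
    R2_BUCKET.put(key, toR2)
        .then(() => console.log(`Successfully put object ${key} into R2`))
        .catch((err) => console.log(`Error while putting ${key} into R2:`, err))
)

// respond with the other branch immediately instead of waiting for the put
return new Response(toClient, { status: response.status, headers: response.headers })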

Thanks 🙂