Hitting memory limits again?
Hey, so a couple of months ago, when we started using Workers for delivering game content (using R2 as a cache layer), we ran into an issue where files larger than 128MB caused the request to fail at 127.99MB. We solved this by better utilising response.body.tee(), but the issue is back again as of this morning.
Nothing has changed in our Worker code; we essentially left it as-is once we got it working the way we wanted, including serving files in the 300MB-800MB range.
This is the code that we currently use for delivering the response, the same code that this is supposedly failing with:
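(This is a trimmed-down version of it; `GAME_CONTENT` is a placeholder for our real R2 binding, and the cache key handling is simplified:)

```ts
// Minimal sketch of the delivery path. GAME_CONTENT stands in for the
// actual R2 binding name, and the cache key is just the URL path here.
export interface Env {
  GAME_CONTENT: R2Bucket;
}

export default {
  async fetch(request: Request, env: Env, ctx: ExecutionContext): Promise<Response> {
    const cache = caches.default;

    // Serve straight from the edge cache when we already have the file.
    const cached = await cache.match(request);
    if (cached) return cached;

    // Otherwise pull the object out of R2.
    const key = new URL(request.url).pathname.slice(1);
    const object = await env.GAME_CONTENT.get(key);
    if (object === null) return new Response("Not found", { status: 404 });

    const headers = new Headers();
    object.writeHttpMetadata(headers);
    headers.set("etag", object.httpEtag);

    // tee() splits the stream: one branch goes to the client, the other
    // is written into the cache, without buffering the whole file first.
    const [clientBody, cacheBody] = object.body.tee();
    ctx.waitUntil(cache.put(request, new Response(cacheBody, { headers })));

    return new Response(clientBody, { headers });
  },
};
```

The tee() is there so we can stream the body to the client and populate the cache in a single pass rather than reading the whole file into memory.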
It's worth noting that I cannot tail the worker as it's getting too much traffic (250+ invocations per second), so getting logs is out of the question.
If there's anything here that can be improved, I'd much appreciate it if you could point it out. I'm not very familiar with streams so it's possible that I've made a mistake here.
Thanks