Puppeteer runs often stop with `ProtocolError` after `Memory is critically overloaded.`
During my Apify scraping runs with Crawlee / Puppeteer (32GB RAM per run), my jobs stop and show:
There was an uncaught exception during the run of the Actor and it was not handled.
The full logs are in the screenshot at the end.
This often happens on runs longer than 30 minutes; runs under 30 minutes are less likely to hit the error.
I've tried "Increase the 'protocolTimeout' setting ", but observed that the error still happens, just after a longer wait.
I've also tried different concurrency settings, including leaving them at the default, but I consistently see this error. My setup is roughly the sketch below.
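For context, this is approximately how I'm passing those options; the request handler and start URL are simplified placeholders, not my real scraping logic:

```ts
import { PuppeteerCrawler } from 'crawlee';

const crawler = new PuppeteerCrawler({
    launchContext: {
        launchOptions: {
            // Raised well above Puppeteer's 180s default; the
            // ProtocolError still shows up, just later in the run.
            protocolTimeout: 600_000, // ms
        },
    },
    // Tried values from 1 up to leaving autoscaling at its default.
    maxConcurrency: 10,
    async requestHandler({ page, request }) {
        // Placeholder for the real per-page scraping logic.
        const title = await page.title();
        console.log(`${request.url}: ${title}`);
    },
});

await crawler.run(['https://example.com']); // placeholder start URL
```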

2 Replies
typical-coral•2mo ago
If you're still seeing the "Memory is critically overloaded." notification in the logs even with 32GB, it likely points to a memory leak in your scraping logic rather than insufficient resources. The usual culprits on long runs are results accumulated in in-memory arrays, event listeners that are never removed, and pages or browsers that are never released. I'd recommend reviewing your code for those patterns; see the sketch below.
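As a minimal sketch of the common mitigations, assuming a Crawlee `PuppeteerCrawler` setup like yours (the option names are real Crawlee options, the values are illustrative, and the handler is a placeholder):

```ts
import { PuppeteerCrawler, Dataset } from 'crawlee';

const crawler = new PuppeteerCrawler({
    browserPoolOptions: {
        // Recycle each Chromium process after a bounded number of pages,
        // so anything leaked inside the browser is reclaimed with it.
        retireBrowserAfterPageCount: 50,
    },
    async requestHandler({ page, request }) {
        const title = await page.title();
        // Stream results to the dataset instead of collecting them in a
        // module-level array that grows for the entire 30+ minute run.
        await Dataset.pushData({ url: request.url, title });
    },
});
```

Crawlee closes each page after the handler returns, so on long runs the leak is more often state retained in your own code than in Puppeteer itself.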