Puppeteer runs often stop with `ProtocolError` after `Memory is critically overloaded.`

During my Apify scraping runs with Crawlee / Puppeteer, with 32 GB RAM per run, my jobs stop with "There was an uncaught exception during the run of the Actor and it was not handled." and the logs shown in the screenshot at the end. This mostly happens on runs that have been going for 30+ minutes; runs under 30 minutes are less likely to hit the error. I've tried the suggested "Increase the 'protocolTimeout' setting", but the error still happens, just after a longer wait. I've also tried different concurrency settings, including leaving them at the default, and I consistently see this error. My crawler is set up like this:
import { Actor } from "apify";
import { PuppeteerCrawler } from "crawlee";

// proxyConfiguration, router, and startUrls are defined elsewhere in the Actor.
const crawler = new PuppeteerCrawler({
    launchContext: {
        launchOptions: {
            headless: true,
            args: [
                "--no-sandbox", // Mitigates the "sandboxed" process issue in Docker containers
                "--ignore-certificate-errors",
                "--disable-dev-shm-usage",
                "--disable-infobars",
                "--disable-extensions",
                "--disable-setuid-sandbox",
                "--disable-gpu", // Mitigates the "crashing GPU process" issue in Docker containers
            ],
        },
    },
    maxRequestRetries: 1,
    navigationTimeoutSecs: 60,
    autoscaledPoolOptions: { minConcurrency: 30 },
    maxSessionRotations: 5,
    preNavigationHooks: [
        async ({ blockRequests }, goToOptions) => {
            if (goToOptions) goToOptions.waitUntil = "domcontentloaded"; // Don't wait for full load, only DOMContentLoaded
            await blockRequests({
                urlPatterns: [
                    // ... blocked URL patterns omitted ...
                ],
            });
        },
    ],
    proxyConfiguration,
    requestHandler: router,
});

await crawler.run(startUrls);
await Actor.exit();
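For reference, the 'protocolTimeout' change I tried looked roughly like this; the 300-second value is illustrative, and it assumes a Puppeteer version new enough to accept protocolTimeout in launch options:

import { PuppeteerCrawler } from "crawlee";

// Sketch of the protocolTimeout attempt; the value is a guess, not a recommendation.
const crawler = new PuppeteerCrawler({
    launchContext: {
        launchOptions: {
            headless: true,
            protocolTimeout: 300_000, // CDP command timeout in ms, up from Puppeteer's 180_000 default
        },
    },
    requestHandler: router, // same router as in the config above
});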
2 Replies
Hall · 2mo ago
Someone will reply to you shortly. In the meantime, this might help:
typical-coral · 2mo ago
If you're still seeing the "Memory is critically overloaded." notification in logs even with 32GB, it likely means there's a memory leak somewhere in your scraping logic. I’d recommend reviewing your code to identify and fix any potential memory issues.
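A few crawler-level settings also tend to keep browser memory bounded on long runs. This is only a sketch with guessed values; the option names (browserPoolOptions, retireBrowserAfterPageCount, maxOpenPagesPerBrowser, useIncognitoPages) are standard Crawlee options, but the right numbers depend on your workload:

import { PuppeteerCrawler } from "crawlee";

const crawler = new PuppeteerCrawler({
    browserPoolOptions: {
        retireBrowserAfterPageCount: 50, // recycle each browser after 50 pages so leaked memory is reclaimed
        maxOpenPagesPerBrowser: 5,       // fewer tabs per browser process keeps each process smaller
    },
    launchContext: {
        useIncognitoPages: true,          // open each page in its own incognito context, isolating its state
        launchOptions: { headless: true },
    },
    autoscaledPoolOptions: {
        minConcurrency: 5,
        maxConcurrency: 20,               // cap concurrency rather than only forcing a high minimum
    },
    requestHandler: router,               // your existing router
});

Retiring browsers early and capping concurrency trade some throughput for predictable memory use, which is usually the better trade on 30+ minute runs.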
