How can I make sure that my self-hosted Firecrawl can handle a large amount of requests?

If there are a lot of requests coming in, how can I handle that volume?
6 Replies
Gaurav Chadha · 2w ago
You can simply increase the resource limits in your docker-compose.yaml. For example, to raise them for the API worker:
deploy:
  resources:
    limits:
      cpus: '4.0'   # Increase if you have more CPU cores
      memory: 8G    # Increase if you have more RAM
    reservations:
      cpus: '2.0'   # Minimum guaranteed CPU
      memory: 4G    # Minimum guaranteed RAM
Add this, and make sure your Docker host has enough CPU and memory available to back the larger limits.
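For orientation, here is a minimal sketch of where that deploy block sits in the compose file. The service names below (api and worker) are assumptions based on a typical Firecrawl self-host docker-compose.yaml; match them to whatever services are actually defined in yours. Also note that depending on your Compose version, deploy.resources may only be enforced by the newer docker compose CLI or with the --compatibility flag on older docker-compose.

services:
  api:
    # ... existing image, ports, environment, etc.
    deploy:
      resources:
        limits:
          cpus: '4.0'   # upper bound for this service
          memory: 8G
        reservations:
          cpus: '2.0'   # guaranteed minimum
          memory: 4G
  worker:
    # ... existing image, environment, etc.
    deploy:
      resources:
        limits:
          cpus: '4.0'
          memory: 8G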
edsaur (OP) · 2w ago
Ohh, so it's more on the Docker server side that I need to configure to handle concurrency, right, @Gaurav Chadha? I just need to increase the limits?
Gaurav Chadha · 2w ago
Yes, but only if you actually need to scale. First check your current Firecrawl container's resource usage.
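If you're unsure what the containers are using right now, docker stats gives a quick snapshot of CPU and memory per container (the container names will be whatever your compose project assigned):

docker stats --no-stream

Compare the CPU % and MEM USAGE / LIMIT columns against the limits you plan to set.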
edsaur (OP) · 2w ago
This is my container, so it's fine so far, right?
(attached screenshot of container resource usage)
Gaurav Chadha · 2w ago
Yeah, more than enough.
edsaur (OP) · 2w ago
But what if I want to limit the number of jobs, how could we do that? Is the delay parameter enough?
