Firecrawl · 14mo ago
dhruvs

Concurrency

Is there a way to increase concurrency for crawls? I'm on the highest plan - is this enabled by default? Alternatively, would it be better to run a query to return the URLs and then run the scraper over them in parallel?
3 Replies
Caleb · 14mo ago
Hey there! Is the issue that you're running into rate limits, or is it just not fast enough? It should be enabled by default. Definitely don't crawl the URLs and then scrape them separately; that will take much more time. What bottleneck are you running into?
dhruvs (OP) · 14mo ago
Hey @Caleb, I'm trying to understand whether there's any way to speed up the crawl. Even if it incurs additional credits, could we consider crawling the URLs and then scraping them in batches of 10, concurrently / in parallel?
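
For reference, the batched approach described above might look roughly like the sketch below. It assumes you already have a list of URLs and a `scrape(url)` helper wrapping whatever single-page scraper you use; the helper, its return type, and the batch size of 10 are placeholders for illustration, not part of Firecrawl's API.

```python
# Minimal sketch: scrape a known list of URLs in concurrent batches of 10.
from concurrent.futures import ThreadPoolExecutor


def scrape(url: str) -> dict:
    """Placeholder: call your single-URL scraper of choice here."""
    raise NotImplementedError


def scrape_in_batches(urls: list[str], batch_size: int = 10) -> list[dict]:
    """Scrape URLs with at most `batch_size` requests in flight at once."""
    results: list[dict] = []
    with ThreadPoolExecutor(max_workers=batch_size) as pool:
        for start in range(0, len(urls), batch_size):
            batch = urls[start:start + batch_size]
            # pool.map preserves the input order within each batch
            results.extend(pool.map(scrape, batch))
    return results
```

Whether this is actually faster than letting the crawl endpoint handle parallelism is the question the replies below address.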
mogery · 14mo ago
Hi @dhruvs, crawl is already parallelized, but the exact concurrency depends on the traffic our servers are getting. We're working every day to make crawls faster. We currently do not offer higher-priority crawls.
