scrape job status

Hi!

I noticed that when scraping websites with a lot of pages (>1000), the scrape job (or at least its reported status) gets stuck in a state where I can no longer tell what is going on.

For example, right now I have a running job (limited to a maximum of 1000 scrape URLs), and when I fetch its status via the API, I get the following data:
  • status: active
  • current: 1000
  • total: 1000
  • data: 0 items in the list
  • partial_data: 50 items in the list
This state has stayed the same for hours. The items in partial_data are unchanged, covering index 951 to 1000.
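To illustrate what I mean by "stuck": on every poll the payload is byte-for-byte identical. Here is a minimal sketch of the check I'm doing on my side (the field names mirror the status response above; `looks_stalled` is just a hypothetical helper I wrote, not part of the Firecrawl API):

```python
def looks_stalled(prev: dict, curr: dict) -> bool:
    """Compare two consecutive status payloads: the job looks stalled
    if it still reports "active" but neither the progress counter nor
    partial_data has moved between polls."""
    if curr.get("status") != "active":
        return False  # completed/failed jobs are not "stalled"
    same_progress = prev.get("current") == curr.get("current")
    same_partial = prev.get("partial_data") == curr.get("partial_data")
    return same_progress and same_partial

# Example payloads matching the state described above.
prev = {"status": "active", "current": 1000, "total": 1000,
        "data": [], "partial_data": list(range(951, 1001))}
curr = dict(prev)  # next poll returns the identical payload
print(looks_stalled(prev, curr))  # True
```

This has been returning True on every poll for hours now.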

And that's it. Nothing arrives at the webhook, and the job isn't listed in the logs (https://www.firecrawl.dev/app/logs).

Is this kind of behaviour expected?
Should we have to wait hours to get the complete data?