scrape job status
Hi!
I noticed that when scraping websites with a lot of pages (>1000), the scrape job (or at least its status) gets stuck in a state where I can no longer tell what is going on.

For example, right now I have a running job (limited to a maximum of 1000 scraped URLs), and when I fetch its status via the API, I get the following data:

status: active
current: 1000
total: 1000
data: 0 items in the list
partial_data: 50 items in the list

The 50 items in partial_data are always the same, covering indexes 951 to 1000. And that's it: nothing arrives at the webhook, and the job isn't listed in the logs (https://www.firecrawl.dev/app/logs).

Is this kind of behaviour expected? Should we wait for hours to get the complete data?
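For reference, this is roughly how I'm polling the job status. A minimal sketch: the endpoint URL is how I understand the v0 API (`GET /v0/crawl/status/{jobId}` with a Bearer token), and `summarize` just reduces the response to the fields quoted above; adjust if your setup differs.

```python
import json
from urllib import request

# Assumed v0 status endpoint; substitute your actual API base if different.
STATUS_URL = "https://api.firecrawl.dev/v0/crawl/status/{job_id}"


def fetch_status(job_id: str, api_key: str) -> dict:
    """Fetch the raw status payload for a crawl job (assumed endpoint)."""
    req = request.Request(
        STATUS_URL.format(job_id=job_id),
        headers={"Authorization": f"Bearer {api_key}"},
    )
    with request.urlopen(req) as resp:
        return json.load(resp)


def summarize(status: dict) -> dict:
    """Reduce a status payload to the fields discussed above."""
    return {
        "status": status.get("status"),
        "progress": f"{status.get('current')}/{status.get('total')}",
        "data_items": len(status.get("data") or []),
        "partial_items": len(status.get("partial_data") or []),
    }
```

Running `summarize(fetch_status(job_id, api_key))` on the stuck job is what produces the `active`, `1000/1000`, 0-item `data` / 50-item `partial_data` picture described above.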