Crawl job never stopping
so i am trying out firecrawl sdk for the first time as the playground worked very well. Getting confusing results.
1. this code just hangs and never prints crawl_result:
i can see my credits are getting drained, though, so it is doing something. i do want to cancel the job (it was just a test), but i don't have the job id, i don't see a way to cancel anything in the dashboard, and i don't see any results coming through either!
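For reference, the blocking pattern described above probably looks something like this. This is a hypothetical sketch assuming the `firecrawl-py` SDK, not the poster's actual code:

```python
import os

def main():
    # requires `pip install firecrawl-py` and a FIRECRAWL_API_KEY env var
    from firecrawl import FirecrawlApp

    app = FirecrawlApp(api_key=os.environ["FIRECRAWL_API_KEY"])
    # crawl_url blocks until the entire crawl finishes. On a large site this
    # can appear to "hang" for a long time while credits are consumed, and
    # no job id is ever surfaced to the caller.
    crawl_result = app.crawl_url("https://example.com")
    print(crawl_result)

# guarded so nothing runs (and no credits are spent) without a key set
if __name__ == "__main__" and os.environ.get("FIRECRAWL_API_KEY"):
    main()
```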

14 Replies
I am getting the same issue. It also looks like I'm at the limit for concurrent browsers, and I don't see a way to reset or cancel.

Hey @ekeric13, you should try the async crawl url method, which does return the id. The one above runs until finished. With a crawl id you can cancel the crawl by calling the cancel endpoint
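A minimal sketch of the non-blocking variant being suggested here. Method names (`async_crawl_url`, `cancel_crawl`) are taken from the v1 `firecrawl-py` docs and may differ in your SDK version, so treat them as assumptions:

```python
import os

def start_and_keep_id():
    from firecrawl import FirecrawlApp  # pip install firecrawl-py

    app = FirecrawlApp(api_key=os.environ["FIRECRAWL_API_KEY"])
    # async_crawl_url returns immediately with a job id instead of
    # blocking until the crawl finishes
    job = app.async_crawl_url("https://example.com")
    job_id = job["id"] if isinstance(job, dict) else job.id
    print("crawl id:", job_id)  # save this -- you need it to cancel
    # later, to abort the job:
    # app.cancel_crawl(job_id)
    return job_id

# guarded so nothing runs (and no credits are spent) without a key set
if __name__ == "__main__" and os.environ.get("FIRECRAWL_API_KEY"):
    start_and_keep_id()
```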
Hey @robint, there isn't a way to reset your browsers yet, unless you cancel a crawl as above
where in the docs is the async crawl method specified?
https://docs.firecrawl.dev/features/crawl
is there an asgi based one?
and yeah, i saw i can call a cancel endpoint, but because i never got to
print(crawl_result)
i didn't know what to cancel
It might be on the sdk part only
Will update it to be better, thanks for pointing that out
got it. so it's wsgi based, but it runs in the background as a green thread or whatever
Did it ever return the results?
i can try that later
no, i gave up and hit it with a sigint
Gotcha, happy to give the credits back
Just dm me ur email
sure that would be nice.
Will do!
I'm having the same issue, @Adobe.Flash. Expectation: stopping the script should stop the crawl function and therefore stop the concurrent browsers, right?
You need to call the crawl delete endpoint: https://docs.firecrawl.dev/api-reference/endpoint/crawl-delete
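For anyone who wants to hit the cancel endpoint directly, here is a minimal sketch based on the crawl-delete docs linked above. The v1 path and header are assumptions; verify against the current API reference:

```python
import urllib.request

API_BASE = "https://api.firecrawl.dev/v1"

def build_cancel_request(api_key: str, job_id: str) -> urllib.request.Request:
    """Build a DELETE /v1/crawl/{id} request that cancels a running crawl."""
    return urllib.request.Request(
        f"{API_BASE}/crawl/{job_id}",
        method="DELETE",
        headers={"Authorization": f"Bearer {api_key}"},
    )

# To actually send it (requires a valid key and a real job id):
# with urllib.request.urlopen(build_cancel_request(key, job_id)) as resp:
#     print(resp.read())
```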