Local Environment Max Retries Error

I am trying to self-host Firecrawl, but am running into a weird error. Here is my code:
from firecrawl import FirecrawlApp

app = FirecrawlApp(api_key="hi", api_url="http://localhost:3002")

# Crawl a website:
crawl_status = app.crawl_url(
    'https://www.scu.edu/engineering/',
    params={
        'limit': 500,
        'scrapeOptions': {'formats': ['markdown', 'html']}
    },
    poll_interval=100
)
print(crawl_status)
I get the following error after it runs for a bit:
urllib3.exceptions.MaxRetryError: HTTPSConnectionPool(host='localhost', port=3002): Max retries exceeded with url: /v1/crawl/8db68f10-fbf2-4911-a806-8faa06f478d2?skip=221 (Caused by SSLError(SSLError(1, '[SSL] record layer failure (_ssl.c:1020)')))
If I decrease the limit to 100 pages, everything works fine. Two things strike me as odd: I passed an http:// api_url, yet the traceback shows an HTTPSConnectionPool, and the failure only happens once polling reaches a paginated status request (note the ?skip=221 in the failing URL). Any tips on how to fix this? Thank you!
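
In the meantime I am experimenting with a workaround: starting the crawl and polling the v1 status endpoint myself with plain requests, and rewriting any https:// pagination link back to http before following it. This is only a sketch based on the documented v1 REST endpoints (POST /v1/crawl, GET /v1/crawl/{id}); the force_http helper is my own addition, not part of the SDK.

import time
from urllib.parse import urlparse, urlunparse

import requests

BASE = "http://localhost:3002"
HEADERS = {"Authorization": "Bearer hi"}

# Start the crawl through the v1 REST API directly.
start = requests.post(
    f"{BASE}/v1/crawl",
    headers=HEADERS,
    json={
        "url": "https://www.scu.edu/engineering/",
        "limit": 500,
        "scrapeOptions": {"formats": ["markdown", "html"]},
    },
).json()
crawl_id = start["id"]

def force_http(url):
    # Rewrite an https:// link returned by the API back to plain http,
    # since the self-hosted instance only serves http on port 3002.
    return urlunparse(urlparse(url)._replace(scheme="http"))

# Poll the status endpoint until the crawl finishes.
status_url = f"{BASE}/v1/crawl/{crawl_id}"
while True:
    status = requests.get(status_url, headers=HEADERS).json()
    if status["status"] == "completed":
        break
    time.sleep(5)

# Collect all pages, following the paginated "next" links manually.
docs = list(status.get("data", []))
next_url = status.get("next")
while next_url:
    page = requests.get(force_http(next_url), headers=HEADERS).json()
    docs.extend(page.get("data", []))
    next_url = page.get("next")

print(f"crawled {len(docs)} pages")

This avoids the SSL error for me so far, but it feels like a band-aid rather than a fix, so I'd still appreciate pointers on the underlying cause.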