Error: Supabase Client Not Configured in Self-Hosted Firecrawl

I'm self-hosting Firecrawl and running into an issue with the Supabase client configuration. I'm using the example code from the documentation:

```python
import uuid
from firecrawl.firecrawl import FirecrawlApp

app = FirecrawlApp(
    api_key="NO KEY IS NEEDED",
    api_url="http://localhost:3002/",
)

idempotency_key = str(uuid.uuid4())  # optional idempotency key
crawl_result = app.crawl_url('mendable.ai', {'crawlerOptions': {'excludes': ['blog/*']}}, True, 2, idempotency_key)
print(crawl_result)
```

Despite setting all the necessary variables (OpenAI key, model name, and URL), I get the following error:

```
HTTPError: Internal Server Error: Failed to start crawl job. Supabase client is not configured.
```

And the server logs show:

```
[2024-08-11T13:37:00.209Z] WARN - You're bypassing authentication
api_1 | [2024-08-11T13:37:00.210Z] ERROR - Error: Supabase client is not configured.
```

I've verified that the server URL is correct (http://localhost:3002/). It seems there's an issue with the Supabase client configuration, but I'm not sure exactly what is missing or misconfigured. I'm aware that the note in SELF_HOST.md states that Supabase cannot be configured in a self-hosted setup but that scraping should still work; in my case, however, crawling fails as well.

Could someone help me figure out how to properly configure the Supabase client for Firecrawl in a self-hosted environment? Any advice or pointers would be greatly appreciated. Thanks in advance!
2 Replies
rhyswynnastro · 14mo ago
Did you set `USE_DB_AUTHENTICATION=false` in the `.env` file?
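For reference, a minimal `.env` sketch for a self-hosted setup without Supabase. Only `USE_DB_AUTHENTICATION` is confirmed by this thread; the other variable names are assumptions based on the question's mention of an OpenAI key and model name, so check them against SELF_HOST.md:

```shell
# Disable Supabase-backed auth for self-hosting (confirmed in this thread)
USE_DB_AUTHENTICATION=false

# Assumed variable names -- verify against SELF_HOST.md for your version
OPENAI_API_KEY=sk-...
MODEL_NAME=gpt-4o-mini
```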
Julie Grace (OP) · 14mo ago
Yes, I did set it in the `.env`. Scraping works but not crawling:

```python
scrape_result = app.scrape_url('firecrawl.dev')
print(scrape_result['markdown'])
```

Fixed it by removing the `idempotency_key` argument.
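A minimal sketch of the reported fix: call `crawl_url` with the same arguments as the question's example, minus the `idempotency_key`. Since the real call needs a live Firecrawl server, `FirecrawlApp` is stubbed here; the positional signature (url, params, wait_until_done, poll_interval, idempotency_key) is an assumption taken from the question's snippet:

```python
import uuid

# Stub standing in for firecrawl.firecrawl.FirecrawlApp so the call shape
# can be shown without a live server. The signature mirrors the one used
# in the question; this is not the real client.
class FirecrawlApp:
    def __init__(self, api_key, api_url):
        self.api_key = api_key
        self.api_url = api_url

    def crawl_url(self, url, params, wait_until_done, poll_interval,
                  idempotency_key=None):
        # In a self-hosted server without Supabase, the idempotency-key
        # path fails, so the fix is to leave it as None (i.e. omit it).
        return {"url": url, "idempotency_key": idempotency_key}

app = FirecrawlApp(api_key="NO KEY IS NEEDED", api_url="http://localhost:3002/")

# Working call: same as the question's example, without idempotency_key.
result = app.crawl_url(
    'mendable.ai',
    {'crawlerOptions': {'excludes': ['blog/*']}},
    True,  # wait until done
    2,     # poll interval in seconds
)
print(result)
```

With the real client, the same change (dropping the fifth positional argument) is what the OP reports made crawling work.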