Crawl not working on self-hosted with external redis (Upstash)
The Bull dashboard doesn't show the web scraper queue when it is connected to Upstash Redis. I haven't set up IP whitelisting or anything else that should block the connection.
I also see no concerning log messages.
The Redis connection itself seems to succeed. It might be worth noting that a local Docker Compose Redis works fine.
However, whenever I start a crawl job, I get a socket hang up instead of a jobId. Am I missing something here? I attached the IP address of the service in the image, and also the Postman request for the crawl.
Are there any missing steps for connecting to external Redis providers like Upstash?



Also, great job team @🔥 crawl
Thank you @purnama! That's quite odd. Try manually running
pnpm run start
and pnpm run workers
separately and see if that helps
Essentially they should be 2 different processes, and it seems like for some reason here only start
is getting initiated.
I see, I will try again!
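As a sketch of the suggestion above: the API server and the queue workers should run as two separate processes, e.g. in two terminals (assuming the default self-hosted Firecrawl layout, where these pnpm scripts exist in the API package):

```shell
# Terminal 1: start the API server
pnpm run start

# Terminal 2: start the queue workers
# (these are what actually consume crawl jobs from the Redis-backed queue)
pnpm run workers
```

If only the first process is running, jobs get enqueued but never picked up, which can look like a hang from the client's side.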
@Adobe.Flash I tried, and it doesn't work; there are no suspicious logs either. Any pointers on why this is the case?
Hi Team, bumping this up!
Hey @purnama not really sure, ccing @rafaelmiller here who has dealt more closely with the self hosting issues.
But my guess is that the workers aren't being initialized correctly. Also, make sure you have a Redis instance running!
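One quick way to rule out the Redis instance itself is a direct connectivity check from the machine running Firecrawl. This is a hedged sketch: the host and password are placeholders, and the `rediss://` scheme assumes Upstash's TLS endpoint (it requires `redis-cli` built with TLS support):

```shell
# Connectivity check against the external Redis (placeholders, not real credentials)
# A healthy, reachable instance should reply with PONG
redis-cli -u "rediss://default:YOUR_PASSWORD@your-instance.upstash.io:6379" ping
```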
Hey @purnama! It seems like your server isn't connecting properly with Redis (though the logs suggest it does). Have you configured the
.env
variables for your external Redis (Upstash)? It's possible that the credentials are missing from the .env
(I'm not familiar with Upstash, so it's just a guess)
Hi @rafaelmiller @Adobe.Flash, thanks so much for following up!
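For reference, a sketch of what the external-Redis settings in .env might look like. This assumes the variable names `REDIS_URL` and `REDIS_RATE_LIMIT_URL` from Firecrawl's self-hosting setup; the host and password are placeholders, and `rediss://` assumes Upstash's TLS endpoint:

```shell
# .env sketch for an external Redis (Upstash) — values are placeholders
# Upstash typically requires TLS, hence rediss:// rather than redis://
REDIS_URL=rediss://default:YOUR_UPSTASH_PASSWORD@your-instance.upstash.io:6379
REDIS_RATE_LIMIT_URL=rediss://default:YOUR_UPSTASH_PASSWORD@your-instance.upstash.io:6379
```

If either variable still points at the Docker Compose Redis (e.g. `redis://redis:6379`), the queue and the workers can end up talking to different Redis instances, which would match the "connection looks fine but crawls hang" symptom.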
I can share the Redis connection string through DM so you can quickly spin one up on your end?
I have tried:
1. Upstash
2. DigitalOcean Redis (I deploy firecrawl on DO)
Also, I'd be happy to share how I deploy Firecrawl on DO if that's something useful to y'all.