Firecrawl • 8mo ago
Mart

Getting extract values without Queue

Hey there 👋🏻 I am working on an AI workflow that uses the Firecrawl extract functionality as a tool. If I call the /extract endpoint as documented, it returns a job ID rather than the actual results. I can see the value of this in some scenarios, but in my case an LLM agent consumes the /extract endpoint directly, where it would make much more sense for the request to stay open until the extract job is finished. Otherwise it adds unnecessary complexity: I have to instruct the LLM to wait and poll the /extract endpoint for the result. Is there a workaround for this? The only thing I can think of is building a wrapper API around the extract endpoints, but for obvious reasons I want to explore other options first. Thanks in advance!
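A minimal sketch of the wrapper idea, using only the Python standard library. The endpoint paths, the `id`/`status`/`data` field names, and the placeholder API key follow Firecrawl's documented async extract flow but should be treated as assumptions and checked against the current API reference:

```python
import json
import time
import urllib.request

API_KEY = "fc-YOUR_KEY"  # placeholder; substitute a real Firecrawl key
BASE = "https://api.firecrawl.dev/v1"  # assumed base URL; verify in the docs


def _api(method, path, payload=None):
    """Minimal JSON request helper over urllib (stdlib only)."""
    req = urllib.request.Request(
        f"{BASE}{path}",
        data=json.dumps(payload).encode() if payload else None,
        headers={
            "Authorization": f"Bearer {API_KEY}",
            "Content-Type": "application/json",
        },
        method=method,
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)


def poll_until_done(fetch_status, interval=2.0, timeout=120.0):
    """Call fetch_status() repeatedly until the job reaches a terminal state."""
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        job = fetch_status()
        if job.get("status") == "completed":
            return job.get("data")
        if job.get("status") == "failed":
            raise RuntimeError("extract job failed")
        time.sleep(interval)
    raise TimeoutError("extract job did not finish in time")


def extract_sync(urls, prompt, **poll_kwargs):
    """Start an extract job, then block until the result is ready."""
    job_id = _api("POST", "/extract", {"urls": urls, "prompt": prompt})["id"]
    return poll_until_done(lambda: _api("GET", f"/extract/{job_id}"),
                           **poll_kwargs)
```

Exposing `extract_sync` behind a single tool endpoint gives the agent one blocking call, so no "wait and poll" instructions are needed in the prompt.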
5 Replies
Nollter
Nollter • 5mo ago
I have the same problem, is there any solution to this?
mogery
mogery • 5mo ago
Use one of the SDKs, which do the polling for you.
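For reference, this is roughly what that looks like with the official Python SDK (`firecrawl-py`). The exact method name and parameters vary between SDK versions, so treat this as a sketch and check the current SDK docs:

```python
# Sketch only: requires `pip install firecrawl-py` and a real API key.
def main():
    from firecrawl import FirecrawlApp

    app = FirecrawlApp(api_key="fc-YOUR_KEY")  # placeholder key

    # The SDK's extract() starts the job AND polls until it completes,
    # so the call blocks and returns the finished result directly --
    # no manual job-ID handling needed.
    result = app.extract(
        urls=["https://example.com/pricing"],  # hypothetical target
        prompt="Extract the plan names and monthly prices.",
    )
    print(result)

# main() would block until the extract job completes.
```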
Mart
Mart (OP) • 5mo ago
I am now using a different service (Jina.ai) and instructing LLMs to extract information instead of using Firecrawl. I really liked Firecrawl, and I understand why they implemented queueing, but in my opinion it makes the service unusable for a lot of custom agentic tool workflows.
SoulBloxXer
SoulBloxXer • 5mo ago
Mind telling me how you're finding Jina AI so far?
Mart
Mart (OP) • 5mo ago
It works pretty well. I like that I can grab an API key from their homepage instantly. The only issue I am running into is that I have to set a new token every week or so because the token's quota runs out. I did find out that Jina uses their ReaderLM-v2 model for the Reader API; it is a small (1.5B-parameter), open-source model that can be run in Ollama. So I might run the model fully locally to avoid the API limits.