Firecrawl



Scraping sites built with Framer.com

The output is a terrible mess filled with Framer references. Does anyone have a solution to this? Thanks in advance...

/extract endpoint metadata per URL

Is there a way to get per-URL metadata on /extract? For example, if I provide 10 URLs to the extract endpoint, instead of receiving a final blob parsed from all of these URLs, I want to get a url => data mapping.
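One workaround sketch, while waiting for a built-in option: issue one /extract request per URL instead of a single request over all ten, so each response stays keyed by its source URL. The payload field names here (`urls`, `prompt`) are assumptions based on the /extract request shape described above; check the current API reference before relying on them.

```python
def build_per_url_payloads(urls, prompt):
    """Build one /extract request body per URL so responses map url => data.

    Returns a dict of {url: request_body}; each body can be POSTed to
    /extract separately, and the results collected under their URL key.
    """
    return {url: {"urls": [url], "prompt": prompt} for url in urls}

payloads = build_per_url_payloads(
    ["https://example.com/a", "https://example.com/b"],  # placeholder URLs
    "Extract the product name and price.",
)
```

The trade-off is more API calls (and credits), but the mapping from URL to extracted data is unambiguous.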

Change Tracking

As I understand it, the current change tracking scrapes the URL content and then reports whether it's changed or not. Is there a way to check for a change, or for when the site was last updated, without fully scraping the URL content again?
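One lighter-weight approach, independent of Firecrawl's change tracking, is a conditional HTTP check: many servers expose `ETag` or `Last-Modified` headers, so a cheap HEAD request can tell you whether a full re-scrape is worth it. Whether a given site actually sends these validators varies, so this is a sketch of the comparison logic only:

```python
def should_rescrape(stored_headers, current_headers):
    """Decide whether a page may have changed, using HTTP validators.

    `stored_headers` are the headers cached from the last scrape;
    `current_headers` come from a fresh HEAD request. Returns True when
    the page may have changed (or when no validators are available, in
    which case only a full scrape can answer the question).
    """
    for key in ("ETag", "Last-Modified"):
        if key in stored_headers and key in current_headers:
            return stored_headers[key] != current_headers[key]
    return True  # no validators to compare: must scrape to find out
```

Sites behind CDNs or with per-request ETags will report false positives, so treat this as a pre-filter, not a replacement for change tracking.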

API Extract not returning up-to-date / live data - lots of lag time

When using the extract API through n8n or directly with Postman, it takes hours before the most recent data shows up. For example, if I change details of a product, create a new product, etc., and then do an extract POST followed by a GET with the job ID, it keeps returning the old data. Not until an hour or so later, when I do another POST and GET, do I get the most recent data. I don't have any CDN or caching installed on my server. I recorded a quick video here. Anyone else run into this issue? Any ideas? https://sammyt05-hotmail.tinytake.com/msc/MTA4MjA0MDdfMjQ1MzczODM...

/scrape endpoint fails with 500 - can I see the error message somewhere in the dashboard?

Like the title says. I fired a call to the /scrape endpoint with a URL and got a 500 error. Am I allowed to post the job ID and/or the URL of the page?

Getting Cookies in the Response

Is there any way to get the cookies along with the scraped data in the response?

Able to scrape markdown but not JSON

Hey everyone, I am trying to develop a script that scrapes a page and returns JSON with information from that page. I can retrieve markdown for the entire page; however, I can't retrieve anything in JSON. Here's the payload for my curl command...
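For comparison, here is a sketch of a /scrape request body that asks for structured JSON alongside markdown. The field names (`formats`, `jsonOptions`) follow Firecrawl's v1 scrape options as I understand them and may differ in your API version, so verify against the current API reference; a common cause of empty JSON output is requesting the `json` format without any `jsonOptions` prompt or schema.

```python
# Hypothetical /scrape body: markdown plus LLM-extracted JSON.
# The URL and prompt are placeholders.
payload = {
    "url": "https://example.com/product",
    "formats": ["markdown", "json"],
    "jsonOptions": {
        # Either a free-form prompt or a JSON schema is typically required
        # for the "json" format to return anything.
        "prompt": "Extract the product name, price, and availability.",
    },
}
```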

PDF with FIRE-1 agent

I really love your product! Is there a way to attach a PDF file when using the FIRE-1 agent? Right now, it looks like only text can be passed into the prompt....

batch scrape

Ideally, how long would an async batch scrape of 10 links take? I have been getting this for the past hour: { "success": false, "error": "Job not found" }...
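"Job not found" usually suggests the job ID is wrong or the job has expired rather than that it is still running. When the ID is valid, the typical pattern is to poll the status endpoint until it settles. A generic polling sketch (the status values `"scraping"`, `"completed"`, `"failed"` are assumptions about the response shape; `fetch_status` stands in for whatever call retrieves the job status):

```python
import time

def poll_job(fetch_status, interval=2.0, timeout=600.0,
             clock=time.monotonic, sleep=time.sleep):
    """Poll a job-status callable until the job settles or times out.

    `fetch_status` returns a dict like {"status": "scraping" | "completed"
    | "failed", ...}. Returns the final status dict, or raises
    TimeoutError if the job does not settle within `timeout` seconds.
    """
    deadline = clock() + timeout
    while clock() < deadline:
        status = fetch_status()
        if status.get("status") in ("completed", "failed"):
            return status
        sleep(interval)
    raise TimeoutError("job did not finish within the timeout")
```

The injectable `clock` and `sleep` make the loop easy to test without real waiting.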

Removing the 1000 limit cap on the /map function

Hi, I need to remove the 1000-result cap on the /map function. I need to scrape all the available URLs on a site, and this limit is too low. How can I remove it? Thanks
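While waiting for an answer on raising the cap, one workaround sketch is to parse the site's sitemap directly, which sidesteps any /map result limit entirely (assuming the site publishes a sitemap at all, which not every site does):

```python
import xml.etree.ElementTree as ET

SITEMAP_NS = "{http://www.sitemaps.org/schemas/sitemap/0.9}"

def urls_from_sitemap(xml_text):
    """Extract every <loc> URL from a sitemap.xml document."""
    root = ET.fromstring(xml_text)
    return [loc.text.strip() for loc in root.iter(f"{SITEMAP_NS}loc")]
```

The resulting URL list can then be fed to a batch scrape in chunks.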

Filter Search (between two specific dates)

I'm currently using the Search API to filter search results by time with the tbs parameter (e.g., qdr:w for the past week). However, I'd like to specify both a start AND an end date for my search queries. Is there a way to filter results between two specific dates (from date X to date Y) rather than just by relative time periods? If this isn't currently supported, could you please consider adding this functionality?...
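For what it's worth, Google itself expresses custom date ranges through the same `tbs` parameter using the `cdr` syntax (`cdr:1,cd_min:M/D/YYYY,cd_max:M/D/YYYY`). Whether Firecrawl's Search API passes `cdr` values through unchanged is an assumption worth testing, since `qdr:` values evidently do work. A sketch of building such a value:

```python
def tbs_between(start_mdy, end_mdy):
    """Build a Google-style custom date-range `tbs` value.

    Dates are M/D/YYYY strings, matching Google's cd_min/cd_max
    convention. Pass-through support in the Search API is unverified.
    """
    return f"cdr:1,cd_min:{start_mdy},cd_max:{end_mdy}"

# Hypothetical search parameters using the custom range:
params = {"query": "site launch announcement", "tbs": tbs_between("3/1/2024", "3/31/2024")}
```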

Crawling available bookings

I want to know if this tool is right for the job. In my screenshot you can see that this is a page with arrival/departure calendars, multiple types of rooms, and different options. I would like to scrape all available bookings for all available rooms for the next 6 months. Is this possible with a paid account, or is this not the correct tool?

A crawl on an ecommerce website with more than 500 products returns only 39 items. Do you know why?

I am trying to get all perfumes from the website https://la-barfumerie.com/. I used the URL https://la-barfumerie.com/products/* in the Extract tool. I did it twice: the first result contains 39 items and the second 37. The website contains more than 500 products! I don't understand whether the limitation comes from Firecrawl or whether there is something like JavaScript or a protection technology on the website. ...

Crawls are taking a very long time

Hey team, I had a couple of crawls here that only covered 80 or so webpages but didn't finish for over an hour. What gives? 9ced422c-e8eb-4310-a905-7d18bb424b4d 56f29e0f-9211-457e-bc6b-69ed7aa70556...

Make.com LLM module returning "empty"

Hello all, I just found Firecrawl through Make.com, tested it in the playground, and it worked great for what I need.
The LLM module on Make.com runs successfully, but the output is returning "empty".
Does anyone know why this might be? I tested multiple websites and multiple output formats....

How to return 1 search result with Firecrawl <> Make.com API

Do I have to set the minimum number of search results in a Firecrawl module to 2? Is there any way I could return only 1 result? I want to search for the websites of 200,000 companies, so returning 2 results rather than 1 significantly increases the time this takes.
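If the Make.com module enforces a minimum of 2, one workaround sketch is calling Firecrawl's Search REST API directly from Make's generic HTTP module instead. The `limit` field below is assumed from the search request options as I understand them; confirm against the current API reference.

```python
def search_body(company_name):
    """Build a hypothetical Search API request body asking for a single result.

    `company_name` is interpolated into the query; "limit": 1 caps the
    result count, halving the work versus the module's minimum of 2.
    """
    return {"query": f"{company_name} official website", "limit": 1}

body = search_body("Acme Corp")  # placeholder company name
```

At 200,000 companies, also worth checking is whether per-minute rate limits, not result count, end up being the bottleneck.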

Need help with headers and cookies

Does anyone know how to get the response headers from a web scrape or

Anyone know anything about their hosted MCP server?

I see a post about it on Day 6 of their launch week, but I'm assuming the URL for the hosted MCP server is different from their regular endpoints. Anyone used this live? I've used the stdio version locally but want to use SSE for deployed projects.

Crawling + Scraping Classifieds

So far I've been able to use the new Search feature to find classified or booking sites that list rental properties and hotel rooms. It seems, though, that Firecrawl is unable to, or won't, crawl those classified sites. Is there a reliable way otherwise?