Billing for successfully scraped URLs after a timeout crash in an n8n workflow
Hello everyone,
I have a workflow in n8n where I used an HTTP request to Firecrawl (scrape) with a default timeout of 30000ms. The node failed after reaching the timeout. I can see in the Firecrawl activity logs that most urls were successfully scrapped. I have removed the timeout constrain to avoid this issue. If I run my workflow again on the same urls batch, will the ones that have been previously scrapped billed again? I was hoping that Firecrawl would identify those scrapped urls and return the data for free, instead of having to download each markdown by hand from the activity log. I haven't found anything about this in the doc or the faq.
Fred.