The response does not return all of the products on the page.
When I make an API call to scrape this address, https://www.svapoebasta.com/sa1_207/, I get a maximum of 30 products.
The JSON for the call is as follows:
18 Replies
Hi there,
The issue is likely that your single click action loads some additional products but doesn't trigger all of the available content. The "show more" button may need multiple clicks or different timing.

Try adding a wait action after the click.
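A minimal sketch of what that click-then-wait payload might look like for the v2 scrape endpoint. The selector `button.show-more` and the 3000 ms wait are placeholders, not values confirmed in this thread; inspect the page to find the real button selector.

```python
import json

# Sketch of a v2 scrape payload: click the "show more" button, then
# wait so the newly loaded products can render before scraping.
# "button.show-more" and 3000 ms are assumptions, not confirmed values.
def build_scrape_payload(url: str, selector: str) -> dict:
    return {
        "url": url,
        "formats": ["markdown"],
        "actions": [
            {"type": "click", "selector": selector},
            {"type": "wait", "milliseconds": 3000},
        ],
    }

payload = build_scrape_payload(
    "https://www.svapoebasta.com/sa1_207/", "button.show-more"
)
print(json.dumps(payload, indent=2))
```

You would POST this body to https://api.firecrawl.dev/v2/scrape with your API key in the Authorization header.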
Hi, and thank you for your kind reply! I tried, but I'm getting the same results.
Hi, sorry for the delayed response. Could you share the outcome of the actions in the results? Were the actions executed correctly, or was anything undefined? I'm assuming the selector is correct. If you can confirm the action works on your site and takes a full screenshot, I can check further.
Hi, no problem 🙂 I attached the result of the API call. How can I confirm that the action is working? Do you mean by manually clicking the button on the website and taking a screenshot of the page?
Hi there, thanks for sharing the info. The site's content is dynamically rendered, so we'll need to use executeJavascript along with the stealth proxy.
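A sketch of what combining executeJavascript with the stealth proxy might look like. The JavaScript snippet (scrolling to the bottom of the page) and the wait time are assumptions, not details from this thread; adapt them to whatever actually triggers the remaining products to load.

```python
import json

# Sketch of a v2 scrape payload using executeJavascript plus the
# stealth proxy. The scroll script and 3000 ms wait are placeholder
# assumptions -- swap in whatever loads the rest of the products.
def build_stealth_payload(url: str, script: str) -> dict:
    return {
        "url": url,
        "formats": ["markdown"],
        "proxy": "stealth",
        "actions": [
            {"type": "executeJavascript", "script": script},
            {"type": "wait", "milliseconds": 3000},
        ],
    }

payload = build_stealth_payload(
    "https://www.svapoebasta.com/sa1_207/",
    "window.scrollTo(0, document.body.scrollHeight);",
)
print(json.dumps(payload, indent=2))
```

As above, this body would be POSTed to https://api.firecrawl.dev/v2/scrape.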
This will give much more data than the initial request.

Now it finds only 7 items.
Odd, it gave me 37 results. Are the 7 results persistent, or do they change?
Hi, and sorry for the late reply. I tried just now and it gave me 12 results.
Are you using the map endpoint?
No, scraping:
https://api.firecrawl.dev/v2/scrape

Are you hitting your concurrency limit by any chance? What plan are you on? Since the same API call returns 37 URLs for me, I'm trying to eliminate the cases where it's different for you.
How can I know if I have concurrency issues? I have the free plan. Sorry, but I'm new to scraping and to Firecrawl.
You'll get an alert notification in the app and by email. If you can share your scrape job ID, I can check what's happening.
"You've recently hit your concurrency limit. This means that you could be scraping faster. If you'd like to increase it, please upgrade your plan."

I just noticed this notification on the dashboard from five days ago. "scrapeId": "d3c00617-9ec7-4065-8202-4d0b1f3ce970". Is this the job ID? (It's the last one.)
Yes, thanks for sharing. I'll check and update you.
Thanks!
Apologies that our concurrent browser limit notification is a bit too sensitive. It sounds like you probably just hit the limit briefly (we only show a rolling average across 10 minutes in our dashboard).
If you aren't experiencing any performance bottlenecks, then you can safely ignore this! And you can also opt-out of this particular notification here if you want.
Let me know if you have any further questions!
No need to apologize! Thanks for the hint 🙂