infinite scrolling - best practice
I'm crawling a products page on an online toy store. There is an 'all products' page that loads 20 products at a time; it's an infinite-scrolling scenario, so no pagination buttons. I followed the tutorial from the blog: I await the networkidle load state and call the infinite_scroll method so the page is scrolled continuously. After each scroll, 20 more products are loaded and the process repeats. The issue I'm facing is that there are thousands of products to retrieve, so eventually a timeout error is thrown.
To me, this approach seems rather inefficient. Not only is the timeout an issue (which I'm sure can be adjusted), but since it might take an hour to scroll through all the products, some random event (a popup, or an anti-scrape challenge) might appear and end the run before the enqueued product detail links can even be processed.
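For what it's worth, I assume the relevant knob is the crawler's request_handler_timeout option (a sketch below, under that assumption; the default per-request budget is fairly short), but raising it only treats the symptom:

from datetime import timedelta

from crawlee.crawlers import PlaywrightCrawler

# Assumption: request_handler_timeout is the per-request budget that the long
# infinite_scroll() run is exhausting. Raising it avoids the timeout error
# but does nothing about popups or anti-scrape challenges mid-scroll.
crawler = PlaywrightCrawler(
    request_handler_timeout=timedelta(minutes=30),
)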
I get that using API requests, where possible, is preferable, but I haven't identified those yet. I do see that after each scroll the URL is appended with ?page=2; scroll down again and it becomes ?page=3, etc.
What would be best practice to implement this cleanly within the Crawlee framework? Any pointers from you gurus? Thanks!
Below is the default handler implementing the infinite scroll. I'm enqueuing all the product detail links, which are then processed by a separate route handler.
@router.default_handler
async def default_handler(context: PlaywrightCrawlingContext) -> None:
    context.log.info(f'default_handler is processing: {context.request.url}')
    await context.page.wait_for_selector('.ais-InfiniteHits-list')

    # Infinite scrolling: wait for page load to complete and scroll to bottom
    await context.page.wait_for_load_state('networkidle')
    await context.infinite_scroll()

    await context.enqueue_links(
        selector='.ais-InfiniteHits-item .result-wrapper div a',
        label='PRODUCT',
        unique_key=context.request.url,
    )
manual-pink•5mo ago
Hi! As you've written, the best practice in these cases is sending API requests manually.
I think for most websites you can see API requests by going into DevTools > Network tab and seeing what is sent upon each scroll.
Also, can you utilize the ?page= query parameter? You would then finish scraping at e.g. xxx.xxx?page=10, enqueue xxx.xxx?page=11, scrape that, and so on.
You can link the website here so I can take a look and maybe help. Or send it to me in a DM if you're uncomfortable sharing it publicly.
Hey. It may be useful for you to read this topic
https://discord.com/channels/801163717915574323/1286353084456374314
But I would use the approach with HTTPCrawler and the API calls that are fired during infinite scrolling; they let you replace scrolling with explicit pagination.
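Roughly like this; the endpoint, the 'items' field, and the page parameter below are placeholders for whatever you actually see in DevTools, and the response accessor may differ slightly between Crawlee versions:

import asyncio
import json
from urllib.parse import parse_qs, urlparse

from crawlee.crawlers import HttpCrawler, HttpCrawlingContext

# Hypothetical endpoint; substitute whatever DevTools > Network shows on scroll.
API_URL = 'https://www.example.com/api/products?page={page}'

async def main() -> None:
    crawler = HttpCrawler()

    @crawler.router.default_handler
    async def handler(context: HttpCrawlingContext) -> None:
        # read() returns the raw body; on newer Crawlee versions it may need an await.
        data = json.loads(context.http_response.read())
        items = data.get('items', [])  # assumed JSON shape
        for item in items:
            await context.push_data(item)
        if items:  # assumed stop condition: an empty page means we are done
            page = int(parse_qs(urlparse(context.request.url).query)['page'][0])
            await context.add_requests([API_URL.format(page=page + 1)])

    await crawler.run([API_URL.format(page=1)])

asyncio.run(main())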
fair-roseOP•5mo ago
I appreciate the replies, guys! No problem sharing the link. I'm not building anything for a client, just doing personal development to get to a point where I can offer my services...
The site is https://www.hamleys.com/shop-toys
Example of a GET request on a product detail page that returns a nice JSON blob with all the product info: https://recs-us-e1a.particularaudience.com/2.7/PageView?w=5f330b64-e9ed-ec11-aae9-02dca44cceec&p=1015979&r=https%3A%2F%2Fwww.hamleys.com%2Fstitch-crack-me-up-feature-plush&c=679aa1ed-7e8a-4157-8fd0-f7e2a44e19be&e[0].s=.product-info-main%20.page-title%20span&e[0].v[0]=Stitch%20Crack%20Me%20Up%20Feature%20Plush&e[1].s=.product-info-main%20.product.brand.name%20.brand-name&e[1].v[0]=Disney%20Stitch&e[2].s=.product-info-movable%20%23product-addtocart-button%20span&e[2].v[0]=&e[3].s=.product-info-main%20.page-title%20span&e[3].v[0]=Stitch%20Crack%20Me%20Up%20Feature%20Plush&e[4].s=.product-info-main%20.product.brand.name%20.brand-name&e[4].v[0]=Disney%20Stitch&e[5].s=.product-info-main%20.page-title%20span&e[5].v[0]=Stitch%20Crack%20Me%20Up%20Feature%20Plush&rcc=GBP&rcl=en-GB
However, to get that, I still need the links to all the product pages, which is why I was trying to implement infinite scrolling.
I'd be very interested to see how you would go about acquiring all the links without the approach I used. Using the API to achieve that sounds like a great idea. Although wouldn't that risk tripping anti-scrape detection, if it were in place?
I would just use sitemap.xml
https://www.hamleys.com/media/sitemap.xml
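Something like this could pull every URL out of it (check whether that file is actually a sitemap index pointing at sub-sitemaps, and filter for product pages; httpx plus stdlib XML parsing is just one way to read it):

import asyncio
from xml.etree import ElementTree

import httpx

SITEMAP_URL = 'https://www.hamleys.com/media/sitemap.xml'
LOC = '{http://www.sitemaps.org/schemas/sitemap/0.9}loc'  # standard sitemap namespace

async def fetch_sitemap_urls() -> list[str]:
    async with httpx.AsyncClient() as client:
        response = await client.get(SITEMAP_URL, follow_redirects=True)
        response.raise_for_status()
    root = ElementTree.fromstring(response.content)
    # Collect every <loc> entry; filter for product pages as needed.
    return [loc.text for loc in root.iter(LOC) if loc.text]

async def main() -> None:
    urls = await fetch_sitemap_urls()
    print(f'found {len(urls)} URLs')
    # then: await crawler.run(urls), routed straight to the product handler

asyncio.run(main())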
If you ignore that possibility, the lazy approach is PlaywrightCrawler with explicit pagination via ?page={i}; infinite scrolling is just not needed here.
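A sketch of that lazy approach, reusing your selector; the URL-based page bookkeeping and the empty-page stop condition are assumptions to verify against the real site:

import asyncio
from urllib.parse import parse_qs, urlparse

from crawlee.crawlers import PlaywrightCrawler, PlaywrightCrawlingContext

BASE_URL = 'https://www.hamleys.com/shop-toys'

async def main() -> None:
    crawler = PlaywrightCrawler()

    @crawler.router.default_handler
    async def default_handler(context: PlaywrightCrawlingContext) -> None:
        await context.page.wait_for_selector('.ais-InfiniteHits-list')

        # Enqueue the ~20 product links rendered on this page
        # (the 'PRODUCT' handler that scrapes them is omitted here).
        await context.enqueue_links(
            selector='.ais-InfiniteHits-item .result-wrapper div a',
            label='PRODUCT',
        )

        # Instead of scrolling, enqueue the next page explicitly.
        page = int(parse_qs(urlparse(context.request.url).query).get('page', ['1'])[0])
        if await context.page.query_selector('.ais-InfiniteHits-item'):  # assumed stop condition
            await context.add_requests([f'{BASE_URL}?page={page + 1}'])

    await crawler.run([BASE_URL])

asyncio.run(main())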
A less lazy approach is to analyse the POST requests the site makes to the backend and use HTTPCrawler.
manual-pink•5mo ago
here is an example of a request that is sent when scrolling:
but yeah, there is no problem with just using ?page={i}