Not skipping over URLs when an element isn't found
When I'm scraping product data from product URLs, I sometimes want to check whether a tag is available and, if not, fall back to a different tag. If a tag simply isn't found, I don't want the crawler to throw a full error over that one missing element and then fail to scrape and save the rest of the data.
How do I avoid this "skipping" by overriding or changing the crawler's default behavior?
I've even tried try/catch statements and if/else statements, and nothing works.
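For reference, a minimal sketch of the fallback pattern being described, assuming Crawlee's PlaywrightCrawler (the selectors `.price--sale` and `.price` and the example URL are hypothetical): Playwright's `page.$()` resolves to `null` rather than throwing when nothing matches, so the handler can fall back to the other tag and still save the rest of the data.

```ts
import { PlaywrightCrawler, Dataset } from 'crawlee';

const crawler = new PlaywrightCrawler({
    async requestHandler({ page, request, log }) {
        // page.$() resolves to null (instead of throwing) when nothing
        // matches, so a missing element never aborts the handler.
        const priceEl =
            (await page.$('.price--sale')) ?? (await page.$('.price'));

        const price = priceEl
            ? (await priceEl.textContent())?.trim() ?? null
            : null;
        if (price === null) log.warning(`no price element on ${request.url}`);

        // Save whatever was found; missing fields are simply null.
        await Dataset.pushData({ url: request.url, price });
    },
});

await crawler.run(['https://example.com/product/1']);
```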
sensitive-blueOP•2y ago
code:
Especially this part here: it doesn't avoid the error. Even wrapping it in a try/catch that tries the fallback tag, catches the failure, and just logs an error and returns doesn't work either:
I've tried all different combinations to catch errors, but nothing avoids the built-in Crawlee error.
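One common reason a try/catch "doesn't work" here is that the failing call is a promise that isn't awaited *inside* the try block: the rejection then escapes the catch, the request handler fails, and Crawlee retries the request and eventually logs its own built-in error. Another is that auto-waiting calls like `page.waitForSelector()` throw a TimeoutError only after their timeout elapses. A sketch of keeping the await inside the try (the `.sale-badge` selector and example URL are made up):

```ts
import { PlaywrightCrawler, Dataset } from 'crawlee';

const crawler = new PlaywrightCrawler({
    async requestHandler({ page, request }) {
        const data: Record<string, string | null> = { url: request.url };

        // Pitfall: a promise that is not awaited inside this try block
        // rejects outside of it, so the catch never runs and Crawlee
        // treats the whole request as failed (and retries it).
        try {
            // waitForSelector throws a TimeoutError if the element never
            // appears; awaiting it here lets the catch swallow that error.
            await page.waitForSelector('.sale-badge', { timeout: 5_000 });
            data.badge = await page.locator('.sale-badge').textContent();
        } catch {
            data.badge = null; // element genuinely absent -- keep going
        }

        await Dataset.pushData(data);
    },
});

await crawler.run(['https://example.com/product/1']);
```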