How to reset a queue?
While developing a scraper, I often run into this issue:
1) I add the initial page to the queue
2) I run the scraper, which marks the URL as done
3) I want to re-run the scraper on the same page
I know I can keep changing the queue name, but is there a way to reset/clear the queue instead?
If I call drop() on it, it simply fails with:
Hello @Michal ,
Since you are using a named RequestQueue (which is meant to be persistent), you should be able to drop it on start and (probably the important part here) create a new one with the same name again for development.
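A minimal sketch of that approach, assuming Crawlee's RequestQueue API and an illustrative queue name 'my-queue':

```js
import { RequestQueue } from 'crawlee';

// Open the existing named queue (named queues persist across runs).
const staleQueue = await RequestQueue.open('my-queue');

// drop() returns a promise, so it must be awaited.
await staleQueue.drop();

// Re-opening under the same name gives you a fresh, empty queue.
const queue = await RequestQueue.open('my-queue');
await queue.addRequest({ url: 'https://example.com' });
```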
You may also use the unnamed (default) RequestQueue, and then if you run the scraper locally with apify-cli via apify run -p, the default RequestQueue will be deleted for you every time you run it. Be aware that you may have only a single unnamed (default) RequestQueue in your run.
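With the default queue the reset is handled by the CLI instead; a sketch of opening it, again assuming Crawlee:

```js
import { RequestQueue } from 'crawlee';

// No name passed: opens the single default (unnamed) queue,
// which `apify run -p` purges before every local run.
const queue = await RequestQueue.open();
await queue.addRequest({ url: 'https://example.com' });
```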
absent-sapphireOP•3y ago
> Since you are using a named RequestQueue (which is meant to be persistent), you should be able to drop it on start and (probably the important part here) create a new one with the same name again for development.
Thanks, but how?
absent-sapphireOP•3y ago
I am trying to remove it, but this fails right away with the above error:
This is done at the very beginning.
Maybe I changed the name of the queue already and it fails because it does not technically exist yet... will do some testing.
Or should it be written more like this, with re-opening the queue?
In either case it keeps failing if I call drop().
harsh-harlequin•3y ago
Isn't drop() also async? There should be an await.
I am not sure it will solve your problem, though.
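The difference would look roughly like this (a sketch; the original snippet is not shown in the thread):

```js
import { RequestQueue } from 'crawlee';

const queue = await RequestQueue.open('my-queue');

// Without await, the returned promise is ignored, so any failure
// surfaces later as an unhandled rejection rather than at this line:
// queue.drop();

// With await, drop() completes (or throws) right here:
await queue.drop();
```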
absent-sapphireOP•3y ago
Can you at least tell me if I should re-open it or not?
await did fix the issue - thanks!
And I'm reopening it - is that the right way?
harsh-harlequin•3y ago
Yes, dropping it and reopening the queue with the same name will give you a new, empty queue.
Btw, if await fixed the issue, then it did not fail before on drop(), no? That does not make sense.
absent-sapphireOP•3y ago
Correct, I misread the exception trace; it was failing when the crawler was initiated, not on the drop() call.
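Putting the whole thread together, the reset-then-crawl pattern might look like this (a sketch assuming Crawlee's CheerioCrawler; the queue name and URL are illustrative):

```js
import { CheerioCrawler, RequestQueue } from 'crawlee';

// Reset: drop the persistent named queue, then reopen it empty.
const stale = await RequestQueue.open('my-queue');
await stale.drop();
const requestQueue = await RequestQueue.open('my-queue');

// Initiate the crawler only after the fresh queue exists,
// otherwise it is constructed around a dropped queue.
const crawler = new CheerioCrawler({
    requestQueue,
    requestHandler: async ({ request, $ }) => {
        console.log(`Scraped ${request.url}: ${$('title').text()}`);
    },
});

await crawler.run(['https://example.com']);
```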