Wiping session between inputs
preNavigationHooks not followed
Proxy settings appear to be cached
Caching requests for development and testing
Customising logging
How to clear cookies?
How to handle a 403 error response using Puppeteer and JS when clicking a button that hits an API
Request works in Postman but doesn't work in the crawler, even with a full browser
about RESIDENTIAL proxies
served with unsupported charset/encoding: ISO-88509-1
error in loader module
Saving working configurations & sessions for each site
Request queue with id: [id] does not exist
Only-once storage
Camoufox failing
Redirect Control
TypeError: Invalid URL
crawler.run(["https://website.com/1234"]);
This works locally, but on the Apify cloud it breaks with the following error: Reclaiming failed request back to the list or queue. TypeError: Invalid URL
It appears that while running in the cloud, the URL is split character by character and each character creates a request in the queue, as can be seen in the screenshot.
The bug happens whether the URL is hardcoded in the code or added dynamically via input....
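A minimal sketch of one way this symptom can arise, assuming a CheerioCrawler (the crawler class and project setup are not stated in the thread): crawler.run() expects an array of URLs or Request objects, so if a bare string ends up being iterated instead, every character is enqueued as its own URL and each one fails with TypeError: Invalid URL.

import { CheerioCrawler } from 'crawlee';

const crawler = new CheerioCrawler({
    async requestHandler({ request, log }) {
        log.info(`Processing ${request.url}`);
    },
});

// Correct: an array of URL strings produces one request per URL.
await crawler.run(['https://website.com/1234']);

// Hypothetical reproduction of the symptom described above: spreading
// (or otherwise iterating) the string enqueues every character as a
// separate "URL", each failing with "TypeError: Invalid URL".
// await crawler.run([...'https://website.com/1234']);

If the URL comes from Actor input, a reasonable first debugging step is to log the exact value that reaches crawler.run() in the cloud run and confirm it is still an array of strings rather than a single string.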
How to ensure dataset is created before pushing data to it?