Saving scraped data from dynamic URLs using Crawlee in an Express Server?

Hello all. I've been trying to build an app that triggers a scraping job when an API endpoint is hit. The endpoint hands off to a Crawlee router with two handlers: one scrapes the URL-list pages, and the other scrapes the detail from each detail page. (The url-list handler also enqueues the next url-list page back to itself, btw.) I'm saving the data from each of these scrapes into a key-value store, but I want a way to save all the data in the KV store related to a particular job into a database. The attached screenshots are the MRE snippets from my code.
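Roughly, the setup looks like this. This is a runnable sketch with in-memory stand-ins, not my real code: `createRouter`, `openStore`, the `LIST`/`DETAIL` labels, and the `job-<id>` store name are all illustrative assumptions; in actual Crawlee you'd use `createCheerioRouter()`, `KeyValueStore.open()`, and `enqueueLinks()`, and the crawler drives the queue for you.

```javascript
const stores = new Map(); // named KV stores, one per job (stand-in for KeyValueStore.open)

async function openStore(name) {
  if (!stores.has(name)) stores.set(name, new Map());
  return stores.get(name);
}

// Minimal router: label -> handler, mirroring router.addHandler(label, fn)
function createRouter() {
  const handlers = new Map();
  return {
    addHandler: (label, fn) => handlers.set(label, fn),
    run: (label, ctx) => handlers.get(label)(ctx),
  };
}

const router = createRouter();

// Handler 1: url-list pages. Enqueues every detail URL to the DETAIL handler,
// and the next list page back to LIST, tagging each request with the jobId.
router.addHandler('LIST', async ({ request, enqueue }) => {
  const { jobId } = request.userData;
  for (const url of request.detailUrls ?? []) {
    enqueue({ url, label: 'DETAIL', userData: { jobId } });
  }
  if (request.nextPage) {
    enqueue({ url: request.nextPage, label: 'LIST', userData: { jobId } });
  }
});

// Handler 2: detail pages. Saves the scraped record into the job's own
// KV store, under a key derived from the URL so concurrent writes don't collide.
router.addHandler('DETAIL', async ({ request }) => {
  const { jobId } = request.userData;
  const store = await openStore(`job-${jobId}`);
  const key = request.url.replace(/[^a-zA-Z0-9]/g, '-');
  store.set(key, { url: request.url });
});

// Drive the request queue until empty (Crawlee's crawler does this for you).
async function runJob(seed) {
  const queue = [seed];
  while (queue.length) {
    const req = queue.shift();
    await router.run(req.label, { request: req, enqueue: (r) => queue.push(r) });
  }
}
```

The key point is that every request carries the `jobId` in `userData`, so the detail handler always knows which job-scoped store to write into.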
3 Replies
Hall
Hall8mo ago
This post has been pushed to the community knowledgebase. Any replies in this thread will be synced to the community site.
frail-apricot
frail-apricotOP8mo ago
In case the third screenshot isn't clear, here's the split between the two handlers:
frail-apricot
frail-apricotOP8mo ago
I want to be able to hit the endpoint concurrently with multiple job requests. Each job gets routed to handler 1, which extracts, say, 10 detail URLs and routes each one to the detail handler; each detail handler then saves the detail-page results into a KV store. @Oleg V. @Marco
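What I'm imagining for the "save the whole job to the database" step is something like this. The `Map` store and `db.insertMany` are stand-ins (not real APIs from my code); with Crawlee I'd iterate the job's store via `store.forEachKey()` plus `store.getValue()` instead:

```javascript
// Collect every record a job wrote into its KV store and bulk-insert
// them into the database in one call, tagged with the jobId.
async function drainJobToDatabase(store, db, jobId) {
  const rows = [];
  for (const [key, value] of store) {
    rows.push({ jobId, key, ...value });
  }
  // one bulk insert instead of a write per record
  await db.insertMany(rows);
  return rows.length;
}
```

Because each job has its own named store, concurrent jobs can drain independently once their crawls finish, without filtering a shared store.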
