Mastra•3w ago
Mario

Limits of workflows as serverless functions in NextJS

Hi there, I was curious how you manage long-running tasks. I came across QStash workflows, which can run long tasks even inside serverless functions in Next.js, but ideally I'd love to stick with Mastra workflows. How do you manage that? Do you slice the work into smaller pieces and then, for example, queue them via QStash?

Specifically, my current task is:
- Crawl roughly 1,000 pages and create embeddings from them (currently one process).
- Alternatively, I could queue the 1,000 pages and then kick off a workflow that crawls just one page and generates its embeddings (or maybe 10-30 pages, since it's usually one domain that brings 1-30 pages).

Thanks for any pointers, and apologies if I'm missing the obvious. Cheers, Mario
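The slicing idea described above (group the ~1,000 pages by domain, then enqueue one small workflow run per batch) can be sketched in plain TypeScript. This is an illustrative sketch only; `batchByDomain` is a hypothetical helper, not a Mastra or QStash API:

```typescript
// Group crawl targets by domain, then split each domain's pages into
// batches small enough to finish inside a serverless timeout.
function batchByDomain(urls: string[], maxBatchSize = 30): string[][] {
  const byDomain = new Map<string, string[]>();
  for (const url of urls) {
    const domain = new URL(url).hostname;
    const pages = byDomain.get(domain) ?? [];
    pages.push(url);
    byDomain.set(domain, pages);
  }
  const batches: string[][] = [];
  for (const pages of byDomain.values()) {
    for (let i = 0; i < pages.length; i += maxBatchSize) {
      batches.push(pages.slice(i, i + maxBatchSize));
    }
  }
  return batches;
}
```

Each batch could then be published to a queue (e.g. QStash), whose delivery endpoint kicks off a short crawl-and-embed workflow run that stays well under the serverless time limit.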
7 Replies
Mastra Triager
Mastra Triager•3w ago
šŸ“ Created GitHub issue: https://github.com/mastra-ai/mastra/issues/10330 šŸ” If you're experiencing an error, please provide a minimal reproducible example to help us resolve it quickly. šŸ™ Thank you @Mario for helping us improve Mastra! šŸ“ Created GitHub issue: https://github.com/mastra-ai/mastra/issues/10331 šŸ” If you're experiencing an error, please provide a minimal reproducible example to help us resolve it quickly. šŸ™ Thank you @Mario for helping us improve Mastra! šŸ“ Created GitHub issue: https://github.com/mastra-ai/mastra/issues/10332 šŸ” If you're experiencing an error, please provide a minimal reproducible example to help us resolve it quickly. šŸ™ Thank you @Mario for helping us improve Mastra! šŸ“ Created GitHub issue: https://github.com/mastra-ai/mastra/issues/10333 šŸ” If you're experiencing an error, please provide a minimal reproducible example to help us resolve it quickly. šŸ™ Thank you @Mario for helping us improve Mastra!
Abhi Aiyer
Abhi Aiyer•3w ago
Hi @Mario, great question! A little about workflows in Mastra: with Mastra you define your workflow and it runs on what we like to call "execution engines". By default, we ship an execution engine that runs in a single process; if you use it, you'll need to deploy to an environment that can run long processes. We also support other execution engines:
* Inngest engine: https://mastra.ai/docs/workflows/inngest-workflow

Coming soon:
* Vercel
* Cloudflare
* Temporal
* Google Pub/Sub

And if people want to contribute engines, that's even better. This is a big area of development for us in 2026!
Inngest Workflow | Workflows | Mastra Docs
Inngest workflow allows you to run Mastra workflows with Inngest
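The default single-process engine mentioned above can be pictured like this: steps run sequentially inside one process, so the total runtime is the sum of the step runtimes, which is why the host must allow long-running execution. This is an illustrative sketch with hypothetical types, not Mastra's actual API:

```typescript
// Illustrative single-process "execution engine": each step awaits the
// previous step's output inside one process. If the process is killed
// (e.g. by a serverless timeout), the whole run is lost -- which is the
// limitation external engines like Inngest or queues work around.
type Step<I, O> = (input: I) => Promise<O>;

async function runWorkflow<T>(input: T, steps: Step<any, any>[]): Promise<any> {
  let current: any = input;
  for (const step of steps) {
    current = await step(current);
  }
  return current;
}
```

An external execution engine instead persists progress between steps, so each step can run as its own short invocation.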
Mario
MarioOP•2w ago
Thank you Abhi for the response, and apologies for the delayed reply from my side. I'll definitely check out Inngest as an engine. That being said: would you say it's best practice to use something like QStash to queue things like crawling and embedding generation for a RAG system? I watched the RAG podcasts too but haven't seen any specific details about this there. Thanks a ton! Cheers, Mario
Abhi Aiyer
Abhi Aiyer•2w ago
Hi Mario, I think it makes sense if your application has the scale to warrant it. If you want to be ahead of the curve, then queue systems like QStash are fine. In the future you should be able to get distributed workflows out of the box with Mastra to run these things.
arpitBhalla
arpitBhalla•2w ago
Hi Abhi, I'd love to contribute the Temporal engine; I'm using it at work.
Abhi Aiyer
Abhi Aiyer•2w ago
@arpitBhalla that would be cool!
Mario
MarioOP•2w ago
Thanks Abhi - much appreciated!