My Traffic is Increasing - What's The Best Way To Scale on Fly

My Wasp app has started to grow significantly in terms of traffic and background processing. What's the best way for me to scale? And what should I prioritise scaling - db, server or client? I am deploying on fly.io
kapa.ai · 5w ago
Hi @CamBlackwood. I'm kapa.ai, your friendly support bot. I will try to answer your question by searching through the knowledge base. :wave:

Scaling your Wasp app on Fly.io involves considering the three main components: the server (Node.js backend), the client (static frontend), and the PostgreSQL database. Here's how to approach scaling, based on the knowledge sources:

1. Prioritize Scaling the Database. The database is often the bottleneck as your app grows, especially with increased background processing and concurrent users. Fly.io allows you to provision a managed PostgreSQL database; monitor its performance (CPU, memory, connections) and scale it up (more resources, read replicas, etc.) as needed. Ensuring your database can handle the load is critical for both app responsiveness and background job reliability. Wasp apps are designed to use PostgreSQL in production, and you can deploy and manage your database directly on Fly.io or use an external provider if needed (Deploying Manually).

2. Scale the Server (Backend). Your Node.js server handles API requests and background jobs. On Fly.io, you can scale the number of server instances (VMs) to handle more concurrent requests. However, Wasp currently does not support independent, horizontal scaling of background job workers: jobs and server logic share the same process and CPU. This means scaling the server will help with both API traffic and job throughput, but you should avoid running CPU-intensive jobs that could impact request handling (pg-boss considerations).

3. Scale the Client (Frontend). The client is a static app and is typically the easiest to scale. You can deploy it on any static hosting provider (e.g. Netlify), and most static hosts handle scaling automatically. If you're serving the client from Fly.io, you can increase the number of instances or use a CDN to improve global performance (Deployment Overview).

4. Monitoring and Redeployment. Monitor all three components for performance issues, and use Fly.io's dashboard and logs to track resource usage. When you make changes or need to redeploy, you can use the Wasp CLI for a streamlined process (Deploying with the Wasp CLI).

Summary of priorities:
- First, scale the database if you notice slow queries or connection issues.
- Next, scale the server to handle more API requests and background jobs.
- Finally, ensure the client is served efficiently, but this is rarely a bottleneck.

If you need more granular control over background job scaling, be aware that Wasp's current job system (pg-boss) does not yet support running as separate worker processes or independent scaling (pg-boss considerations). If you have specific scaling needs or run into limitations, consider reaching out to the Wasp community for advice on advanced setups.
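For concreteness, here is a rough sketch of what scaling the server app with flyctl can look like. The app name is a placeholder and exact flags can vary between flyctl versions, so check `fly help scale` before running anything:

```sh
# Vertical scaling of the Wasp server app (app name is a placeholder):
# a bigger VM preset and more RAM per instance.
fly scale vm shared-cpu-2x --app my-wasp-server
fly scale memory 1024 --app my-wasp-server

# Horizontal scaling: run more server instances behind Fly's proxy.
fly scale count 2 --app my-wasp-server

# See what is currently allocated.
fly scale show --app my-wasp-server
```

Keep the pg-boss caveat above in mind: jobs run inside the same server process, so scaling the server also scales job throughput rather than the jobs being scaled on their own.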
martinsos · 4w ago
Nice :D! Quick human answer as an addition.

The easiest option is to scale vertically, which means stronger/bigger machines and databases. At some point it can make sense to scale horizontally, but that is more complex and you probably don't need it yet.

An additional solution, on top of those two, is to extract some heavy server workloads into a separate microservice that you deploy on your own and have the Wasp server communicate with. There is no special support for this in Wasp, but there are also no obstacles: that microservice just becomes an external service you call from the Wasp server via HTTP or whatever, probably authenticated via some secret that both the microservice and the Wasp server know (a rough sketch of what that call could look like is below). But this makes sense only if you actually have such workloads.

As for what to scale, client, server, or DB, that is very hard to say; it really depends a lot on what your app is doing. The first step is to measure: see where you are hitting limits. Take a look at the metrics on your deployment provider: CPU usage, memory, DB capacity, etc. That said, you are least likely to need to scale the client, since it just serves static files and doesn't do much crunching; you would need a lot of traffic for that, like a lot. The server, on the other hand, usually does more crunching, so it is most likely what you will want to scale. And the DB almost certainly depends on the amount of data you are storing, so it's hard to say.

Do you have more info on what is becoming slow for you, or what limits you are reaching?
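To make the microservice idea concrete, here is a minimal sketch of what calling such a service from Wasp server code could look like. The file path, URL, env var names, and payload shape are all hypothetical, and it assumes Node 18+ for the built-in fetch:

```ts
// src/server/heavyWork.ts -- hypothetical helper in a Wasp project.
// Calls an external microservice that does the heavy crunching and
// authenticates with a shared secret that both sides know.
//
// Assumed env vars on the Wasp server (e.g. set via `fly secrets set`):
//   HEAVY_SERVICE_URL    - base URL of the microservice
//   HEAVY_SERVICE_SECRET - shared secret the microservice verifies

export async function runHeavyWork(payload: { userId: string; input: string }) {
  const res = await fetch(`${process.env.HEAVY_SERVICE_URL}/process`, {
    method: "POST",
    headers: {
      "Content-Type": "application/json",
      // The microservice should reject requests missing this secret.
      Authorization: `Bearer ${process.env.HEAVY_SERVICE_SECRET}`,
    },
    body: JSON.stringify(payload),
  });

  if (!res.ok) {
    throw new Error(`Heavy-work service responded with ${res.status}`);
  }
  return res.json();
}
```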
CamBlackwood (OP) · 4w ago
Thanks Martin! The main bottleneck seems to be my DB currently; it keeps crashing. I was thinking I'd specifically need to add more RAM, as I'm getting "ran out of memory" issues.
martinsos · 4w ago
Ok, so that sounds like RAM (memory), not disk space, hm, interesting! I mean, you can certainly just get a bigger server for running that database, yeah, but I would first try to figure out what is causing it. It sounds like you have some kind of operation that eats up a (too) big amount of memory, and if you know what it is, you can potentially look into resolving it directly. I would look at memory spikes in general, but also at the moment right before a crash: what do the logs say, and what was the DB executing? (See the sketch below for a couple of queries that help with that.)
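A couple of stock Postgres queries are handy for that kind of digging; run them in psql against the database. The EXPLAIN target below is a made-up example query, so swap in the real one:

```sql
-- What the database is executing right now, oldest queries first.
SELECT pid,
       state,
       now() - query_start AS running_for,
       left(query, 100)    AS query
FROM pg_stat_activity
WHERE state <> 'idle'
ORDER BY query_start;

-- Plan for a suspect query, including buffer usage; memory-hungry steps
-- (big sorts, hash joins) show up here. The query itself is hypothetical.
EXPLAIN (ANALYZE, BUFFERS)
SELECT * FROM "Creator" ORDER BY "createdAt" DESC LIMIT 50;
```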
CamBlackwood (OP) · 4w ago
Yeah, that's what I was thinking. Interestingly, the GUI said it has 1 GB, but when I ran the CLI it said only 256 MB (no wonder it was crashing lol). I have doubled it and there have been no crashes since. It seemed to crash on a semi-complex query being executed a few times a minute.
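For anyone hitting the same GUI-vs-CLI mismatch: the Machines CLI reports what each VM actually has and can resize it. The app name and machine ID below are placeholders, and the exact flag names are worth double-checking against `fly machine update --help` for your flyctl version:

```sh
# List the Postgres app's machines with their actual CPU/RAM allocation.
fly machine list --app my-wasp-db

# Resize one machine's memory (machine ID is a placeholder).
fly machine update 3d8d9014b32d89 --vm-memory 1024 --app my-wasp-db
```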
martinsos · 4w ago
Hm ok, GUI vs CLI disagreeing is weird! Ok great, you could maybe still look into potentially optimizing that query or throwing some indexes at it to make it faster, but yeah, giving this DB some more memory sounds like a safe bet hah 😄
CamBlackwood (OP) · 4w ago
Yeah, indexes are def on my to-do list, especially now that I have 10,000 creators indexed on my platform.
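As a concrete (made-up) example of that kind of index: if the semi-complex query filters creators by platform and sorts by follower count, something like the statement below would help. Table and column names here are hypothetical; in a Wasp/Prisma setup the tidier route is the equivalent `@@index([...])` on the model in schema.prisma plus a migration, so the index stays tracked with the rest of the schema.

```sql
-- Hypothetical index for lookups of creators by platform, ordered by followers.
-- CONCURRENTLY avoids blocking writes while it builds (run it outside a transaction).
CREATE INDEX CONCURRENTLY IF NOT EXISTS idx_creator_platform_followers
  ON "Creator" (platform, "followerCount" DESC);
```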
