Shared compute - more cost effective
Having an individual resource allocation per database is inefficient and not cost effective. This is why we put all customer data in the same database instead of creating a database per customer. I understand that Neon can make more money with the current setup, but it makes more sense for customers to have a more efficient option. Vercel is doing something similar with Fluid Compute.
For example, if I have 4 servers running and each one is using 0.2 CPU, the 0.25 CPU minimum per compute means I will be billed 4 * 0.25 = 1 CPU. With shared compute, I would only be billed 4 * 0.2 = 0.8 CPU. In a more extreme example, where I'm running hundreds of computes across customers and/or agents, 200 servers each using 0.02 CPU would be billed 200 * 0.25 = 50 CPU instead of 200 * 0.02 = 4 CPU. This means that while Neon starts out as a very affordable option compared to traditional hosting, using its features to the fullest can become very expensive at scale.
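To make the comparison concrete, here's a small sketch of the two billing models. The 0.25 CU billing floor per compute and the idea that shared compute would bill only aggregate usage are assumptions taken from the numbers above, not Neon's actual pricing logic:

```ts
// Sketch: per-compute minimum billing vs. hypothetical pooled billing.
const MIN_CU_PER_COMPUTE = 0.25; // assumed billing floor per compute

// Current model: every compute is billed at least the minimum.
function billedPerCompute(usages: number[]): number {
  return usages.reduce((sum, u) => sum + Math.max(u, MIN_CU_PER_COMPUTE), 0);
}

// Hypothetical shared model: bill only the aggregate actual usage.
function billedPooled(usages: number[]): number {
  return usages.reduce((sum, u) => sum + u, 0);
}

console.log(billedPerCompute(Array(4).fill(0.2)).toFixed(2));    // "1.00"
console.log(billedPooled(Array(4).fill(0.2)).toFixed(2));        // "0.80"
console.log(billedPerCompute(Array(200).fill(0.02)).toFixed(2)); // "50.00"
console.log(billedPooled(Array(200).fill(0.02)).toFixed(2));     // "4.00"
```

The gap grows with the number of mostly-idle computes: the minimum dominates the bill once per-compute usage falls below the floor.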
harsh-harlequin•5mo ago
> we're putting all customer data in the same database instead of creating a database per customer.

Honestly, this is fine for most cases. The recommendation for creating a Neon project per tenant/customer is when you need:
1. Strict data isolation
2. Data residency requirements (e.g. some customers want their data stored in the EU while other customers are in a different region)
3. You're building a platform and would like to deploy individual resources for each customer (e.g. an agents platform). In this case you could pass the costs down to your customers (we have APIs for billing); see the first sketch below.

The downside of having multiple Postgres databases in the same project is that data recovery gets tricky. You wouldn't be able to instantly restore your data, since Neon recovers data at the project level, not the database level, so you would need to back up and restore manually (see the second sketch below).
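If you go the project-per-tenant route (point 3), a minimal sketch of creating one Neon project per customer over the HTTP API might look like the following. The endpoint and payload shape are my reading of the v2 API reference, so verify them at https://api-docs.neon.tech; `createTenantProject` and the naming scheme are made up for illustration:

```ts
// Sketch: one Neon project per tenant, assuming the v2 REST API.
const NEON_API_KEY = process.env.NEON_API_KEY!;

async function createTenantProject(tenantId: string) {
  const res = await fetch("https://console.neon.tech/api/v2/projects", {
    method: "POST",
    headers: {
      Authorization: `Bearer ${NEON_API_KEY}`,
      "Content-Type": "application/json",
    },
    // Assumed payload shape; check the API reference for all options.
    body: JSON.stringify({ project: { name: `tenant-${tenantId}` } }),
  });
  if (!res.ok) throw new Error(`Neon API error: ${res.status}`);
  return res.json(); // includes the new project's connection details
}
```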
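For the per-database recovery gap, a manual fallback is to dump each tenant database yourself and restore individual archives with pg_restore when needed. This is only a sketch: the endpoint host, role, and database names are placeholders, and it assumes pg_dump is on your PATH and PGPASSWORD (or a ~/.pgpass entry) supplies credentials:

```ts
import { execFileSync } from "node:child_process";

// Placeholder values; substitute your own project's endpoint and roles.
const host = "ep-example-123456.us-east-2.aws.neon.tech";
const user = "app_user";
const databases = ["tenant_a", "tenant_b", "tenant_c"];

for (const db of databases) {
  // pg_dump's custom format (-Fc) produces a per-database archive that
  // pg_restore can later load on its own, giving you the database-level
  // restore granularity that project-level recovery doesn't provide.
  execFileSync(
    "pg_dump",
    ["-Fc", "--host", host, "--username", user, "--dbname", db, "--file", `${db}.dump`],
    { stdio: "inherit" },
  );
}
```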