Getting "Max client connections reached" with only 25/60 connections
I am using supabase with transaction mode with prisma in a serverless Next.js application hosted via Vercel. My database URL is
postgres://postgres.<ref>:<password>@aws-0-us-east-1.pooler.supabase.com:6543/postgres?pgbouncer=true&connect_timeout=20&connection_limit=1&pool_timeout=20
As a test, I sent 1k requests to my database, and I got a "Max client connections reached" error. After I get this, every single API route returns the same error. Isn't the entire point of Supabase that it would handle cases like this? And how come it says I have 35 open connections still, yet says max clients reached?
69 Replies
did you already read through this? https://supabase.com/partners/integrations/prisma
why are you specifying connect_timeout and pool_timeout in your connection string?
Is your connection pooler setting in supabase set up for transaction or session?
yes, i read through that. i added those timeouts just for extra timeout length, but either way that shouldn't make a difference. the pooler setting is transaction mode
is prisma setup in your nextjs app as recommended here? https://www.prisma.io/docs/orm/more/help-and-troubleshooting/help-articles/nextjs-prisma-client-dev-practices
Prisma
Best practice for instantiating PrismaClient with Next.js
yes
it was working fine with neon db. i wanted to try out supabase
seems the supavisor pooler just isn't working as expected -> https://github.com/orgs/supabase/discussions/17817 lots of people seem to have issues here
GitHub
PGBouncer and IPv4 Deprecation · supabase · Discussion #17817
We recently announced Supavisor, our new connection pooler. Supavisor is now enabled for all projects created on or after Wednesday September 27th. All existing projects will have Supavisor enabled...
the number of connections is not a supabase thing. it's something in your app that is set up incorrectly
your connection is using pgbouncer, not supavisor. That's the pgbouncer=true part
Socal, all supabase is supavisor now. It requires the flag though to operate in the correct mode.
Well then, I still believe it's an issue with the prisma client. seems like these things have been popping up a lot more frequently
Not denying that it might be.
i'm not sure how it could be the client if connection_limit is set to 1
i believe i may have narrowed my search
```javascript
(await Promise.allSettled(
  [...Array(1200)].map(async () => {
    await fetch('https://<myappurl>/api/app-data/get');
  })
)).filter(({ status }) => status !== 'fulfilled');
```
when i run this via the console, i have no issues
the route that is problematic actually is contacting a lambda func in AWS. still using the same connection url with the supabase pooler though
the fetch calls here ('https://<myappurl>/api/app-data/get') are to a lambda function that uses a prisma client to fetch the data from supabase?
the fetch calls are to a regular vercel serverless function that makes a simple database query
the test in my application that i originally mentioned is to a vercel serverless function that makes a call to an AWS lambda function that interacts with the database
both use the same connection string
do you have a load balancer set up in front of the aws lambda function?
What i'm thinking is happening is that you're sending individual fetch calls to the lambda function, and the load balancer is splitting them across multiple lambda instances, creating a lot of unnecessary connections.
This almost looks like you're trying to mimic a lot of users fetching data at the same time, which will create a lot of connections.
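one way to test that theory, sketched below: cap how many requests are in flight at once on the client side and see whether the errors go away. `runLimited` is an illustrative helper written for this sketch, not a library API.

```javascript
// Minimal concurrency limiter: run `tasks` with at most `limit` in flight.
// `runLimited` is an illustrative helper, not a library API.
async function runLimited(tasks, limit) {
  const results = [];
  let next = 0;
  async function worker() {
    // Each worker pulls the next unclaimed task off the shared queue.
    while (next < tasks.length) {
      const i = next++;
      results[i] = await tasks[i]().then(
        (value) => ({ status: 'fulfilled', value }),
        (reason) => ({ status: 'rejected', reason }),
      );
    }
  }
  // Spawn `limit` workers, so at most `limit` tasks run concurrently.
  await Promise.all(
    Array.from({ length: Math.min(limit, tasks.length) }, worker),
  );
  return results;
}
```

with 1200 fetches capped at, say, 20 in flight, the pooler only ever sees ~20 client connections instead of a burst of 1200.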
well, scratch that
'https://<myappurl>/api/app-data/get' is also returning max clients errors again
but no, no load balancer
can you show the code for the lambda function? Are you creating the client within the function handler?
https://docs.aws.amazon.com/lambda/latest/dg/best-practices.html
moving it outside the function handler is preferred. that way you're not creating a client / connection every time your function is run. This is similar to what is being suggested by next.js
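roughly what that best practice looks like (a sketch; `FakeDbClient` is a stand-in for `new PrismaClient()` so the snippet runs without @prisma/client):

```javascript
// Sketch of the Lambda best practice: create the client once at module
// scope so warm invocations reuse it. FakeDbClient is a stand-in for
// PrismaClient; a real handler would import @prisma/client instead.
let constructions = 0;
class FakeDbClient {
  constructor() { constructions++; }
  async query() { return { rows: 1 }; }
}

// Created once per container, NOT once per invocation.
const db = new FakeDbClient();

// In a real Lambda this function would be exported as the handler.
async function handler(event) {
  const data = await db.query();
  return { statusCode: 200, body: JSON.stringify(data) };
}
```

calling the handler repeatedly reuses the one module-scope client; only a cold start constructs a new one.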
Best practices for working with AWS Lambda functions - AWS Lambda
Best practices for using AWS Lambda.
it is created outside the function handler in a globally scoped export from an internal database package
then i just import
import prisma from '@myproject/database'
stripped-down version of the lambda (irrelevant code removed)
can you try dropping all optional parameters from your query string in the lambda and retrying?
this might be your issue

will give this a shot tomorrow
i actually found that it has nothing to do with the lambda. I was able to reproduce the error just fine by only accessing my simple endpoint (no AWS involved)
i was speaking to @Mihai Andrei and he was able to reproduce the same issue as me with prisma
I made some tests. It managed to handle 100 parallel req, but when I tried 500, only 120 were successful and 380 failed
fwiw using neon db with a pooler url i have no issues no matter how many requests i use with promise.all
postgresql://<app>:<password>@<dbname>-pooler.us-east-1.aws.neon.tech/neondb?sslmode=require&pgbouncer=true&connect_timeout=20&connection_limit=1&pool_timeout=20
just also tried using your suggested plain url for supavisor (i had to add pgbouncer=true to get prisma to play nicely) and i receive the same results. with a large number of requests it bricks the entire database and all functions cease to function properly.
Are you using the free tier of supabase? I can't find documentation on neon, but it looks like they just give you as many connections as you want because you are just billed for usage. Supabase has restrictions on # of concurrent connections based on your plan. You can also upgrade # of concurrent connections as needed
free tier on supabase is 200 concurrent connections. while you only got up to 120, it may explain some of the weirdness you're seeing between neon and supabase
i have 2 workspaces in supabase, one free and one pro, both using the micro cluster (was using the free one for this test). regardless of the connection limit though, i wouldn't expect this behavior, since to my knowledge vercel wouldn't spin up >120 instances of the same serverless function
and assuming it did spin up 120 instances, since i have my pooler setting as
transaction
i wouldn't expect that to matter since after the function code runs, the connection shouldn't persist. yet when i do my test, the entire database is bricked with no available connections for several minutes despite no database queries running
I would agree with that. Postgres and a connection pooler do not open connections for no reason though. Something in your code is opening those connections or keeping those connections open longer than they need to.
i feel like this is the intended behavior of just about every ORM though. there is no reason you would expect the ORM to detach the connection. in a serverless context, you want to maintain that connection until the function dies so that you minimize cold starts and having to reconnect to the database. it seems like the pooler ought to be handling this
the connection pool isn't that smart. if a client connects it needs to release the connection back to the pool. Prisma handles this for you automagically
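for anyone following along, "release the connection back to the pool" concretely means something like this toy pool (an illustrative sketch, not pgbouncer's or supavisor's actual implementation):

```javascript
// Toy connection pool: `size` slots, acquire() waits for a free slot,
// release() hands the slot to the next waiter. Real poolers (pgbouncer,
// Supavisor) do this server-side per transaction or per session.
class Pool {
  constructor(size) {
    this.free = size;
    this.waiters = [];
  }
  async acquire() {
    if (this.free > 0) {
      this.free--;
      return;
    }
    // No free slot: park this caller until someone releases.
    await new Promise((resolve) => this.waiters.push(resolve));
  }
  release() {
    const waiter = this.waiters.shift();
    if (waiter) waiter(); // hand the slot straight to a waiter
    else this.free++;     // or mark it free again
  }
}
```

if a client grabs a slot and never calls release(), the pool drains; that's the "connections held open with no queries running" failure mode.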

yeah but see the "when the Node.js process ends" (specifically #5 in my example below)
wouldn't that imply the following in a case where i have a serverless function that makes a single db query and returns the data:
1. my function is cold
2. i invoke the serverless function
3. vercel spins up an instance of this function, the function then makes a db connection
4. the function receives my HTTP request, runs the db query, and returns the data
5. my function is still hot and ready to serve requests for another 10-15m or so (the db connection is maintained)
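back-of-the-envelope for that model (assumption: each concurrent request that can't reuse a warm instance spins up a new one, and each warm instance pins `connection_limit` pooler connections until it is reaped):

```javascript
// Rough estimate of pooler client connections held during a burst:
// a burst of N requests can spin up to min(N, maxInstances) instances,
// and each warm instance holds `connectionLimit` client connections
// for its whole lifetime. This is a simplification, not Vercel's
// documented scaling behavior.
function estimatePoolerClients({ burst, maxInstances, connectionLimit }) {
  return Math.min(burst, maxInstances) * connectionLimit;
}
```

so even with connection_limit=1 per function, 500 concurrent requests can plausibly hold ~500 client connections, past the 200-client free-tier limit mentioned above.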
#3 is only when you're using a global client in your application and only 1 copy of your application is running. Also, a spike in traffic like the one you're trying to mimic can spin up additional serverless functions to handle scale, as depicted here

I believe you can upgrade and downgrade your pro compute for hours/day and only be billed for that time. You might double-check that. But if so, upgrade to small and see what happens. The client limit goes from 200 to 1000.
how does this screenshot (one from the original post) factor into this understanding?
i can give it a shot on the pro plan and see how that works
my connection limit was originally set to 1
also regarding this, what is the benefit of transaction mode vs session mode if it is maintaining the connection for the duration of the function being alive either way
If you can setup a minimal example repo of what you're dealing with I'll take a look and see what's causing this
tried on a small instance, same issue with 500 concurrent requests
will do this
Just to clarify it died at 160 like before?

at time of error^

looking at that it looks like the connections aren't going through pgbouncer
I was just looking and I think they are postgres

Initial pool connections to the database is 10.
if they're not using pgbouncer anymore why even have it connected?
The way I read it the default for new instances is now supavisor, the dropping of pgbouncer is out in Jan sometime.
I think Jake is using the supavisor url
From some ticket I discussed, I think Supabase still uses pgbouncer for other internal things like storage
That’s why you see it there in the list
took forever, but i created a minimal repro
https://supavisor-test-q1jahiu77-jakeleventhal.vercel.app/ is the live URL
repo is private and invite only. please send GH username and i will add. i hardcoded my db url and my vercel stuff for my personal org (idc as long as it is a private repo)

https://www.prisma.io/blog/serverless-database-drivers-KML1ehXORxZV could it be that the solution is just that without a serverless driver, you will always run into this problem? when using neon, i am using the neon serverless driver
it could be that the "correct" solution is that if i want to use prisma + supabase, i need to use something like prisma accelerate
Prisma
Support for Serverless Database Drivers in Prisma ORM Is Now in Pre...
Prisma support for JavaScript-based serverless database drivers is now in Preview, enabling you to connect to Neon and PlanetScale via HTTP and WebSockets.
@jakeleventhal you should set up your minimal case and repo for Supabase to look at in the Supavisor repository. As users we can't do much with it beyond establishing whether it is an issue on your side.
i just made it public (removed keys) and created the following issue: https://github.com/supabase/supavisor/issues/233
GitHub
Supavisor doesn't handle prisma properly · Issue #233 · supabase/su...
Bug report I confirm this is a bug with Supabase, not with my own application. [x] I confirm I have searched the Docs, GitHub Discussions, and Discord. Describe the bug Supabase doesn't pool co...
your prisma client isn't instantiating on globalThis. this is how it should look
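the pattern from the prisma doc linked earlier looks roughly like this (sketched with a stand-in `FakeClient` class so it runs without @prisma/client; the real version also gates the caching on `NODE_ENV !== 'production'`):

```javascript
// Sketch of Prisma's recommended Next.js singleton: cache the client on
// globalThis so dev hot-reload doesn't create a new client per reload.
// FakeClient is a stand-in for PrismaClient.
class FakeClient {}

function getClient() {
  // Reuse the cached instance if one already exists on globalThis.
  if (!globalThis.__dbClient) {
    globalThis.__dbClient = new FakeClient();
  }
  return globalThis.__dbClient;
}
```

every module that calls getClient() gets the same instance, so hot-reloading the module doesn't leak clients.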
it is in my actual repo, but that is only relevant for local dev. note the NODE_ENV !== production check
in production, it will just return the singleton value anyway. the reason they recommend the singleton pattern is that without it, in local dev you will get dozens of clients created, which is unnecessary
so i purposefully excluded that logic in my repro to simplify the code
how many records are in that AppData table?
1
what's the script you're running against this api? ***n/m found it

localhost will never recreate this because you are guaranteed to have exactly one instance of the serverless func
it necessarily has to be deployed
you'll never get >1 func spooled up
maybe that behavior is overridable
Just curious, is the Pooler log in the dashboard showing anything?
i just ran the test again with 500 concurrent queries
50 failed and these are the pooler logs
notably, there are only 2 unique instance_id values
Does not reveal anything new. The instance ID corresponds to the IP of the supavisor server you are hitting is my conclusion from that.
@garyaustin would you expect that running 500 concurrent queries with prisma should be something supavisor can easily handle without a problem?
According to their announcements and info it should, IF you are not exceeding the fixed connections to it (which has only some relation to the connections to the database). But if you burst connections to it before they can finish coming back from the database, then as we found in an issue/answer it generates the error you see about too many connections.
It would be nice to see the total simultaneous connections in some instrumentation (and there may be such a thing) to make sure it is not erroring too soon on connections, or to see where the limit actually is. As is, the documentation implies you exceeded 200 connections for free/basic and 1000 for the next level pro (if indeed the 1000 setting propagated immediately on an upgrade of CPU).
yeah, 1k connections on pro (small) had the same issues with the same number of concurrent queries. i'll be interested to see what the supavisor team says
i tested it with the supabase client just because I wanted to see if it would throw the same errors as prisma.
still having the same issues with prisma though. Not really clear what it's doing. There's something in the way prisma is interacting with supavisor that isn't set up optimally. Postgres connections were only at 9 during the test with prisma, which is in line with the requests going through supavisor.

well isn't the supabase client fundamentally different since it communicates via http like the prisma serverless drivers?
were you using the neon serverless driver in your test on neon?
makes a lot more sense now. https://www.prisma.io/blog/serverless-database-drivers-KML1ehXORxZV. You can wait for prisma to release a serverless driver for supabase. I didn't look at why they would differ between postgres providers though
The clients on Supabase use the http REST API PostgREST. That has its own built in pooler and uses direct ports to the database
i just re-ran a test with neon
the error-free results are with the serverless drivers enabled

i tested again with neon (WITHOUT serverless drivers) and large numbers of requests give me errors. interestingly, the errors are not about max connections. the errors say
though i imagine its the same thing
so the outcome i'm seeing is basically that prisma will not work well with supabase for spikes in serverless traffic unless supabase comes out with a driver that allows prisma/drizzle/etc to function similarly to supabase-js and communicate via postgrest
Hello! Regarding this: I have a project on the Pro plan, but i see a maximum of 60 connections. Is there any way to upgrade it?
CPU upgrades get more connections. Those are direct connections not pooler connections.