Getting "Max client connections reached" with only 25/60 connections

I'm using Supabase in transaction mode with Prisma in a serverless Next.js application hosted on Vercel. My database URL is:

postgres://postgres.<ref>:<password>@aws-0-us-east-1.pooler.supabase.com:6543/postgres?pgbouncer=true&connect_timeout=20&connection_limit=1&pool_timeout=20

As a test, I sent 1k requests to my database, and I got a "Max client connections reached" error. After I get this, every single API route returns the same error. Isn't the entire point of Supabase that it would handle cases like this? And how come it says I still have 35 open connections, yet says max clients reached?
(screenshot attached)
69 Replies
Socal
Socal2y ago
Did you already read through this? https://supabase.com/partners/integrations/prisma . Why are you specifying connect_timeout and pool_timeout in your connection string? Is your connection pooler setting in Supabase set up for transaction or session?
Supabase
Prisma | Works With Supabase
Next-generation Node.js and TypeScript ORM.
jakeleventhal
jakeleventhalOP2y ago
Yes, I read through that. I added those timeouts just for extra timeout length, but either way that shouldn't make a difference. The pooler setting is transaction mode.
Socal
Socal2y ago
Prisma
Best practice for instantiating PrismaClient with Next.js
jakeleventhal
jakeleventhalOP2y ago
Yes, it was working fine with Neon DB. I wanted to try out Supabase.
jakeleventhal
jakeleventhalOP2y ago
Seems the Supavisor pooler just isn't working as expected -> https://github.com/orgs/supabase/discussions/17817. Lots of people seem to have issues here.
GitHub
PGBouncer and IPv4 Deprecation · supabase · Discussion #17817
We recently announced Supavisor, our new connection pooler. Supavisor is now enabled for all projects created on or after Wednesday September 27th. All existing projects will have Supavisor enabled...
Socal
Socal2y ago
The number of connections is not a Supabase thing; it's something in your app that is set up incorrectly. Your connection is using PgBouncer, not Supavisor. That's the pgbouncer=true part.
garyaustin
garyaustin2y ago
Socal, all of Supabase is Supavisor now. It requires the flag, though, to operate in the correct mode.
Socal
Socal2y ago
Well then, I still believe it's an issue with the Prisma client. Seems like these things have been popping up a lot more frequently.
garyaustin
garyaustin2y ago
Not denying that it might be.
jakeleventhal
jakeleventhalOP2y ago
I'm not sure how it could be the client if connection_limit is set to 1. I believe I may have narrowed my search:

(await Promise.allSettled(
  [...Array(1200)].map(async (v) => { await fetch('https://<myappurl>/api/app-data/get'); })
)).filter(({ status }) => status !== 'fulfilled');

When I run this via the console, I have no issues. The route that is problematic is actually contacting a Lambda function in AWS, still using the same connection URL with the Supabase pooler though.
Socal
Socal2y ago
The fetch calls here ('https://<myappurl>/api/app-data/get') are to a Lambda function that uses a Prisma client to fetch the data from Supabase?
jakeleventhal
jakeleventhalOP2y ago
The fetch calls are to a regular Vercel serverless function that makes a simple database query. The test in my application that I originally mentioned is to a Vercel serverless function that makes a call to an AWS Lambda function that interacts with the database. Both use the same connection string.
Socal
Socal2y ago
Do you have a load balancer set up in front of the AWS Lambda function? What I'm thinking is happening is that you're sending individual fetch calls to the Lambda function, and the load balancer is splitting them across multiple Lambda instances, creating a lot of unnecessary connections. It almost looks like you're trying to mimic a lot of users fetching data at the same time, which will create a lot of connections.
jakeleventhal
jakeleventhalOP2y ago
Well, scratch that: 'https://<myappurl>/api/app-data/get' is also returning max clients errors again. But no, no load balancer.
Socal
Socal2y ago
Can you show the code for the Lambda function? Are you creating the client within the function handler?
Socal
Socal2y ago
https://docs.aws.amazon.com/lambda/latest/dg/best-practices.html moving it outside the function handler is preferred. That way you're not creating a client/connection every time your function runs. This is similar to what is suggested by Next.js.
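The pattern in those AWS docs can be sketched like this; createDbClient() below is a hypothetical stand-in for new PrismaClient(), just to show that a warm container reuses the module-scope client:

```javascript
// Stand-in for `new PrismaClient()`; counts how many clients get created.
let clientCreations = 0;
function createDbClient() {
  clientCreations += 1;
  return { query: async (sql) => `result of: ${sql}` };
}

// Module scope: runs once per cold start, then is reused while the
// container stays warm.
const db = createDbClient();

// The handler itself only uses the shared client.
async function handler(event) {
  return db.query('SELECT 1');
}

// Simulate three invocations hitting the same warm container:
async function main() {
  await handler({});
  await handler({});
  await handler({});
  console.log(clientCreations); // 1: one client serves all three invocations
}

main();
```

If the client were instead created inside handler(), every invocation would open a fresh connection, which is exactly the behavior the AWS best-practices page warns against.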
jakeleventhal
jakeleventhalOP2y ago
It is created outside the function handler, in a globally scoped export from an internal database package:
import { PrismaClient } from '@prisma/client/app';

declare global {
  // eslint-disable-next-line vars-on-top, no-var
  var localPrisma: PrismaClient;
}

// Export the prisma client
const getPrismaClient = (): PrismaClient => {
  if (process.env.NODE_ENV === 'production') {
    return new PrismaClient({ datasourceUrl: process.env.POSTGRES_PRISMA_URL });
  }

  if (!global.localPrisma) {
    global.localPrisma = new PrismaClient({ datasourceUrl: process.env.POSTGRES_PRISMA_URL });
  }
  return global.localPrisma;
};

const prisma = getPrismaClient();
export default prisma;
Then I just import it: import prisma from '@myproject/database'. Here's a stripped-down (irrelevant code removed) version of the lambda:
import prisma, { ImageFileType } from '@artelo/database';
import { ImageUploadUtils } from '@artelo/process-image';
import { ImageFileConfig } from '@artelo/types';
import { UploadUtils } from '@artelo/utils';
import { GetObjectCommand, PutObjectCommand, S3Client } from '@aws-sdk/client-s3';
import { Handler, SNSEvent } from 'aws-lambda';
import { ReadableStream } from 'stream/web';

const processNewFile: Handler = async (event: SNSEvent) => {
  const { fileKey, ...fileDetails } = JSON.parse(event.Records[0].Sns.Message) as ImageFileConfig;
  const bucketName = process.env.S3_UPLOAD_BUCKET;

  const s3Client = new S3Client();
  const response = await s3Client.send(new GetObjectCommand({ Bucket: bucketName, Key: fileKey }));

  const mimeType = response.ContentType;
  const stream = response.Body.transformToWebStream();
  const image = await ImageUploadUtils.getJpegImage(stream as ReadableStream, mimeType);
  const { height, width } = await ImageUploadUtils.getImageDimensions(image);
  const previews = await ImageUploadUtils.makePreviews(image);

  // Set the previews in S3
  await Promise.all(
    previews.map((preview) =>
      s3Client.send(
        new PutObjectCommand({
          Body: preview.previewBuffer,
          Bucket: bucketName,
          ContentType: 'image/jpeg',
          Key: UploadUtils.getPreviewKey(fileDetails.id, fileDetails.userId, preview.size)
        })
      )
    )
  );

  await prisma.imageFile.create({
    data: {
      ...fileDetails,
      fileType: fileDetails.fileType as ImageFileType,
      height,
      width
    }
  });
};

export default processNewFile;
Socal
Socal2y ago
Can you try dropping all optional parameters from your query string in the Lambda and retry?
postgres://postgres.<ref>:<password>@aws-0-us-east-1.pooler.supabase.com:6543/postgres
Socal
Socal2y ago
this might be your issue
(screenshot attached)
jakeleventhal
jakeleventhalOP2y ago
Will give this a shot tomorrow. I actually found that it has nothing to do with the Lambda: I was able to reproduce the error just fine by only accessing my simple endpoint (no AWS involved). I was speaking to @Mihai Andrei and he was able to reproduce the same issue as me with Prisma:
I made some tests and it managed to handle 100 parallel requests. But when I tried 500, only 120 were successful and 380 failed.
FWIW, using Neon DB with a pooler URL I have no issues no matter how many requests I use with Promise.all:

postgresql://<app>:<password>@<dbname>-pooler.us-east-1.aws.neon.tech/neondb?sslmode=require?pgbouncer=true&connect_timeout=20&connection_limit=1&pool_timeout=20

I also just tried using your suggested plain URL for Supavisor (I had to add pgbouncer=true to get Prisma to play nicely) and I receive the same results: with a large number of requests it bricks the entire database and all functions cease to work properly.
Socal
Socal2y ago
Are you using the free tier of Supabase? I can't find documentation on Neon, but it looks like they just give you as many connections as you want because you are billed for usage. Supabase has restrictions on the number of concurrent connections based on your plan; you can also upgrade the number of concurrent connections as needed. The free tier on Supabase is 200 concurrent connections. While you only got up to 120, it may explain some of the weirdness you're seeing between Neon and Supabase.
jakeleventhal
jakeleventhalOP2y ago
I have 2 workspaces in Supabase, one free and one pro, both using the Micro cluster (I was using the free one for this test). Regardless of the connection limit though, I wouldn't expect this behavior, since to my knowledge Vercel wouldn't spin up >120 instances of the same serverless function. And assuming it did spin up 120 instances, since I have my pooler set to transaction mode, I wouldn't expect that to matter, since after the function code runs, the connection shouldn't persist. Yet when I do my test, the entire database is bricked with no available connections for several minutes, despite no database queries running.
Socal
Socal2y ago
I would agree with that. Postgres and a connection pooler do not open connections for no reason, though. Something in your code is opening those connections or keeping them open longer than they need to be.
jakeleventhal
jakeleventhalOP2y ago
I feel like this is the intended behavior of just about every ORM, though. There is no reason you would expect the ORM to detach the connection. In a serverless context, you want to maintain that connection until the function dies so that you minimize cold starts and having to reconnect to the database. It seems like the pooler ought to be handling this.
Socal
Socal2y ago
The connection pool isn't that smart: if a client connects, it needs to release the connection back to the pool. Prisma handles this for you automagically.
(screenshot attached)
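The acquire/release contract Socal is describing can be sketched with a toy pool. This is not Supavisor's or PgBouncer's implementation, just the idea that a connection which is never released starves later clients:

```javascript
// Toy pool: a connection must be released back to the pool, or later
// clients get "Max client connections reached".
class TinyPool {
  constructor(size) {
    // pre-open a fixed number of "connections"
    this.free = Array.from({ length: size }, (_, i) => ({ id: i }));
  }
  acquire() {
    const conn = this.free.pop();
    if (!conn) throw new Error('Max client connections reached');
    return conn;
  }
  release(conn) {
    this.free.push(conn);
  }
}

const pool = new TinyPool(2);

// Well-behaved clients release after each query, so 3 sequential
// clients happily share 2 connections.
for (let i = 0; i < 3; i++) {
  const conn = pool.acquire();
  // ...run query...
  pool.release(conn);
}

// Clients that hold their connections starve the pool.
const held = [pool.acquire(), pool.acquire()];
let failed = false;
try {
  pool.acquire();
} catch (err) {
  failed = true;
}
console.log(failed); // true: the pool is exhausted while `held` is never released
```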
jakeleventhal
jakeleventhalOP2y ago
Yeah, but see "when the Node.js process ends" (specifically #5 in my example below). Wouldn't that imply the following, in a case where I have a serverless function that makes a single DB query and returns the data:
1. My function is cold.
2. I invoke the serverless function.
3. Vercel spins up an instance of this function; the function then makes a DB connection.
4. The function receives my HTTP request, runs the DB query, and returns the data.
5. My function is still hot and ready to serve requests for another 10-15m or so (the DB connection is maintained).
Socal
Socal2y ago
#3 is only when you're using a global client in your application and only one copy of your application is running. Also, a spike in traffic like the one you're trying to mimic can spin up additional serverless functions to handle scale, as depicted here:
(screenshot attached)
garyaustin
garyaustin2y ago
I believe you can upgrade and downgrade your Pro compute for hours/day and only be billed for that time. You might double-check that. But if so, upgrade to Small and see what happens; the client limit goes from 200 to 1000.
jakeleventhal
jakeleventhalOP2y ago
How does this screenshot (one from the original post) factor into this understanding? I can give it a shot on the Pro plan and see how that works; my connection limit was originally set to 1. Also, regarding this: what is the benefit of transaction mode vs session mode if it is maintaining the connection for the duration of the function being alive either way?
Socal
Socal2y ago
If you can set up a minimal example repo of what you're dealing with, I'll take a look and see what's causing this.
jakeleventhal
jakeleventhalOP2y ago
Tried on a Small instance; same issue with 500 concurrent requests. Will do this.
garyaustin
garyaustin2y ago
Just to clarify it died at 160 like before?
jakeleventhal
jakeleventhalOP2y ago
(screenshot attached)
jakeleventhal
jakeleventhalOP2y ago
at time of error^
(screenshot attached)
Socal
Socal2y ago
Looking at that, it looks like the connections aren't going through PgBouncer.
garyaustin
garyaustin2y ago
I was just looking and I think they are postgres
garyaustin
garyaustin2y ago
(screenshot attached)
garyaustin
garyaustin2y ago
Initial pool connections to the database is 10.
Socal
Socal2y ago
if they're not using pgbouncer anymore why even have it connected?
garyaustin
garyaustin2y ago
The way I read it, the default for new instances is now Supavisor; the dropping of PgBouncer is out in Jan sometime.
Mihai Andrei
Mihai Andrei2y ago
I think Jake is using the Supavisor URL. From some ticket I discussed, I think Supabase still uses PgBouncer for other internal things like storage; that's why you see it there in the list.
jakeleventhal
jakeleventhalOP2y ago
Took forever, but I created a minimal repro. https://supavisor-test-q1jahiu77-jakeleventhal.vercel.app/ is the live URL. The repo is private and invite-only; please send your GH username and I will add you. I hardcoded my DB URL and my Vercel stuff for my personal org (IDC as long as it is a private repo).
(screenshot attached)
jakeleventhal
jakeleventhalOP2y ago
https://www.prisma.io/blog/serverless-database-drivers-KML1ehXORxZV
Could it be that the solution is just that, without a serverless driver, you will always run into this problem? When using Neon, I am using the Neon serverless driver. It could be that the "correct" solution is that if I want to use Prisma + Supabase, I need to use something like Prisma Accelerate.
Prisma
Support for Serverless Database Drivers in Prisma ORM Is Now in Pre...
Prisma support for JavaScript-based serverless database drivers is now in Preview, enabling you to connect to Neon and PlanetScale via HTTP and WebSockets.
garyaustin
garyaustin2y ago
@jakeleventhal you should set up your minimal case and repo for Supabase to look at in the Supavisor repository. We can't do much with it as users unless it turns out the issue is on your side.
jakeleventhal
jakeleventhalOP2y ago
I just made it public (removed keys) and created the following issue: https://github.com/supabase/supavisor/issues/233
GitHub
Supavisor doesn't handle prisma properly · Issue #233 · supabase/su...
Bug report I confirm this is a bug with Supabase, not with my own application. [z] I confirm I have searched the Docs, GitHub Discussions, and Discord. Describe the bug Supabase doesn't pool co...
Socal
Socal2y ago
Your Prisma client isn't being instantiated on globalThis. This is how it should look:
import { PrismaClient } from '@prisma/client'

const prismaClientSingleton = () => {
  return new PrismaClient({
    datasourceUrl:
      'postgres://postgres.P:PASSWORD@aws-0-us-east-1.pooler.supabase.com:6543/postgres?pgbouncer=true&connection_limit=1'
  })
}

declare global {
  var prisma: undefined | ReturnType<typeof prismaClientSingleton>
}

const prisma = globalThis.prisma ?? prismaClientSingleton()

export default prisma

if (process.env.NODE_ENV !== 'production') globalThis.prisma = prisma
jakeleventhal
jakeleventhalOP2y ago
It is in my actual repo, but that is only relevant for local dev; note the NODE_ENV !== 'production' check. In production, it will just return the singleton value anyway. The reason they recommend the singleton pattern is that without it, in local dev you will get dozens of clients created, which is unnecessary. So I purposefully excluded that logic in my repro to simplify the code.
Socal
Socal2y ago
how many records are in that AppData table?
jakeleventhal
jakeleventhalOP2y ago
1
Socal
Socal2y ago
What's the script you're running against this API? ***n/m, found it
Socal
Socal2y ago
(screenshot attached)
jakeleventhal
jakeleventhalOP2y ago
Localhost will never recreate this, because you are guaranteed to have exactly one instance of the serverless func; it necessarily has to be deployed. You'll never get >1 func spooled up locally. Maybe that behavior is overridable.
garyaustin
garyaustin2y ago
Just curious, is the Pooler log in the dashboard showing anything?
jakeleventhal
jakeleventhalOP2y ago
I just ran the test again with 500 concurrent queries; 50 failed. These are the pooler logs:
jakeleventhal
jakeleventhalOP2y ago
notably, there are only 2 unique instance_id values
garyaustin
garyaustin2y ago
Does not reveal anything new. My conclusion from that is that the instance ID corresponds to the IP of the Supavisor server you are hitting.
jakeleventhal
jakeleventhalOP2y ago
@garyaustin would you expect that running 500 concurrent queries with prisma should be something supavisor can easily handle without a problem?
garyaustin
garyaustin2y ago
According to their announcements and info, it should, IF you are not exceeding the fixed connections to it (which has only some relation to the connections to the database). But if you burst connections to it faster than they can finish coming back from the database, then, as we found in an issue/answer, it generates the error you see about too many connections. It would be nice to see the total simultaneous connections in some instrumentation (and there may be such a thing) to make sure it is not erroring too soon on connections, or where it actually is. As is, the documentation implies you exceeded 200 connections for free/basic and 1000 for the next-level Pro (if indeed the 1000 setting propagated immediately on an upgrade of CPU).
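One way to avoid the kind of burst garyaustin describes is to cap client-side concurrency before the requests ever reach the pooler, instead of firing 500 fetches at once with Promise.allSettled. A minimal sketch; the task function here is a stand-in for the real fetch to the API route:

```javascript
// Run `task` over `items` with at most `limit` tasks in flight at once.
// Also records the concurrency high-water mark so the cap can be verified.
async function mapWithLimit(items, limit, task) {
  const results = new Array(items.length);
  let next = 0;      // index of the next item to claim
  let inFlight = 0;  // currently running tasks
  let peak = 0;      // high-water mark

  async function worker() {
    while (next < items.length) {
      const i = next++; // synchronous claim: no race in single-threaded JS
      inFlight += 1;
      peak = Math.max(peak, inFlight);
      results[i] = await task(items[i]);
      inFlight -= 1;
    }
  }

  // Start `limit` workers; each pulls the next item as it finishes one.
  await Promise.all(Array.from({ length: limit }, worker));
  return { results, peak };
}

async function demo() {
  // fakeFetch stands in for fetch('https://<myappurl>/api/app-data/get')
  const fakeFetch = (i) => new Promise((resolve) => setTimeout(() => resolve(i), 2));
  const { peak } = await mapWithLimit([...Array(500).keys()], 20, fakeFetch);
  console.log(peak); // never exceeds 20 concurrent "connections"
}

demo();
```

With a cap of 20, a burst of 500 requests presents the pooler with at most 20 simultaneous clients rather than 500, which stays well under the 200-connection free-tier limit discussed above.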
jakeleventhal
jakeleventhalOP2y ago
Yeah, 1k connections on Pro (Small) had the same issues with the same number of concurrent queries. I'll be interested to see what the Supavisor team says.
Socal
Socal2y ago
I tested it with the Supabase client, just because I wanted to see if it would throw the same errors as Prisma:
import { NextApiRequest, NextApiResponse } from "next";

import { createClient } from "@supabase/supabase-js";

const supabase = createClient(
  "https://PROJECT.supabase.co",
  SERVICE_ROLE_KEY
);

export default async (req: NextApiRequest, res: NextApiResponse) => {
  let { data: AppData, error } = await supabase.from("AppData").select("*");
  return res.status(200).send(AppData);
};
Socal
Socal2y ago
Still having the same issues with Prisma, though. Not really clear what it's doing; there's something in the way Prisma is interacting with Supavisor that isn't set up optimally. Postgres connections were only at 9 during the test with Prisma, which is in line with the requests going to Supavisor.
(screenshot attached)
jakeleventhal
jakeleventhalOP2y ago
Well, isn't the Supabase client fundamentally different, since it communicates via HTTP like the Prisma serverless drivers?
Socal
Socal2y ago
Were you using the Neon serverless driver in your test on Neon? Makes a lot more sense now: https://www.prisma.io/blog/serverless-database-drivers-KML1ehXORxZV. You can wait for Prisma to release a serverless driver for Supabase. I didn't look at why they should be different between Postgres providers, though.
garyaustin
garyaustin2y ago
The clients on Supabase use the HTTP REST API, PostgREST. That has its own built-in pooler and uses direct ports to the database.
jakeleventhal
jakeleventhalOP2y ago
I just re-ran a test with Neon; the error-free results are with the serverless drivers enabled.
(screenshot attached)
jakeleventhal
jakeleventhalOP2y ago
I tested again with Neon (WITHOUT serverless drivers), and large numbers of requests give me errors. Interestingly, the errors are not about max connections. The errors say:
ConnectorError(ConnectorError { user_facing_error: None, kind: QueryError(PostgresError { code: "42P05", message: "prepared statement \"s3\" already exists", severity: "ERROR", detail: None, column: None, hint: None }), transient: false })
Though I imagine it's the same thing. So the outcome I'm seeing is basically that Prisma will not work well with Supabase for spikes in serverless traffic, unless Supabase comes out with a driver that allows Prisma/Drizzle/etc. to function similarly to supabase-js and communicate via PostgREST.
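For reference, the serverless-driver setup mentioned above roughly looks like this with Prisma's driver adapters (in Preview at the time of this thread). Package names follow the Prisma/Neon driver-adapter announcement linked earlier; the exact API may have changed since, so treat this as a configuration sketch rather than a drop-in:

```prisma
// schema.prisma: enable the driver-adapters Preview feature
generator client {
  provider        = "prisma-client-js"
  previewFeatures = ["driverAdapters"]
}

datasource db {
  provider = "postgresql"
  url      = env("DATABASE_URL")
}
```

```typescript
// Client init: route Prisma through the Neon serverless driver
// instead of a direct TCP connection to the pooler.
import { Pool } from '@neondatabase/serverless';
import { PrismaNeon } from '@prisma/adapter-neon';
import { PrismaClient } from '@prisma/client';

const pool = new Pool({ connectionString: process.env.DATABASE_URL });
const adapter = new PrismaNeon(pool);
const prisma = new PrismaClient({ adapter });

export default prisma;
```

This is the Neon-specific path; as of this thread there was no equivalent adapter for Supavisor, which is the gap Jake is describing.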
Mihai Andrei
Mihai Andrei2y ago
Hello! Regarding this: I have a project on the Pro plan, but I see a maximum of 60 connections. Is there any way to upgrade it?
garyaustin
garyaustin2y ago
CPU upgrades get more connections. Those are direct connections not pooler connections.
