Neon · 2y ago
optimistic-gold

Increased duration in queries using Pool on Lambda

I have been playing more with the Neon Pool driver and found that on Vercel Serverless Functions (Lambda), queries take significantly longer. I'm using Drizzle for this test and wrote two setups to compare. 1) Edge-compatible Pool driver:
import { neonConfig, Pool } from "@neondatabase/serverless"
import { drizzle } from "drizzle-orm/neon-serverless"

if (!process.env.VERCEL_ENV) {
  neonConfig.wsProxy = (host) => `${host}:5433/v1`
  neonConfig.useSecureWebSocket = false
  neonConfig.pipelineTLS = false
  neonConfig.pipelineConnect = false
}

export const getDb = () => {
  const pool = new Pool({ connectionString: process.env.DATABASE_URL })
  return {
    db: drizzle(pool),
    [Symbol.asyncDispose]: async () => {
      console.log("disposing pool")
      return await pool.end()
    },
  }
}
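The `getDb` shape above leans on explicit resource management: `await using` calls `[Symbol.asyncDispose]` when the enclosing scope exits. A minimal, database-free sketch of the equivalent try/finally (here `dispose` is a hand-rolled stand-in for the symbol, and the fake in-memory pool is an assumption, so this runs on any Node version without a database):

```typescript
// Database-free sketch: `await using connection = getDb()` behaves
// roughly like the try/finally below. The disposer always runs when
// the scope exits, even if the body throws.
const events: string[] = []

const getDb = () => {
  const pool = { end: async () => { events.push("pool.end") } }
  return {
    db: {
      execute: async (q: string) => {
        events.push(`query:${q}`)
        return [{ one: 1 }]
      },
    },
    dispose: async () => {
      events.push("disposing pool")
      await pool.end()
    },
  }
}

const getData = async () => {
  const connection = getDb()
  try {
    return await connection.db.execute("SELECT 1 AS one")
  } finally {
    await connection.dispose() // what `await using` would do implicitly
  }
}

await getData()
console.log(events.join(" -> ")) // prints "query:SELECT 1 AS one -> disposing pool -> pool.end"
```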
2) Standard Drizzle adapter (postgres-js) for Lambda:
import { drizzle } from "drizzle-orm/postgres-js"
import postgres from "postgres"

export const db = drizzle(postgres(process.env.DATABASE_URL))
18 Replies
optimistic-gold (OP) · 2y ago
I made three API routes for testing. 1) A Cloudflare Workers route, where the average query time is around 45ms:

import { getDb } from "@/libs/db"
import { sql } from "drizzle-orm"
import { NextResponse } from "next/server"

export const runtime = "edge"

const getData = async () => {
  await using connection = getDb()
  const response = await connection.db.execute(sql`SELECT 1 AS one`)
  return response
}

export const GET = async () => {
  console.time("GET")
  const response = await getData()
  console.timeEnd("GET")
  return NextResponse.json(response)
}

2) An AWS Lambda function using the Neon Pool driver, where the average query time is around 350ms:
import { getDb } from "@/libs/db"
import { sql } from "drizzle-orm"
import { NextResponse } from "next/server"

export const dynamic = "force-dynamic"

const getData = async () => {
  await using connection = getDb()
  const response = await connection.db.execute(sql`SELECT 1 AS one`)
  return response
}

export const GET = async () => {
  console.time("GET")
  const response = await getData()
  console.timeEnd("GET")
  return NextResponse.json(response)
}
3) An AWS Lambda function using the postgres-js adapter for Neon, where the average time is 4ms:
import { NextResponse } from "next/server"
import { sql } from "drizzle-orm"

import { db } from "@/libs/drizzle"

export const dynamic = "force-dynamic"

export const GET = async () => {
  console.time("GET")
  const response = await db.execute(sql`SELECT 1 AS one`)
  console.timeEnd("GET")
  return NextResponse.json(response)
}
Note: this isn't a Neon cloud database issue. I ran the same test against Neon, Supabase, Railway and Render databases, and all of them show almost the same response times. Lambda with postgres-js: 4ms. Cloudflare Workers with Pool: 45ms. Lambda with Pool: 350ms.
harsh-harlequin · 2y ago
I'd like to replicate these. Is the 4ms an average? Seems wildly low! Best I've seen is maybe 6-9ms from Vercel IAD to us-east-1, but the average is probably in the teens. What region is the Lambda in, and what region is your Neon DB in? I have a benchmark tool I've been building, and here are some results from it. It uses the Neon serverless driver in HTTP mode:
platform_name platform_region neon_region query_times
vercel iad1 us-east-1.aws.neon.tech {95,64,35,19,20,21,11,30,19,23}
vercel iad1 us-east-2.aws.neon.tech {133,62,41,20,19,21,19,20,21,20}
fly lax us-west-2.aws.neon.tech {342,162,39,39,32,35,36,34,34,39}
optimistic-gold (OP) · 2y ago
4ms is the average using postgres-js and Vercel iad1. Also, I'm only measuring the DB call, not the whole request.
harsh-harlequin · 2y ago
Same. What region is your Neon database hosted in? Can you try measuring across 10 queries, perhaps? And setting the pool size to a fixed number, like 5? I'm thinking Pool has high initialisation overhead but will be fast for future queries (assuming the function is "hot").
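Fixing the pool size as suggested is a constructor option. An assumption here: the serverless `Pool` accepts `pg.Pool` options such as `max`, since it is documented as a drop-in replacement for node-postgres; this fragment is untested against a real database:

```typescript
import { Pool } from "@neondatabase/serverless"

// `max` caps how many clients the pool will open concurrently.
// Assumption: pg.Pool-compatible options are honored by the
// serverless driver's Pool.
const pool = new Pool({
  connectionString: process.env.DATABASE_URL,
  max: 5, // fixed pool size, as suggested above
})
```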
optimistic-gold (OP) · 2y ago
All databases were hosted in US East. I will run it and get back to you with the results. Okay, running with 10 queries and the pool size set to 5 in the first two setups: Lambda with postgres-js: 8.65ms. Cloudflare Workers with Pool: 69.34ms. Lambda with Pool: 550.43ms. Cloudflare Workers Pool code:
import { getDb } from "@/libs/db"
import { sql } from "drizzle-orm"
import { NextResponse } from "next/server"

export const runtime = "edge"

const getData = async () => {
  await using connection = getDb()
  const response = await connection.db.execute(sql`SELECT 1 AS one`)
  return response
}

const measureData = async () => {
  const start = performance.now()
  const response = await getData()
  const end = performance.now()
  return {
    response,
    time: end - start,
  }
}

export const GET = async () => {
  // Run it 10 times to get an average
  const promises = []
  for (let i = 0; i < 10; i++) {
    promises.push(measureData())
  }

  const results = await Promise.all(promises)

  // Calculate average time
  const time = results.reduce((acc, result) => acc + result.time, 0) / results.length

  return NextResponse.json({
    data: results,
    averageTime: time,
  })
}
Lambda Pool code:
import { getDb } from "@/libs/db"
import { sql } from "drizzle-orm"
import { NextResponse } from "next/server"

export const dynamic = "force-dynamic"

const getData = async () => {
  await using connection = getDb()
  const response = await connection.db.execute(sql`SELECT 1 AS one`)
  return response
}

const measureData = async () => {
  const start = performance.now()
  const response = await getData()
  const end = performance.now()
  return {
    response,
    time: end - start,
  }
}

export const GET = async () => {
  const promises = []
  for (let i = 0; i < 10; i++) {
    promises.push(measureData())
  }

  const results = await Promise.all(promises)
  const time = results.reduce((acc, result) => acc + result.time, 0) / results.length

  return NextResponse.json({
    data: results,
    averageTime: time,
  })
}
Lambda Postgres-js code:
import { NextResponse } from "next/server"
import { sql } from "drizzle-orm"

import { db } from "@/libs/drizzle"

export const dynamic = "force-dynamic"

const getData = async () => {
  const response = await db.execute(sql`SELECT 1 AS one`)
  return response
}

const measureData = async () => {
  const start = performance.now()
  const response = await getData()
  const end = performance.now()
  return {
    response,
    time: end - start,
  }
}

export const GET = async () => {
  // Run it 10 times to get an average
  const promises = []
  for (let i = 0; i < 10; i++) {
    promises.push(measureData())
  }

  const results = await Promise.all(promises)

  // Calculate average time
  const time = results.reduce((acc, result) => acc + result.time, 0) / results.length

  return NextResponse.json({
    data: results,
    averageTime: time,
  })
}
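The averaging done in these routes can be factored into small pure helpers. A median is also worth reporting alongside the mean, since one cold first query skews a mean heavily but barely moves a median. These helpers are hypothetical additions, not part of the original routes:

```typescript
// Hypothetical summary helpers for per-query timings.
const mean = (times: number[]): number =>
  times.reduce((acc, t) => acc + t, 0) / times.length

const median = (times: number[]): number => {
  const sorted = [...times].sort((a, b) => a - b)
  const mid = Math.floor(sorted.length / 2)
  return sorted.length % 2 ? sorted[mid] : (sorted[mid - 1] + sorted[mid]) / 2
}

// Illustrative run where the first query pays the cold-start cost:
const times = [394, 51, 53, 45, 51, 51, 49, 59, 50, 50]
console.log(mean(times), median(times)) // prints 85.3 51
```

The mean (85.3ms) is dominated by the single 394ms cold query, while the median (51ms) reflects the steady-state latency.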
harsh-harlequin · 2y ago
Oooh, I see: your getDb code creates a pool for every query. That's always going to be slow. The postgres.js time being so fast is impressive.
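One way to avoid that per-request pool creation on Lambda is to hoist the pool to module scope, so warm invocations of the same container reuse it. A sketch under that assumption (same env vars as earlier; untested against a real database):

```typescript
import { Pool } from "@neondatabase/serverless"
import { drizzle } from "drizzle-orm/neon-serverless"

// Created once per container; warm invocations reuse the same pool.
const pool = new Pool({ connectionString: process.env.DATABASE_URL })

export const db = drizzle(pool)
// Note: no pool.end() in the request path; the pool outlives each handler.
```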
optimistic-gold (OP) · 2y ago
I thought it was necessary
optimistic-gold (OP) · 2y ago
The example on the website does it.
harsh-harlequin · 2y ago
Can you share a repo with this code? I can't recreate these wild latency differences. I just tested a Next.js application locally, and all drivers are quite similar. Cold (warmed Neon, cold Vercel function):
postgres-js: {"times":[492,47,47,49,47,48,44,57,46,44],"overallTime":921}
neon http: {"times":[170,156,55,52,49,51,55,54,57,52],"overallTime":751}
neon client: {"times":[217,49,48,52,49,47,48,48,52,48],"overallTime":658}
neon pool: {"times":[394,51,53,45,51,51,49,59,50,50],"overallTime":853}
Warm (warmed Neon, warmed Vercel function):
postgres-js: {"times":[48,47,43,42,43,47,47,45,46,44],"overallTime":452}
neon http: {"times":[62,48,49,51,51,53,49,46,54,53],"overallTime":516}
neon client: {"times":[54,50,49,47,49,48,48,44,49,50],"overallTime":488}
neon pool: {"times":[48,53,47,50,54,49,49,51,49,49],"overallTime":499}
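One detail in these numbers: overallTime equals the sum of the ten query times (921 for the cold postgres-js run), which suggests this benchmark runs its queries sequentially, whereas the routes earlier in the thread fire all ten concurrently with Promise.all. A quick check of that arithmetic:

```typescript
// Cold postgres-js run from the benchmark table above.
const coldTimes = [492, 47, 47, 49, 47, 48, 44, 57, 46, 44]
const total = coldTimes.reduce((acc, t) => acc + t, 0)
console.log(total) // prints 921, matching the reported overallTime
```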
harsh-harlequin · 2y ago
Here's the code. I think your getDb function creating a new instance each time is the main issue. Hmm, maybe for Cloudflare this is accurate. For a Lambda it should be fine, since those are just containers running Node.js. If you're being very strict about a connection per request, then you might want to take the approach mentioned. I'll raise it internally. BTW, we have an article coming out this week about using Cloudflare Hyperdrive. That will probably help.
optimistic-gold (OP) · 2y ago
I made some changes to my code mirroring your Git repo. It works great for Lambda, but on Cloudflare some of the requests time out, as I mention here: #Weird timeout on Cloudflare Workers. I was just trying to make a hybrid driver that works on both Vercel runtimes.
harsh-harlequin · 2y ago
Gotcha. I'll need to experiment with Cloudflare then. It does sound like a connection per worker might be necessary on Cloudflare if those weird timeouts are occurring without any clear reason.
optimistic-gold (OP) · 2y ago
If you need help with the experiment, feel free to tag me or DM me.
harsh-harlequin · 2y ago
Will do! Thank you! And thank you for raising these sorts of queries. It's very helpful for us when we need to improve docs and understand interactions with other platforms etc.
harsh-harlequin · 2y ago
My initial assumption (without testing yet) is that creating a pool keeps the Cloudflare worker active and consumes connection limits, so future requests can't be processed, which results in timeouts, e.g. this limit is hit: https://developers.cloudflare.com/workers/platform/limits/#simultaneous-open-connections
harsh-harlequin · 2y ago
Just a guess right now though
optimistic-gold (OP) · 2y ago
I also tried creating the Pool outside and then connecting/ending inside the request handler, but it still throws an error:
import { neonConfig, Pool } from "@neondatabase/serverless"
import { drizzle } from "drizzle-orm/neon-serverless"

if (!process.env.VERCEL_ENV) {
  neonConfig.wsProxy = (host) => `${host}:5433/v1`
  neonConfig.useSecureWebSocket = false
  neonConfig.pipelineTLS = false
  neonConfig.pipelineConnect = false
}

const pool = new Pool({ connectionString: process.env.DATABASE_URL })
export const getDb = async () => {
  await pool.connect()
  return {
    db: drizzle(pool),
    [Symbol.asyncDispose]: async () => {
      return await pool.end()
    },
  }
}
Error: Cannot use a pool after calling end on the pool
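That error is expected: `pool.end()` shuts the pool down permanently, so a module-scope pool can only be ended once, and the second request then fails. If a connection per request is the goal, checking a client out and releasing it keeps the pool alive. A database-free sketch under that assumption (`withClient` and `PoolLike` are hypothetical names; the demo uses a fake pool so it can run anywhere):

```typescript
// Pool-agnostic helper: works with any pg-style pool exposing connect(),
// where the checked-out client exposes release().
type PoolLike = { connect(): Promise<{ release(): void } & Record<string, any>> }

export const withClient = async <T>(
  pool: PoolLike,
  fn: (client: any) => Promise<T>
): Promise<T> => {
  const client = await pool.connect()
  try {
    return await fn(client)
  } finally {
    // Return the client to the pool. Unlike pool.end(), release()
    // leaves the pool usable for later requests.
    client.release()
  }
}

// Database-free demo with a fake pool (assumption: no real driver needed):
let released = 0
const fakePool: PoolLike = {
  connect: async () => ({
    release: () => { released++ },
    query: async () => [{ one: 1 }],
  }),
}
const result = await withClient(fakePool, (c) => c.query("SELECT 1 AS one"))
console.log(result[0].one, released) // released is 1 after the call
```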