Hi, sorry if this is not the correct channel

Hi, sorry if this is not the correct channel... We use this stack: Drizzle + Hyperdrive + Neon for intensive processing. Neon is on its maximum configuration, with 4,000 direct connections (and 10,000 pooled). In production, we noticed that when we reached the 4,000 limit (according to the Neon dashboard), we started getting a Worker error: "Timed out while waiting for an open slot in the pool." Even after the system load had decreased, we still got the error and still saw 4,000 active connections in Neon. Only by forcing a restart of the pool connections (by lowering and raising the number of CPUs) did Hyperdrive release the connections and the system recover. However, after coming back it was only using about 100 connections, which suggests the other 3,900 were never released.

The Hyperdrive configuration uses the Neon connection string for its connection pool. This is how we set up the pool for Drizzle:

```ts
const cnx1: Pool = new Pool({
  host: hd.host,
  user: hd.user,
  password: hd.password,
  port: Number(hd.port),
  database: hd.database,
  idleTimeoutMillis: 10000,      // we tried other values too
  connectionTimeoutMillis: 5000,
  min: 0,                        // we tried 1, 5, etc.
  max: 5,                        // we tried other values too
  ssl: false,
});
return drizzle(cnx1);
```

Each Hyperdrive configuration has an expected soft limit of 500 connections. Our Workers use 8 Hyperdrives in a round-robin strategy, which allows us to reach 4,000 active connections.

Thanks in advance
Martín
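For reference, a minimal sketch of the round-robin approach described above, assuming Drizzle's node-postgres driver and 8 Hyperdrive bindings named HYPERDRIVE_1 through HYPERDRIVE_8 (the binding names, the Env interface, and the pickHyperdrive helper are assumptions for illustration, not the poster's actual code):

```ts
import { Pool } from "pg";
import { drizzle } from "drizzle-orm/node-postgres";

// Assumed binding names; `Hyperdrive` is the binding type from @cloudflare/workers-types.
interface Env {
  HYPERDRIVE_1: Hyperdrive; HYPERDRIVE_2: Hyperdrive; HYPERDRIVE_3: Hyperdrive;
  HYPERDRIVE_4: Hyperdrive; HYPERDRIVE_5: Hyperdrive; HYPERDRIVE_6: Hyperdrive;
  HYPERDRIVE_7: Hyperdrive; HYPERDRIVE_8: Hyperdrive;
}

let counter = 0;

// Pick one of the 8 Hyperdrive configs in round-robin order (hypothetical helper).
function pickHyperdrive(env: Env): Hyperdrive {
  const bindings = [
    env.HYPERDRIVE_1, env.HYPERDRIVE_2, env.HYPERDRIVE_3, env.HYPERDRIVE_4,
    env.HYPERDRIVE_5, env.HYPERDRIVE_6, env.HYPERDRIVE_7, env.HYPERDRIVE_8,
  ];
  return bindings[counter++ % bindings.length];
}

// Build a small Worker-side pool against the chosen Hyperdrive config.
// Hyperdrive itself maintains the actual pool of connections to Neon.
export function getDb(env: Env) {
  const hd = pickHyperdrive(env);
  const pool = new Pool({
    host: hd.host,
    user: hd.user,
    password: hd.password,
    port: Number(hd.port),
    database: hd.database,
    max: 5,
    ssl: false,
  });
  return drizzle(pool);
}
```

Each call builds a small Worker-side pool against one Hyperdrive config; the pool of connections to Neon itself is managed by Hyperdrive, not by these Pool settings.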
7 Replies
knickish 4w ago
Changing the idleTimeoutMillis and connectionTimeoutMillis values will have no effect on how Hyperdrive handles its connections to your origin, only on the connection between your worker and Hyperdrive. I will look into the issue you've described here.
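To illustrate that distinction, a minimal sketch assuming a single Hyperdrive binding named HYPERDRIVE: the pg Pool settings below only govern the Worker's connections to Hyperdrive, while Hyperdrive separately manages its own pool of connections to the Neon origin.

```ts
import { Pool } from "pg";

// Sketch only: the binding name HYPERDRIVE is an assumption.
export function makePool(env: { HYPERDRIVE: Hyperdrive }) {
  return new Pool({
    connectionString: env.HYPERDRIVE.connectionString, // Worker -> Hyperdrive
    max: 5,                   // caps Worker-side clients, not origin connections
    idleTimeoutMillis: 10000, // when the Worker drops its client, not the origin
  });
}
```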
AJR 4w ago
As an operational note, Hyperdrive does consider those soft limits, so I'd expect you'll have a better experience trying to run at 3500 or so, and leaving some wiggle room for queries to still succeed if there's a network partition and Hyperdrive needs to open a new connection from a different datacenter, or similar.
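As a back-of-the-envelope sketch of that headroom (the per-Hyperdrive split is my own arithmetic, not from the thread): 8 Hyperdrives at a 500-connection soft limit each exactly matches Neon's 4,000 hard limit, so targeting roughly 3,500 in total leaves room for extra origin connections during failover.

```ts
// Rough arithmetic only; numbers taken from the thread except the per-Hyperdrive split.
const hyperdrives = 8;
const neonHardLimit = 4000;
const softLimitPerHyperdrive = 500;        // 8 * 500 == 4000: no headroom
const recommendedTotal = 3500;             // suggestion above
const perHyperdriveTarget = Math.floor(recommendedTotal / hyperdrives); // ~437
console.log({ perHyperdriveTarget, headroom: neonHardLimit - recommendedTotal });
```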
thomasgauvin 4w ago
@letincho5 you should be aware that the 10,000-connection pool will run into issues, because your workload makes heavy use of transactions, which prevents connections from being reused effectively (by either Hyperdrive or the Neon connection pooler). Only 4,000 connections can be used at once against your database; if you need more, you should increase the configuration of your underlying Neon database.
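To make that concrete, a hypothetical sketch (the orders/audit_log tables are made up, not the poster's schema) of keeping Drizzle transactions as short as possible, so the pinned connection is returned for reuse sooner:

```ts
import { drizzle } from "drizzle-orm/node-postgres";
import { pgTable, serial, integer, text } from "drizzle-orm/pg-core";

// Hypothetical tables, for illustration only.
const orders = pgTable("orders", {
  id: serial("id").primaryKey(),
  total: integer("total"),
});
const auditLog = pgTable("audit_log", {
  id: serial("id").primaryKey(),
  orderId: integer("order_id"),
  action: text("action"),
});

// While a transaction is open, its connection is pinned and cannot be reused
// by Hyperdrive or the Neon pooler, so only the statements that must be atomic
// belong inside it; slow work (external calls, heavy computation) stays outside.
export async function createOrder(db: ReturnType<typeof drizzle>) {
  await db.transaction(async (tx) => {
    const [order] = await tx.insert(orders).values({ total: 100 }).returning();
    await tx.insert(auditLog).values({ orderId: order.id, action: "created" });
  });
}
```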
letincho5 (OP) 4w ago
Thanks. Through Hyperdrive, I don't see any use of the 10,000 pooled connections, only of Neon's 4,000 direct connections, spread across the eight Hyperdrives, with the inconvenience that they apparently are not being released as they should be.
AJR 4w ago
Following up on this: we believe we have identified a root cause for this issue and are working on releasing a fix. It should be out in the next 1-3 days, and if you don't mind, we'll follow up with you to see whether you're still observing this afterwards. Thanks for the report!
knickish 4w ago
The fix for this issue should be completely rolled out at this point. If you notice the same behavior occurring after equivalent usage in future, please let us know.
letincho5 (OP) 4w ago
Thanks!
