too many subrequests
Hello everyone! I'm trying to connect to a Supabase Postgres instance from a Worker via Hyperdrive, following this guide: https://developers.cloudflare.com/hyperdrive/examples/connect-to-postgres/postgres-drivers-and-libraries/postgres-js/ . I'm using postgres.js 3.4.7. Locally it works, connecting to the same database the Hyperdrive is configured with, but on Cloudflare I always get CONNECTION_TIMEOUT after the 10s timeout I configured. If I connect directly instead (bypassing Hyperdrive), I get "too many subrequests" after 1 or 2 seconds (the function is very simple). Any ideas?
That error is not strictly related to Hyperdrive, but instead suggests that you're calling out from your Worker too many times (https://developers.cloudflare.com/workers/platform/limits/#subrequests).
A Hyperdrive connection does count as a subrequest, so it seems like you're trying to connect to Hyperdrive too many times. You'll probably need to share some code for debugging, I think.
Thanks for your response, AJR. Actually, with the Hyperdrive connection string the request just hangs for 10s and then the Postgres connection timeout is hit. If I use the direct connection string to the Supabase instance, it triggers the "too many subrequests" error.
The code is very simple (Nuxt.js server):
test.ts
postgres.ts
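(The attached files aren't reproduced here; for context, a minimal sketch of the pattern the linked guide describes, with a placeholder binding name and query rather than the actual code, looks roughly like this:)

```ts
// Sketch of the setup from the linked Hyperdrive + postgres.js guide, not the
// actual test.ts/postgres.ts attachments. The binding name "HYPERDRIVE" and
// the query are placeholders.
import postgres from "postgres";

export interface Env {
  HYPERDRIVE: Hyperdrive; // type comes from @cloudflare/workers-types
}

export default {
  async fetch(request: Request, env: Env, ctx: ExecutionContext): Promise<Response> {
    // Create the client per request from the Hyperdrive binding's connection string.
    const sql = postgres(env.HYPERDRIVE.connectionString, {
      max: 5,             // Workers allow at most 6 simultaneous open connections
      fetch_types: false, // skip the extra round trip for custom type lookups
    });

    try {
      const rows = await sql`SELECT now() AS ts`;
      return Response.json(rows);
    } finally {
      // Close the connections without blocking the response.
      ctx.waitUntil(sql.end());
    }
  },
};
```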
OK, I got it. It's the ssl flag that is causing issues. If I disable it, it works for both configurations (Hyperdrive and direct). I think I don't need SSL when connecting to Hyperdrive as a client, given that Hyperdrive itself is configured to connect to Supabase with SSL enabled. However, I still have to figure out why the direct connection isn't working with SSL required.
Oh, I see.
Hyperdrive does its own SSL handling after you connect from the Worker
So the traffic will be encrypted without it.
Hyperdrive does not do SSL from the Worker on the client side; the runtime negotiates its own encrypted channel with Hyperdrive.
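To make that concrete, here is a rough sketch of the difference (placeholder bindings, using postgres.js's ssl option):

```ts
// Sketch only; HYPERDRIVE and SUPABASE_DB_URL are placeholder bindings.
import postgres from "postgres";

interface Env {
  HYPERDRIVE: Hyperdrive;   // Hyperdrive binding (@cloudflare/workers-types)
  SUPABASE_DB_URL: string;  // direct connection string, for comparison
}

function makeClients(env: Env) {
  // Via Hyperdrive: the runtime's channel to Hyperdrive is already encrypted,
  // and Hyperdrive itself connects to Supabase over SSL, so the client-side
  // ssl flag can stay off (or be omitted).
  const viaHyperdrive = postgres(env.HYPERDRIVE.connectionString, { ssl: false });

  // Direct to Supabase, bypassing Hyperdrive: here TLS is the driver's job,
  // so ssl stays on.
  const direct = postgres(env.SUPABASE_DB_URL, { ssl: "require" });

  return { viaHyperdrive, direct };
}
```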
Thanks for the clarification! I think that in the latter case (direct connection) the library tries to reconnect aggressively and hits the subrequest limit by opening a lot of TCP connections, probably due to SSL verification failures.
oh, this is very interesting. I was actually coming here to ask a similar question -- about whether connections from the Worker to the Hyperdrive pool actually count towards subrequest limits
i was using the same guide linked in OP here
In our function, we're instantiating this Hyperdrive client (pictured) for every auth request, and closing it with c.executionCtx.waitUntil(sql.end()); after awaiting the query itself. We're not sharing it across requests.
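Roughly what that looks like (Hono-style handler, since the snippet uses c.executionCtx; the binding, route, and query are placeholders, and the max value from the pictured config isn't shown here):

```ts
import { Hono } from "hono";
import postgres from "postgres";

type Bindings = { HYPERDRIVE: Hyperdrive };

const app = new Hono<{ Bindings: Bindings }>();

app.post("/auth", async (c) => {
  // New client per auth request; postgres.js opens connections lazily,
  // so nothing connects until the first query runs.
  const sql = postgres(c.env.HYPERDRIVE.connectionString, {
    fetch_types: false,
    // max: ...  <- whatever the pictured config uses; see the question below
  });

  const rows = await sql`SELECT 1 AS ok`; // placeholder query

  // Close after the response is sent, without blocking it.
  c.executionCtx.waitUntil(sql.end());

  return c.json(rows);
});

export default app;
```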
should we probably also be reducing this to just max: 1, to be safe?
I see that connections are made lazily in postgres.js, but I'm not sure if there's something I'm missing here
max: 1 is definitely a good idea unless you explicitly want per-request parallelism in your queries.
We've been seeing some issues with explicitly closing the connection after network disconnects too, so you may want to omit the explicit close if it's giving you issues. It isn't strictly necessary, and apparently there's a library bug with it: https://github.com/porsager/postgres/issues/1097
(The linked issue reports that calling sql.end() or sql.close() after a server disconnection returns a promise that never resolves.)
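Putting both suggestions together, the adjusted sketch would be something like this (same placeholder binding, route, and query as above):

```ts
import { Hono } from "hono";
import postgres from "postgres";

type Bindings = { HYPERDRIVE: Hyperdrive };

const app = new Hono<{ Bindings: Bindings }>();

app.post("/auth", async (c) => {
  const sql = postgres(c.env.HYPERDRIVE.connectionString, {
    max: 1,             // one connection per request unless queries need to run in parallel
    fetch_types: false,
  });

  const rows = await sql`SELECT 1 AS ok`; // placeholder query

  // No explicit sql.end() here: it isn't strictly necessary, and per the issue
  // above its promise can hang after a server disconnect.
  return c.json(rows);
});

export default app;
```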
sick
appreciate ya 🙏