bit confused about why hyperdrive can't scale connections very well here; if I have 1,000 requests in a Worker all at once, Hyperdrive can't handle that?
That's part of the benefit of the split-connection approach Hyperdrive uses: you can have a truly absurd number of isolates, while the connections to the origin are kept separately in their own pool and checked out only for the minimum time needed to interact with the origin DB.
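For context, a minimal sketch of what that looks like from the Worker side (assuming a Hyperdrive binding named HYPERDRIVE and the postgres.js driver; the names here are illustrative, not anyone's actual setup):

```ts
// Illustrative only: assumes a Hyperdrive binding named HYPERDRIVE and the postgres.js driver.
// Each isolate connects to Hyperdrive's endpoint; Hyperdrive checks an origin connection out
// of its own pool only for the duration of the query, then returns it.
import postgres from "postgres";

export interface Env {
  HYPERDRIVE: Hyperdrive;
}

export default {
  async fetch(request: Request, env: Env, ctx: ExecutionContext): Promise<Response> {
    const sql = postgres(env.HYPERDRIVE.connectionString);

    const rows = await sql`SELECT now() AS server_time`;

    // Close the isolate-side connection without blocking the response;
    // the origin-side pool is managed entirely by Hyperdrive.
    ctx.waitUntil(sql.end());

    return Response.json(rows);
  },
};
```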
can I connect to hyperdrive from within a worker while using a mysql2 pool? Kysely only supports using mysql connection pools, and I'm getting a "the server closed the connection" error
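Not an official answer, but roughly the shape that setup could take (a sketch assuming nodejs_compat, a MySQL Hyperdrive binding named HYPERDRIVE, and the kysely + mysql2 packages; the schema type is made up and this isn't guaranteed to be the fix for that error):

```ts
import { Kysely, MysqlDialect } from "kysely";
import { createPool } from "mysql2";

// Hypothetical schema type, just for illustration.
interface Database {
  users: { id: number; name: string };
}

export default {
  async fetch(request: Request, env: { HYPERDRIVE: Hyperdrive }): Promise<Response> {
    // Build the pool per request from the Hyperdrive connection string; sockets generally
    // don't survive across Worker invocations, which can be one source of
    // "server closed the connection" errors when a pool is created at module scope.
    const db = new Kysely<Database>({
      dialect: new MysqlDialect({
        pool: createPool(env.HYPERDRIVE.connectionString),
      }),
    });

    try {
      const users = await db.selectFrom("users").selectAll().limit(10).execute();
      return Response.json(users);
    } finally {
      await db.destroy();
    }
  },
};
```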
I am trying to make a new Hyperdrive config but I keep getting:
"Failed to connect to the provided database: Connecting to database via Cloudflare Tunnel failed: 403 Forbidden"
"Failed to connect to the provided database: Connecting to database via Cloudflare Tunnel failed: 403 Forbidden"
As far as I can tell I set everything up exactly as in my previous configuration. The tunnel setup and access policies look identical. Running the tunnel with debug output yields nothing, so I feel like something is going wrong inside Zero Trust but I can't figure it out...
So... apparently it takes a bit for a new access token to start working, maybe? After waiting a few minutes and using an existing token, it started working...
Just went and double checked, and I'm seeing at least some free tier Hyperdrives hitting their limit every day for the last 2 weeks, so it seems like resets are still happening as intended.
Can you elaborate a bit on what you're seeing that suggests that yours isn't? You're very far from hitting it, from what I can see.
Yes, very far from hitting it. Currently just testing, but was building all night and noticed it didn't reset at midnight UTC. I'm currently the only user as it's not live.
I'm not very concerned about hitting it. I'm about to migrate a personal site that gets quite a reasonable amount of traffic at certain times of the year, and was just wondering when I'll likely need to upgrade it in the future.
In the long run, I will potentially use Hyperdrive for client projects on their own Cloudflare instances, and again, it's useful to understand pricing to be able to communicate with clients about foreseeable future costs.
I've got a feeling the usage indicator on the top-level hyperdrive page in the dashboard covers a rolling 24-hour window rather than coinciding with the reset time specified in the docs.
Will Hyperdrive support cache invalidation on writes/updates anytime soon? This is the only thing keeping me from enabling the cache on it. I'm currently caching via a custom cache built on Durable Objects with Drizzle.
No, not very soon. While it would be relatively straightforward to invalidate all cached entries on any write, getting all the way down to schema-aware automatic cache invalidation on a distributed pooler+cache is a ton of complexity that we are being very cautious about taking on.
That said, increased flexibility on whether/how to read from or write to cache on individual queries is something that we're actively working on.
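For anyone curious, a rough sketch of the kind of explicit-invalidation cache mentioned in the question above (not the poster's actual Drizzle/Durable Objects setup; the class, binding, and method names are all made up):

```ts
import { DurableObject } from "cloudflare:workers";

// A tiny query cache with explicit invalidation: reads go through get(), and any
// write path calls invalidate() to drop everything. All names here are hypothetical.
export class QueryCache extends DurableObject {
  async get(key: string): Promise<string | undefined> {
    return this.ctx.storage.get<string>(key);
  }

  async put(key: string, value: string): Promise<void> {
    await this.ctx.storage.put(key, value);
  }

  async invalidate(): Promise<void> {
    await this.ctx.storage.deleteAll();
  }
}

// Usage from a Worker, assuming a Durable Object binding named QUERY_CACHE:
//   const stub = env.QUERY_CACHE.get(env.QUERY_CACHE.idFromName("default"));
//   const cached = await stub.get(cacheKey);
//   if (cached === undefined) { /* query the DB, then stub.put(cacheKey, result) */ }
//   // ...and call stub.invalidate() after any write.
```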
note: this will spin up an actual (ephemeral) worker for you to dev with, so be cautious about pointing your connections to a staging instance or similar
The ability to make TCP and QUIC client connections from within Workers and Durable Objects, as well as the ability to connect to Workers over TCP and QUIC without using HTTP, will be coming to Cloudflare Workers. However, there is much to consider and a lot to do to make it happen. Here’s a peek at what we’re working on.
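To make the client-connection half of that concrete, a minimal sketch using the connect() API from cloudflare:sockets (the hostname, port, and payload are placeholders, and the protocol here is just raw bytes):

```ts
// A minimal sketch of outbound TCP from a Worker, using connect() from cloudflare:sockets.
import { connect } from "cloudflare:sockets";

export default {
  async fetch(): Promise<Response> {
    // Open a raw TCP connection from the Worker to some origin service.
    const socket = connect({ hostname: "tcp.example.com", port: 4000 });

    const writer = socket.writable.getWriter();
    await writer.write(new TextEncoder().encode("ping\n"));
    await writer.close();

    // Stream back whatever the server sends until it closes the connection.
    return new Response(socket.readable);
  },
};
```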
Please kick the tires and let us know how it works for you!
Bonus news for those who keep an eye on this channel: as a result of this work, the limit overrides the Hyperdrive team can apply to support large traffic volumes now take minutes instead of a few hours to roll through, so if you need that, please reach out.
I think I can see exactly when this went live: https://screen.bouma.link/XjBnMv39lxpkny2HfM3c Love to see this change, but it also changes some behavior, resulting in more errors. I've already been adjusting my limits, but that only helped for a short period.
It looks like setting a lower limit and allowing a few more connections on my side resolved the issue. I currently have the limit set to 20 and allow 35 on the MySQL user. I haven't seen errors for the past ~2 hours, so it might be solved. (I also learned that I really need notifications when the error rate spikes like this; the changelog notification is what prompted me to look. The Worker falls through to the origin though, so no biggie in this case for me.)
That said, it would be cool to see some graphs of connection counts from the CF side, and some details on what happens if I set this limit lower. Does that "just" increase query latency, or will it eventually fail to execute queries if traffic is high enough?