I'm pretty sure the answer to this is "heck no", but would it be possible to have a unique outgoing port for each Hyperdrive config? (And I don't think it would be too insane, since it's only 25 ports per paid Workers account that would need to be "reserved".) Or even 1 unique port per account would be nice.
That would be really nice to firewall based off of.
I would be curious if using Hyperdrive reduces bandwidth pricing (and by how much) for Google Cloud customers, since Cloudflare has a reduced bandwidth price iirc.
The goal is that it should, yes. Still needs some work to get all the way there, I believe. I don't work with that project a ton and there is some nuance IIRC.
It gets complicated. It goes GCP->Workers->Hyperdrive for ingress, so I believe that's already reduced. Egress from ??->Workers->Hyperdrive->GCS is the one I'm not certain about.
You should be able to tell automatically who hardcoded the allowlist by trying to connect from a new IP. Then, for whoever fails, email them a warning that they need to switch to the proper official list of IP ranges.
Of the shared address pool? Not publicly, but folks are clever. Hazard of making tools for other devs, I guess. In some regions things are stable enough that folks noticed that all Hyperdrive connections were coming in from a small number of addresses, and eventually just coded that into their firewall rules.
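(For illustration, the hardcoding folks ended up doing looks roughly like this on the Postgres side; the address ranges below are placeholder documentation ranges, not actual Hyperdrive egress ranges:)
```
# pg_hba.conf: allow only the observed Hyperdrive source ranges, reject everything else.
# NOTE: 203.0.113.0/24 and 198.51.100.0/24 are placeholders, not real Cloudflare/Hyperdrive ranges.
hostssl  mydb  myuser  203.0.113.0/24   scram-sha-256
hostssl  mydb  myuser  198.51.100.0/24  scram-sha-256
hostssl  all   all     0.0.0.0/0        reject
```
Which is exactly the kind of brittle setup that a proper published list of IP ranges is meant to replace.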
Hey, thanks for the reply on this. Yes, when I try to set up Hyperdrive + a managed DO database in the CF dashboard it errors.
I have set up connections with Supabase in the past successfully, and I searched this channel for issues with DO, leading me to believe it's an issue with the DO IP range?
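(For context, the equivalent setup via wrangler rather than the dashboard looks roughly like this; host, user, password, and database name are placeholders, and DO managed Postgres generally requires sslmode=require:)
```sh
# Hypothetical: create the Hyperdrive config from the CLI with a DO managed Postgres connection string
npx wrangler hyperdrive create my-do-db \
  --connection-string="postgres://doadmin:PASSWORD@my-cluster.db.ondigitalocean.com:25060/defaultdb?sslmode=require"
```
Running it this way might at least help narrow down whether the failure is in creating the config or in connecting out to DO.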
Interesting. I searched and you're right, TCP connect from Workers to DO seems to have some issues.
I'm not sure if it's due to the shared address range or not, though.
I'll check Monday with my teammate who's working on the migration to the CF IP ranges and see if we have a way to test if DO cooperates with those. I'd be curious to know, honestly
I have DigitalOcean and Azure credits to burn, and the latter seems to be priced crazily, so that would be ideal!
Maybe if there is no solution with a public connection, there is one with a private connection?
I'd also be open to using D1 instead, but I have about 50 GB of data already, so I don't know if limits can be increased to, say, 100 GB (for future scale).
Also happy to discuss using R2, as this is mostly a primary store of library-like data, with reads being carried out from my Typesense instance.
I want to hide my database behind Cloudflare's network without needing to have the cloudflared application on all devices. Is that what Hyperdrive fixes?
Hyperdrive is a way to talk from Workers to a centralized database. We integrate with tunnels too, so you'd only need to have cloudflared running once somewhere within your VPC to egress the traffic to your DB.
Based on your questions I'm assuming you're not running your application on Workers?
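(To make the tunnel setup concrete, a minimal sketch under some assumptions: the tunnel ID and hostname are placeholders, and the Worker side assumes a Hyperdrive binding named HYPERDRIVE plus the postgres.js driver.)
```yaml
# cloudflared config.yml on one box inside the VPC (placeholder tunnel ID and hostname)
tunnel: <TUNNEL_ID>
credentials-file: /etc/cloudflared/<TUNNEL_ID>.json
ingress:
  - hostname: db.example.com
    service: tcp://localhost:5432   # forward tunnel traffic to the private Postgres instance
  - service: http_status:404
```
```ts
// Worker side: the Hyperdrive binding exposes a connection string that points at its pooler.
import postgres from "postgres";

export default {
  async fetch(_req: Request, env: { HYPERDRIVE: { connectionString: string } }): Promise<Response> {
    const sql = postgres(env.HYPERDRIVE.connectionString);
    const rows = await sql`SELECT now() AS ts`; // any query; Hyperdrive handles pooling and caching
    return Response.json(rows);
  },
};
```
The Worker never talks to the database's public address; traffic egresses through the single cloudflared instance inside the VPC.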
Are there any plans for supporting custom connection limits per region? Right now we seem to be hitting that limit and losing performance by being routed to a nearby region.