Not necessarily? You should be fine for a whole month if you go with 0.25 vCPU / 2 GB RAM. There's also scale-to-zero; for a side project you might not need 24/7 uptime anyway.
I sure know every project I have deployed (other than WDOL) has its DBs cold-starting, because I definitely don't have enough usage to keep them alive all the time. YMMV of course, depending on your project.
Well, we get to cheat a little, with the runtime-generated connection strings.
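For anyone curious, that roughly looks like this from a Worker. A minimal sketch, assuming a Hyperdrive binding named HYPERDRIVE and the postgres.js driver (both names are just illustrative):

```ts
// Sketch: reading Hyperdrive's runtime-generated connection string inside a
// Cloudflare Worker. Binding name and driver choice are assumptions.
import postgres from "postgres";

interface Env {
  HYPERDRIVE: Hyperdrive; // type from @cloudflare/workers-types; configured in wrangler config
}

export default {
  async fetch(req: Request, env: Env): Promise<Response> {
    // Hyperdrive hands the Worker a connection string at runtime,
    // so the app never hard-codes origin credentials.
    const sql = postgres(env.HYPERDRIVE.connectionString);
    const rows = await sql`SELECT 1 AS ok`;
    return Response.json(rows);
  },
};
```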
We probably could if we wanted to. Not saying it's something we'll do, but I'm curious and have never considered that before. What would be the use case for something like that?
On another topic, I think there is a bug in the delete Hyperdrive feature: after deleting a Hyperdrive, I can't create another one with the same name, because it says it already exists.
Hey all! Today we released the final step in a fairly large rearchitecture of Hyperdrive. We'll get into the details in a full-length blog post a little down the road, but for now, here are the takeaways.
All newly created Hyperdrives will perform substantially more efficiently. This goes double for users with the origin database in APAC.
This should primarily manifest as lower latencies and fewer disconnects.
This will affect all Hyperdrives with or without caching.
We will be backfilling this solution to existing Hyperdrives progressively, over the course of the next couple weeks, starting next Monday. If you want yours handled sooner, please feel free to message me or respond in a thread here.
This one took the whole team quite a bit of work, and it's been fun to ship. I'm excited to finally have it going out the door.
I've done some searching on this channel and it seems like cache busting/purging is still WIP? I ran into a bug today that took me a while to figure out was being caused by Hyperdrive: a user creates a comment, the query client then invalidates the comments query, and the refetch returns stale data, causing my optimistic comment to disappear. This makes sense given Hyperdrive caching. I'm just curious what my workaround should be, given that cache purging isn't a thing (yet).
It'd be a little contrived for me to add some volatile function to the query, given that it'd have to originate from the client, drill through the API router, and modify queries in a weird way.
The alternative is likely just a second Hyperdrive config with caching disabled, and directing your queries to whichever is appropriate at the application level.
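Roughly, something like this. A sketch assuming two bindings (the names HYPERDRIVE_CACHED / HYPERDRIVE_UNCACHED are made up) and postgres.js:

```ts
// Sketch of the two-config approach: one cached Hyperdrive for read-heavy
// queries, one uncached for data that must always be fresh.
// Binding names are illustrative, not a prescribed convention.
import postgres from "postgres";

interface Env {
  HYPERDRIVE_CACHED: Hyperdrive;   // Hyperdrive config with caching enabled
  HYPERDRIVE_UNCACHED: Hyperdrive; // Hyperdrive config with caching disabled
}

function db(env: Env, opts: { fresh?: boolean } = {}) {
  const binding = opts.fresh ? env.HYPERDRIVE_UNCACHED : env.HYPERDRIVE_CACHED;
  return postgres(binding.connectionString);
}

export default {
  async fetch(req: Request, env: Env): Promise<Response> {
    // Comments are mutated often, so read them through the uncached config
    // to avoid serving stale rows right after a write.
    const sql = db(env, { fresh: true });
    const comments = await sql`SELECT id, body FROM comments ORDER BY created_at DESC`;
    return Response.json(comments);
  },
};
```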
Totally depends on what you're doing. If you are sorting by a date column, but are returning the date column anyway, you can sort it in JavaScript after.
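e.g. something like this (a quick sketch; the field names are made up):

```ts
// Sort in application code instead of ORDER BY, assuming each row
// already carries the date column you'd otherwise sort on.
type Row = { id: number; createdAt: string };

function sortByDateDesc(rows: Row[]): Row[] {
  return [...rows].sort(
    (a, b) => Date.parse(b.createdAt) - Date.parse(a.createdAt),
  );
}
```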
I just turned off caching for now; I'll wait until cache purging is shipped. I don't love modifying the application layer in weird ways to get expected query behavior.
I think the two Hyperdrive configs is a pretty good solution, since in my case there's really a set of resources that are frequently mutated and should never be cached, and sets of resources that can be cached.
Part of the reason that this is taking so long is that a bunch of folks went ahead and tried to allowlist based on the shared/untrusted IP pool. We don't want to just break them (though at some point they'll get broken anyway, once the pool shifts enough to clip them, but at least it won't be because we did it). So the migration over is kind of a process.
@AJR oh, another reason why exposing Hyperdrive as a regular PostgreSQL pooler would be great: we could do stacked Hyperdrive configs! e.g. one "main" Hyperdrive with no caching connected to the real backend directly, and then have another Hyperdrive config with caching enabled that connects to the main Hyperdrive's pooler.
Oh yeah. Measure how long it takes to create a connection to a pooler/origin further away than a couple hundred miles and you'll have a clear picture of why only ~half of our configs even have caching enabled.
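If you want to see it yourself, something like this would do it (a rough sketch; the driver and connection string are just placeholders):

```ts
// Rough sketch: time how long it takes just to establish a connection
// to a far-away Postgres pooler/origin. The connection string is a placeholder.
import { Client } from "pg";

async function timeConnect(connectionString: string): Promise<number> {
  const client = new Client({ connectionString });
  const start = performance.now();
  await client.connect(); // TCP + TLS + auth round trips dominate over distance
  const elapsed = performance.now() - start;
  await client.end();
  return elapsed;
}

timeConnect("postgres://user:pass@far-away-host:5432/db").then((ms) =>
  console.log(`connection established in ${ms.toFixed(0)} ms`),
);
```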