Look at switching to Aurora Serverless v2; it now supports scaling to zero, which is great for side projects that don't need to be on all the time. (Takes <30 seconds to turn back on.)
Should be $43.80/mo for 0.5 ACUs (assuming you never scale to zero) + $3.65/mo for IPv4. Then there are some extra costs for bandwidth and I/O, but they're pretty small for (most?) workloads.
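(Back-of-the-envelope, assuming roughly $0.12/ACU-hour and $0.005/hour for the public IPv4, which may vary by region: 0.5 × 0.12 × 730 ≈ $43.80 and 0.005 × 730 ≈ $3.65, so the numbers line up.)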
Yeah, idk what Max is running, but a side project, in my mind at least, is something that would fit on a free-tier Supabase/Xata/Neon (also since you probably don't need redundancy or other jazz)
If you bundled many projects together onto a single database, it might be more worthwhile, but imo it's still a lot pricier than the hobby alternatives.
Not necessarily? You should be fine for a whole month if you go 0.25 vCPU, 2 GB RAM. Or there's scale-to-zero, since again, for a side project, you might not need 24/7 uptime
I sure know every project I have deployed (other than WDOL) has DBs cold-starting, because I definitely don't have enough usage to keep them alive all the time. YMMV of course, depending on your project
Well, we get to cheat a little, with the runtime-generated connection strings.
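Roughly like this in a Worker — just a minimal sketch, assuming a Hyperdrive binding named HYPERDRIVE and the postgres.js driver; the binding hands you a fresh connection string at runtime:

```ts
import postgres from "postgres";

export default {
  async fetch(request: Request, env: { HYPERDRIVE: Hyperdrive }): Promise<Response> {
    // The connection string is generated at runtime by the binding,
    // so no credentials live in the code or config.
    const sql = postgres(env.HYPERDRIVE.connectionString);
    const rows = await sql`SELECT id, title FROM posts LIMIT 10`;
    return Response.json(rows);
  },
};
```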
We probably could if we wanted to. Not saying it's something we'll do, but I'm curious and have never considered that before. What would be the use case for something like that?
Another topic: I think there is a bug in the delete Hyperdrive feature. After deleting a Hyperdrive, I can't create another one with the same name, because it says it already exists.
Hey all! Today we released the final step in a fairly large rearchitecture of Hyperdrive. We'll be getting into the details in a full length blog post a little down the road, but for now here are the takeaways.
All newly created Hyperdrives will perform substantially more efficiently. This goes double for users with the origin database in APAC.
This should primarily manifest as lower latencies and fewer disconnects.
This will affect all Hyperdrives with or without caching.
We will be backfilling this solution to existing Hyperdrives progressively, over the course of the next couple weeks, starting next Monday. If you want yours handled sooner, please feel free to message me or respond in a thread here.
This one took the whole team quite a bit of work, and it's been fun to ship. I'm excited to finally have it going out the door.
I've done some searching in this channel and it seems like cache busting/purging is still WIP? I ran into a bug today that took me a while to figure out was being caused by Hyperdrive: a user creates a comment, the query client then invalidates the comments query, and it refetches the stale data, causing my optimistic comment to disappear. This makes sense given Hyperdrive caching. I'm just curious what my workaround should be given that cache purging isn't a thing (yet).
It'd be a little contrived for me to add some volatile function to the query given that it'd have to originate from the client and then drill through the api router and modify queries in a weird way
The alternative is likely just a second hyperdrive config with caching disabled, and direct your queries to whichever is appropriate at the application level.
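Something like this, as a rough sketch — assuming two bindings, HYPERDRIVE_CACHED and HYPERDRIVE_UNCACHED (the second created with caching disabled), and picking one per query:

```ts
import postgres from "postgres";

interface Env {
  HYPERDRIVE_CACHED: Hyperdrive;   // caching enabled: fine for read-mostly queries
  HYPERDRIVE_UNCACHED: Hyperdrive; // caching disabled: use when you must see the latest writes
}

// Choose the binding at the application level depending on
// whether the query needs fresh reads.
function db(env: Env, needsFreshReads: boolean) {
  const binding = needsFreshReads ? env.HYPERDRIVE_UNCACHED : env.HYPERDRIVE_CACHED;
  return postgres(binding.connectionString);
}

export default {
  async fetch(_req: Request, env: Env): Promise<Response> {
    // e.g. the comments refetch right after an insert goes through the uncached config
    const sql = db(env, true);
    const comments = await sql`SELECT * FROM comments ORDER BY created_at DESC`;
    return Response.json(comments);
  },
};
```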
Totally depends on what you're doing. If you are sorting by a date column, but are returning the date column anyway, you can sort it in JavaScript after.
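e.g., a rough sketch (assuming the rows include a created_at column you were returning anyway):

```ts
// Drop the ORDER BY from the SQL and sort client-side instead.
const rows = await sql`SELECT id, body, created_at FROM comments`;
rows.sort(
  (a, b) => new Date(b.created_at).getTime() - new Date(a.created_at).getTime()
);
```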
I just turned off caching for now and I'll wait until cache purging is shipped; I don't love modifying the application layer in weird ways to get expected query behavior