adverse-sapphire · 9mo ago

I just set up a new Neon DB and I'm finding query performance extremely slow - I can't even get a row count. I have ~25M rows. Is this kind of perf expected...?
11 Replies
continuing-cyan · 9mo ago
What do you have min/max CU configured to?
sensitive-blue · 9mo ago
Min 5, max 10 - looks like all 10 are allocated.
sensitive-blue · 9mo ago
[image attachment]
continuing-cyan · 9mo ago
Indexes?
sensitive-blue · 9mo ago
None - I was trying to set one up and it was taking forever. I just want to see how many rows I have lol. Even without an index, pg should be able to count the rows, no? Just a simple SELECT COUNT(*) FROM "table".
continuing-cyan · 9mo ago
COUNT in Postgres is actually notably slow: https://wiki.postgresql.org/wiki/Slow_Counting
continuing-cyan · 9mo ago
Here's another good article on the topic with some suggestions: https://www.citusdata.com/blog/2016/10/12/count-performance/
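One of the approaches suggested on the linked wiki page is to pull the planner's row estimate out of EXPLAIN when you need an approximate count for an arbitrary query. A rough sketch of that helper (the table name in the usage example is a placeholder):

CREATE FUNCTION count_estimate(query text) RETURNS integer AS $$
DECLARE
    rec  record;
    rows integer;
BEGIN
    -- Run EXPLAIN on the supplied query and grab the planner's "rows=" estimate
    FOR rec IN EXECUTE 'EXPLAIN ' || query LOOP
        rows := substring(rec."QUERY PLAN" FROM ' rows=([[:digit:]]+)');
        EXIT WHEN rows IS NOT NULL;
    END LOOP;
    RETURN rows;
END;
$$ LANGUAGE plpgsql VOLATILE STRICT;

-- Usage (placeholder table name):
SELECT count_estimate('SELECT 1 FROM "table"');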
sensitive-blue · 9mo ago
I see. So, using the estimation, it looks like 29M rows. I'm trying to wrap my brain around the fact that I can't do a query that walks through every row in my table - 29M doesn't seem that big. Would 50 vCPUs help? I just want that query to return in, like, under 30 seconds.
continuing-cyan · 9mo ago
What kind of data is it? Can you create an index? I'm not an expert, but I would say 29M unindexed rows on a transactional DB is big 🙂 Neon holds the "working set" of actively accessed data in your compute, but if the working set is the entire DB, and the entire DB is larger than what your compute can hold, that might explain your query issues: https://neon.tech/blog/dynamically-estimating-and-scaling-postgres-working-set-size
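If you want a rough sense of whether those scans are being served from the compute's cache or going back to storage, Postgres's standard statistics views show block hits versus reads. A minimal sketch using pg_statio_user_tables:

-- Tables doing the most reads from outside shared buffers, with a rough cache hit ratio
SELECT relname,
       heap_blks_read,
       heap_blks_hit,
       round(heap_blks_hit::numeric / nullif(heap_blks_hit + heap_blks_read, 0), 2) AS cache_hit_ratio
FROM pg_statio_user_tables
ORDER BY heap_blks_read DESC
LIMIT 10;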
sensitive-blue · 9mo ago
Mm, yeah, I see. I guess we might just need a columnar DB for this. I did end up creating an index - took about 5 mins. That at least gives me a sense of the scale of the slowness here.
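As an aside, if you're adding indexes to a table that's already serving traffic, building them CONCURRENTLY avoids blocking writes while the build runs. A sketch, where the table and column names are placeholders:

-- Slower than a plain CREATE INDEX and can't run inside a transaction block,
-- but the table stays writable during the build. Names are placeholders.
CREATE INDEX CONCURRENTLY idx_events_created_at ON events (created_at);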
other-emerald · 9mo ago
Do you need to return an exact count? You can return an estimate instead, which should be a lot faster:
SELECT reltuples::bigint AS approximate_row_count
FROM pg_class
WHERE relname = 'table_name';
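One caveat: reltuples is only refreshed by VACUUM and ANALYZE (including autovacuum), so on a freshly loaded table the estimate can lag. Running a quick ANALYZE first should make it reasonably accurate:

-- Refresh planner statistics so reltuples reflects the current contents
ANALYZE "table_name";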
