PostgreSQL only getting 512 MB of memory on Hobby plan
hello, looking for tips for my postgresql service, it seems to be capped at 512 MB of memory on the hobby plan
tried restarting but still only hitting 512
Solution:
there's a volume now, this is the new database as a service, new postgres version, new supporting infrastructure
Project ID:
2d8142fa-1d81-44b3-bd4d-c4bb310dd3c3
why do you want it to use more memory? is it crashing?
I was getting out-of-memory issues in python when doing some large batch insertions, but i've since reduced the number of inserts to avoid the error i was receiving. i expect having access to more memory would improve performance when inserting rows and also not slow down others accessing the database
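A minimal sketch of what that "reduced number of inserts" approach can look like as explicit chunking, assuming psycopg2 as the client library (it comes up later in the thread); the connection string, table, columns, and chunk size are placeholders, not details from the original post.

```python
import psycopg2
from psycopg2.extras import execute_values

# Placeholder DSN; swap in the Railway DATABASE_URL.
conn = psycopg2.connect("postgresql://user:pass@host:5432/railway")

def insert_in_chunks(rows, chunk_size=5_000):
    """Insert rows in fixed-size chunks so neither the client script nor
    the server has to hold the entire batch in memory at once."""
    with conn.cursor() as cur:
        for start in range(0, len(rows), chunk_size):
            chunk = rows[start:start + chunk_size]
            # execute_values sends one multi-row INSERT per chunk,
            # far fewer round trips than one INSERT per row.
            execute_values(
                cur,
                "INSERT INTO my_table (col_a, col_b) VALUES %s",
                chunk,
            )
            conn.commit()  # committing per chunk keeps transactions small
```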
did you get these out of memory error logs from python or from the postgres service
to be clear, at this time I don't think postgres is limited to 512mb, I think that's simply what it's currently using
python only. trying to find the error i received, but when googling, a lot of the feedback is around server-side memory, i.e. killed queries. let me see if i can find the error i was receiving
SSL SYSCALL error: EOF detected was the error
logged from the python service, correct?
correct, it was returned from within the debug menu in python
tried to check the logs in railway but it was chugging, going to see if i can find the log from when it happened
what client are you using in python
vs code
that's a code editor
I'm asking what client library you are using in python
oh sorry
psycopg2
read this https://blog.stigok.com/2021/02/28/sqlalchemy-postgres-ssl-eof-detected.html
it's not an issue with railway or with the postgres database, it's a code issue
thanks for the dialog. i will see what i can do to implement a pre-ping-like behavior and see if that allows the larger insert. i am not using SQLAlchemy, if that's helpful at all.
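A sketch of one way to get pre-ping-like behavior with plain psycopg2 (no SQLAlchemy), under the assumption that the dropped connection is the problem: enable TCP keepalives on the connection and run a cheap SELECT 1 before each batch, reconnecting if it fails. The DSN and the specific keepalive values are illustrative, not settings from the thread.

```python
import psycopg2
from psycopg2 import OperationalError, InterfaceError

DSN = "postgresql://user:pass@host:5432/railway"  # placeholder

def connect():
    # libpq keepalive settings help keep a long-lived connection from
    # being dropped silently by intermediaries mid-batch.
    return psycopg2.connect(
        DSN,
        keepalives=1,
        keepalives_idle=30,
        keepalives_interval=10,
        keepalives_count=5,
    )

def ensure_alive(conn):
    """Pre-ping: run a trivial query and reconnect if the socket is dead."""
    try:
        with conn.cursor() as cur:
            cur.execute("SELECT 1")
        return conn
    except (OperationalError, InterfaceError):
        conn.close()
        return connect()
```

Calling ensure_alive() before each chunked insert mimics SQLAlchemy's pool_pre_ping behavior without pulling in the ORM.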
only thing I will add is i've been able to reproduce by trying to insert a larger number of rows, and I see the metrics in railway only hitting exactly 512 during these inserts. not sure what memory settings within postgres are set by default in the railway image
I've seen plenty of postgres databases use well over 512mb of memory, I'm confident this is just a code issue
I would normally expect to see some variation in memory usage, i.e. above and below 512 when performing operations, especially if i increase the number of transactions that are occurring
seems like a disconnect occurring mid-query, rather than before the insert op as described in the blog you posted
from your screenshot it looks like it's going from 300 to 500mb
does this database have a volume attached
it is going from around 300 to exactly 512, but never above. the 300 MB is when no operation is taking place (in between batch insertions)
i am not sure, i did not create any separate services/storage options when creating the service within railway
oh that's the old database, I was under the impression that you were using a new database
even so, where's your python service?
python service is on my local machine
gotcha
or rather, python script
just doing some testing stuff nothing actually being used
you are currently using a deprecated database plugin, plenty of issues with those. go ahead and deploy a new postgres database
clicked add, seems the same?
oh i see it has a disk?
Solution
there's a volume now, this is the new database as a service, new postgres version, new supporting infrastructure
interesting.. how did i manage to create the old one?
you created it before the new databases went into general availability
so my bad, shouldn't have assumed you were using a database v2
thanks brody, i will try with the new v2 tomorrow.. interesting though, i created mine on 10/23, did v2 come out since then?
appreciate your time tonight as well
I'd have to check when database v2's went into general availability
thank you again!
will let you know how it goes
sounds good!