Is MongoDB disk pricing different?
I've noticed that the new MongoDB deploys come with a disk; the old ones (above) look different, without a mounted drive.
Is the pricing different? I'm paying quite a lot for the first MongoDB service, and it looks like it's billing the storage size as RAM
Project ID:
b5c2da31-6490-410f-9a6b-6fafa1ef1379
with the new databases (database services) you are still billed for CPU and memory, but now also for the storage used in the volume. The old database plugins had an invisible 32GB volume that you weren't charged for
Where can I find some more info about this pricing change?
In the old databases - I am being billed for memory that is equal to the database size?
that's just how much memory the database uses, it does not incorporate the invisible volume
I guess my question is - on MongoDB (for example a 20gb database) is it cheaper to use the new databases?
a 20gb database?
oh 20gb of ram
no 20gb of data
the database itself is inactive even...
See this is my confusion
how do you know you are using 20gb of storage on the old database plugin?
the database is not really active
but the storage is about 20gb
which is being billed as RAM
which ends up being a lot
way more expensive than digitalocean
where I migrated from
side note, you are pro right?
yeah
I think the storage should not be billed as RAM
disk storage is just database size
but it's not truly using that much in RAM, I don't think
look at the graphs
I don't think it is, I think that's just how much memory the plugin is using
wild
I'm gonna try to migrate the data over to the new database
and see if there's a difference
see in the new database
I assume the data goes into the mounted disk
hence I get billed for persistent storage
not RAM
which is $0.25/GB
correct, and same for the old plugin, it's just the volume is invisible
storage is not billed as ram with the old plugins
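Just to put rough numbers on that (illustrative only; the $0.25/GB figure is the one quoted above, and you'd plug in your own plan's per-GB RAM rate from the pricing page):
```python
def monthly_cost(data_gb: float, per_gb_rate: float) -> float:
    """Flat per-GB-month cost, purely illustrative."""
    return data_gb * per_gb_rate

# ~20 GB of data sitting on a volume at the $0.25/GB rate mentioned above:
print(monthly_cost(20, 0.25))  # -> 5.0 per month for storage

# The old plugin effectively kept that working set in memory, so the comparable
# number is monthly_cost(20, <your plan's per-GB RAM rate>), which is much higher.
```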
That appears to be incorrect?
I get billed 0 for disk
all in ram
but disk is not 0
as I've said, the volume is invisible, you are billed for that much ram because that's how much ram is in use
okay gotcha... I just don't see how my application can be using that much ram 😓
it only shows disk because of a UI oversight
that's mongo for you
with the new databases as services you have so much more control; if you don't like your data being cached in memory by mongo itself, you can limit it with a flag or maybe an environment variable
alright - I'm gonna try moving to a new database just so i can see the difference
otherwise I'll likely have to migrate back to digitalocean, this pricing ended up way more
if you don't like the outcome, it's your database and you can do all the tuning you want!
So FYI - migrating to the new database version made a big difference, the memory usage seems more natural now & is down about 50%
I'm still trying to figure out how to lower the data being cached by mongo service itself
any ideas?
tbh, mongodb is well known for all that caching
if you still managed to lower it, you're good
unless someone here has some unknown knowledge on how to lower it even more
I'm sure there's some flags or something you could play with that would get mongo to stop caching as much data in memory.
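For reference, the main knob here is WiredTiger's cache: by default mongod will use up to roughly half of (RAM minus 1 GB) for it. It can be capped at startup with the `--wiredTigerCacheSizeGB` flag (or the matching `storage.wiredTiger.engineConfig.cacheSizeGB` config option), and it can also be resized at runtime. A rough sketch with pymongo, assuming `MONGO_URL` is a placeholder for whatever connection string your service exposes:
```python
from pymongo import MongoClient

MONGO_URL = "mongodb://user:pass@host:port"  # placeholder, use your service's URL
client = MongoClient(MONGO_URL)
admin = client.admin

# Inspect the current WiredTiger cache limit and how much of it is in use.
cache = admin.command("serverStatus")["wiredTiger"]["cache"]
print(cache["maximum bytes configured"], cache["bytes currently in the cache"])

# Shrink the cache at runtime (no restart needed), e.g. cap it at 512 MB.
admin.command({"setParameter": 1,
               "wiredTigerEngineRuntimeConfig": "cache_size=512M"})
```
Whether Railway's template exposes that flag as a variable, I don't know; worst case it can go in the start command.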
fwiw the old plugin used mongo 4.4 and the new mongo service uses 7.0
So after running it for a while, I'm running into this issue:
ERROR(4850900): pthread_create failed
Thread creation failed
Unable to schedule a new loop for the session workflow
Connection ended
Connection accepted
pthread_create failed
Thread creation failed
ERROR(4850900): pthread_create failed
Connection ended
Connection accepted
pthread_create failed
ERROR(4850900): pthread_create failed
Thread creation failed
Unable to schedule a new loop for the session workflow
Connection ended
Connection accepted
pthread_create failed
ERROR(4850900): pthread_create failed
Looks identical to this: https://www.answeroverflow.com/m/1161445811943649372
(link preview: "MongoDB failing to write - Railway" on Answer Overflow, showing the same pthread_create errors)
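Not Railway-specific, but `pthread_create failed` generally means mongod asked the OS for another thread and got refused, which usually comes down to memory pressure or a connection/thread limit, since mongod spawns a thread per client connection. A quick sanity check, assuming you can still connect (`MONGO_URL` is just a placeholder):
```python
from pymongo import MongoClient

MONGO_URL = "mongodb://user:pass@host:port"  # placeholder connection string
status = MongoClient(MONGO_URL).admin.command("serverStatus")

# Open vs. available client connections, and memory in use (MB).
print(status["connections"])  # {'current': ..., 'available': ..., 'totalCreated': ...}
print(status["mem"])          # {'resident': ..., 'virtual': ..., ...}
```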
gonna try to redeploy...
I'll extend the same offer to you: I need a way to reliably trigger this error in mongo to do any kind of useful debugging
I'll let you know if it starts giving me that error again!
and I won't redeploy, so you can debug
also - noticed that I can't deploy replica sets, any ETA on when this becomes available?
I can't debug anything like that, I don't work for railway
why can't you? though I will note Railway replicas don't have anything to do with mongo replicas
Oh gotcha... well here's the error again
any advice?
that makes 3 people
Setting this number to anything other than 1 results in the deployment failing to build, with an error
others having the same issue with mongodb and that pthread issue?
correct
as mentioned, Railway replicas have nothing to do with a mongo replica set; that is not what that setting does
Separate issue. My redis deployment used alongside that database (same project) is not responsive.
Error says "too many clients" when I try to output a message
restarting fixes it temporarily
assuming these are databases as services: these aren't managed by Railway, you manage your own database, so you are free to increase the connection limit on them as you see fit
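For the Redis side, "too many clients" means it hit its `maxclients` limit. A rough sketch of checking and raising it with redis-py, assuming `REDIS_URL` is a placeholder for the connection string Railway gives you; it's also worth checking whether the app opens a new client per request instead of reusing one shared pool, since that's the usual cause:
```python
import redis

REDIS_URL = "redis://:password@host:port"  # placeholder, use your service's URL
r = redis.Redis.from_url(REDIS_URL)

# How many clients are connected right now, and what the current cap is.
print(r.info("clients")["connected_clients"])
print(r.config_get("maxclients"))

# Raise the cap at runtime (still bounded by the container's file-descriptor limit).
r.config_set("maxclients", 20000)
```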