Hey guys 👋 Getting a request timeout issue on load tests
I deployed a Node app with Express on Railway, using Railway-managed Postgres and Redis services.
After some time into a load test, I started seeing this error:
{"remote_ip":"34.87.74.158","remote_port":443,"url":"https://onebet-dev.up.railway.app/event/bet/place","status":0,"status_text":"","proto":"","headers":{},"cookies":{},"body":null,"timings":{"duration":60000.783,"blocked":0.001,"looking_up":0,"connecting":0,"tls_handshaking":0,"sending":0.125,"waiting":60000.658,"receiving":0},"tls_version":"","tls_cipher_suite":"","ocsp":{"produced_at":0,"this_update":0,"next_update":0,"revoked_at":0,"revocation_reason":"","status":""},"error":"request timeout","error_code":1050,"request":
How do I solve this? Is it due to rate limiting on Railway's side, or an issue in the Node app itself?
23 Replies
Project ID:
2cb48114-4aaa-4153-b45f-277a43de1723
Railway has no hard-set rate limits.
Tell us more about how you're load testing. How many connections are you able to achieve before you get timeouts?
I was trying around 100 concurrent connections and 2000 requests in total.
Sometimes it happens at just 50 concurrent connections.
2000 requests per what time period?
Can you give me more metrics?
2000 requests over a span of 3-5 minutes.
It's breaking at 50 or 100 RPS.
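For context, some back-of-the-envelope math of my own (not from the logs): 2000 requests spread over 3-5 minutes is only a modest average rate, so the timeouts are more likely tied to per-request latency under concurrency than to raw volume:

```javascript
// Average request rate implied by "2000 requests in 3-5 minutes".
function avgRps(totalRequests, minutes) {
  return totalRequests / (minutes * 60);
}

console.log(avgRps(2000, 3).toFixed(1)); // ~11.1 rps averaged over 3 minutes
console.log(avgRps(2000, 5).toFixed(1)); // ~6.7 rps averaged over 5 minutes
```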
What kind of data does the endpoint you were testing return?
It takes a payload as input and returns JSON.
Processing takes around 2 seconds; it makes 3 DB calls through Prisma and then a gRPC request.
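One thing worth checking on the app side (a sketch under my own assumptions; `fakeQuery` is an illustrative stand-in, not your actual Prisma code): if the 3 DB calls are independent of each other, awaiting them one by one adds their latencies together, while `Promise.all` runs them concurrently and can shave down the ~2 s per-request time, which directly raises how much throughput a fixed number of connections can sustain.

```javascript
// Stand-in for an async Prisma query that takes ~100 ms.
const delay = (ms) => new Promise((resolve) => setTimeout(resolve, ms));

async function fakeQuery(name) {
  await delay(100);
  return name;
}

// Three sequential awaits: latencies add up (~300 ms total).
async function sequential() {
  const start = Date.now();
  await fakeQuery("a");
  await fakeQuery("b");
  await fakeQuery("c");
  return Date.now() - start;
}

// Promise.all runs independent queries concurrently (~100 ms total).
async function parallel() {
  const start = Date.now();
  await Promise.all([fakeQuery("a"), fakeQuery("b"), fakeQuery("c")]);
  return Date.now() - start;
}

(async () => {
  console.log(`sequential: ${await sequential()} ms`);
  console.log(`parallel:   ${await parallel()} ms`);
})();
```

This only applies if the queries don't depend on each other's results, of course.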
The load on the DB and on the server running the gRPC service is fine.
I'm not seeing any spikes on the Railway instance either.
Still, requests are timing out.
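A possible explanation (my own reasoning, not something visible in the Railway metrics): by Little's Law, sustainable throughput is roughly concurrency divided by per-request latency. With ~2 s of processing per request, 100 concurrent connections top out around 50 RPS; if the load generator pushes faster than that, requests queue up until they hit the 60 s timeout seen in the error output.

```javascript
// Little's Law: concurrency = throughput * latency,
// so the throughput ceiling = concurrency / latency.
function maxRps(concurrency, latencySeconds) {
  return concurrency / latencySeconds;
}

console.log(maxRps(100, 2)); // 50 rps ceiling: 100 connections at ~2 s each
console.log(maxRps(50, 2));  // 25 rps ceiling: 50 connections at ~2 s each
```

That would line up with the test breaking around 50 RPS at 100 connections, without any spike showing up in CPU or memory.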
Are you able to do more than 100 concurrent connections and more than 5000 requests per 5 minutes when hosting your app elsewhere?
That we haven't tried.
We'll try that and report back,
but we wanted to understand how Railway handles requests at more than 100 RPS,
usually 100-500 RPS.
Given that you’re on the Hobby plan, you only have access to 8 vCPUs and 8 GB of RAM. Can you please share your metrics for the period you were load testing? It’s possible you maxed out the CPU.
Should we deploy the Node app directly and expose it through Railway, or is it better to add a load balancer on top?
Railway can load-balance with replicas, so that should be a non-issue.
Have you run the same test locally and achieved more throughput? Are you sure this isn't a bottleneck in your code?
Sure, I'll share the metrics. Is the project on the Hobby plan? My account isn't the creator of the project.
What plan is the project owner on?
I assumed the project was on your account; your Discord account doesn’t have the Pro plan role.
I'll check the owner's plan, run the load test locally, and get back on this. Thanks for the insights.
These are the metrics for the past 6 hours:
Metrics look perfectly normal; looking forward to the local load-testing results.
Agreed. It looks like it’s crashing after a spike, but that spike is nowhere near your max.
Yes, it's better if I load test locally and eliminate the possibility of code-level issues. Thanks!