Max number of users
My telegram bot is taking a long time to reply to user requests. Around 10 minutes at peak usage time. There are around 1,500 users. What can I do to address this issue?
78 Replies
Project ID:
N/A
the simple answer? write optimized code
Is there anything on the Railway server side that could cause this issue?
Or should I focus only on my program?
you are on the trial plan with limited resources, I'd suggest upgrading first?
I am on the pro plan
are you running into your resource limit of 32 vcpu and 32gb ram?
I guess I have never hit my resource limit.
One of my users has told me that they get double responses for a single request (like clicking a command once). I can't see anything in my code that could cause this.
Are you using replicas?
What does your bot do? Does it access a database?
@botdev ^
My bot generates ppt files
power point files??
For each slide, it sends 3 requests to OpenAI, 5-15 to Unsplash, and 10-30 to Google Translate
yes
It generates .pptx files
Doesn't sound like that should take 10 mins, is the repo public?
No, it is not.
In what situations does it take 10 minutes to respond?
When the user requests a powerpoint?
yeah that is a lot of requests, but not 10 minutes worthy
It will be very hard to help you debug if we do not have an example
when pressing /start, which starts a form to fill in. Then the bot starts generating files
Does it take 10 minutes to process locally?
no. 1-2 minutes maximum
I use asyncio to process ppt generation in the background, and save references to those tasks in a dictionary
my users say that when they try to cancel, the bot takes a long time to respond
well there's a surefire way to figure out why it takes so long: you need to add fine-grained telemetry logging for how long each step takes after you receive /start
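That telemetry could be as simple as a timing context manager around each stage; a minimal sketch (the step names in the usage comments are hypothetical, standing in for the bot's real pipeline):

```python
import logging
import time
from contextlib import contextmanager

logging.basicConfig(level=logging.INFO)
log = logging.getLogger("bot-telemetry")

durations: dict[str, float] = {}  # last measured duration per step


@contextmanager
def timed(step: str):
    """Log how long a named step takes so the slow stage stands out."""
    start = time.perf_counter()
    try:
        yield
    finally:
        elapsed = time.perf_counter() - start
        durations[step] = elapsed
        log.info("%s took %.2fs", step, elapsed)


# Hypothetical usage inside the /start pipeline:
# with timed("openai_outline"):
#     outline = ask_openai(prompt)
# with timed("unsplash_images"):
#     images = fetch_unsplash(keywords)
```

Reading the logs after a peak-hour run would show which of the OpenAI, Unsplash, or translation stages eats the 10 minutes.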
Exactly what I was going to say
I will try doing that.
what is this?
Unclosed client session
@replica: 094657f8-ece9-4f26-acab-5e5d93d35085
A Railway feature where you can have multiple instances of your app running
I get this error, too.
you aren't, you can't use replicas for telegram bots (a bot token can only be used in one place at a time)
Shouldn't, but they could've enabled it unintentionally
fair
does this error mean I am using a replica?
you aren't
you can't with a telegram bot
but yeah, without this kind of telemetry logging Adam and I would just be guessing at the problem
Okay. I will try this first
Then come back here
Thanks!
sounds good
I have a question, just asking to get a general roadmap. My bot has two parts: one gathers info from a user, the other generates a ppt file using that data. How would I separate these two parts so that the first runs as a single instance and the second can be multi-instance? I mean, telegram bots can't have multiple instances, but file generation can. How would this be done on the Railway cloud?
@Adam any idea lol
I wouldn’t do multiple instances if you haven’t already implemented multithreading. Doesn't look like you have, given that your resource usage is so low
have you implemented multithreading?
@botdev
I would definitely still recommend splitting the services though
I haven't yet. When I did research, I came to the conclusion that multithreading would lead me to a lot of problems in the end
splitting means developing another application that runs on a different computer?
And talk to each other
different railway service in the same project, but yes
same concept yeah
one service just to handle the bot actions, another service to generate the pptx files. This pptx-generating service would be an API, and then you can utilize Railway's replicas on this service for horizontal scaling
yup exactly. We’re also here to help with all that in case you run into any issues
What language is your app written in? Would be helpful to know in case we can recommend packages or frameworks
I use python
Everything is in python at the moment
I need to do some research on API
a simple flask api that accepts form data and returns the power point file would be easy enough break off from your all in one service you currently have
yep agreed
I’m well versed in python so happy to help look over your code or deal with any errors
This is helpful. Just shortened my research scope. Thanks!
Thanks! I will ask you.
Hi
Hope you are doing well
I have split my code into two parts: file generator and telegram bot
Currently, I am running the FastAPI API endpoint on my laptop
When there are two requests at the same time, my file generator endpoint waits until the first received request is finished.
Then it processes the next request.
How does this behave on the railway server? Will there be an instance of my program for every request?
what command do you use to run the fastapi server locally?
when you use uvicorn on railway you can modify the start command to use threaded async workers, which will allow you to handle multiple concurrent requests, though i highly recommend using hypercorn instead of uvicorn https://hypercorn.readthedocs.io/en/latest/
hi @Brody . Thanks for the reply! I am using uvicorn.
Do I have to modify my code so that it can use multiple async workers?
i don't think so, just increase the worker count https://www.uvicorn.org/settings/#production
start with 4
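For reference, the start command would look something like this (a sketch: `main:app` assumes the FastAPI app object is named `app` in `main.py`, and Railway injects `$PORT`):

```shell
# uvicorn with 4 worker processes
uvicorn main:app --host 0.0.0.0 --port "$PORT" --workers 4

# or the hypercorn equivalent
hypercorn main:app --bind "0.0.0.0:$PORT" --workers 4
```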
My next question: when I send requests to an API using the python requests library, the server endpoint code is blocked until the request receives an answer. I have wrapped all the requests into a python function and added it to FastAPI background tasks. Still, the code is blocked by requests. What is the best way to keep my file generator endpoint responsive?
even if one call is blocked, that shouldn't block other api calls
are other api calls being blocked?
I run the endpoint on my laptop using uvicorn. Tried sending multiple requests at the same time. The endpoint processed the requests one by one, waiting for the first to finish. Not sure if this is because of the python requests library.
this is an issue with the app, not the client
have you tried increasing the workers for uvicorn
Not yet. I will try that later today.
okay let me know how that goes
Okay.
Hi!, @Brody
I have tried setting uvicorn workers to 4. The file generator endpoint is handling concurrent requests at the same time. The design of my system is that I keep references to all file generation tasks. The tasks are stored in a dictionary that maps user_ids to active task statuses. The telegram bot periodically checks the task status by user_id. After I deployed multiple uvicorn workers, the dictionary is not shared between workers, so the status checker is unable to find tasks by user_id.
I added task status checkers because I wanted to add cancelling functionality to my endpoint. If a user cancels file generation, the bot sends a signal back to the endpoint to stop processing.
you likely want to use a proper task queue system, like celery, sounds like you are trying to roll your own?
Is it possible to save the reference in MongoDB deployed on Railway? I would save the process status and, once the file is ready, a path to download the file.
i think you should use celery, it still sounds like you are trying to roll your own task queue system
@Adam may have a different suggestion though
Okay. I am researching celery.
celery sounds good to me
@Adam I am trying to convert .pptx (PowerPoint) files to .pdf format. From what I've found, my program should run on a Windows machine, or on Linux with Microsoft PowerPoint installed. What kind of operating system do Railway servers use?
Ubuntu linux
Can I install PowerPoint there?
Not that I know of. You can probably use the LibreOffice CLI, though.
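A sketch of that conversion via subprocess; it assumes the `soffice` binary (LibreOffice) is on the PATH, which on Railway would mean installing the apt/nix package first:

```python
import subprocess
from pathlib import Path


def soffice_cmd(pptx_path: str, out_dir: str) -> list[str]:
    """The LibreOffice headless conversion command line."""
    return ["soffice", "--headless", "--convert-to", "pdf",
            "--outdir", out_dir, pptx_path]


def pptx_to_pdf(pptx_path: str, out_dir: str = ".") -> Path:
    """Convert a .pptx to .pdf; raises if soffice is missing or fails."""
    subprocess.run(soffice_cmd(pptx_path, out_dir), check=True)
    # LibreOffice names the output after the input file's stem.
    return Path(out_dir) / (Path(pptx_path).stem + ".pdf")
```

Conversion fidelity won't match real PowerPoint exactly, so it's worth eyeballing a few converted decks.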
also worth mentioning your app is run in a docker container
so whatever you need to install has to be installable via a nix package or an apt package
I will keep this in mind! Thanks!
@Adam I guess I am ready to deploy my uvicorn FastAPI endpoint on Railway. Anything that I need to keep in mind?
I plan to set uvicorn workers to 4
I have not implemented pdf converter yet.
@Brody do I deploy my api endpoint in the same project with my telegram bot?
@Adam I have deployed my FastAPI endpoint. Now I get this error: "HTTP/1.1 503 Service Unavailable"
please chill with the pings, we were both sleeping
> do I deploy my api endpoint in the same project with my telegram bot?
yes absolutely
> I have deployed my FastAPI endpoint. Now I get this error: "HTTP/1.1 503 Service Unavailable"
please read this docs page https://docs.railway.app/troubleshoot/fixing-common-errors
ty Brody
Haha sorry.
Are replica instances automatically scaled up?
How does railway know when to scale?
it's not automatic, they run the number of replicas you specify, and only that number
I have set my uvicorn worker count to 4 and the same number for the replica size in the project settings
Deploy log shows only 4 instances
Is that number of instances or workers?
Oh sorry
my bad
I guess I did not specify the workers while starting my application.
if 4 replicas each ran 4 workers, that's 16 total workers
Does my bill increase for every worker even if some of them stay idle, not processing anything?
i think a worker does allocate some minimum amount of memory, and replicas are just clones of your service, so yes, either would raise the costs to some degree
Advice on file retrieval system?
I plan to implement celery for queueing tasks. The telegram bot sends a request to the FastAPI endpoint. The endpoint adds the task to RabbitMQ, then celery gets tasks from the broker and executes them. Once a task is finished, a URL path to the generated file is returned. Not sure what to do after that. My telegram bot needs to somehow retrieve this file.
I want to do this on the railway platform
sending the file to the user is what you'd do next
Where is the generated file saved, so that my bot can download it?
Bot is communicating with my fastapi endpoint
it would be saved in the FastAPI service's container. Once the api is done creating the file, generate a unique id that pertains to the file, and then set up a file handler that returns the correct file as requested by id. That way your telegram bot can get the file from the api
Thanks!