R
Railway•8mo ago
pandas

Dockerized application on Railway (docker compose)

Below is an example of a Django application using Celery, Redis, and a Postgres DB. Railway offers managed Redis and Postgres databases, which are a straightforward way to set those up. But what about the Celery worker, or any other extra workers? How do I set those up? There are also some volumes:
volumes:
postgres_data:
django_storage:
I also read there will be downtime during deployment because of the volumes? It would be nice to clarify how that works, and how to generally set up more complex applications that require multiple Docker services to run. Would I need a separate GitHub repo for each individual service, such as Celery? And how do I make the services talk to each other, do we use the internal Railway network? Thanks
version: '3'

services:
db:
image: postgres:13
volumes:
- postgres_data:/var/lib/postgresql/data
env_file:
- ./server/.env

redis:
image: redis:6

web:
build: .
command: ["./entrypoint.sh"]
volumes:
- .:/app
- django_storage:/app/django_storage
ports:
- "8000:8000"
depends_on:
- db
- redis
environment:
DEBUG: ${DEBUG:-'True'}

celery:
build: .
command: celery -A server worker -l info -n p80 --pool=threads --concurrency=100
volumes:
- .:/app
depends_on:
- db
- redis

nginx:
image: nginx:latest
ports:
- "80:80"
- "443:443"
volumes:
- ./nginx.conf:/etc/nginx/nginx.conf
- ./certs:/etc/letsencrypt
- ./webroot:/var/www/certbot
depends_on:
- db
- redis
- web
- celery

certbot:
image: certbot/certbot
command: certbot --help
volumes:
- ./certs:/etc/letsencrypt
- ./webroot:/var/www/certbot
depends_on:
- db
- redis
- web
- celery
- nginx

volumes:
postgres_data:
django_storage:
14 Replies
Percy
Percy•8mo ago
Project ID: N/A
pandas
pandas•8mo ago
N/A Dockerfile
# Use an official Python runtime as the base image
FROM python:3.8-slim

# Set environment variables for Django
ENV PYTHONUNBUFFERED=1
ENV DJANGO_SETTINGS_MODULE=server.settings

# Set the working directory in the container to /app
WORKDIR /app

# Copy the current directory into the container at /app
COPY . /app/

# Install system dependencies
RUN apt-get update && \
apt-get install -y --fix-missing --no-install-recommends \
libpq-dev \
libc6-dev \
build-essential \
gcc \
g++ && \
apt-get clean && \
rm -rf /var/lib/apt/lists/*

# Install Python dependencies
RUN pip install --upgrade pip && \
pip install -r requirements.txt

# Run the application
CMD ["gunicorn", "server.wsgi:application", "--bind", "0.0.0.0:8000"]
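
The compose file starts the web service with `./entrypoint.sh`, but the script itself isn't shown in the thread. A minimal sketch of what such an entrypoint typically contains for a Django project (every command here is an assumption, not the poster's actual script):

```shell
#!/bin/sh
set -e

# Apply pending database migrations before serving traffic
python manage.py migrate --noinput

# Collect static assets into STATIC_ROOT
python manage.py collectstatic --noinput

# Hand off to gunicorn as PID 1 so it receives shutdown signals
exec gunicorn server.wsgi:application --bind 0.0.0.0:8000
```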
zeroloc
zeroloc•8mo ago
Following. I have the same use case. Django, Celery, Redis, and Caddy deployed on a single AWS EC2. We do not have volumes though and DB is on RDS.
Solution
Brody
Brody•8mo ago
so as you may know, Railway does not yet support docker compose; you would have to manually deconstruct the compose file into individual Railway services
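
Deconstructing the compose file above would roughly look like the following Railway CLI session. This is a sketch only: the exact `railway add` flags vary by CLI version, so treat the commands as assumptions and check `railway --help` on your version.

```shell
# Create a Railway project for the app
railway init

# Replace the `db` and `redis` compose services with Railway's
# managed databases (flag syntax varies by CLI version)
railway add --database postgres
railway add --database redis

# Deploy the repo as the `web` service; Railway auto-detects the Dockerfile
railway up

# The `celery` service is the same repo deployed a second time,
# with a custom start command set in the service settings.
# The `nginx` and `certbot` services are dropped entirely,
# since Railway terminates TLS for you.
```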
Brody
Brody•8mo ago
except you wouldn't need nginx or certbot since Railway handles SSL certs for you
pandas
pandas•8mo ago
Thanks, Brody. What about communication between the apps? Can the IP be pointed to a service name, or is the IP internal to the Railway app?
Brody
Brody•8mo ago
communication between apps can use the private network domains
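
Concretely, on Railway you would point the web and celery services at the databases via environment variables using reference-variable syntax, which resolves to the other service's private-network domain. The service and variable names below are assumptions; check each service's Variables tab for the actual names:

```
# Variables on the web and celery services (names assumed):
REDIS_URL=redis://${{Redis.RAILWAY_PRIVATE_DOMAIN}}:6379
DATABASE_URL=${{Postgres.DATABASE_URL}}
```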
pandas
pandas•8mo ago
Ty, and how does volume downtime work? Are deployments zero-downtime if there's no volume attached?
Brody
Brody•8mo ago
a service with a volume will incur some downtime, even if you use a healthcheck; this is done to prevent data corruption
pandas
pandas•8mo ago
One suitable solution I found: you can keep docker-compose for easier local development, and then when taking it to production on Railway you essentially deploy the GitHub repo twice. The first time you let Railway detect the Dockerfile and deploy it, and the second time you change the run command to run Celery.
Brody
Brody•8mo ago
that's exactly right, the services would essentially be the exact same, with one having a start command to run celery
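
Since both services run the same image, the code just has to read its broker location from the environment. A minimal sketch, assuming a `REDIS_URL` variable is set on Railway (the variable name and local fallback are assumptions):

```python
import os

def broker_url(env=os.environ):
    """Resolve the Celery broker URL for this deployment.

    On Railway, REDIS_URL would point at the private-network Redis;
    locally it falls back to the docker-compose service hostname.
    """
    return env.get("REDIS_URL", "redis://redis:6379/0")
```

The Celery app can then be constructed as `Celery("server", broker=broker_url())`, and the same settings work unchanged for both the web and worker services.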
pandas
pandas•8mo ago
Communication between them should be over the internal Railway address, but unfortunately keep in mind that if you use app sleep (currently in beta), a service won't wake up on the internal Railway address; the team is working on that. Can you mark it as solved please? Not sure how to do it. Thank you for the help, appreciate it
Brody
Brody•8mo ago
thank you for also helping the other user in this thread 🙂
zeroloc
zeroloc•7mo ago
Thanks @pandas9 and @Brody for the pointers.