Help with Dockerfile, mounting volumes and executing Python scripts from subfolders

Hi there. I would like to mount my media folder with its content (see screenshot) as storage for my CSVs and Python scripts. My Python scripts should be executable from within the media/suppliers/allcam folder. I have the Dockerfile below, but I don't seem to understand how volumes are handled in Railway. My Railway volume configuration: /code/media (in the dashboard). My Dockerfile:
# Pull the official base image
FROM python:3.10-slim-buster

# Set environment variables
ENV PYTHONDONTWRITEBYTECODE=1
ENV PYTHONUNBUFFERED=1

# The port environment variable is set by Railway when the container is run
# But we can default it to 8080 for local development
ENV PORT=8080

# Expose the port
EXPOSE $PORT

# Set work directory in the container
WORKDIR /code

# Install dependencies
# Copying the requirements file and installing dependencies separately
# from copying the entire project ensures that Docker's cache is used more effectively,
# and dependencies are not unnecessarily reinstalled upon every build.
COPY requirements.txt .
RUN pip install --no-cache-dir --upgrade pip \
&& pip install --no-cache-dir -r requirements.txt


# Copy all project files
COPY . /code

# Alternative start command
# CMD ["python", "main.py"]

# Command to run on container start
CMD uvicorn main:app --host 0.0.0.0 --port $PORT
productid=e7d7786b-ae28-46d6-ab8a-d8c5018004bb
Percy · 11mo ago
Project ID: e7d7786b-ae28-46d6-ab8a-d8c5018004bb
ZeroxAdvanced · 11mo ago
Using this configuration I cannot execute the scripts in my folder structure from media/suppliers/allcam, and it seems that not all files are copied. Question: can someone provide an example of how to make the Dockerfile copy the whole project and set up the Railway volume? My goal is to execute the scripts at runtime not from the root but from the media/suppliers/allcam folder. Locally the script execution works fine with workdir(), which gives me the current working directory. I think something must be misconfigured in the volume or the Dockerfile. Any help or suggestions are appreciated!
import asyncio

from nicegui import ui  # the ui.log / .classes usage suggests NiceGUI


async def start_script():
    log = ui.log(max_lines=1000).classes('w-full h-20')

    # Refer to the media/suppliers/allcam/get_fulldata.py script.
    # workdir() is the project's own helper that returns the current working directory.
    script_path = workdir() + 'media/suppliers/allcam/get_fulldata.py'
    log.push('Starting script...')
    try:
        process = await asyncio.create_subprocess_exec(
            'python', script_path,
            stdout=asyncio.subprocess.PIPE,
            stderr=asyncio.subprocess.STDOUT  # Combine stdout and stderr
        )

        # Read output line by line as it is produced
        while True:
            line = await process.stdout.readline()
            if not line:
                break
            # Push the output to the log object
            log.push(line.decode().strip())

        await process.wait()
        if process.returncode == 0:
            log.push('Script executed successfully')
        else:
            log.push('An error occurred while executing the script')

    except OSError as e:
        log.push('Execution failed: ' + str(e))
Brody · 11mo ago
For context: volumes are not mounted during build, they are only mounted when your application starts. All your files are copied into the image when it is built, but then you are asking Railway to mount an empty volume to /code/media. So when Railway starts your app, it essentially replaces the /code/media folder with what's in the volume: an empty folder. Instead, mount your volume to /code/media/storage. Since that is not a pre-existing folder, the mount point won't get in the way of anything; then have your code save the persistent files to that location. You do not need Python files in this folder, just media-type files.
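For illustration, a minimal sketch of what that could look like in the app code, assuming the volume is mounted at /code/media/storage as suggested above (folder, file, and field names below are placeholders, not the real project layout):

import csv
from pathlib import Path

# Railway volume mount point (assumed: /code/media/storage)
VOLUME_DIR = Path('/code/media/storage')
# Subfolder created at runtime inside the volume; it persists across deployments
ALLCAM_DATA = VOLUME_DIR / 'suppliers' / 'allcam'


def save_products_csv(rows: list[dict]) -> Path:
    """Write runtime-generated data into the mounted volume, not into folders shipped in the image."""
    ALLCAM_DATA.mkdir(parents=True, exist_ok=True)
    out_file = ALLCAM_DATA / 'products.csv'
    with out_file.open('w', newline='') as f:
        writer = csv.DictWriter(f, fieldnames=['sku', 'name', 'price'])
        writer.writeheader()
        writer.writerows(rows)
    return out_file

The Python scripts themselves stay wherever COPY . /code puts them in the image, so they ship with each deployment; only the generated CSV/media files go into the volume.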
ZeroxAdvanced · 11mo ago
@Brody OK, I have achieved something similar with the Dockerfile below. However, I have one question. Here is my Dockerfile:
# Pull the official base image
FROM python:3.10-slim-buster

# Set environment variables
ENV PYTHONDONTWRITEBYTECODE=1
ENV PYTHONUNBUFFERED=1

# The port environment variable is set by Railway when the container is run
# But we can default it to 8080 for local development
ENV PORT=8080

# Expose the port
EXPOSE $PORT

# Set work directory in the container
WORKDIR /code

# Install dependencies
# Copying the requirements file and installing dependencies separately
# from copying the entire project ensures that Docker's cache is used more effectively,
# and dependencies are not unnecessarily reinstalled upon every build.
COPY requirements.txt .
RUN pip install --no-cache-dir --upgrade pip \
&& pip install --no-cache-dir -r requirements.txt


# Copy all project files into /code
COPY . /code/


# Alternative start command
# CMD ["python", "main.py"]

# Command to run on container start
CMD uvicorn main:app --host 0.0.0.0 --port $PORT
And my volume path is /code/media. I can access and save files fine, e.g. CSV files that I create in my media folder at runtime. However, I would also like to run scripts from my media folder, so you are saying the approach is to keep my scripts in another folder? E.g.:
media/supplier/allcam/products.csv <-- persistent file
scripts/supplier/allcam/import.py <-- new folder (not in the media volume), deployed via GitHub
Then I only use the media/supplier folder as persistent storage? So in general the volume is only used for files created at runtime, and not for placing my scripts? My end goal is that users can also modify and upload scripts in the volume, not only via deployment.
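In other words, something along these lines (a sketch only; the scripts/ folder and file names are assumptions based on the example paths above):

from pathlib import Path

APP_ROOT = Path('/code')                                    # WORKDIR from the Dockerfile, baked into the image
SCRIPTS_DIR = APP_ROOT / 'scripts' / 'supplier' / 'allcam'  # not mounted: redeployed via GitHub with every build
DATA_DIR = Path('/code/media') / 'supplier' / 'allcam'      # inside the Railway volume: survives redeploys

import_script = SCRIPTS_DIR / 'import.py'   # code, versioned in the repo
products_csv = DATA_DIR / 'products.csv'    # data, created and updated at runtime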
Brody · 11mo ago
"My end goal is that users can also modify and upload scripts in the volume, not only via deployment."
This sounds like the most insecure thing you could ever do. However, I think my previous answer still covers your additional questions, as it lays out the basics for you; it is up to you how you want to architect your project with the new information.
ZeroxAdvanced · 11mo ago
@Brody hi, the scripts will only be uploaded by our dev team. No public release, only internal usage. OK, so you mean I should just place the scripts in another folder that is not mounted, essentially? Can you confirm? Then I know the right approach.
Brody · 11mo ago
If you need the scripts to persist between deployments, they must be put in a volume.
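If you do go that route, one possible sketch (all paths below are assumptions, not the project's actual layout) is to seed the volume from the scripts bundled in the image at startup, so scripts uploaded or edited in the volume are never overwritten by a redeploy:

import shutil
from pathlib import Path

BUNDLED_SCRIPTS = Path('/code/scripts')       # copied into the image at build time (assumed layout)
VOLUME_SCRIPTS = Path('/code/media/scripts')  # lives inside the Railway volume, survives redeploys


def seed_volume_scripts() -> None:
    """Copy bundled scripts into the volume once; never overwrite scripts already present there."""
    VOLUME_SCRIPTS.mkdir(parents=True, exist_ok=True)
    for src in BUNDLED_SCRIPTS.rglob('*.py'):
        dest = VOLUME_SCRIPTS / src.relative_to(BUNDLED_SCRIPTS)
        if not dest.exists():
            dest.parent.mkdir(parents=True, exist_ok=True)
            shutil.copy2(src, dest)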