RunPod•6mo ago
m0nkspade

Advice on Creating Custom RunPod Template

Can anyone point me to a good tutorial for creating my own RunPod templates?
35 Replies
ashleyk
ashleyk•6mo ago
For serverless or GPU cloud?
m0nkspade
m0nkspade•6mo ago
GPU cloud
justin
justin•6mo ago
RunPod Blog
DIY Deep Learning Docker Container
Are you tired of using someone else's container, only to find out that they have the wrong versions of your tools installed? Maybe you have just installed everything from scratch every time you wanted to start over and thought to yourself, "this is a waste of time"? I've personally gone
justin
justin•6mo ago
GitHub
FooocusRunpod/Dockerfile at master ¡ justinwlin/FooocusRunpod
Contribute to justinwlin/FooocusRunpod development by creating an account on GitHub.
justin
justin•6mo ago
what I like to do tho, as you can see from my Fooocus repo as a very easy example, is turn on a GPU pod on RunPod, usually one of their PyTorch ones since they do a lot of stuff in the background for OpenSSH access and such, follow the GitHub commands for the pip / apt installs, and then ask ChatGPT to add those commands to my Dockerfile 😂 Being able to use one of the base templates from RunPod as a starting point makes it easy.
ashleyk
ashleyk•6mo ago
This is a better example than the Deep Learning one: https://blog.runpod.io/creating-a-vlad-diffusion-template-for-runpod/
RunPod Blog
Creating a Vlad Diffusion Template for RunPod
The default Pod templates and models are pretty cool (if we say so ourselves), but play with them for too long and you'll start to get used to them. If you're looking for something new and exciting again, it might be time to create a new custom template. Here, I'll
justin
justin•6mo ago
oooohhh!
ashleyk
ashleyk•6mo ago
By the way @m0nkspade you posted your question in the wrong place, this is for serverless.
justin
justin•6mo ago
dang, this one is way better, good to know
m0nkspade
m0nkspade•6mo ago
So I want a template that already has the models I want for Stable Diffusion, so I don't have to download them every time I create a new pod.
ashleyk
ashleyk•6mo ago
GitHub
containers/official-templates/stable-diffusion-webui/Dockerfile at ...
🐳 | Dockerfiles for the RunPod container images used for our official templates. - runpod/containers
justin
justin•6mo ago
was just telling @Tenlïs haha https://github.com/justinwlin/FooocusRunpod but my advice is:
1) Start from a RunPod base image. You'll get a lot more of everything you need out of the box: an auto-started Jupyter server, the HTTP web terminal, etc.
2) Just add whatever git clones and copies to whatever folders and repositories you need. ChatGPT can help here.
3) I like to spin up a GPU pod with the RunPod base I'll use, then record the steps I type in the terminal to get things working, so I can tell ChatGPT to help me modify the Dockerfile.
4) DO NOT OVERWRITE THE CMD, because you'll delete the launch command from the RunPod base template you're using that starts up the Jupyter server and so on.
GitHub
GitHub - justinwlin/FooocusRunpod
Contribute to justinwlin/FooocusRunpod development by creating an account on GitHub.
justin
justin•6mo ago
You can see my GitHub for an example: I just start from a RunPod base, do some clones, install some packages, and done. You would just add additional download steps for your models into whatever folder your Stable Diffusion stuff expects.
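To make that concrete, here is a rough sketch of what such a Dockerfile could look like, assuming the AUTOMATIC1111 stable-diffusion-webui layout; the model URL and filename are placeholders you would swap for your own:
# rough sketch: start from a RunPod PyTorch base so Jupyter / SSH keep working
FROM runpod/pytorch:2.0.1-py3.10-cuda11.8.0-devel-ubuntu22.04

WORKDIR /workspace

# clone the webui and install its Python requirements (no GPU is needed at build time)
RUN apt-get update && \
    apt-get install -y --no-install-recommends git wget && \
    rm -rf /var/lib/apt/lists/* && \
    git clone https://github.com/AUTOMATIC1111/stable-diffusion-webui.git && \
    pip install --no-cache-dir -r stable-diffusion-webui/requirements.txt

# bake the checkpoint into the image so new pods don't have to re-download it
# (placeholder URL: point this at wherever your model is hosted)
RUN wget -q -O /workspace/stable-diffusion-webui/models/Stable-diffusion/model.safetensors \
    "https://example.com/path/to/your-model.safetensors"

# no CMD on purpose, so the base template's start script still launches Jupyter / SSH
The trade-off is a bigger image, but pod startup no longer depends on downloading the model.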
TenlĂŻs
Tenlïs•6mo ago
If you need help I can help 👋
m0nkspade
m0nkspade•6mo ago
Sure, I'm actually struggling to create my Docker container. It keeps throwing an error: ERROR: Could not build wheels for psutil, which is required to install pyproject.toml-based projects
TenlĂŻs
Tenlïs•6mo ago
it seems that you're missing some lib, or need an update, I think
m0nkspade
m0nkspade•6mo ago
K... I'm not sure what needs updating. I'm running my container on an M1 MacBook.
TenlĂŻs
Tenlïs•6mo ago
can you share your code? (Dockerfile)
m0nkspade
m0nkspade•6mo ago
Says 'gcc' failed, but it is installed
TenlĂŻs
Tenlïs•6mo ago
The error "ERROR: Could not build wheels for psutil, which is required to install pyproject.toml-based projects" you're encountering in your Docker build process is likely due to missing build dependencies required for compiling the psutil package. This package is a Python module providing an interface for retrieving information on system utilization (like CPU, memory, disks, network, sensors) in Python scripts. To resolve this issue in the context of your Dockerfile, you should ensure that all necessary system packages for building Python packages are installed. Here's how you can modify your Dockerfile to include these dependencies: 1. Add Build Dependencies: In the part of your Dockerfile where you're setting up your Python environment, make sure to install packages that are typically required for building Python modules, such as build-essential and python3-dev. Since you're using a Debian-based image (python:3.10.9-slim), you can use apt-get to install these:
RUN apt-get update && \
apt-get install -y build-essential python3-dev && \
apt-get clean && \
rm -rf /var/lib/apt/lists/*

2. Update pip and setuptools: Older versions of pip and setuptools can sometimes cause issues when building packages. Make sure they're updated:
RUN pip install --upgrade pip setuptools wheel

3. Python and pip Version: Ensure that the version of Python and pip in your Docker container are compatible with psutil. You might want to check the psutil documentation or PyPI page to see which versions are supported.
4. Error Logging: If the above steps don't resolve the issue, try to get more detailed error logs. This can provide additional insight into what might be going wrong.
By adding these build tools and ensuring everything is up to date, you should be able to resolve the issue with building wheels for psutil. Remember to place these commands in the appropriate part of your Dockerfile, where the Python environment is being set up.
m0nkspade
m0nkspade•6mo ago
Really appreciate this. I hope this solves everything and I can run Stable Diffusion from my Docker container.
TenlĂŻs
Tenlïs•6mo ago
let us know
m0nkspade
m0nkspade•6mo ago
In my Dockerfile, line 49 looks like the code in the first snippet, under 1. Add Build Dependencies. Am I to replace that with your suggested code snippet?
TenlĂŻs
Tenlïs•6mo ago
yea try it
m0nkspade
m0nkspade•6mo ago
Getting close:
1.163 OSError: /usr/local/lib/python3.10/site-packages/torchaudio/lib/libtorchaudio.so: undefined symbol: _ZN2at4_ops9fft_irfft4callERKNS_6TensorEN3c108optionalIlEElNS6_INS5_17basic_string_viewIcEEEE
It seems to be failing when trying: "RUN cd /stable-diffusion-webui && python cache.py --use-cpu=all --ckpt /model.safetensors". Not sure if this is related to the model I've selected.
justin
justin•6mo ago
Probably: 1) Ask ChatGPT to make your Dockerfile use a virtual environment for Python; this will really help isolate dependencies. 2) I haven't looked closely at your Dockerfile, but it looks like a dependency issue getting the torch / torchaudio libraries, so probably add a create-venv and activate-venv step to the Dockerfile, make sure it's Python 3.10, and yeah.
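Not the exact fix, just a minimal sketch of what that venv step could look like in a Dockerfile built on the python:3.10.9-slim base mentioned above (where python -m venv works out of the box); the torch / torchaudio pins are an assumption, so match them to whatever the webui's requirements actually expect:
# create an isolated venv so the webui's pins don't clash with what the base image already has
RUN python -m venv /opt/venv
ENV PATH="/opt/venv/bin:$PATH"

# install torch and torchaudio as a matching pair so libtorchaudio.so links against the right torch
# (2.0.1 / 2.0.2 is one known-compatible pairing for cu118; adjust to your requirements)
RUN pip install --no-cache-dir torch==2.0.1 torchaudio==2.0.2 \
    --index-url https://download.pytorch.org/whl/cu118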
minu
minu•5mo ago
hey, I am new to RunPod and Docker as well. Can anyone guide me on how I can create a custom template for stable-diffusion-xl-base-1.0? (i.e. I have to make some changes in the code, e.g. output image and prompt etc., that's why I want to build my own image.) P.S. my system does not have a GPU, and you require a GPU to run this stable-diffusion-xl-base-1.0 code.
justin
justin•5mo ago
Okay, first, what do you mean by code? I don't know what template you are using, but usually the RunPod templates are more of an environment setup, if I am correct. Then, to set it up:
1. I highly recommend reading my FooocusRunpod GitHub, where I make a pretty basic one for Fooocus usage: https://github.com/justinwlin/FooocusRunpod
Essentially, what I recommend is: use a base template from RunPod official. I like to use their PyTorch one:
FROM runpod/pytorch:2.0.1-py3.10-cuda11.8.0-devel-ubuntu22.04
This gives you a bunch of things under the hood such as OpenSSH / Jupyter Lab. WHAT THIS MEANS THOUGH IS NOT TO OVERRIDE THE CMD command, because if you overwrite the CMD command, the base template won't be able to start its start.sh script. Though you could modify it to still call it, as this link explains: https://docs.runpod.io/docs/customize-a-template (there's a rough sketch of that after the audiocraft example below).
2. What I highly recommend is to start up a GPU pod on RunPod using the PyTorch image and keep track of what you need to do: did you run a terminal command, did you pip install something, and so on. That is essentially what will go into your Docker image at the end of the day. You can use ChatGPT-4 and also phind.com to help you code.
3. Make sure you have Docker set up locally, can run the docker command, and can successfully push an image to your Docker Hub account, even a hello world.
These are the general steps. If you can do it in the GPU pod, you can (generally) make a Docker container that does the same thing. This is another one I have, for example, that is for audiocraft (sound / music generation):
# Use the updated base CUDA image
FROM runpod/pytorch:2.1.1-py3.10-cuda12.1.1-devel-ubuntu22.04

WORKDIR /app

# Best practices for minimizing layer size and avoiding cache issues
RUN apt-get update && \
apt-get install -y --no-install-recommends ffmpeg && \
rm -rf /var/lib/apt/lists/* && \
pip install --no-cache-dir torch==2.1.2 torchvision torchaudio xformers audiocraft firebase-rest-api==1.11.0 noisereduce==3.0.0 runpod

COPY preloadModel.py /app/preloadModel.py
COPY handler.py /app/handler.py
COPY firebase_credentials.json /app/firebase_credentials.json
COPY suprepo /app/suprepo

RUN python /app/preloadModel.py
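And here is the rough sketch mentioned above for keeping the base template's start script while still running your own setup. The /start.sh path is what the RunPod base images use as far as I know (verify it against the image you pick), and my_setup.sh is just a hypothetical script of yours:
FROM runpod/pytorch:2.0.1-py3.10-cuda11.8.0-devel-ubuntu22.04

# your own startup steps live in this (hypothetical) script
COPY my_setup.sh /my_setup.sh
RUN chmod +x /my_setup.sh

# run your setup first, then hand off to the base image's start.sh so Jupyter / SSH still come up
CMD ["/bin/bash", "-c", "/my_setup.sh && /start.sh"]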
The same thing applies to serverless: generally the serverless functions are just GPU pods with a handler.py that you initialize, so you can test it on a GPU pod, manually call it, etc.
ashleyk
ashleyk•5mo ago
You can take a look at this, you can add your own models and it supports ControlNet, AfterDetailer etc as well. https://github.com/ashleykleynhans/runpod-worker-a1111
GitHub
GitHub - ashleykleynhans/runpod-worker-a1111: RunPod Serverless Wor...
RunPod Serverless Worker for the Automatic1111 Stable Diffusion API - GitHub - ashleykleynhans/runpod-worker-a1111: RunPod Serverless Worker for the Automatic1111 Stable Diffusion API
minu
minu•5mo ago
I want to create the API myself and use a specific version of Stable Diffusion, and by code I mean the code provided on Hugging Face: https://huggingface.co/stabilityai/stable-diffusion-xl-base-1.0
minu
minu•5mo ago
Secondly, how can I create a Docker image on my own system when my system does not have enough resources? Don't you need enough resources on your own system to create a Docker image (like I need a GPU to run https://huggingface.co/stabilityai/stable-diffusion-xl-base-1.0, and my system doesn't have any)?
ashleyk
ashleyk•5mo ago
You don't need a GPU to build most docker images.
ashleyk
ashleyk•5mo ago
You can also look at something like https://depot.dev
Depot
Depot
The fastest way to build Docker images.
justin
justin•5mo ago
as ashelyk said most docker images u dont need a gpu to build. if ur system is too slow u can use depot, i use it for my images i liked on github with commands. once the images are built to run it u can run it on runpod gpu pod. (there are some like vllm)