RunPod • 6mo ago
derRaab

accelerate launch: best --num_cpu_threads_per_process value?

Hi guys, I'm trying to do some LoRA training on a serverless endpoint and I wonder how many CPU cores are available with the different GPU types. Is there a specification on that somewhere? And/or what do you use? My first tests ran on a single thread, but I would love to maximize performance. 🙂
Solution
ashleyk • 6mo ago
You can use this environment variable:
RUNPOD_CPU_COUNT=6
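For reference, here's a minimal sketch of how that value could be fed into accelerate. The subprocess call and the train_lora.py script name are placeholder assumptions, not something from this thread:

import os
import subprocess

# RUNPOD_CPU_COUNT is set by RunPod inside the worker; fall back to
# os.cpu_count() when testing the code outside of RunPod.
cpu_count = int(os.environ.get("RUNPOD_CPU_COUNT", os.cpu_count() or 1))

# Pass the core count to accelerate so the training process can use all
# available CPU threads instead of defaulting to a single one.
cmd = [
    "accelerate", "launch",
    f"--num_cpu_threads_per_process={cpu_count}",
    "train_lora.py",  # placeholder for the actual training script
]
subprocess.run(cmd, check=True)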
derRaab • 6mo ago
Thank you so much! Somehow I never saw https://docs.runpod.io/docs/pod-env-variables . 🤦🏻
ashleyk • 6mo ago
Serverless environment variables are a bit different:
RUNPOD_WEBHOOK_POST_STREAM=https://api.runpod.ai/v2/12345657890/job-stream/12345657890/$ID?gpu=NVIDIA+L4
RUNPOD_ENDPOINT_ID=mpoacd7wrmv2fc
RUNPOD_CPU_COUNT=6
RUNPOD_POD_ID=p8btjjjjq865pi
RUNPOD_GPU_SIZE=AMPERE_24
RUNPOD_MEM_GB=62
RUNPOD_GPU_COUNT=1
RUNPOD_VOLUME_ID=hbsp3mav9e
RUNPOD_POD_HOSTNAME=p8btjjjjq865pi-64410f26
RUNPOD_DEBUG_LEVEL=INFO
RUNPOD_ENDPOINT_SECRET=XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX
RUNPOD_DC_ID=EU-RO-1
RUNPOD_AI_API_ID=mpoacd7wrmv2fc
RUNPOD_AI_API_KEY=XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX
RUNPOD_WEBHOOK_GET_JOB=https://api.runpod.ai/v2/12345657890/job-take/12345657890?gpu=NVIDIA+L4
RUNPOD_WEBHOOK_PING=https://api.runpod.ai/v2/12345657890/ping/12345657890?gpu=NVIDIA+L4
RUNPOD_WEBHOOK_POST_OUTPUT=https://api.runpod.ai/v2/12345657890/job-done/12345657890/$ID?gpu=NVIDIA+L4
RUNPOD_PING_INTERVAL=4000
CUDA_VERSION=11.8.0
NV_CUDNN_VERSION=8.9.6.50
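As a quick illustration (just a sketch, not RunPod-official code), a worker can read these values at startup to see what resources it actually has:

import os

# These variables are injected by RunPod into each serverless worker;
# the defaults below only matter when running the snippet outside RunPod.
cpu_count = int(os.environ.get("RUNPOD_CPU_COUNT", "1"))
gpu_count = int(os.environ.get("RUNPOD_GPU_COUNT", "0"))
gpu_size = os.environ.get("RUNPOD_GPU_SIZE", "unknown")  # e.g. AMPERE_24
mem_gb = int(os.environ.get("RUNPOD_MEM_GB", "0"))
dc_id = os.environ.get("RUNPOD_DC_ID", "unknown")        # data center, e.g. EU-RO-1

print(f"Worker resources: {cpu_count} CPU cores, {gpu_count}x {gpu_size} GPU, "
      f"{mem_gb} GB RAM, data center {dc_id}")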