RunPod
Created by LordOdin on 4/17/2025 in #🔧|api-opensource
500 When trying to spawn pods too fast, is there a way to spawn multiple?
I've managed to start 100 nodes with no issues using synchronous requests, but when I use async it gives me 500s quite often. I usually try to start 20-100 nodes at once, but even 2 can cause the 500.
Payload is below, plus some extra per-node sprinkles (gpuTypeIds, imageName, name).
Error starting node nkKkw: Server error '500 Internal Server Error' for url 'https://rest.runpod.io/v1/pods'
For more information check: https://developer.mozilla.org/en-US/docs/Web/HTTP/Status/500
Response: {"error":"create pod: Something went wrong. Please try again later or contact support.","status":500}
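For reference, the request code is basically just asyncio.gather over the POSTs. A simplified sketch of what I'm running (httpx; API_KEY stands in for my real key):

import asyncio
import httpx

API_KEY = "..."  # placeholder for my RunPod API key

async def start_node(client: httpx.AsyncClient, payload: dict) -> dict:
    resp = await client.post(
        "https://rest.runpod.io/v1/pods",
        json=payload,
        headers={"Authorization": f"Bearer {API_KEY}"},
    )
    resp.raise_for_status()  # this is what raises the 500s shown above
    return resp.json()

async def start_nodes(payloads: list[dict]):
    async with httpx.AsyncClient(timeout=60) as client:
        # all creates fire at once -- this burst is what seems to trigger the 500s
        return await asyncio.gather(
            *(start_node(client, p) for p in payloads),
            return_exceptions=True,
        )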
GPU_TYPE_MAPPING = {
"3090": "NVIDIA GeForce RTX 3090",
"3090Ti": "NVIDIA GeForce RTX 3090 Ti",
"A5000": "NVIDIA RTX A5000",
"A6000": "NVIDIA RTX A6000",
"4000Ada": "NVIDIA RTX 4000 Ada Generation",
}
GPU_TYPES = list(GPU_TYPE_MAPPING.keys())
QUEUE_MANAGER_HOST = "http://127.0.0.1:7777"
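# Base payload for POST /v1/pods; per-pod fields (gpuTypeIds, imageName, name) are layered on top below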
DEFAULT_PAYLOAD = {
"allowedCudaVersions": [],
"cloudType": "SECURE",
"computeType": "GPU",
"containerDiskInGb": 50,
"containerRegistryAuthId": "",
"countryCodes": [""],
"cpuFlavorPriority": "availability",
"dataCenterPriority": "availability",
"dockerEntrypoint": [],
"dockerStartCmd": [],
"env": {
"QUEUE_MANAGER_HOST": QUEUE_MANAGER_HOST
},
"gpuCount": 1,
"gpuTypePriority": "availability",
"interruptible": False,
"locked": False,
"minDiskBandwidthMBps": 500,
"minDownloadMbps": 500,
"minRAMPerGPU": 32,
"minUploadMbps": 500,
"minVCPUPerGPU": 8,
"ports": [],
"supportPublicIp": False
}
url = f"{Runpod.BASE_URL}/pods"
payload["gpuTypeIds"] = list(GPU_TYPE_MAPPING.values())
payload["imageName"] = DOCKER_IMAGE
payload["name"] = f"{user_id}-{random_id}"