Connecting to a FastAPI websocket route from Amazon EC2
Hi community
We have a websocket route implemented with FastAPI; the application exposes port 8000 and we run it on 0.0.0.0:8000.
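For reference, the route looks roughly like this (a minimal sketch; the /ws path and echo handler are illustrative, not our exact code):

from fastapi import FastAPI, WebSocket

app = FastAPI()

@app.websocket("/ws")  # illustrative path
async def ws_endpoint(websocket: WebSocket):
    await websocket.accept()
    while True:
        msg = await websocket.receive_text()
        await websocket.send_text(f"echo: {msg}")

# started with: uvicorn main:app --host 0.0.0.0 --port 8000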
Now I am trying to connect to this websocket from a Python backend running on an Amazon EC2 instance, using the pod's proxy URL.
For some reason, we don't get past the handshake with the websocket endpoint on the RunPod pod when connecting from EC2.
However, connecting to the same websocket endpoint on the pod from the same Python backend running locally (not on EC2) works fine.
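The client side is essentially this (a minimal sketch using the websockets library; the pod ID and /ws path are placeholders, and wss://<podId>-<port>.proxy.runpod.net is our understanding of the proxy URL format):

import asyncio
import websockets

POD_ID = "abc123"  # placeholder pod ID

async def main():
    # proxy URL format assumed: https://<podId>-<port>.proxy.runpod.net
    uri = f"wss://{POD_ID}-8000.proxy.runpod.net/ws"  # /ws path is a placeholder
    async with websockets.connect(uri) as ws:
        await ws.send("ping")
        print(await ws.recv())  # from EC2, this is where the handshake fails for us

asyncio.run(main())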
We create the pods programmatically via the API, like this:
import time

# gpu_type, datacenter, and network_volume_id are defined elsewhere in our code
payload = {
    "name": f"gpu-pod-{int(time.time())}",
    "imageName": "runpod/pytorch:1.0.1-cu1281-torch280-ubuntu2404",
    "computeType": "GPU",
    "cloudType": "SECURE",
    "interruptible": False,
    "gpuCount": 1,
    "gpuTypeIds": [gpu_type],  # Only request this specific GPU type
    "gpuTypePriority": "availability",
    "dataCenterId": datacenter,
    "allowedCudaVersions": ["12.8"],
    "containerDiskInGb": 5,
    "ports": [
        "8000/http",
        "9000/http",
    ],
    "dockerStartCmd": [
        "bash",
        "/workspace/start_service.sh",
    ],
    "supportPublicIp": True,
    "volumeInGb": 0,
    "volumeMountPath": "/workspace",
    "networkVolumeId": network_volume_id,
}
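We then send the payload to the pod-creation endpoint, roughly like this (a sketch: the https://rest.runpod.io/v1/pods endpoint and the id response field are assumptions based on RunPod's REST API, so check them against the docs):

import requests

RUNPOD_API_KEY = "..."  # set from your environment / secrets store

resp = requests.post(
    "https://rest.runpod.io/v1/pods",  # assumed RunPod REST endpoint; verify against the docs
    headers={"Authorization": f"Bearer {RUNPOD_API_KEY}"},
    json=payload,
    timeout=30,
)
resp.raise_for_status()
pod = resp.json()
print(pod.get("id"))  # response shape assumed; inspect resp.json() to confirm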
What are we doing wrong?
Thank you for your help 🫡
2 Replies
@MojoJojo Our proxy service doesn't currently support websockets. You can get around it pretty easily by changing the /http on your port 8000 to /tcp and using the machine's IP instead of the pod's proxy URL.
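For example, something like this rough sketch, once you know the machine's IP and the external port RunPod mapped to 8000/tcp (both values below are placeholders):

import asyncio
import websockets

PUBLIC_IP = "203.0.113.10"  # placeholder: the machine's public IP
EXTERNAL_PORT = 12345       # placeholder: external port mapped to 8000/tcp

async def main():
    # plain ws:// because you're hitting the machine directly, not the TLS proxy
    uri = f"ws://{PUBLIC_IP}:{EXTERNAL_PORT}/ws"  # /ws path is a placeholder
    async with websockets.connect(uri) as ws:
        await ws.send("ping")
        print(await ws.recv())

asyncio.run(main())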
@Dj Hey there, thanks for getting back to me. We actually did get the websocket handshake working with the proxy URL; it turned out a CUDA mismatch error was shutting the whole process down.
But now we've been trying to launch a pod with allowed CUDA 12.8 and a matching PyTorch template with CUDA 12.8, and sometimes the official image still gets downloaded, killing our fast startup. Why is that? I thought using the official PyTorch image would be quick, since it reuses your cache.