Runpod•16mo ago
Jas

"The port is not up yet"

Having problems again. I created a new pod about 1 hour ago; it took an hour to cloud sync, and now the pod will not run anything. I have tried restarting a couple of times, but I always get this error message.
60 Replies
digigoblin
digigoblin•16mo ago
Stable Diffusion Ultimate template. System logs are useless; look at the container logs. There is a 99.999999999999999999999999999999999999% chance it's syncing to workspace. The screenshot that says the port is not up yet tells you exactly what to do and what to check; you obviously didn't bother following the instructions.
Jas
JasOP•16mo ago
This template: Stable Diffusion Kohya_ss ComfyUI Ultimate
digigoblin
digigoblin•16mo ago
You probably cloud synced an older version of the template; 6.0.1 was released today. It needs to sync again if the template changes.
digigoblin
digigoblin•16mo ago
I guarantee you that it is syncing to workspace.
Jas
JasOP•16mo ago
says container ready
digigoblin
digigoblin•16mo ago
Or not 🤣 Check the application log
Jas
JasOP•16mo ago
Where can I access other logs? Jupyter, you mean?
digigoblin
digigoblin•16mo ago
No, you need to open the log in Jupyter/terminal
Jas
JasOP•16mo ago
digigoblin
digigoblin•16mo ago
Run nvidia-smi, you probably have CUDA 11.8
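The CUDA version the driver supports appears in the first banner line of `nvidia-smi` output. A quick way to pull it out (the banner below is a sample with made-up version numbers; yours will differ):

```shell
# Sample nvidia-smi banner line (illustrative; real versions will vary)
banner='| NVIDIA-SMI 525.85.12    Driver Version: 525.85.12    CUDA Version: 12.0     |'

# Extract the maximum CUDA version the driver supports.
# On a real pod you would pipe nvidia-smi itself: nvidia-smi | head -n 3
cuda_version=$(echo "$banner" | grep -o 'CUDA Version: [0-9.]*' | awk '{print $3}')
echo "Driver supports CUDA $cuda_version"
```

If the extracted value is below the version the template's image was built against, the container hits forward-compatibility errors.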
Jas
JasOP•16mo ago
ok, do you think the issue came with the cloud sync?
digigoblin
digigoblin•16mo ago
No, you got a pod with CUDA 11.8 instead of 12.1+. You need to use the filter at the top of the page to select CUDA 12.1, 12.2, 12.3, 12.4. It doesn't work on anything below 12.1. @JM said that all machines are at least 12.1, but clearly they are not.
Jas
JasOP•16mo ago
yes community cloud
digigoblin
digigoblin•16mo ago
Did you run nvidia-smi ?
Jas
JasOP•16mo ago
ok, so anything above 12.1?
digigoblin
digigoblin•16mo ago
Hmm, that shouldn't happen then. The pod is broken; I suggest logging a support ticket and creating a new pod.
Jas
JasOP•16mo ago
Where should I run this command?
digigoblin
digigoblin•16mo ago
Just nvidia-smi, not "run nvidia-smi"
Jas
JasOP•16mo ago
ok
digigoblin
digigoblin•16mo ago
It said forward compatibility not supported, so it's clearly not 12.1+.
Jas
JasOP•16mo ago
digigoblin
digigoblin•16mo ago
It's either 11.8 or 12.0. WTF
Jas
JasOP•16mo ago
it says 12.1
digigoblin
digigoblin•16mo ago
That pod is broken then
Jas
JasOP•16mo ago
should i report it?
digigoblin
digigoblin•16mo ago
Yeah, and create a new pod. Give the pod ID to Runpod; you can terminate the pod once you've taken note of the ID.
Jas
JasOP•16mo ago
Ok, I will start again. Annoying waste of time; thanks for your quick replies though.
digigoblin
digigoblin•16mo ago
No, torch could not use the GPU. The torch version is fine; the pod is broken. I used this template earlier today and it's fine. The pod is just broken.
Jas
JasOP•16mo ago
I will start a new pod and test again; hopefully it works next time.
digigoblin
digigoblin•16mo ago
Let us know how it goes 🤞
Jas
JasOP•16mo ago
Would you recommend any particular country for community cloud 3090s? I was trying to use Canada or Belgium before as they seemed more reliable, but currently none are available. US-based ones are awful. The last one that had issues was in Sweden.
digigoblin
digigoblin•16mo ago
I don't actually know. I think CZ worked well, but I don't use 3090s often.
Jas
JasOP•16mo ago
Ok, couldn't find any available 3090s, so I'm trying an A5000 instead; got it running now.
Jas
JasOP•16mo ago
And now I am having problems on the new pod! I was using ComfyUI and it errored out. Then I tried to restart the pod and got the "the port is not up yet" message. I tried to do as the logs said ("pip install transformers -U") and nothing happened @nerdylive @digigoblin
JM
JM•16mo ago
@digigoblin
- Correction: I said we are moving towards 12.1+ for all GPUs; it's not fully done yet.
- At the moment, we are completely done sunsetting CUDA 11.8 and older.
- Currently working on sunsetting CUDA 12.0 too; it will take a couple of weeks to finish 🙂 (we now have less than 4% of GPUs on 12.0^^)
digigoblin
digigoblin•16mo ago
Thanks for the clarification. Are any still on 11.8?
JM
JM•16mo ago
Nope, not even a single GPU card! Nice milestone^^ 🔥
Jas
JasOP•16mo ago
Hey guys, I am having problems again. I tried to start 2 pods today, and both of them have the same "Port is not up yet" issue. It is getting very frustrating. I assume they are both on the same machine, because the internet connection speed is similar. @JM @nerdylive @digigoblin
Jas
JasOP•16mo ago
digigoblin
digigoblin•16mo ago
Please stop saying the template is broken, @nerdylive. You are WRONG; it's a problem with the pod. I use this template all the time, and the last time you said it was broken I told you to stop saying that, yet you persist in saying it without getting your facts straight. When it says forward compatibility attempted, it means the machine is running CUDA 11.8 and not CUDA 12.1. You need to select all CUDA versions in the filter excluding 11.8 and 12.0.
digigoblin
digigoblin•16mo ago
This machine's nvidia-smi is incorrect if it says 12.1 but has forward compatibility issues. The template uses CUDA 12.1, so forward compatibility errors mean you are definitely not getting 12.1+; you are getting either 11.8 or 12.0. I suggest always using the filters to select the CUDA version; then you can avoid these issues. You can select all versions 12.1 and higher in the filter, but not 11.8 or 12.0.
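The rule being applied here reduces to a simple version comparison: a pod fails with forward-compatibility errors when the container image's CUDA runtime is newer than the maximum version the host driver supports. A minimal sketch (`needs_forward_compat` is a hypothetical helper for illustration, not a Runpod API):

```python
def needs_forward_compat(driver_cuda: str, image_cuda: str) -> bool:
    """True when the container's CUDA runtime is newer than the driver's
    maximum supported version, i.e. when forward-compatibility errors
    appear (e.g. driver 12.0 host running a CUDA 12.1 image)."""
    def to_tuple(v: str) -> tuple:
        # Compare numerically so "12.10" > "12.9" works correctly.
        return tuple(int(part) for part in v.split("."))
    return to_tuple(image_cuda) > to_tuple(driver_cuda)

print(needs_forward_compat("12.0", "12.1"))  # pod would fail
print(needs_forward_compat("12.2", "12.1"))  # pod is fine
```

This is why filtering for CUDA 12.1 and higher (and excluding 11.8 and 12.0) avoids the "port is not up yet" failures with a 12.1-based template.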
Jas
JasOP•16mo ago
None of these worked; I think they were from the same machine.
Jas
JasOP•16mo ago
I tried a pod in a different region and got it working now.
Jas
JasOP•16mo ago
Ok, I will try using the 12.1+ filter next time.
digigoblin
digigoblin•16mo ago
Select 12.1, 12.2, 12.3, 12.4, etc., not just 12.1; just nothing less than 12.1.
