Issue with llama-3.1:405b using https://console.runpod.io/hub/tanujdargan/runpod-worker-ollama

Hi, I'm stuck at a "Rollout in progress" spinning wheel with no logs to show what is going on. I'm using this repo: https://console.runpod.io/hub/tanujdargan/runpod-worker-ollama with the following modifications:

- RUNPOD_INIT_TIMEOUT=800
- OLLAMA_MODELS=/runpod-volume
- gpuIds=BLACKWELL_96
- gpuCount=4
- locations=US-KS-2
- networkVolumeId=exoXXredactedXX

I have two requests queued, but nothing is happening. The network volume has 900 GB of space.
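(Not part of the original post, but since the rollout shows no logs, a minimal sketch of polling the endpoint's health with the runpod Python SDK to see whether any workers come up; the API key and endpoint ID below are placeholders, not values from the thread.)

```python
# Minimal sketch, assuming the runpod Python SDK (pip install runpod).
# Polls the serverless endpoint's health a few times to see whether
# workers ever become ready while requests sit in the queue.
import time

import runpod

runpod.api_key = "YOUR_API_KEY"                 # placeholder, not from the thread
endpoint = runpod.Endpoint("YOUR_ENDPOINT_ID")  # placeholder endpoint ID

for _ in range(10):
    health = endpoint.health()  # current worker and job counts for the endpoint
    print(health)
    time.sleep(30)
```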
3 Replies
Poddy · 2d ago
samhodge (OP) · 2d ago
awesome
Dj · 2d ago
Could you share the endpoint id for this repo? Let me give this a look.