Issue with llama-3.1:405b using https://console.runpod.io/hub/tanujdargan/runpod-worker-ollama
Runpod • 3mo ago • 4 replies
samhodge
Hi, I am stuck on a "rollout in progress" spinning wheel, with no logs to see what is going on.
I am using this repo: https://console.runpod.io/hub/tanujdargan/runpod-worker-ollama
I have made the following modifications:
RUNPOD_INIT_TIMEOUT=800
OLLAMA_MODELS=/runpod-volume
gpuIds=BLACKWELL_96
gpuCount=4
locations=US-KS-2
networkVolumeId=exoXXredactedXX
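As a quick sanity check on these numbers (a sketch only; the ~243 GB figure is an assumption based on the size ollama.com lists for the default llama3.1:405b tag, not something measured here):

```python
# Back-of-envelope capacity check for llama3.1:405b on this endpoint.
# The ~243 GB download size is an ASSUMED figure for the default
# quantization on ollama.com; the other numbers come from the config above.
MODEL_GB = 243              # assumed llama3.1:405b download size
VOLUME_GB = 900             # network volume size from the post
VRAM_PER_GPU_GB = 96        # gpuIds=BLACKWELL_96
GPU_COUNT = 4               # gpuCount=4

print(f"disk headroom: {VOLUME_GB - MODEL_GB} GB")         # 657 GB spare
print(f"total VRAM:    {VRAM_PER_GPU_GB * GPU_COUNT} GB")  # 384 GB
# Weights alone should fit in 384 GB of VRAM, but the first pull of a
# ~243 GB model can easily outlast a default init window, presumably why
# RUNPOD_INIT_TIMEOUT was raised to 800 seconds.
```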
I have two requests queued, but nothing is happening. The network volume has 900 GB of space.
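One way to see whether any workers are coming up at all is to probe the endpoint with the Runpod Python SDK. This is a minimal sketch, assuming a placeholder API key and endpoint ID, and a guessed input payload (the actual payload shape depends on this worker's handler):

```python
import runpod

runpod.api_key = "RUNPOD_API_KEY"          # placeholder, not a real key
endpoint = runpod.Endpoint("ENDPOINT_ID")  # placeholder endpoint ID

# Worker and queue counts; useful when jobs sit IN_QUEUE with no logs.
print(endpoint.health())

# Fire a small test job and poll it instead of watching the console spinner.
job = endpoint.run({"prompt": "ping"})     # payload shape is an assumption
print(job.status())                        # e.g. IN_QUEUE / IN_PROGRESS
print(job.output(timeout=120))             # blocks up to 120 s for a result
```

If health() reports zero ready workers, the rollout itself (image pull or model download exceeding the init window) is a likelier culprit than the queued jobs.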