Llama 3.1 + Serverless - Runpod
Runpod • 2y ago • 11 replies
Gustavo Monti
Llama 3.1 + Serverless
I'm trying to use this tutorial: Llama 3.1 via Ollama
Tried to use pooyaharatian/runpod-ollama:0.0.8 and override the default start with llama3.1, but I'm getting this error:
{
  "delayTime": 16752,
  "error": "model \"llama3.1\" not found, try pulling it first",
  "executionTime": 156,
  "id": "f3687a15-700f-4acf-856a-d7df024ad304-u1",
  "status": "FAILED"
}
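As an aside, the endpoint's failure response is plain JSON, so the status and error can be checked programmatically before retrying. A minimal sketch, using only the fields shown in the response above (the raw string is copied from it, not from any RunPod SDK):

```python
import json

# Raw response returned by the serverless endpoint (copied from the error above)
raw = '''
{
  "delayTime": 16752,
  "error": "model \\"llama3.1\\" not found, try pulling it first",
  "executionTime": 156,
  "id": "f3687a15-700f-4acf-856a-d7df024ad304-u1",
  "status": "FAILED"
}
'''

response = json.loads(raw)
if response["status"] == "FAILED":
    # Surface the underlying Ollama error ("try pulling it first")
    print(response["error"])
```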
In the logs:
2024-09-02 14:52:09.063 [info] The model you are attempting to pull requires a newer version of Ollama.
Tried to update to pooyaharatian/runpod-ollama:0.0.9, but I'm getting some JSON decode errors.