RunPod · 12mo ago
__den3b__

Workers configuration for Serverless vLLM endpoints: 1-hour lecture with 50 students

Hey there, I need to show 50 students how to do RAG with open-source LLMs (e.g., Llama 3). Which type of configuration do you suggest? I wanna make sure they have a smooth experience. Thanks!
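For context, a minimal sketch of the kind of call each student would make against a Serverless vLLM endpoint, which exposes an OpenAI-compatible API; the endpoint ID and API key below are placeholders, and the hard-coded retrieval context stands in for a real vector-store lookup:

```python
# Sketch: querying a RunPod Serverless vLLM endpoint for a RAG answer.
# <RUNPOD_ENDPOINT_ID> and <RUNPOD_API_KEY> are placeholders, not real values.
from openai import OpenAI

client = OpenAI(
    base_url="https://api.runpod.ai/v2/<RUNPOD_ENDPOINT_ID>/openai/v1",
    api_key="<RUNPOD_API_KEY>",
)

# In a real RAG setup this context would come from a retriever (vector DB);
# it is hard-coded here to keep the sketch self-contained.
retrieved_context = "RunPod Serverless bills per second of GPU time."

response = client.chat.completions.create(
    model="meta-llama/Meta-Llama-3-8B-Instruct",
    messages=[
        {"role": "system", "content": f"Answer using this context:\n{retrieved_context}"},
        {"role": "user", "content": "How does RunPod Serverless billing work?"},
    ],
)
print(response.choices[0].message.content)
```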
Solution:
16GB isn't enough, you need 24GB
11 Replies
digigoblin · 12mo ago
Depends on which LLama3 model
Madiator2011 · 12mo ago
For 70B non-quantized you would need at least 2x 80GB
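A quick back-of-the-envelope check (fp16 = 2 bytes per parameter; the 1.2x headroom factor for KV cache and activations is a rough assumption):

```python
# Rough VRAM estimate for serving Llama 3 70B in fp16 (no quantization).
params = 70e9                      # 70B parameters
bytes_per_param = 2                # fp16/bf16
weights_gb = params * bytes_per_param / 1e9
print(f"weights alone: {weights_gb:.0f} GB")        # ~140 GB
# Headroom for KV cache, activations, CUDA context (1.2x is a rough guess):
print(f"with headroom: {weights_gb * 1.2:.0f} GB")  # ~168 GB -> 2x 80GB, or 8x 24GB = 192GB
```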
Jason · 12mo ago
Or 8x 24GB works. Why not use Pods, btw?
digigoblin · 12mo ago
Pods are expensive
Jason · 12mo ago
Ic
__den3b__ (OP) · 12mo ago
The 8B-param model can also suffice
Jason · 12mo ago
1x 24GB VRAM GPU works, 16GB might work as well
Solution
digigoblin · 12mo ago
16GB isn't enough, you need 24GB
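The same arithmetic as above shows why 16GB is too tight for the 8B model:

```python
# Why 16GB is too tight for Llama 3 8B in fp16:
weights_gb = 8e9 * 2 / 1e9   # ~16 GB for the weights alone
print(f"weights alone: {weights_gb:.0f} GB")
# KV cache and CUDA context need a few GB on top, so a 16GB card
# leaves no headroom; a 24GB card gives a comfortable margin.
```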
digigoblin · 12mo ago
Unless you use a quantized version
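For completeness, a sketch of loading a 4-bit AWQ build with vLLM's Python API; the model ID below is a hypothetical community quant, not an official release:

```python
# Sketch: a quantized Llama 3 8B fits on a 16GB GPU.
# The AWQ repo name is hypothetical -- substitute a real community quant.
from vllm import LLM, SamplingParams

llm = LLM(
    model="someuser/Meta-Llama-3-8B-Instruct-AWQ",  # hypothetical model ID
    quantization="awq",  # ~4-bit weights: roughly 5 GB instead of ~16 GB
)
outputs = llm.generate(["Say hi to the class."], SamplingParams(max_tokens=32))
print(outputs[0].outputs[0].text)
```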
Jason · 12mo ago
Oh