Running Llama 3.3 70B using vLLM and a 160 GB network volume
Hi, I want to check if 160 GB is enough for Llama 70B, and whether I can use a smaller network volume
25 Replies
Or if I need a larger network volume
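For rough sizing: the bf16 weights alone are about 2 bytes per parameter, so ~140 GB on disk before any cache or tokenizer files. A back-of-the-envelope sketch (my own estimates, not official numbers):
```python
# Back-of-the-envelope disk sizing for Llama 3.3 70B (assumption: bf16/fp16
# weights at 2 bytes per parameter; exact shard sizes vary slightly).
params = 70.6e9                # ~70.6B parameters
weights_gb = params * 2 / 1e9  # ~141 GB of weights on disk
print(f"weights: ~{weights_gb:.0f} GB")
# A 160 GB volume leaves <20 GB for the HF cache and temp files, so it's
# tight for the unquantized model; ~200 GB or more is safer.
```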
Unknown User•11mo ago
Message Not Public
ok thanks a lot
Unknown User•11mo ago
Message Not Public
I'm not sure atm, are the 24 GB VRAM options fine?
I think I'm going to use the suggested option (A6000, A40) and use AWQ quant
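For reference, 4-bit AWQ shrinks the weights to roughly half a byte per parameter, which is what makes a single 48 GB card viable; a rough sketch using my own estimates:
```python
# Rough VRAM estimate for a 4-bit AWQ quant of the same model (assumption:
# ~0.5 bytes per parameter; KV cache and activations come on top of this).
params = 70.6e9
awq_gb = params * 0.5 / 1e9   # ~35 GB of quantized weights
print(f"AWQ weights: ~{awq_gb:.0f} GB")  # fits one 48 GB A6000/A40
```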
Unknown User•11mo ago
Message Not Public
I set up the vLLM template without quant for now, using A6000/A40 and a 210 GB volume in Canada. I posted an initial request. How long will this take to initialize, roughly?
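The first cold start is dominated by downloading ~140 GB of weight shards onto the volume, so 20+ minutes is plausible; later starts just reload from the volume. If you were driving vLLM directly, the setup would look roughly like this (the paths and engine args are my assumptions about what the template does under the hood):
```python
# Sketch of what the vLLM worker roughly does at startup (assumptions: the
# template caches weights under the network-volume mount, /runpod-volume,
# and passes its env vars through to these engine args).
from vllm import LLM

llm = LLM(
    model="meta-llama/Llama-3.3-70B-Instruct",   # ~140 GB of shards to fetch
    download_dir="/runpod-volume/huggingface",   # persists across cold starts
    tensor_parallel_size=4,                      # fp16 70B needs ~4x 48 GB GPUs
)
```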

Unknown User•11mo ago
Message Not Public
It's past 20 mins now
with no quantization
Unknown User•11mo ago
Message Not Public
{
  "endpointId": "ikmbyelhctz06j",
  "workerId": "2zeadzwvontveg",
  "level": "error",
  "message": "Uncaught exception | <class 'torch.OutOfMemoryError'>; CUDA out of memory. Tried to allocate 896.00 MiB. GPU 0 has a total capacity of 44.45 GiB of which 444.62 MiB is free. Process 1865701 has 44.01 GiB memory in use. Of the allocated memory 43.71 GiB is allocated by PyTorch, and 1.19 MiB is reserved by PyTorch but unallocated. If reserved but unallocated memory is large try setting PYTORCH_CUDA_ALLOC_CONF=expandable_segments:True to avoid fragmentation. See documentation for Memory Management (https://pytorch.org/docs/stable/notes/cuda.html#environment-variables); <traceback object at 0x7f0a94eff580>;",
  "dt": "2024-12-11 05:47:39.26656704"
}
yes
something must be wrong with my setup
Unknown User•11mo ago
Message Not Public
add workers?
ahh ok! Is 3 enough?
Unknown User•11mo ago
Message Not Public
ok!
thank you!
Unknown User•11mo ago
Message Not Public
gonna try AWQ but I am a noob, gonna do some research
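For the AWQ route, vLLM can load a pre-quantized checkpoint directly; the repo name below is a placeholder for whichever AWQ export of Llama 3.3 70B you pick:
```python
# Sketch: serving a pre-quantized 4-bit AWQ checkpoint on one 48 GB GPU.
from vllm import LLM

llm = LLM(
    model="some-org/Llama-3.3-70B-Instruct-AWQ",  # placeholder, not a real repo
    quantization="awq",
    max_model_len=8192,   # cap the context so the KV cache fits alongside weights
)
```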
I'm using 5 GPUs per worker, and it keeps exiting with error code 1
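Exit code 1 with 5 GPUs is expected: vLLM requires the model's attention head count to be divisible by tensor_parallel_size, and Llama 70B has 64 attention heads, so 2, 4, or 8 GPUs work but 5 does not. A quick check:
```python
# Why 5 GPUs fails: vLLM requires the attention head count to be divisible
# by tensor_parallel_size, and Llama 3.3 70B has 64 attention heads.
num_attention_heads = 64
for tp in (2, 4, 5, 8):
    ok = num_attention_heads % tp == 0
    print(f"tensor_parallel_size={tp}: {'ok' if ok else 'fails at startup'}")
```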
Unknown User•11mo ago
Message Not Public
ok I'll try creating a new endpoint

Can't seem to get any responses, and no errors in the logs 😦
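If the worker is up but nothing comes back, it's worth hitting the endpoint directly; a minimal request sketch against RunPod's serverless API (the IDs are placeholders, and the payload shape assumes the stock vLLM worker's input format):
```python
# Minimal sketch: call a RunPod serverless endpoint synchronously.
# ENDPOINT_ID and RUNPOD_API_KEY are placeholders; substitute your own values.
import requests

resp = requests.post(
    "https://api.runpod.ai/v2/ENDPOINT_ID/runsync",
    headers={"Authorization": "Bearer RUNPOD_API_KEY"},
    json={"input": {"prompt": "Hello!", "sampling_params": {"max_tokens": 64}}},
    timeout=120,
)
print(resp.status_code, resp.json())
```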
Unknown User•11mo ago
Message Not Public