Too many open files on GPU pod A6000
On an A6000 pod I'm frequently hitting "Too many open files" and "cannot allocate memory" errors, even though ComfyUI is not using all of the VRAM/RAM.
It usually happens after the sampler finishes, during an interpolation pass.
OS: Ubuntu 22.04 LTS
Software: ComfyUI in a venv
Model: Wan 2.2
Volume: Network volume
Any clue is welcome.
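In case it helps with diagnosis, this is the kind of check I can run from inside the ComfyUI venv to see the per-process descriptor limit and how many files are actually open when the error hits (Python sketch, Linux-only since it reads /proc):

```python
# Diagnostic sketch: show this process's open-file limit and current usage.
# Run from the same Python process that hits the error (e.g. a debug shell
# in the venv); counting /proc/self/fd only works on Linux.
import os
import resource

soft, hard = resource.getrlimit(resource.RLIMIT_NOFILE)
open_fds = len(os.listdir("/proc/self/fd"))

print(f"RLIMIT_NOFILE: soft={soft}, hard={hard}")
print(f"Currently open file descriptors: {open_fds}")
```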

what CUDA version are you using? also, can you paste the output of nvidia-smi -q?
@Ashtar
Escalated To Zendesk
The thread has been escalated to Zendesk!
Ticket ID: #23417
We can't modify the limits on the pod; I tried running the ulimit command to override them and it's not allowed.
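If the hard limit is higher than the soft one, I believe a process can still raise its own soft limit without root, so something like this near the top of ComfyUI's main.py might be worth a try (just a sketch, not tested on this pod):

```python
# Sketch: raise this process's soft open-file limit up to the hard limit.
# Needs no root as long as we stay at or below the hard limit; if the soft
# and hard limits are already equal, only a pod-level change can help.
import resource

soft, hard = resource.getrlimit(resource.RLIMIT_NOFILE)
if soft < hard:
    resource.setrlimit(resource.RLIMIT_NOFILE, (hard, hard))
    print(f"Raised RLIMIT_NOFILE soft limit from {soft} to {hard}")
else:
    print(f"Soft limit already equals hard limit ({hard})")
```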
CUDA 12.8. I've also noticed that nvidia-smi isn't behaving normally: the process table is always empty, even when Python (ComfyUI) is running a workflow and using the GPU/VRAM.
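For what it's worth, I can still confirm VRAM usage from inside the process itself (quick torch sketch, assuming the venv's torch build sees the GPU); it's only the nvidia-smi process table that stays empty:

```python
# Sketch: report VRAM usage from inside the ComfyUI process, as a cross-check
# while the nvidia-smi per-process table shows nothing.
import torch

if torch.cuda.is_available():
    dev = torch.cuda.current_device()
    print(torch.cuda.get_device_name(dev))
    print(f"allocated: {torch.cuda.memory_allocated(dev) / 1e9:.2f} GB")
    print(f"reserved:  {torch.cuda.memory_reserved(dev) / 1e9:.2f} GB")
else:
    print("CUDA not available to this process")
```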
can I see the -q output?
Probably unrelated to nvidia stuff
here's the output:
@wonzo255 does it look good?
sorry been busy. main thing that sticks out to me is that the driver version for CTK 12.8 should be >=570.26
that should at least help with the nvidia-smi output
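if you want to double-check the mismatch from inside the venv without parsing nvidia-smi, something like this works (pynvml sketch, assuming the nvidia-ml-py package is installed):

```python
# Sketch: compare the installed driver version with the highest CUDA version
# that driver supports, to confirm a driver/toolkit mismatch.
import pynvml

pynvml.nvmlInit()
driver = pynvml.nvmlSystemGetDriverVersion()   # driver version string
cuda = pynvml.nvmlSystemGetCudaDriverVersion()  # e.g. 12080 for CUDA 12.8
pynvml.nvmlShutdown()

print(f"Driver version: {driver}")
print(f"Max CUDA supported by driver: {cuda // 1000}.{(cuda % 1000) // 10}")
```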
lmk @Ashtar
Thanks. How do I update? I'm using a torch 12.8 template.
I don't have the privileges to upgrade, do I?
nah u don't :/
So what can I do?