Run multiple finetunings on the same GPU pod

I am using:
  • image: runpod/pytorch:2.2.0-py3.10-cuda12.1.1-devel-ubuntu22.04
  • GPU: 1 x A40
While running QLoRA finetuning with 4-bit quantization, the job uses approximately 12 GB of the GPU's 48 GB of memory. How can I run multiple finetuning jobs simultaneously (in parallel) on the same pod's GPU?
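Since each run leaves most of the 48 GB free, one common approach (an assumption here, not something stated in the question) is simply to launch each finetuning script as a separate OS process on the same GPU; CUDA lets multiple processes share one device. A quick back-of-the-envelope check, using the numbers from the question, shows how many ~12 GB runs could fit while reserving headroom for memory spikes (the 20% headroom factor is an illustrative assumption):

```python
# Rough capacity check: how many ~12 GB QLoRA runs fit on a 48 GB A40.
# TOTAL_GB and PER_RUN_GB come from the question; HEADROOM is an assumed
# safety margin for allocator fragmentation and transient spikes.
TOTAL_GB = 48
PER_RUN_GB = 12
HEADROOM = 0.2  # reserve 20% of the card

usable_gb = TOTAL_GB * (1 - HEADROOM)   # 38.4 GB available for training
max_runs = int(usable_gb // PER_RUN_GB) # -> 3 parallel runs

print(f"{usable_gb:.1f} GB usable, up to {max_runs} parallel runs")
```

In practice you would then start that many training processes (e.g. in separate terminals or with `nohup ... &`) and watch `nvidia-smi` to confirm the combined usage stays within the card's memory; actual per-run usage can grow with sequence length and batch size, so leave margin.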