Runpod · 13mo ago
Nova2k21

Deploying a model which is quantised with bitsandbytes (model config).

I have fine-tuned a 7B model on my custom dataset, quantising it with bitsandbytes on my local machine with 12 GB of VRAM. When I went to deploy the model on Runpod with vLLM for faster inference, I found only three quantisation formats supported: GPTQ, AWQ, and SqueezeLLM. Am I interpreting something wrong, or does Runpod not have a way to deploy a model quantised with bitsandbytes? Is there any workaround I can use to deploy my model for now?
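Since the reply in this thread is not public, here is a common workaround sketch, assuming the fine-tune was done with QLoRA/PEFT adapters (so bitsandbytes was only used during training): merge the LoRA adapter into the base model in fp16 and serve the merged checkpoint with vLLM, optionally re-quantising it to a supported format first. The model name and adapter path below are placeholders, not taken from the thread:

```python
# Minimal sketch, assuming a QLoRA/PEFT fine-tune: merge the LoRA adapter
# into the base model in fp16, then serve the merged checkpoint with vLLM.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

BASE = "meta-llama/Llama-2-7b-hf"   # placeholder 7B base model
ADAPTER = "path/to/qlora-adapter"   # placeholder adapter checkpoint

# Load the base in fp16 on CPU so the merge doesn't need 12+ GB of VRAM.
base = AutoModelForCausalLM.from_pretrained(BASE, torch_dtype=torch.float16)
model = PeftModel.from_pretrained(base, ADAPTER)
merged = model.merge_and_unload()   # fold LoRA deltas into the base weights

merged.save_pretrained("merged-fp16")
AutoTokenizer.from_pretrained(BASE).save_pretrained("merged-fp16")
```

The merged fp16 folder can then be served by vLLM directly on a GPU with enough memory for a 7B fp16 model (roughly 16 GB or more), or quantised to one of the formats vLLM supports (e.g. with AutoAWQ or AutoGPTQ) so it fits on a smaller Runpod GPU.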
2 Replies
Nova2k21 (OP) · 13mo ago
Here is the log file for a POST request.
Unknown User · 13mo ago
Message not public.
