Runpod · 12mo ago
star0129

LoRA path in vLLM serverless template

I want to attach a custom LoRA adapter to the Llama-3.1-70B model. Normally when using vLLM, alongside `--enable-lora` we also specify `--lora-modules name=lora_adapter_path`. But the template only gives an option to enable LoRA, so where do I add the path to the LoRA adapter?
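For reference, this is a sketch of the standard vLLM CLI invocation the question describes; the adapter name `my_lora` and its path are placeholders, and the RunPod serverless template may expose these settings through environment variables rather than these flags:

```shell
# Standard vLLM server launch with LoRA enabled and one adapter registered.
# "my_lora" and the adapter path below are placeholder values.
vllm serve meta-llama/Llama-3.1-70B-Instruct \
  --enable-lora \
  --lora-modules my_lora=/path/to/lora_adapter
```

At request time the adapter is selected by passing its registered name (here `my_lora`) as the `model` field of an OpenAI-compatible request.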
3 Replies
Blake · 11mo ago
Also wondering - any luck @star0129
Blake · 11mo ago
i get `2024-12-02T18:22:06.702245094Z NotImplementedError: LoRA is currently not currently supported with encoder/decoder models.`
