DoRA: Weight-Decomposed Low-Rank Adaptation
https://arxiv.org/abs/2402.09353

Among the widely used parameter-efficient fine-tuning (PEFT) methods, LoRA and its variants have gained considerable popularity because they avoid additional inference costs. However, there often remains an accuracy gap between these methods and full fine-tuning (FT). In this work, we first introduce a novel weight decomposition analysis t...
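The weight decomposition the abstract alludes to can be sketched in a few lines: DoRA reparameterizes a pretrained weight matrix into a column-wise magnitude vector and a direction matrix, applies a LoRA-style low-rank update to the direction component only, and renormalizes. The matrix sizes, rank, and initialization scales below are illustrative assumptions, not values from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)
d_out, d_in, r = 8, 6, 2  # illustrative toy dimensions and rank

# Hypothetical pretrained weight matrix.
W0 = rng.standard_normal((d_out, d_in))

# Decompose W0 into a magnitude vector (column-wise L2 norm)
# and a direction matrix (W0 itself, normalized implicitly below).
m = np.linalg.norm(W0, axis=0, keepdims=True)  # shape (1, d_in)

# LoRA-style low-rank update, applied to the direction component.
B = rng.standard_normal((d_out, r)) * 0.01
A = rng.standard_normal((r, d_in)) * 0.01
V = W0 + B @ A

# Renormalize the adapted direction and rescale by the magnitude,
# which DoRA treats as a separately trainable parameter.
W_adapted = m * V / np.linalg.norm(V, axis=0, keepdims=True)

# Sanity check: with a zero low-rank update the decomposition is exact.
W_reconstructed = m * W0 / np.linalg.norm(W0, axis=0, keepdims=True)
assert np.allclose(W_reconstructed, W0)
```

Note that by construction each column of `W_adapted` has exactly the magnitude stored in `m`, so the low-rank update changes only the direction of each column; this separation is the core idea the paper's decomposition analysis motivates.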