
References
[1] Josh Achiam, Steven Adler, Sandhini Agarwal, Lama Ahmad, Ilge Akkaya, Florencia Leoni Aleman, Diogo Almeida, Janko Altenschmidt, Sam Altman, Shyamal Anadkat, et al. Gpt-4 technical report. arXiv preprint arXiv:2303.08774, 2023. 1
[2] Armen Aghajanyan, Luke Zettlemoyer, and Sonal Gupta. Intrinsic dimensionality explains the effectiveness of language model fine-tuning. arXiv preprint arXiv:2012.13255, 2020. 3
[3] Yuval Alaluf, Elad Richardson, Sergey Tulyakov, Kfir Aberman, and Daniel Cohen-Or. Myvlm: Personalizing vlms for user-specific queries. In European Conference on Computer Vision (ECCV), pages 73–91. Springer, 2024. 5, 11
[4] Sara Babakniya, Ahmed Roushdy Elkordy, Yahya H Ezzeldin, Qingfeng Liu, Kee-Bong Song, Mostafa El-Khamy, and Salman Avestimehr. Slora: Federated parameter efficient fine-tuning of language models. arXiv preprint arXiv:2308.06522, 2023. 2, 3
[5] Klaudia Bałazy, Mohammadreza Banaei, Karl Aberer, and Jacek Tabor. Lora-xs: Low-rank adaptation with extremely small number of parameters. arXiv preprint arXiv:2405.17604, 2024. 3
[6] Rishi Bommasani, Drew A Hudson, Ehsan Adeli, Russ Altman, Simran Arora, Sydney von Arx, Michael S Bernstein, Jeannette Bohg, Antoine Bosselut, Emma Brunskill, et al. On the opportunities and risks of foundation models. arXiv preprint arXiv:2108.07258, 2021. 1
[7] Anthony Brohan, Noah Brown, Justice Carbajal, Yevgen Chebotar, Joseph Dabis, Chelsea Finn, Keerthana Gopalakrishnan, Karol Hausman, Alex Herzog, Jasmine Hsu, et al. Rt-1: Robotics transformer for real-world control at scale. arXiv preprint arXiv:2212.06817, 2022. 1
[8] Anthony Brohan, Noah Brown, Justice Carbajal, Yevgen Chebotar, Xi Chen, Krzysztof Choromanski, Tianli Ding, Danny Driess, Avinava Dubey, Chelsea Finn, et al. Rt-2: Vision-language-action models transfer web knowledge to robotic control. arXiv preprint arXiv:2307.15818, 2023. 1
[9] Kerim Büyükakyüz. Olora: Orthonormal low-rank adaptation of large language models. arXiv preprint arXiv:2406.01775, 2024. 1, 3
[10] Arnav Chavan, Zhuang Liu, Deepak Gupta, Eric Xing, and Zhiqiang Shen. One-for-all: Generalized lora for parameter-efficient fine-tuning. arXiv preprint arXiv:2306.07967, 2023. 2
[11] Mostafa Dehghani, Josip Djolonga, Basil Mustafa, Piotr Padlewski, Jonathan Heek, Justin Gilmer, Andreas Peter Steiner, Mathilde Caron, Robert Geirhos, Ibrahim Alabdulmohsin, et al. Scaling vision transformers to 22 billion parameters. In International Conference on Machine Learning, pages 7480–7512. PMLR, 2023. 1
[12] Tim Dettmers, Mike Lewis, Younes Belkada, and Luke Zettlemoyer. Gpt3.int8(): 8-bit matrix multiplication for transformers at scale. Advances in Neural Information Processing Systems, 35:30318–30332, 2022. 3
[13] Tim Dettmers, Artidoro Pagnoni, Ari Holtzman, and Luke Zettlemoyer. Qlora: Efficient finetuning of quantized llms. Advances in Neural Information Processing Systems, 36, 2024. 2
[14] Yaroslav Ganin, Evgeniya Ustinova, Hana Ajakan, Pascal Germain, Hugo Larochelle, François Laviolette, Mario Marchand, and Victor Lempitsky. Domain-adversarial training of neural networks. Journal of Machine Learning Research, 17(59):1–35, 2016. 2
[15] Xavier Glorot and Yoshua Bengio. Understanding the difficulty of training deep feedforward neural networks. In Proceedings of the thirteenth international conference on artificial intelligence and statistics, pages 249–256. JMLR Workshop and Conference Proceedings, 2010. 3
[16] Soufiane Hayou, Nikhil Ghosh, and Bin Yu. Lora+: Efficient low rank adaptation of large models. arXiv preprint arXiv:2402.12354, 2024. 2
[17] Kaiming He, Xiangyu Zhang, Shaoqing Ren, and Jian Sun. Delving deep into rectifiers: Surpassing human-level performance on imagenet classification. In Proceedings of the IEEE international conference on computer vision, pages 1026–1034, 2015. 3
[18] Edward J Hu, Yelong Shen, Phillip Wallis, Zeyuan Allen-Zhu, Yuanzhi Li, Shean Wang, Lu Wang, and Weizhu Chen. Lora: Low-rank adaptation of large language models. arXiv preprint arXiv:2106.09685, 2021. 1, 2, 3, 5
[19] Damjan Kalajdzievski. A rank stabilization scaling factor for fine-tuning with lora. arXiv preprint arXiv:2312.03732, 2023. 3
[20] Dawid Jan Kopiczko, Tijmen Blankevoort, and Yuki M Asano. Elora: Efficient low-rank adaptation with random matrices. In The Twelfth International Conference on Learning Representations, 2024. 2, 3
[21] Yixiao Li, Yifan Yu, Chen Liang, Pengcheng He, Nikos Karampatziakis, Weizhu Chen, and Tuo Zhao. Loftq: Lora-fine-tuning-aware quantization for large language models. arXiv preprint arXiv:2310.08659, 2023. 2
[22] Haokun Liu, Derek Tam, Mohammed Muqeeth, Jay Mohta, Tenghao Huang, Mohit Bansal, and Colin A Raffel. Few-shot parameter-efficient fine-tuning is better and cheaper than in-context learning. Advances in Neural Information Processing Systems, 35:1950–1965, 2022. 3
[23] Haotian Liu, Chunyuan Li, Qingyang Wu, and Yong Jae Lee. Visual instruction tuning. arXiv preprint arXiv:2304.08485, 2023. 5, 11
[24] Shih-Yang Liu, Chien-Yi Wang, Hongxu Yin, Pavlo Molchanov, Yu-Chiang Frank Wang, Kwang-Ting Cheng, and Min-Hung Chen. Dora: Weight-decomposed low-rank adaptation. arXiv preprint arXiv:2402.09353, 2024. 2
[25] Yinhan Liu, Myle Ott, Naman Goyal, Jingfei Du, Mandar Joshi, Danqi Chen, Omer Levy, Mike Lewis, Luke Zettlemoyer, and Veselin Stoyanov. Roberta: A robustly optimized bert pretraining approach. arXiv preprint arXiv:1907.11692, 2019. 11, 16
[26] Zequan Liu, Jiawen Lyn, Wei Zhu, Xing Tian, and Yvette Graham. Alora: Allocating low-rank adaptation for fine-tuning large language models. arXiv preprint arXiv:2403.16187, 2024. 2
[27] Ilya Loshchilov and Frank Hutter. Decoupled weight decay regularization. arXiv preprint arXiv:1711.05101, 2017. 5