On Fairness of Low-Rank Adaptation of Large Models

Bibliographic Details
Title: On Fairness of Low-Rank Adaptation of Large Models
Authors: Ding, Zhoujie; Liu, Ken Ziyu; Peetathawatchai, Pura; Isik, Berivan; Koyejo, Sanmi
Publication Year: 2024
Collection: Computer Science
Subject Terms: Computer Science - Machine Learning; Computer Science - Artificial Intelligence; Computer Science - Computers and Society
Description: Low-rank adaptation of large models, particularly LoRA, has gained traction due to its computational efficiency. This efficiency, contrasted with the prohibitive costs of full-model fine-tuning, means that practitioners often turn to LoRA, sometimes without a complete understanding of its ramifications. In this study, we focus on fairness and ask whether LoRA has an unexamined impact on utility, calibration, and resistance to membership inference across different subgroups (e.g., genders, races, religions) compared to a full-model fine-tuning baseline. We present extensive experiments across vision and language domains and across classification and generation tasks using ViT-Base, Swin-v2-Large, Llama-2 7B, and Mistral 7B. Intriguingly, the experiments suggest that while one can isolate cases where LoRA exacerbates model bias across subgroups, the pattern is inconsistent: in many cases, LoRA has equivalent or even improved fairness compared to the base model or its full fine-tuning baseline. We also examine complications in evaluating fine-tuning fairness relating to task design and model token bias, calling for more careful fairness evaluations in future work.
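For context on the technique the paper studies: LoRA freezes the pretrained weight matrix W and trains only a low-rank update ΔW = BA with rank r much smaller than the layer dimensions, which is the source of the computational savings over full fine-tuning mentioned in the abstract. Below is a minimal sketch of a LoRA-wrapped linear layer, assuming PyTorch; the class name, initialization scheme, and hyperparameter defaults (r, alpha) are illustrative and not taken from the paper.

```python
import torch
import torch.nn as nn

class LoRALinear(nn.Module):
    """Frozen linear layer plus a trainable low-rank update:
    y = W x + (alpha / r) * B A x, where A is (r, d_in) and B is (d_out, r)."""

    def __init__(self, base: nn.Linear, r: int = 8, alpha: float = 16.0):
        super().__init__()
        self.base = base
        for p in self.base.parameters():
            p.requires_grad = False  # pretrained weights stay frozen
        # Low-rank factors: A projects down to rank r, B projects back up.
        self.A = nn.Parameter(torch.randn(r, base.in_features) * 0.01)
        # B starts at zero so training begins exactly at the base model.
        self.B = nn.Parameter(torch.zeros(base.out_features, r))
        self.scaling = alpha / r

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.base(x) + self.scaling * (x @ self.A.T @ self.B.T)
```

Under this sketch, only A and B receive gradients, so the trainable parameter count per layer drops from d_out * d_in to r * (d_in + d_out); the fairness question the paper raises is whether this restricted update behaves differently across subgroups than updating all of W.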
Document Type: Working Paper
Open Access: http://arxiv.org/abs/2405.17512
Accession Number: edsarx.2405.17512
Database: arXiv