AlphaTuning: Quantization-Aware Parameter-Efficient Adaptation of Large-Scale Pre-Trained Language Models.
Se Jung Kwon, Jeonghoon Kim, Jeongin Bae, Kang Min Yoo, Jin-Hwa Kim, Baeseong Park, Byeongwook Kim, Jung-Woo Ha, Nako Sung, Dongsoo Lee. Published in: EMNLP (Findings) (2022)