Fine-tuning Aligned Language Models Compromises Safety, Even When Users Do Not Intend To!

Xiangyu Qi, Yi Zeng, Tinghao Xie, Pin-Yu Chen, Ruoxi Jia, Prateek Mittal, Peter Henderson
Published in: CoRR (2023)