Fine-tuning Aligned Language Models Compromises Safety, Even When Users Do Not Intend To!

Xiangyu Qi, Yi Zeng, Tinghao Xie, Pin-Yu Chen, Ruoxi Jia, Prateek Mittal, Peter Henderson
Published in: CoRR (2023)