Lazy Safety Alignment for Large Language Models against Harmful Fine-tuning.

Tiansheng Huang, Sihao Hu, Fatih Ilhan, Selim Furkan Tekin, Ling Liu
Published in: CoRR (2024)