"Not Aligned" is Not "Malicious": Being Careful about Hallucinations of Large Language Models' Jailbreak.

Lingrui Mei, Shenghua Liu, Yiwei Wang, Baolong Bi, Jiayi Mao, Xueqi Cheng
Published in: CoRR (2024)