An LLM can Fool Itself: A Prompt-Based Adversarial Attack.

Xilie Xu, Keyi Kong, Ning Liu, Lizhen Cui, Di Wang, Jingfeng Zhang, Mohan S. Kankanhalli
Published in: CoRR (2023)
Keyphrases