A Wolf in Sheep's Clothing: Generalized Nested Jailbreak Prompts can Fool Large Language Models Easily.
Peng Ding, Jun Kuang, Dan Ma, Xuezhi Cao, Yunsen Xian, Jiajun Chen, Shujian Huang
Published in: NAACL-HLT (2024)