RLHFPoison: Reward Poisoning Attack for Reinforcement Learning with Human Feedback in Large Language Models.
Jiongxiao Wang, Junlin Wu, Muhao Chen, Yevgeniy Vorobeychik, Chaowei Xiao
Published in: ACL (1) (2024)
Keyphrases
- language model
- reinforcement learning
- language modeling
- document retrieval
- n-gram
- speech recognition
- statistical language models
- retrieval model
- query expansion
- information retrieval
- context-sensitive
- test collection
- query terms
- state space
- relevance feedback
- probabilistic model
- pseudo-relevance feedback
- reward function
- reward signal
- document ranking
- smoothing methods
- optimal policy
- machine learning
- ad hoc information retrieval
- co-occurrence
- language models for information retrieval
- active learning
- policy gradient
- learning agent
- translation model
- relevance model
- vector space model
- text mining