One Prompt Word is Enough to Boost Adversarial Robustness for Pre-trained Vision-Language Models.
Lin Li, Haoyan Guan, Jianing Qiu, Michael W. Spratling. Published in: CoRR (2024)
Keyphrases
- language model
- pre-trained
- n-gram
- translation model
- language modeling
- document retrieval
- word error rate
- language modelling
- speech recognition
- information retrieval
- retrieval model
- multiword
- probabilistic model
- test collection
- spoken term detection
- statistical language modeling
- out-of-vocabulary
- training examples
- training data
- vector space model
- statistical language models
- computer vision
- query expansion
- smoothing methods
- query terms
- co-occurrence
- control signals
- word segmentation
- relevance model
- term weighting
- neural network
- pseudo relevance feedback
- active learning