Latent Positional Information is in the Self-Attention Variance of Transformer Language Models Without Positional Embeddings
Ta-Chung Chi, Ting-Han Fan, Li-Wei Chen, Alexander Rudnicky, Peter J. Ramadge. Published in: ACL (2) (2023)
Keyphrases
- positional information
- language model
- language modeling
- n-gram
- information retrieval
- probabilistic model
- language modelling
- document retrieval
- speech recognition
- retrieval model
- smoothing methods
- statistical language models
- position information
- context sensitive
- query expansion
- document ranking
- language models for information retrieval
- relevance model
- pseudo relevance feedback
- test collection
- focus of attention
- planar surfaces
- vector space model
- translation model
- vector space
- distance measure
- dimensionality reduction
- video sequences
- bayesian networks
- computer vision