Transformer-Based Language Model Surprisal Predicts Human Reading Times Best with About Two Billion Training Tokens
Byung-Doh Oh, William Schuler
Published in: EMNLP (Findings) (2023)
Keyphrases
- language model
- language modeling
- n-gram
- information retrieval
- document retrieval
- probabilistic model
- speech recognition
- retrieval model
- query expansion
- test collection
- smoothing methods
- context sensitive
- ad hoc information retrieval
- mixture model
- statistical language models
- language model for information retrieval
- vector space model
- document ranking
- translation model
- dependency structure
- query terms
- document length
- retrieval effectiveness
- relevance model
- information extraction
- hidden Markov models
- Dirichlet prior
- active learning
- word clouds
- social networks