Transformer-Based LM Surprisal Predicts Human Reading Times Best with About Two Billion Training Tokens.

Byung-Doh Oh, William Schuler
Published in: CoRR (2023)