Byung-Doh Oh
Publication Activity (10 Years)
Years Active: 2021-2024
Publications (10 Years): 17
Top Topics
Language Model
Term Dependencies
Autoregressive
Training Data
Top Venues
CoRR
EMNLP (Findings)
CMCL
EMNLP
Publications
Byung-Doh Oh, Shisen Yue, William Schuler: Frequency Explains the Inverse Correlation of Large Language Models' Size, Training Data Amount, and Surprisal's Fit to Reading Times. EACL (1) (2024)
Byung-Doh Oh, Shisen Yue, William Schuler: Frequency Explains the Inverse Correlation of Large Language Models' Size, Training Data Amount, and Surprisal's Fit to Reading Times. CoRR (2024)
Byung-Doh Oh, William Schuler: Leading Whitespaces of Language Models' Subword Vocabulary Poses a Confound for Calculating Word Probabilities. CoRR (2024)
Byung-Doh Oh, William Schuler: Token-wise Decomposition of Autoregressive Language Model Hidden States for Analyzing Model Predictions. ACL (1) (2023)
Byung-Doh Oh, William Schuler: Transformer-Based Language Model Surprisal Predicts Human Reading Times Best with About Two Billion Training Tokens. EMNLP (Findings) (2023)
Byung-Doh Oh, William Schuler: Transformer-Based LM Surprisal Predicts Human Reading Times Best with About Two Billion Training Tokens. CoRR (2023)
Byung-Doh Oh, William Schuler: Why Does Surprisal From Larger Transformer-Based Language Models Provide a Poorer Fit to Human Reading Times? Trans. Assoc. Comput. Linguistics 11 (2023)
Byung-Doh Oh, William Schuler: Token-wise Decomposition of Autoregressive Language Model Hidden States for Analyzing Model Predictions. CoRR (2023)
Byung-Doh Oh, William Schuler: Why Does Surprisal From Larger Transformer-Based Language Models Provide a Poorer Fit to Human Reading Times? CoRR (2022)
Byung-Doh Oh, Christian Clark, William Schuler: Comparison of Structural Parsers and Neural Language Models as Surprisal Estimators. Frontiers Artif. Intell. 5 (2022)
Byung-Doh Oh, William Schuler: Entropy- and Distance-Based Predictors From GPT-2 Attention Patterns Predict Reading Times Over and Above GPT-2 Surprisal. CoRR (2022)
Byung-Doh Oh, William Schuler: Entropy- and Distance-Based Predictors From GPT-2 Attention Patterns Predict Reading Times Over and Above GPT-2 Surprisal. EMNLP (2022)
Byung-Doh Oh, Christian Clark, William Schuler: Surprisal Estimators for Human Reading Times Need Character Models. ACL/IJCNLP (1) (2021)
Byung-Doh Oh, William Schuler: Contributions of Propositional Content and Syntactic Category Information in Sentence Processing. CMCL (2021)
Byung-Doh Oh: Team Ohio State at CMCL 2021 Shared Task: Fine-Tuned RoBERTa for Eye-Tracking Data Prediction. CMCL (2021)
Lifeng Jin, Byung-Doh Oh, William Schuler: Character-based PCFG Induction for Modeling the Syntactic Acquisition of Morphologically Rich Languages. EMNLP (Findings) (2021)
Evan Jaffe, Byung-Doh Oh, William Schuler: Coreference-aware Surprisal Predicts Brain Response. EMNLP (Findings) (2021)