JUST-BLUE at SemEval-2021 Task 1: Predicting Lexical Complexity using BERT and RoBERTa Pre-trained Language Models.
Tuqa Bani Yaseen, Qusai Ismail, Sarah Al-Omari, Eslam Al-Sobh, Malak Abdullah
Published in: SemEval@ACL/IJCNLP (2021)
Keyphrases
- language model
- pre-trained
- language modeling
- context sensitive
- word sense disambiguation
- document retrieval
- language modelling
- probabilistic model
- n-gram
- retrieval model
- speech recognition
- information retrieval
- query expansion
- test collection
- statistical language models
- query terms
- training examples
- natural language processing
- language models for information retrieval
- neural network
- keywords
- vector space model
- document ranking
- WordNet
- small number
- training data
- feature selection