Your fairness may vary: Group fairness of pretrained language models in toxic text classification.
Ioana Baldini, Dennis Wei, Karthikeyan Natesan Ramamurthy, Mikhail Yurochkin, Moninder Singh
Published in: CoRR (2021)
Keyphrases
- language model
- language modeling
- text classification
- n gram
- information retrieval
- speech recognition
- probabilistic model
- document retrieval
- retrieval model
- ad hoc information retrieval
- test collection
- statistical language models
- query expansion
- context sensitive
- smoothing methods
- vector space model
- query terms
- text documents
- machine learning
- bag of words
- feature selection
- naive bayes
- pseudo relevance feedback
- relevance model
- text categorization
- knn
- natural language