PathologyBERT - Pre-trained Vs. A New Transformer Language Model for Pathology Domain.
Thiago Santos, Amara Tariq, Susmita Das, Kavyasree Vayalpati, Geoffrey H. Smith, Hari Trivedi, Imon Banerjee
Published in: CoRR (2022)
Keyphrases
- language model
- pre-trained
- language modeling
- n-gram
- information retrieval
- probabilistic model
- retrieval model
- document retrieval
- smoothing methods
- query expansion
- translation model
- speech recognition
- mixture model
- ad hoc information retrieval
- test collection
- training data
- training examples
- expectation maximization
- small number
- high dimensional
- Bayesian networks
- machine learning