Japanese ASR-Robust Pre-trained Language Model with Pseudo-Error Sentences Generated by Grapheme-Phoneme Conversion.
Yasuhito Ohsugi, Itsumi Saito, Kyosuke Nishida, Sen Yoshida. Published in: INTERSPEECH (2022)
Keyphrases
- language model
- speech recognition
- automatic speech recognition
- pre-trained
- word error rate
- language modeling
- n-gram
- information retrieval
- probabilistic model
- document retrieval
- error rate
- query expansion
- speech signal
- hidden Markov models
- test collection
- dependency structure
- retrieval model
- document level
- mixture model
- training data
- relevance model
- natural language
- query terms
- smoothing methods
- face recognition
- training examples
- query specific
- feature extraction
- neural network
- data sets