Wav2Seq: Pre-training Speech-to-Text Encoder-Decoder Models Using Pseudo Languages.
Felix Wu
Kwangyoun Kim
Shinji Watanabe
Kyu J. Han
Ryan McDonald
Kilian Q. Weinberger
Yoav Artzi
Published in: CoRR (2022)
Keyphrases
low complexity
motion estimation
training set
noisy channel
probabilistic model
expressive power
language independent
rate allocation