Adaptive Knowledge Distillation Between Text and Speech Pre-Trained Models.
Jinjie Ni
Yukun Ma
Wen Wang
Qian Chen
Dianwen Ng
Han Lei
Trung Hieu Nguyen
Chong Zhang
Bin Ma
Erik Cambria
Published in: ICASSP (2023)
Keyphrases
pre-trained
prior knowledge
statistical model
real-time
wide range
multi-modal
text-to-speech