Adaptive Knowledge Distillation Between Text and Speech Pre-Trained Models

Jinjie Ni, Yukun Ma, Wen Wang, Qian Chen, Dianwen Ng, Han Lei, Trung Hieu Nguyen, Chong Zhang, Bin Ma, Erik Cambria
Published in: ICASSP (2023)
Keyphrases
  • pre-trained
  • prior knowledge
  • statistical model
  • real-time
  • wide range
  • multi-modal
  • text-to-speech