COLLD: Contrastive Layer-to-Layer Distillation for Compressing Multilingual Pre-Trained Speech Encoders.

Heng-Jui Chang, Ning Dong, Ruslan Mavlyutov, Sravya Popuri, Yu-An Chung
Published in: ICASSP (2024)
Keyphrases
  • pre-trained
  • learning algorithm
  • pairwise
  • high-dimensional
  • speech recognition