CoLLD: Contrastive Layer-to-Layer Distillation for Compressing Multilingual Pre-trained Speech Encoders

Heng-Jui Chang, Ning Dong, Ruslan Mavlyutov, Sravya Popuri, Yu-An Chung
Published in: CoRR (2023)
Keyphrases
  • pre-trained
  • viewpoint
  • speech recognition