CoLLD: Contrastive Layer-to-Layer Distillation for Compressing Multilingual Pre-trained Speech Encoders
Heng-Jui Chang
Ning Dong
Ruslan Mavlyutov
Sravya Popuri
Yu-An Chung
Published in: CoRR (2023)
Keyphrases
pre-trained
viewpoint
speech recognition
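The title suggests the general idea: each student layer is trained to match a corresponding teacher layer under a contrastive (InfoNCE-style) objective, compressing the encoder while preserving layer-wise representations. Below is a minimal PyTorch sketch of such a generic layer-to-layer contrastive loss; the function names, the frame-level negative sampling, and the `layer_map` pairing are illustrative assumptions, not the paper's exact formulation.

```python
import torch
import torch.nn.functional as F

def contrastive_layer_loss(student_feats, teacher_feats, temperature=0.1):
    """InfoNCE loss between one student layer and one teacher layer.

    student_feats, teacher_feats: (batch, time, dim) hidden states.
    Each student frame is pulled toward the teacher frame at the same
    position and pushed away from all other frames in the batch
    (a generic negative-sampling choice, assumed for this sketch).
    """
    b, t, d = student_feats.shape
    s = F.normalize(student_feats.reshape(b * t, d), dim=-1)
    z = F.normalize(teacher_feats.reshape(b * t, d), dim=-1)
    logits = s @ z.T / temperature                        # (b*t, b*t) cosine similarities
    targets = torch.arange(b * t, device=logits.device)   # diagonal = positive pairs
    return F.cross_entropy(logits, targets)

def layer_to_layer_loss(student_layers, teacher_layers, layer_map):
    """Sum contrastive losses over a student->teacher layer mapping,
    e.g. layer_map = [(0, 3), (1, 7), (2, 11)] (hypothetical pairing)."""
    return sum(
        contrastive_layer_loss(student_layers[si], teacher_layers[ti])
        for si, ti in layer_map
    )
```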