Self-Distillation into Self-Attention Heads for Improving Transformer-based End-to-End Neural Speaker Diarization.
Ye-Rin Jeoung
Jeong-Hwan Choi
Ju-Seok Seong
Jehyun Kyung
Joon-Hyuk Chang
Published in: INTERSPEECH (2023)
Keyphrases
end-to-end
speaker diarization
admission control
network architecture
neural network
congestion control
application layer
speech recognition
real-world
Bayesian information criterion
Internet protocol
multimedia
pattern recognition
context-aware
speaker identification