Self-supervised Representation Fusion for Speech and Wearable Based Emotion Recognition.
Vipula Dissanayake, Sachith Seneviratne, Hussel Suriyaarachchi, Elliott Wen, Suranga Nanayakkara
Published in: INTERSPEECH (2022)
Keyphrases
- emotion recognition
- human-computer interaction
- emotional speech
- information fusion
- audio-visual
- facial expressions
- emotion classification
- speaker verification
- emotional state
- speech recognition
- sentiment analysis
- multi-stream
- multi-modal
- image data
- machine learning
- speech signal
- data fusion
- contextual information
- natural language processing
- feature extraction