Improving automatic emotion recognition from speech signals.
Elif Bozkurt, Engin Erzin, Çigdem Eroglu Erdem, A. Tanju Erdem
Published in: INTERSPEECH (2009)
Keyphrases
- emotion recognition
- speech signal
- speech recognition
- audio-visual
- emotional speech
- automatic speech recognition
- automatic speech recognition systems
- speaker verification
- human-computer interaction
- noisy environments
- facial expressions
- non-stationary
- emotion classification
- speech enhancement
- information fusion
- data mining
- speaker identification
- pattern recognition
- facial images
- sentiment analysis
- emotional state
- sound source
- neural network
- sound signals
- vocal tract
- low level
- affective states
- face images
- multi-modal