Towards automatic emotional state categorization from speech signals.
Arslan Shaukat, Ke Chen. Published in: INTERSPEECH (2008)
Keyphrases
- speech signal
- emotional state
- speech recognition
- automatic speech recognition
- automatic speech recognition systems
- linear prediction
- facial expressions
- vocal tract
- non-stationary
- hidden Markov models
- spectral analysis
- noisy environments
- background noise
- conversational agent
- virtual agents
- emotion recognition
- affect recognition
- speech enhancement
- blind source separation
- pattern recognition
- sound signals
- computer vision
- sound source
- fundamental frequency
- speech quality
- virtual environment
- neural network
- virtual characters
- formant frequencies