Multi-modal Emotion Recognition Based on Speech and Image.
Yongqiang Li, Qi He, Yongping Zhao, Hongxun Yao
Published in: PCM (1) (2017)
Keyphrases
- multi-modal
- audio-visual
- emotion recognition
- image data
- image features
- uni-modal
- emotional speech
- input image
- image retrieval
- image classification
- image content
- image regions
- speaker verification
- single modality
- image representation
- image collections
- high resolution
- human-computer interaction
- information fusion
- image annotation
- facial expressions
- low-level
- similarity measure
- human subjects
- visual data
- multimedia
- emotional state
- object detection
- feature vectors