Multimodal Integration of Emotional Signals from Voice, Body, and Context: Effects of (In)Congruence on Emotion Recognition and Attitudes Towards Robots.
Christiana Tsiourti, Astrid Weiss, Katarzyna Wac, Markus Vincze
Published in: Int. J. Soc. Robotics (2019)
Keyphrases
- emotion recognition
- attitudes toward robots
- audio-visual
- physiological signals
- emotional speech
- statistically significant
- facial expressions
- human-computer interaction
- sentiment analysis
- facial images
- personal health information
- affective computing
- information fusion
- emotional state
- emotion classification
- visual information
- affective states
- multimodal
- data fusion
- context-aware
- multi-stream
- contextual information
- text classification
- face recognition
- multimedia