Multimodal human communication - Targeting facial expressions, speech content and prosody.
Christina Regenbogen, Daniel A. Schneider, Raquel E. Gur, Frank Schneider, Ute Habel, Thilo Kellermann
Published in: NeuroImage (2012)
Keyphrases
- facial expressions
- human communication
- multi-party
- facial expression recognition
- nonverbal communication
- conversational agent
- human-computer interaction
- audio-visual
- human faces
- emotional state
- multi-stream
- emotion recognition
- face images
- hand movements
- human-computer
- speech synthesis
- facial features
- facial images
- video sequences
- recognition of facial expressions
- multimedia
- text-to-speech
- face recognition
- head motion
- natural language
- expression recognition
- facial animation
- head movements
- Cohn-Kanade
- expression classification
- affect sensing
- multimodal
- semantic information
- machine learning
- principal component analysis
- privacy preserving
- feature extraction
- user interface
- facial action units
- facial actions
- mental states
- recognizing facial expressions