Context-sensitive multimodal emotion recognition from speech and facial expression using bidirectional LSTM modeling.
Martin Wöllmer, Angeliki Metallinou, Florian Eyben, Björn W. Schuller, Shrikanth S. Narayanan
Published in: INTERSPEECH (2010)
Keyphrases
- emotion recognition
- context-sensitive
- facial expressions
- audio-visual
- facial expression analysis
- emotional speech
- facial expression recognition
- emotional state
- multimodal
- recognizing facial expressions
- video sequences
- human faces
- face images
- facial images
- language model
- sentiment analysis
- face recognition
- facial features
- expression recognition
- natural language
- information fusion
- information retrieval
- visual data
- human-computer interaction
- head motion
- image retrieval
- multimedia
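The core technique named in the title, bidirectional LSTM modeling over a sequence of audio-visual feature frames, can be illustrated with a minimal numpy sketch. This is not the authors' implementation; the feature dimension, hidden size, and random weights are hypothetical placeholders, and the point is only to show how a forward and a backward LSTM pass give every frame both past and future context.

```python
import numpy as np

def lstm_step(x, h, c, W, U, b):
    """One LSTM time step; W, U, b hold the four gates stacked row-wise."""
    z = W @ x + U @ h + b              # stacked gate pre-activations
    H = h.size
    i = 1 / (1 + np.exp(-z[:H]))       # input gate
    f = 1 / (1 + np.exp(-z[H:2*H]))    # forget gate
    o = 1 / (1 + np.exp(-z[2*H:3*H]))  # output gate
    g = np.tanh(z[3*H:])               # candidate cell update
    c = f * c + i * g
    h = o * np.tanh(c)
    return h, c

def blstm_features(X, params_fw, params_bw):
    """Run a forward and a backward pass over frames X (T x D) and
    concatenate the per-frame hidden states -> (T x 2H)."""
    H = params_fw[2].size // 4
    def run(params, frames):
        h, c = np.zeros(H), np.zeros(H)
        out = []
        for x in frames:
            h, c = lstm_step(x, h, c, *params)
            out.append(h)
        return out
    fw = run(params_fw, X)             # left-to-right (past context)
    bw = run(params_bw, X[::-1])[::-1] # right-to-left (future context)
    return np.concatenate([np.stack(fw), np.stack(bw)], axis=1)

rng = np.random.default_rng(0)
D, H, T = 6, 4, 10  # hypothetical fused feature dim, hidden size, frame count
make = lambda: (0.1 * rng.standard_normal((4 * H, D)),
                0.1 * rng.standard_normal((4 * H, H)),
                np.zeros(4 * H))
feats = blstm_features(rng.standard_normal((T, D)), make(), make())
print(feats.shape)  # (10, 8): each frame is described by past and future context
```

In a trained system these context-sensitive per-frame features would feed a classifier over emotional states; here the weights are random, so only the shapes and data flow are meaningful.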