Tracking changes in continuous emotion states using body language and prosodic cues.
Angeliki Metallinou, Athanassios Katsamanis, Yun Wang, Shrikanth S. Narayanan
Published in: ICASSP (2011)
Keyphrases
- text to speech synthesis
- cue integration
- text to speech
- human body
- real time
- prosodic features
- particle filter
- upper body
- natural language
- articulated body
- visual cues
- multiple visual cues
- specification language
- articulated objects
- particle filtering
- motion analysis
- video surveillance
- language learning
- facial expressions
- programming language
- high level
- hidden markov models
- finite state automaton
- multimodal fusion
- human computer interaction
- speech recognition
- multiple cues
- mean shift
- initial state
- robust tracking
- appearance model
- motion tracking
- target language