EmoTour: Multimodal Emotion Recognition using Physiological and Audio-Visual Features
Yuki Matsuda, Dmitrii Fedotov, Yuta Takahashi, Yutaka Arakawa, Keiichi Yasumoto, Wolfgang Minker
Published in: UbiComp/ISWC Adjunct (2018)
Keyphrases
- visual features
- multimodal fusion
- audio visual
- visual information
- emotion recognition
- visual data
- physiological signals
- image classification
- visual content
- relevance feedback
- multimodal interfaces
- audio features
- low level
- image search
- image retrieval
- keywords
- semantic features
- low level features
- image annotation
- multimedia
- emotional state
- visual appearance
- image collections
- bridge the semantic gap
- web images
- semantic gap
- semantic concepts
- global features
- acoustic features
- visual properties
- visual similarity
- low level visual features
- music retrieval
- feature extraction
- speaker verification
- bag of features