Combining speech-based and linguistic classifiers to recognize emotion in user spoken utterances.
David Griol, José Manuel Molina, Zoraida Callejas
Published in: Neurocomputing (2019)
Keyphrases
- speech recognition
- speech sounds
- automatic speech recognition
- natural language
- support vector machines
- spoken language
- emotion recognition
- user interface
- lexical features
- human communication
- recommender systems
- end users
- text to speech synthesis
- broadcast news
- emotional state
- user interaction
- spoken dialogue systems
- naive bayes
- spoken documents
- emotion classification
- decision trees
- training data
- language understanding
- facial expressions
- training set
- multimodal interfaces
- natural language processing
- human computer interaction
- referring expressions
- user model
- user preferences
- feature selection
- information retrieval