Extracting Acoustic Features of Japanese Speech to Classify Emotions
Takashi Yamazaki, Minoru Nakayama
Published in: FedCSIS (Communication Papers) (2017)
Keyphrases
- acoustic features
- speech signal
- speaker verification
- automatic speech recognition
- emotion recognition
- music genre classification
- visual features
- music information retrieval
- speech recognition
- audio stream
- cross-correlation
- speaker recognition
- audio features
- mel-frequency cepstral coefficients
- information retrieval
- noisy environments
- non-stationary
- emotional state
- information retrieval systems
- video sequences
- neural network
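Several of the keyphrases above (acoustic features, audio features, emotion recognition) concern frame-level feature extraction from a speech signal. The paper's actual feature set is not reproduced here; as a minimal illustrative sketch, assuming only NumPy and a synthetic sine tone standing in for speech, the snippet below computes two basic per-frame acoustic features, short-time energy and zero-crossing rate, of the kind often used alongside mel-frequency cepstral coefficients:

```python
import numpy as np

def frame_features(signal, frame_len=400, hop=160):
    """Split a 1-D signal into overlapping frames and compute
    per-frame short-time energy and zero-crossing rate (ZCR).
    frame_len=400 and hop=160 correspond to 25 ms / 10 ms at 16 kHz;
    these are illustrative defaults, not values from the paper."""
    n_frames = 1 + (len(signal) - frame_len) // hop
    energies, zcrs = [], []
    for i in range(n_frames):
        frame = signal[i * hop : i * hop + frame_len]
        # short-time energy: mean squared amplitude of the frame
        energies.append(float(np.mean(frame ** 2)))
        # ZCR: fraction of adjacent sample pairs whose sign changes
        zcrs.append(float(np.mean(np.abs(np.diff(np.sign(frame))) > 0)))
    return np.array(energies), np.array(zcrs)

# Synthetic 1-second, 16 kHz, 220 Hz tone as a stand-in for a speech signal.
sr = 16000
t = np.arange(sr) / sr
sig = 0.5 * np.sin(2 * np.pi * 220 * t)
energies, zcrs = frame_features(sig)
print(len(energies), len(zcrs))
```

In a real emotion-classification pipeline such frame-level features would be aggregated over an utterance (e.g. means and variances) before being fed to a classifier such as a neural network.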