Spectrotemporal cues and attention modulate neural networks for speech and music.
Felix Haiduk, Lucas Benjamin, Benjamin Morillon, Philippe Albouy. Published in: CogSci (2021)
Keyphrases
- neural network
- audio signals
- speech music discrimination
- pattern recognition
- audio features
- speech recognition
- recurrent neural networks
- fuzzy logic
- autistic children
- prosodic features
- music information retrieval
- feed-forward
- audio recordings
- artificial neural networks
- music retrieval
- speech signal
- backpropagation
- visual cues
- fuzzy systems
- speaker verification
- multi-layer
- hidden Markov models
- spontaneous speech
- music collections
- appearance cues
- multilayer perceptron
- visual attention
- multimodal