Sensory Modality of Input Influences the Encoding of Motion Events in Speech But Not Co-Speech Gestures.
Ezgi Mamus, Laura J. Speed, Asli Özyürek, Asifa Majid. Published in: CogSci (2021)