Why do people gesture more during disfluent speech? A pragmatic account.
Yagmur Deniz Kisa, Cordelia Achen, Susan Goldin-Meadow, Daniel Casasanto
Published in: CogSci (2022)
Keyphrases
- multimodal interfaces
- speech recognition
- hand movements
- gesture recognition
- multi stream
- speech signal
- human computer interaction
- sign language
- human centered
- speech synthesis
- text to speech
- broadcast news
- gaze control
- recognition engine
- speech processing
- real time
- data sets
- speaker verification
- vocal tract
- spoken language
- audio visual