The Use of Co-Speech Gestures in Conveying Japanese Phrases with Verbs.
Yuki Handa, Tetsuya Yasuda, Harumi Kobayashi
Published in: CogSci (2021)
Keyphrases
- hand movements
- multiword
- spoken words
- speech recognition
- hidden Markov models
- gesture recognition
- automatic speech recognition
- natural language
- hand gestures
- sign language
- speech signal
- human communication
- recognition engine
- speech synthesis
- endpoint detection
- Japanese language
- multimodal interfaces
- spoken language
- language acquisition
- highly ambiguous
- broadcast news
- keywords
- noisy environments
- human robot interaction
- word sense disambiguation
- WordNet
- pattern recognition
- semantic roles
- text entry
- distinctive features
- pointing gestures
- context sensitive