Creating a Gesture-Speech Dataset for Speech-Based Automatic Gesture Generation.
Kenta Takeuchi, Souichirou Kubota, Keisuke Suzuki, Dai Hasegawa, Hiroshi Sakuta. Published in: HCI (29) (2017)
Keyphrases
- multimodal interfaces
- speech recognition
- hand movements
- hidden Markov models
- gesture recognition
- human computer interaction
- multimodal interaction
- speech synthesis
- speech signal
- text to speech
- multi stream
- audio visual
- spoken language
- endpoint detection
- learning mechanism
- broadcast news
- speaker recognition
- sign language
- automatic speech recognition
- emotion recognition
- human centered
- information retrieval
- speaker identification
- English text
- generation process
- fully automatic
- Gaussian mixture model
- American Sign Language
- question answering
- recognition engine
- object detection
- hearing impaired
- speech music discrimination