On-line learning of lexical items and grammatical constructions via speech, gaze and action-based human-robot interaction.
Grégoire Pointeau, Maxime Petit, Xavier Hinaut, Guillaume Gibert, Peter Ford Dominey
Published in: INTERSPEECH (2013)
Keyphrases
- human robot interaction
- spoken language
- gaze control
- lexical features
- human robot
- gesture recognition
- human centered
- speech recognition
- eye tracking
- humanoid robot
- eye movements
- text to speech
- parse tree
- hand movements
- robot programming
- pointing gestures
- service robots
- natural language
- automatic speech recognition
- speech signal
- context sensitive
- semantic relations
- head movements
- eye gaze
- gaze tracking
- natural interaction
- hidden Markov models
- eye tracker
- multimodal interfaces
- WordNet
- spatio temporal
- computer vision