Using Gestures to Resolve Lexical Ambiguity in Storytelling with Humanoid Robots.
Susan McRoy, Catelyn Scholl
Published in: Dialogue & Discourse (2019)
Keyphrases
- humanoid robot
- human-robot interaction
- body movements
- biologically inspired
- multimodal
- motion planning
- motion primitives
- human-robot
- motion patterns
- gesture recognition
- hidden Markov models
- fully autonomous
- virtual environment
- WordNet
- domain-specific
- walking speed
- motion capture
- digital storytelling
- sign language
- motor skills
- manipulation tasks
- human motion
- semantic relations
- context-sensitive
- motor control
- imitation learning
- word sense disambiguation
- natural language processing
- action recognition
- keywords
- joint space
- lexical features
- card game
- video games
- hand gestures