Context-dependent human-robot interaction using indicating motion via Virtual-City interface.
Eri Sato-Shimokawara, Yusuke Fukusato, Jun Nakazato, Toru Yamaguchi
Published in: FUZZ-IEEE (2008)
Keyphrases
- context dependent
- human robot interaction
- humanoid robot
- natural interaction
- human robot
- semantic level
- motion planning
- gesture recognition
- context free
- image sequences
- robot programming
- virtual environment
- motion estimation
- low level
- human centered
- natural language
- optical flow
- human motion
- multi modal
- space time
- augmented reality
- user interface
- virtual reality
- pointing gestures
- spatial and temporal
- camera motion
- spatio temporal
- moving objects
- high level
- neural network
- human computer interaction
- three dimensional
- computer vision