Hands-Free Human-Robot Interaction Using Multimodal Gestures and Deep Learning in Wearable Mixed Reality.
Kyeong-Beom Park, Sung Ho Choi, Jae Yeol Lee, Yalda Ghasemi, Mustafa Mohammed, Heejin Jeong
Published in: IEEE Access (2021)
Keyphrases
- deep learning
- hands-free
- mixed reality
- human-robot interaction
- pointing gestures
- user interface
- augmented reality
- gesture recognition
- eye gaze
- virtual world
- text entry
- face tracking
- distance learning
- unsupervised learning
- intelligent environments
- machine learning
- human-computer interaction
- humanoid robot
- eye tracker
- image processing
- head movements
- mobile phone
- mental models
- multimodal
- hand gestures
- software engineering
- hidden Markov models
- eye tracking
- viewpoint
- expert systems
- reinforcement learning
- ambient intelligence