Combining body pose, gaze, and gesture to determine intention to interact in vision-based interfaces.
Julia Schwarz
Charles Claudius Marais
Tommer Leyvand
Scott E. Hudson
Jennifer Mankoff
Published in: CHI (2014)
Keyphrases
input device
eye tracking
human body
augmented reality
hand gestures
human computer interaction
body pose
user interface
gesture recognition
eye movements
vision system
appearance model
computer vision
eye gaze
prior model
multimodal interfaces
markerless
body parts