Multimodal Object Categorization with Reduced User Load through Human-Robot Interaction in Mixed Reality.
Hitoshi Nakamura, Lotfi El Hafi, Akira Taniguchi, Yoshinobu Hagiwara, Tadahiro Taniguchi. Published in: IROS (2022)
Keyphrases
- human-robot interaction
- object categorization
- mixed reality
- pointing gestures
- augmented reality
- gesture recognition
- object categories
- object recognition
- virtual world
- user interface
- multimodal
- distance learning
- intelligent environments
- multimodal interfaces
- bag of features
- visual words
- human body
- active learning
- multiscale