Using Emotions to Complement Multi-Modal Human-Robot Interaction in Urban Search and Rescue Scenarios.
Sami Alperen Akgun, Moojan Ghafurian, Mark Crowley, Kerstin Dautenhahn
Published in: ICMI (2020)
Keyphrases
- multi-modal
- human-robot interaction
- humanoid robot
- urban search and rescue
- fully autonomous
- human-robot
- gesture recognition
- search and rescue
- robot programming
- multi-modality
- service robots
- cross-modal
- audio-visual
- pointing gestures
- natural interaction
- real-world
- medical images
- video search
- reinforcement learning
- high-dimensional
- human brain
- mobile robot
- software engineering
- state space