Intent based Multimodal Speech and Gesture Fusion for Human-Robot Communication in Assembly Situation.
Sheuli Paul, Michael Sintek, Veton Këpuska, Marius Silaghi, Liam Robertson
Published in: ICMLA (2022)
Keyphrases
- human-robot
- multimodal interfaces
- multimodal fusion
- human-robot interaction
- dialogue system
- pointing gestures
- gesture recognition
- human-computer interaction
- multimodal interaction
- robot behavior
- multi-stream
- audio-visual
- human communication
- hearing impaired
- human users
- robotic systems
- humanoid robot
- speech recognition
- action selection
- hand movements
- user interface
- data fusion
- multi-party
- computer vision
- natural human-computer interaction
- automatic speech recognition
- speech signal
- multi-objective
- high-dimensional
- natural language
- multi-agent
- knowledge base
- decision making
- machine learning