Toward a One-interaction Data-driven Guide: Putting Co-speech Gesture Evidence to Work for Ambiguous Route Instructions.
Nicholas DePalma, Jesse Smith, Sonia Chernova, Jessica K. Hodgins. Published in: HRI (Companion) (2021)
Keyphrases
- data-driven
- multimodal interfaces
- human-computer interaction
- multimodal interaction
- gesture recognition
- hand movements
- user interaction
- route planning
- empirical evidence
- human communication
- human-centered
- user interface
- gaze control
- speech recognition
- audio visual
- evidential reasoning
- visual feedback
- emotional state
- speech signal
- multi-stream
- recognition engine
- multiple interpretations
- computer programs
- learning mechanism
- automatic speech recognition
- neural network
- road network
- multi-modal
- pattern recognition
- search engine
- information retrieval