Advancing Multimodal Fusion in Human-Computer Interaction: Integrating Eye Tracking, Lips Detection, Speech Recognition, and Voice Synthesis for Intelligent Cursor Control and Auditory Feedback.
Gayatri Jagnade, Sunil Sable, Mitesh Ikar
Published in: ICCCNT (2023)
Keyphrases
- eye tracking
- speech recognition
- human computer interaction
- multimodal interfaces
- noisy environments
- hidden Markov models
- eye tracker
- visual attention
- emotion recognition
- user interface
- language model
- eye gaze
- speech signal
- object detection
- pattern recognition
- human computer
- human computer interface
- visual speech
- gesture recognition
- augmented reality
- signal processing
- facial expression recognition
- speaker identification
- software engineering
- interface design
- gaze estimation
- human activities
- eye movements
- visual information
- facial features
- multimodal
- relevance feedback