Vision-Depth Landmarks and Inertial Fusion for Navigation in Degraded Visual Environments
Shehryar Khattak, Christos Papachristos, Kostas Alexis. Published in: CoRR (2019)
Keyphrases
- landmark recognition
- visual perception
- robot navigation
- visual processing
- visual features
- service robots
- visual navigation
- human vision
- autonomous robots
- depth map
- robotic systems
- data fusion
- dynamic model
- image fusion
- image processing
- visual input
- dynamic environments
- image classification
- navigation systems
- visual scene
- indoor environments
- everyday objects
- visual cues
- motion blur
- depth information
- visual information
- low level