Assistive Navigation Using Deep Reinforcement Learning Guiding Robot With UWB/Voice Beacons and Semantic Feedbacks for Blind and Visually Impaired People.
Chen-Lung Lu, Zi-Yan Liu, Jui-Te Huang, Ching-I Huang, Bo-Hui Wang, Yi Chen, Nien-Hsin Wu, Hsueh-Cheng Wang, Laura Giarré, Pei-Yi Kuo
Published in: Frontiers Robotics AI (2021)
Keyphrases
- blind and visually impaired
- reinforcement learning
- mobile robot
- robot control
- real robot
- autonomous learning
- perceptual aliasing
- human-robot interaction
- semantic web
- robotic systems
- robot navigation
- state space
- semantic information
- function approximation
- autonomous robots
- humanoid robot
- indoor environments
- communication systems
- motion planning
- multi-band
- reinforcement learning algorithms
- unknown environments
- robot behavior
- machine learning
- multi-agent
- optimal policy
- multi-robot
- natural language
- ultra-wideband
- dynamic programming
- robot manipulators
- relevance feedback
- mobile robotics
- obstacle avoidance
- model-free
- semantic similarity
- vision system
- path planning
- service robots
- Markov decision processes
- image retrieval
- position and orientation
- learning algorithm