Visual Spatial Attention and Proprioceptive Data-Driven Reinforcement Learning for Robust Peg-in-Hole Task Under Variable Conditions.
André Yuji Yasutomi, Hideyuki Ichiwara, Hiroshi Ito, Hiroki Mori, Tetsuya Ogata
Published in: CoRR (2023)
Keyphrases
- data driven
- reinforcement learning
- selective attention
- visual features
- real world conditions
- spatial and temporal
- visual information
- state space
- sufficient conditions
- machine learning
- spatial information
- spatio temporal
- space time
- visual field
- function approximation
- low level
- computer vision
- spatial data
- optimal policy
- partial occlusion
- learning process
- optimal control
- reinforcement learning algorithms
- human vision
- face recognition
- spatial configurations
- learning algorithm