Count-Based Exploration in Feature Space for Reinforcement Learning
Jarryd Martin, Suraj Narayanan Sasikumar, Tom Everitt, Marcus Hutter
Published in: CoRR (2017)
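The title refers to count-based exploration, in which an agent tracks how often it has encountered a state (here, a feature representation of a state) and adds an intrinsic reward bonus that decays as the state becomes familiar. The following is a minimal, hypothetical sketch of that general idea, not the paper's specific method: it counts discretised feature vectors and returns a bonus proportional to 1/sqrt(N). The class name `CountBonus` and the parameter `beta` are illustrative assumptions.

```python
from collections import defaultdict
import math

class CountBonus:
    """Exploration bonus beta / sqrt(N(phi(s))) over discretised features.

    Hypothetical sketch: counts visits to each feature key and returns
    a bonus that shrinks as that key is seen more often.
    """

    def __init__(self, beta=1.0):
        self.beta = beta
        self.counts = defaultdict(int)  # feature key -> visit count

    def bonus(self, features):
        """Increment the visit count for these features, return the bonus."""
        key = tuple(features)          # discretised feature vector as dict key
        self.counts[key] += 1
        return self.beta / math.sqrt(self.counts[key])

b = CountBonus(beta=1.0)
print(b.bonus([0, 1]))  # first visit: bonus 1.0
print(b.bonus([0, 1]))  # second visit: bonus 1/sqrt(2)
print(b.bonus([2, 3]))  # novel features: bonus 1.0 again
```

In practice the bonus would be added to the environment reward when training a policy, so that rarely visited regions of feature space look temporarily more rewarding.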
Keyphrases
- feature space
- reinforcement learning
- active exploration
- exploration strategy
- action selection
- exploration-exploitation
- autonomous learning
- model-based reinforcement learning
- dimensionality reduction
- function approximation
- high dimensional
- mean shift
- multi-agent
- state space
- kernel function
- model-free
- image representation
- input space
- multi-agent reinforcement learning
- exploration-exploitation tradeoff
- learning algorithm
- feature extraction
- feature vectors
- principal component analysis
- classification accuracy
- hyperplane
- image retrieval
- learning process
- input data
- support vector machine
- Markov decision processes
- optimal policy
- training samples
- supervised learning
- multiscale
- active learning
- training set
- policy search
- data points
- high dimensional feature space
- kernel PCA
- high dimensionality
- kernel methods
- feature set
- low dimensional