Count-Based Exploration in Feature Space for Reinforcement Learning.
Jarryd Martin, Suraj Narayanan Sasikumar, Tom Everitt, Marcus Hutter. Published in: IJCAI (2017)
Keyphrases
- feature space
- reinforcement learning
- active exploration
- exploration strategy
- action selection
- exploration-exploitation
- model based reinforcement learning
- state space
- autonomous learning
- Markov decision processes
- feature vectors
- high dimensional
- function approximation
- mean shift
- reinforcement learning algorithms
- input space
- dimensionality reduction
- kernel function
- exploration-exploitation tradeoff
- high dimensional feature space
- principal component analysis
- training samples
- low dimensional
- feature extraction
- high dimensionality
- model free
- machine learning
- feature selection
- training set
- image retrieval
- classification accuracy
- temporal difference
- data points
- dimension reduction
- feature set
- optimal policy
- input data
- multi agent
- balancing exploration and exploitation
- optimal control
- support vector machine
- hyperplane
- aggregation functions
- image classification
- policy search
- image representation
- learning algorithm
- particle filter