Scaling Equilibrium Propagation to Deep ConvNets by Drastically Reducing its Gradient Estimator Bias
Axel Laborieux, Maxence Ernoult, Benjamin Scellier, Yoshua Bengio, Julie Grollier, Damien Querlioz
Published in: CoRR (2020)
Keyphrases
- feed forward
- variance reduction
- gradient estimation
- maximum likelihood
- least squares
- back propagation
- game theory
- monte carlo
- deep learning
- importance sampling
- bias variance
- propagation model
- image sequences
- nash equilibrium
- estimation algorithm
- error estimation
- gradient information
- machine learning
- neural network
- artificial neural networks
- gradient method
- confidence intervals
- edge detection
- probabilistic model
- trade off