Successfully and efficiently training deep multi-layer perceptrons with logistic activation function simply requires initializing the weights with an appropriate negative mean.
Ahmet Yilmaz, Riccardo Poli
Published in: Neural Networks (2022)
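The title states the paper's core technique: deep multi-layer perceptrons with logistic (sigmoid) activations can be trained successfully if the initial weights are drawn from a distribution with a suitably negative mean. Below is a minimal NumPy sketch of that idea; the Gaussian initializer, the mean of -0.5, the standard deviation of 0.5, and the layer sizes are illustrative assumptions, not the values recommended in the paper.

```python
# Minimal sketch: negative-mean weight initialization for a deep MLP
# with logistic activations. All numeric choices here are assumptions
# for illustration, not the paper's prescribed values.
import numpy as np

def logistic(x):
    # Logistic (sigmoid) activation; outputs lie in (0, 1).
    return 1.0 / (1.0 + np.exp(-x))

def init_negative_mean(layer_sizes, mean=-0.5, std=0.5, rng=None):
    """Return weight matrices drawn from N(mean, std^2), mean < 0."""
    rng = np.random.default_rng() if rng is None else rng
    return [rng.normal(mean, std, size=(n_in, n_out))
            for n_in, n_out in zip(layer_sizes[:-1], layer_sizes[1:])]

def forward(x, weights):
    """Forward pass through a bias-free MLP with logistic activations."""
    for W in weights:
        x = logistic(x @ W)
    return x

# Example: a deep (10-hidden-layer) MLP with negative-mean weights.
weights = init_negative_mean([32] + [64] * 10 + [1])
out = forward(np.random.default_rng(0).standard_normal((4, 32)), weights)
print(out.shape)  # (4, 1)
```

The intuition, hedged, is that logistic outputs are strictly positive, so negative-mean incoming weights offset them and keep pre-activations near zero instead of drifting into the saturated regions where gradients vanish.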
Keyphrases
- multi layer perceptron
- activation function
- single hidden layer
- neural network
- connection weights
- output layer
- hidden neurons
- hidden nodes
- artificial neural networks
- feedforward neural networks
- radial basis function
- neural architecture
- neural network model
- hidden units
- radial basis function network
- support vector machine
- extreme learning machine
- rbf network
- neuro fuzzy
- multilayer perceptron
- feed forward
- artificial intelligence
- data mining
- hidden layer
- rbf neural network
- data sets
- training phase
- learning rate
- input output
- linear combination