Taming the sign problem of explicitly antisymmetrized neural networks via rough activation functions.
Nilin Abrahamsen, Lin Lin. Published in: CoRR (2022)
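As context for the title: explicitly antisymmetrizing a network means summing its output over all permutations of the particle inputs, weighted by the permutation sign, so that swapping any two inputs flips the sign of the output. A minimal illustrative sketch of that operation (the function names here are hypothetical, not taken from the paper):

```python
import itertools
import math

def perm_sign(perm):
    """Sign of a permutation, computed by counting inversions."""
    inversions = sum(
        1
        for i in range(len(perm))
        for j in range(i + 1, len(perm))
        if perm[i] > perm[j]
    )
    return -1 if inversions % 2 else 1

def antisymmetrize(f, xs):
    """Explicit antisymmetrization of f over its N inputs:
    (A f)(x) = (1/N!) * sum_sigma sgn(sigma) * f(x_sigma).
    Here f stands in for an arbitrary network; the paper studies
    how the choice of activation in f affects this construction."""
    n = len(xs)
    total = 0.0
    for perm in itertools.permutations(range(n)):
        total += perm_sign(perm) * f([xs[i] for i in perm])
    return total / math.factorial(n)
```

For example, antisymmetrizing a symmetric function such as `xs[0] + xs[1]` yields zero, while an already antisymmetric function such as `xs[0] - xs[1]` is left unchanged. The factorial cost of this sum is why the interaction between antisymmetrization and the activation function matters in practice.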
Keyphrases
- activation function
- neural network
- feedforward neural networks
- backpropagation
- artificial neural networks
- hidden layer
- hidden neurons
- connection weights
- multilayer perceptron
- neural architecture
- learning rate
- hidden nodes
- radial basis function
- basis functions
- network architecture
- training phase
- low dimensional
- high dimensional