Analyzing Forward Robustness of Feedforward Deep Neural Networks with LeakyReLU Activation Function Through Symbolic Propagation.
Giulio Masetti, Felicita Di Giandomenico. Published in: PKDD/ECML Workshops (2020)
Keyphrases
- activation function
- feed forward
- neural network
- back propagation
- neural nets
- hidden layer
- artificial neural networks
- feed forward neural networks
- hidden neurons
- neural architecture
- recurrent neural networks
- backpropagation algorithm
- visual cortex
- pattern recognition
- feedforward neural networks
- multilayer perceptron
- single layer
- hidden nodes
- adaptive neural
- sigmoid function
- extreme learning machine
- output layer
- recurrent networks
- training algorithm
- fault diagnosis
- rough sets
- fuzzy logic
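For reference, the LeakyReLU activation named in the title is the piecewise-linear function f(x) = x for x >= 0 and f(x) = a*x for x < 0, with a small positive slope a. The sketch below is a minimal illustration of that function and a feedforward pass using it; the slope value 0.01 and the network dimensions are assumptions for the example, not taken from the paper.

```python
import numpy as np

def leaky_relu(x, alpha=0.01):
    # f(x) = x if x >= 0, else alpha * x (alpha = 0.01 is an assumed default)
    return np.where(x >= 0, x, alpha * x)

def feedforward(x, weights, biases, alpha=0.01):
    # Propagate an input through a feedforward network with LeakyReLU
    # applied at every layer (a simplification for illustration only).
    a = x
    for W, b in zip(weights, biases):
        a = leaky_relu(W @ a + b, alpha)
    return a

# Toy example: 3-input, one hidden layer of 4 units, 2 outputs.
rng = np.random.default_rng(0)
weights = [rng.standard_normal((4, 3)), rng.standard_normal((2, 4))]
biases = [np.zeros(4), np.zeros(2)]
print(feedforward(np.array([1.0, -0.5, 2.0]), weights, biases))
```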