Understanding Deep Neural Networks via Linear Separability of Hidden Layers.
Chao Zhang, Xinyu Chen, Wensheng Li, Lixue Liu, Wei Wu, Dacheng Tao
Published in: CoRR (2023)
Keyphrases
- hidden layer
- neural network
- linear separability
- back propagation
- activation function
- feed forward neural networks
- feedforward neural networks
- multilayer perceptron
- artificial neural networks
- feed forward
- recurrent neural networks
- backpropagation algorithm
- neural nets
- learning rate
- training algorithm
- pattern recognition
- linearly separable
- radial basis function
- rbf neural network
- hyperplane
- number of hidden layers
- genetic algorithm
- learning algorithm
- latent variables
- fuzzy logic
- feature space
- number of hidden units
- bp neural network
- input space
- data fusion
- fault diagnosis
- text categorization
- linear combination
- support vector machine
- support vector
- machine learning