Disentangled Explanations of Neural Network Predictions by Finding Relevant Subspaces.
Pattarawat Chormai, Jan Herrmann, Klaus-Robert Müller, Grégoire Montavon. Published in: CoRR (2022)
Keyphrases
- finding relevant
- neural network
- pattern recognition
- neural network is trained
- entity search
- backpropagation
- high-dimensional
- artificial neural networks
- fault diagnosis
- genetic algorithm
- multi-layer perceptron
- neural network model
- generating explanations
- feed-forward neural networks
- activation function
- network architecture
- recurrent neural networks
- feed-forward
- high-dimensional data
- low-dimensional
- feature space
- neural nets
- online auctions
- hidden layer
- training algorithm
- fuzzy neural network
- self-organizing maps
- linear subspace
- fuzzy ARTMAP
- nearest neighbor
- case-based reasoning
- k-NN
- short term prediction