Pay Attention via Quantization: Enhancing Explainability of Neural Networks via Quantized Activation
Yuma Tashiro, Hiromitsu Awano. Published in: IEEE Access (2023)
Keyphrases
- neural network
- pattern recognition
- transform coefficients
- genetic algorithm
- artificial neural networks
- fuzzy logic
- feed forward
- fuzzy systems
- neural network model
- multi layer
- vocabulary tree
- radial basis function
- back propagation
- network architecture
- transform domain
- quantization error
- recurrent neural networks
- expert systems