Linear Self-Attention Approximation via Trainable Feedforward Kernel.
Uladzislau Yorsh, Alexander Kovalenko
Published in: CoRR (2022)
Keyphrases
- feed forward
- back propagation
- neural nets
- neural network
- artificial neural networks
- visual cortex
- closed form
- kernel function
- neural architecture
- artificial neural
- recurrent neural networks
- hidden layer
- biologically plausible
- kernel methods
- error tolerance
- polynomial kernels
- error back propagation
- recurrent networks
- neuron model
- primate visual cortex
- machine learning
- piecewise constant
- linear functions
- reproducing kernel Hilbert space
- multilayer perceptron
- visual attention
- high dimensional
- object recognition
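
The title refers to the standard attention-linearization trick: replace the softmax kernel exp(q·k) with an inner product of learned feature maps, φ(q)·φ(k), so that attention output_i = Σ_j φ(q_i)·φ(k_j) v_j / Σ_j φ(q_i)·φ(k_j) can be rearranged as φ(q_i)ᵀ(Σ_j φ(k_j)v_jᵀ) / φ(q_i)ᵀ(Σ_j φ(k_j)) and computed in time linear in sequence length. Below is a minimal sketch of that pattern with a trainable feedforward φ; the two-layer MLP, its layer sizes, and the activation choices are illustrative assumptions, not the paper's exact parameterization.

```python
import torch
import torch.nn as nn

class FeedforwardKernelAttention(nn.Module):
    """Linear-time attention with the softmax kernel replaced by a
    trainable feedforward feature map phi (illustrative sketch)."""

    def __init__(self, dim, feature_dim=64):
        super().__init__()
        # phi maps each query/key vector to a non-negative feature vector.
        # The two-layer MLP and the activations are assumptions for
        # illustration, not the authors' exact design.
        self.phi = nn.Sequential(
            nn.Linear(dim, feature_dim),
            nn.GELU(),
            nn.Linear(feature_dim, feature_dim),
            nn.ReLU(),  # non-negative features keep the normalizer positive
        )

    def forward(self, q, k, v):
        # q, k, v: (batch, seq_len, dim)
        q, k = self.phi(q), self.phi(k)                 # (B, N, F)
        # Sum_j phi(k_j) v_j^T, computed once for the whole sequence.
        kv = torch.einsum("bnf,bnd->bfd", k, v)         # (B, F, D)
        # Per-position normalizer phi(q_i) . Sum_j phi(k_j).
        z = 1.0 / (torch.einsum("bnf,bf->bn", q, k.sum(dim=1)) + 1e-6)
        # Output_i = phi(q_i)^T kv * z_i, never forming the N x N matrix.
        return torch.einsum("bnf,bfd,bn->bnd", q, kv, z)  # (B, N, D)
```

A quick usage check of the sketch:

```python
attn = FeedforwardKernelAttention(dim=32)
q = k = v = torch.randn(2, 128, 32)
out = attn(q, k, v)  # shape (2, 128, 32), O(N) in sequence length
```

The key design point is that the (B, F, D) summary `kv` and the (B, F) key sum are independent of the query position, so memory and compute scale linearly with sequence length instead of quadratically as in softmax attention.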