On the Validity of Self-Attention as Explanation in Transformer Models
Gino Brunner
Yang Liu
Damián Pascual
Oliver Richter
Roger Wattenhofer
Published in: CoRR (2019)
Keyphrases
artificial neural networks
statistical models
historical data
structural model
machine learning
social networks
e-learning
website
multi-agent
expert systems
prior knowledge
probabilistic model
visual attention
modeling framework
generating explanations