Rethinking Self-Attention: Towards Interpretability in Neural Parsing
Khalil Mrini, Franck Dernoncourt, Quan Hung Tran, Trung Bui, Walter Chang, Ndapa Nakashole
Published in: EMNLP (Findings) (2020)
Keyphrases
- network architecture
- neural network
- machine learning
- focus of attention
- natural language
- natural language processing
- prediction accuracy
- visual attention
- linguistic analysis
- artificial intelligence
- dependency parsing
- neural model
- nonlinear predictive control
- biological vision systems
- Penn Treebank
- computational neuroscience
- biologically plausible
- context-free
- bio-inspired
- biologically inspired
- associative memory
- pattern matching
- vision system