Graph Expansion in Pruned Recurrent Neural Network Layers Preserves Performance.
Suryam Arnav Kalra, Pabitra Mitra, Arindam Biswas, Biswajit Basu
Published in: Tiny Papers @ ICLR (2024)
Keyphrases
- recurrent neural networks
- neural network
- complex-valued
- recurrent networks
- feed-forward
- reservoir computing
- echo state networks
- directed graph
- hidden layer
- random walk
- graph theory
- graph structure
- artificial neural networks
- feedforward neural networks
- long short-term memory
- adaptive neural
- neural model
- bipartite graph
- graph representation
- graph theoretic
- structured data
- training data
- hebbian learning
- machine learning
- directed acyclic graph
- multi layer
- neural network model
- decision making
- social networks