Graph Expansion in Pruned Recurrent Neural Network Layers Preserve Performance.
Suryam Arnav Kalra, Arindam Biswas, Pabitra Mitra, Biswajit Basu
Published in: CoRR (2024)
Keyphrases
- recurrent neural networks
- neural network
- complex valued
- recurrent networks
- feed forward
- long short term memory
- random walk
- graph structure
- hidden layer
- neural model
- weighted graph
- graph representation
- feedforward neural networks
- artificial neural networks
- echo state networks
- reservoir computing
- graph model
- structured data
- bipartite graph
- feed forward neural networks
- multi layer
- connected components
- directed acyclic graph
- back propagation
- viterbi algorithm
- machine learning