Barad-dur: Near-Storage Accelerator for Training Large Graph Neural Networks.
Jiyoung An, Esmerald Aliaj, Sang-Woo Jun. Published in: PACT (2023)
Keyphrases
- neural network
- training process
- training algorithm
- feedforward neural networks
- multi layer perceptron
- backpropagation algorithm
- artificial neural networks
- neural network training
- self organizing maps
- graph structure
- back propagation
- connected components
- graph based algorithm
- pattern recognition
- directed acyclic graph
- data storage
- structured data
- neural nets
- graph representation
- error back propagation
- neural network structure
- training set
- fuzzy logic
- supervised learning
- random walk
- storage and retrieval
- training samples
- training phase
- neural network model
- associative memory
- bipartite graph
- parallel implementation
- graph theoretic
- graph matching
- recurrent networks
- multi layer
- active learning
- recurrent neural networks