SmartSAGE: training large-scale graph neural networks using in-storage processing architectures.
Yunjae Lee, Jinha Chung, Minsoo Rhu. Published in: ISCA (2022)
Keyphrases
- neural network
- training process
- training algorithm
- processing capabilities
- feedforward neural networks
- neural network training
- real time
- back propagation
- feed forward neural networks
- pattern recognition
- backpropagation algorithm
- storage and retrieval
- random access
- graph theoretic
- multi layer perceptron
- storage requirements
- directed graph
- graph structure
- scientific data analysis
- error back propagation
- graph theory
- file system
- small scale
- training set
- fuzzy logic
- supervised learning
- structured data
- data processing
- genetic algorithm
- memory management
- artificial neural networks
- graph representation
- information processing
- training phase
- graph model
- directed acyclic graph
- weighted graph
- recurrent neural networks
- graph partitioning
- random walk
- fault diagnosis
- graph matching
- recurrent networks
- training data
- decision trees
- neural network structure
- neural architectures
- learning algorithm