Distributed Hybrid CPU and GPU Training for Graph Neural Networks on Billion-Scale Graphs
Da Zheng, Xiang Song, Chengru Yang, Dominique LaSalle, Qidong Su, Minjie Wang, Chao Ma, George Karypis

Published in: CoRR (2021)
Keyphrases
- neural network
- training process
- graph structure
- graph theory
- directed graph
- distributed sensor networks
- graph representation
- graph matching
- weighted graph
- graph theoretic
- graph structures
- graph databases
- graph mining
- graph partitioning
- labeled graphs
- training algorithm
- graph clustering
- graph classification
- feedforward neural networks
- series parallel
- graph model
- graph construction
- graph theoretical
- data transfer
- graph properties
- adjacency matrix
- graphics processors
- undirected graph
- graphics processing units
- graph isomorphism
- feed forward neural networks
- subgraph isomorphism
- graph search
- spanning tree
- bipartite graph
- multilayer neural network
- graph patterns
- structural pattern recognition
- random walk
- distributed systems
- random graphs
- dynamic graph
- multi layer perceptron
- gpu implementation
- finding the shortest path
- graph kernels
- graph data
- pattern recognition
- edge weights
- graph representations
- query graph
- connected components
- planar graphs
- web graph
- parallel implementation
- neighborhood graph
- real world graphs
- artificial neural networks
- general purpose
- structured data
- real time
- maximum clique
- graphics hardware
- reachability queries
- graph layout
- connected graphs