Distributed Hybrid CPU and GPU training for Graph Neural Networks on Billion-Scale Heterogeneous Graphs.
Da Zheng, Xiang Song, Chengru Yang, Dominique LaSalle, George Karypis. Published in: KDD (2022)
Keyphrases
- neural network
- graph theory
- graph representation
- directed graph
- graph structure
- weighted graph
- training process
- graph matching
- graph model
- distributed sensor networks
- heterogeneous computing
- graph theoretic
- graph databases
- graph clustering
- graph mining
- graph construction
- training algorithm
- adjacency matrix
- data transfer
- labeled graphs
- graph properties
- graphics processing units
- graph data
- bipartite graph
- subgraph isomorphism
- distributed systems
- undirected graph
- feedforward neural networks
- graph classification
- series parallel
- random graphs
- graph search
- graph partitioning
- heterogeneous environments
- edge weights
- spanning tree
- pattern recognition
- graph isomorphism
- multi layer perceptron
- reachability queries
- multilayer neural network
- random walk
- back propagation
- graphics processors
- heterogeneous data
- gpu implementation
- structured data
- planar graphs
- parallel processing
- graph patterns
- graph kernels
- general purpose
- artificial neural networks
- web graph
- maximal cliques
- connected graphs
- finding the shortest path
- dynamic graph
- polynomial time complexity
- query graph
- real world graphs
- small world
- maximum clique
- neighborhood graph
- real time