Scalable Consistency Training for Graph Neural Networks via Self-Ensemble Self-Distillation.
Cole Hawkins, Vassilis N. Ioannidis, Soji Adeshina, George Karypis. Published in: CoRR (2021)
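The paper itself is not reproduced on this page, so purely as a rough illustration of the technique named in the title: below is a minimal, hypothetical PyTorch sketch of consistency training for a graph neural network via self-distillation from an exponential-moving-average "self-ensemble" teacher. All names here (`TinyGCN`, `train_step`, `a_norm`, `lam`, `ema_decay`) are illustrative assumptions, not taken from the paper, and the authors' actual method may differ.

```python
import torch
import torch.nn.functional as F


class TinyGCN(torch.nn.Module):
    """A minimal dense two-layer GCN; dropout makes each forward pass stochastic."""

    def __init__(self, in_dim, hid_dim, out_dim, p_drop=0.5):
        super().__init__()
        self.lin1 = torch.nn.Linear(in_dim, hid_dim)
        self.lin2 = torch.nn.Linear(hid_dim, out_dim)
        self.p_drop = p_drop

    def forward(self, x, a_norm):
        # a_norm: normalized dense adjacency matrix, shape [n_nodes, n_nodes]
        h = F.relu(a_norm @ self.lin1(x))
        h = F.dropout(h, p=self.p_drop, training=self.training)
        return a_norm @ self.lin2(h)


def train_step(student, teacher, opt, x, a_norm, y, labeled_mask,
               ema_decay=0.99, lam=1.0):
    """One consistency-training step (hypothetical): supervised loss on the
    labeled nodes plus a KL term distilling the EMA teacher into the student."""
    student.train()
    opt.zero_grad()
    logits = student(x, a_norm)
    # Supervised cross-entropy on the labeled subset of nodes.
    sup = F.cross_entropy(logits[labeled_mask], y[labeled_mask])
    # Self-distillation target: soft predictions of the frozen EMA teacher.
    with torch.no_grad():
        teacher.eval()
        t_probs = F.softmax(teacher(x, a_norm), dim=-1)
    cons = F.kl_div(F.log_softmax(logits, dim=-1), t_probs,
                    reduction="batchmean")
    (sup + lam * cons).backward()
    opt.step()
    # Self-ensemble update: teacher parameters track an EMA of the student's.
    with torch.no_grad():
        for t_p, s_p in zip(teacher.parameters(), student.parameters()):
            t_p.mul_(ema_decay).add_(s_p, alpha=1.0 - ema_decay)
    return sup.item(), cons.item()
```

In this sketch the teacher would be initialized as a copy of the student (e.g. `copy.deepcopy(student)`) and is never trained directly; it is updated only through the EMA step, which is one common reading of a "self-ensemble" teacher.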
Keyphrases
- neural network
- training process
- training algorithm
- feed forward neural networks
- multi-layer perceptron
- training set
- neural network training
- artificial neural networks
- neural network ensemble
- backpropagation
- pattern recognition
- backpropagation algorithm
- global consistency
- neural network model
- graph representation
- graph model
- random forests
- structured data
- feed forward
- activation function
- neural nets
- graph theory
- graph structure
- random walk
- error backpropagation
- training data
- radial basis function network
- training phase
- competitive learning
- hidden layer
- ensemble learning
- graph matching
- test set
- multilayer neural network
- akaike information criterion
- spanning tree
- directed acyclic graph
- bipartite graph
- ensemble methods
- training examples
- fuzzy logic
- support vector
- feature selection
- learning algorithm