You Can Have Better Graph Neural Networks by Not Training Weights at All: Finding Untrained GNNs Tickets.
Tianjin Huang, Tianlong Chen, Meng Fang, Vlado Menkovski, Jiaxu Zhao, Lu Yin, Yulong Pei, Decebal Constantin Mocanu, Zhangyang Wang, Mykola Pechenizkiy, Shiwei Liu. Published in: LoG (2022)
Keyphrases
- neural network
- training process
- training algorithm
- feedforward neural networks
- graph structure
- pattern recognition
- back propagation
- backpropagation algorithm
- neural network training
- multi-layer perceptron
- maximum clique
- graph theory
- random walk
- fuzzy logic
- genetic algorithm
- strongly connected
- linear combination
- decision trees
- neural network model
- directed graph
- training set
- self-organizing maps
- recurrent networks
- fault diagnosis
- edge weights
- hidden neurons
- training samples
- error back propagation
- structured data
- graph representation
- activation function
- graph model
- linearly combined
- feed forward
- number of hidden units
- recurrent neural networks
- radial basis function network
- connected components
- bipartite graph
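The title's central idea, finding a performant subnetwork inside a randomly initialized GNN without ever updating its weights, can be illustrated with a minimal sketch. This is not the paper's implementation; it assumes an edge-popup-style approach in which the random weights stay frozen and only a score matrix (used to pick a sparse binary mask) would be trained. All names (`topk_mask`, `masked_gnn_layer`) and the toy graph are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)

def topk_mask(scores, sparsity):
    """Binary mask keeping the largest-magnitude score entries."""
    k = int(scores.size * (1.0 - sparsity))      # number of entries to keep
    flat = np.abs(scores).ravel()
    thresh = np.partition(flat, -k)[-k]          # k-th largest magnitude
    return (np.abs(scores) >= thresh).astype(scores.dtype)

# Frozen random weights and (would-be trainable) scores for one layer.
W = rng.standard_normal((16, 8))                 # never updated
scores = rng.standard_normal((16, 8))            # only these would be trained

def masked_gnn_layer(A_hat, X, W, scores, sparsity=0.5):
    """Graph convolution A_hat @ X @ (W * mask) over an untrained, masked W."""
    mask = topk_mask(scores, sparsity)
    return np.maximum(A_hat @ X @ (W * mask), 0.0)   # ReLU activation

# Tiny example: 4 nodes with a stand-in normalized adjacency and 16-dim features.
A_hat = np.eye(4) * 0.5 + 0.125
X = rng.standard_normal((4, 16))
H = masked_gnn_layer(A_hat, X, W, scores)
print(H.shape)                                   # (4, 8)
```

In an actual training loop only `scores` would receive gradients (via a straight-through estimator for the non-differentiable mask), while `W` remains at its random initialization, which is what makes the resulting subnetwork an "untrained ticket."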