You Can Have Better Graph Neural Networks by Not Training Weights at All: Finding Untrained GNNs Tickets.
Tianjin Huang, Tianlong Chen, Meng Fang, Vlado Menkovski, Jiaxu Zhao, Lu Yin, Yulong Pei, Decebal Constantin Mocanu, Zhangyang Wang, Mykola Pechenizkiy, Shiwei Liu
Published in: CoRR (2022)
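The title refers to locating well-performing subnetworks inside a randomly initialized GNN without ever updating its weights: instead of training, one searches for a binary mask over the frozen random weights. The sketch below is only a rough illustration of that general idea, not the paper's algorithm; it learns per-weight scores for a toy GCN layer with a straight-through top-k mask (in the spirit of edge-popup-style strong lottery ticket methods). All names, dimensions, and the random toy graph are assumptions for demonstration.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class MaskedGCNLayer(nn.Module):
    """GCN-style layer whose weights stay at their random initialization;
    only the per-weight scores, which define a binary mask, are trained."""
    def __init__(self, in_dim, out_dim, sparsity=0.5):
        super().__init__()
        # Frozen random weights: excluded from optimization.
        self.weight = nn.Parameter(
            torch.randn(in_dim, out_dim) * (2.0 / in_dim) ** 0.5,
            requires_grad=False)
        # Learnable scores decide which weights survive in the mask.
        self.scores = nn.Parameter(torch.rand(in_dim, out_dim))
        self.sparsity = sparsity  # fraction of weights removed

    def forward(self, x, adj_norm):
        k = max(1, int((1.0 - self.sparsity) * self.scores.numel()))
        threshold = torch.topk(self.scores.flatten(), k).values.min()
        hard_mask = (self.scores >= threshold).float()
        # Straight-through estimator: hard mask in the forward pass,
        # gradients flow to the scores in the backward pass.
        mask = hard_mask + self.scores - self.scores.detach()
        return adj_norm @ (x @ (self.weight * mask))

# Toy usage on a random graph: only the mask scores are optimized.
n, d, h, c = 100, 16, 32, 4
x = torch.randn(n, d)
adj = (torch.rand(n, n) < 0.05).float()
adj = ((adj + adj.t() + torch.eye(n)) > 0).float()  # symmetric + self-loops
deg = adj.sum(1).clamp(min=1.0)
adj_norm = deg.pow(-0.5).unsqueeze(1) * adj * deg.pow(-0.5).unsqueeze(0)

layers = nn.ModuleList([MaskedGCNLayer(d, h), MaskedGCNLayer(h, c)])
labels = torch.randint(0, c, (n,))
opt = torch.optim.Adam([p for p in layers.parameters() if p.requires_grad], lr=1e-2)

for _ in range(100):
    out = F.relu(layers[0](x, adj_norm))
    out = layers[1](out, adj_norm)
    loss = F.cross_entropy(out, labels)
    opt.zero_grad()
    loss.backward()
    opt.step()
```

Note that the optimizer only ever sees the score tensors; the weight tensors keep their initial random values throughout, which is the sense in which no weights are trained.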
Keyphrases
- neural network
- training process
- training algorithm
- feedforward neural networks
- weighted graph
- graph structure
- multi layer perceptron
- edge weights
- neural network training
- pattern recognition
- maximum clique
- feed forward neural networks
- graph representation
- graph model
- random walk
- graph theoretic
- artificial neural networks
- structured data
- recurrent networks
- backpropagation algorithm
- activation function
- training phase
- connection weights
- weight update
- training data
- hidden layer
- training set
- fuzzy logic
- linear combination
- training samples
- back propagation
- connected components
- graph theory
- directed graph
- radial basis function
- test set
- feed forward
- graph matching
- recurrent neural networks
- strongly connected
- supervised learning
- hidden neurons
- neural network structure
- graph mining
- linearly combined
- feedforward artificial neural networks