Parallel growing and training of neural networks using output parallelism.
Sheng-Uei Guan, Shanchun Li
Published in: IEEE Trans. Neural Networks (2002)
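The paper's central idea, output parallelism, decomposes a K-class problem into K smaller sub-problems, each handled by its own sub-network that can be grown and trained independently of the others, and therefore in parallel. The sketch below illustrates only that general decompose-train-combine pattern, not the authors' exact growing algorithm: the names (`OutputModule`, `train_output_parallel`) are hypothetical, and training here is plain batch error back-propagation.

```python
# A minimal sketch of output parallelism (illustrative, not the paper's
# exact method): one small feed-forward module per output class, each
# trained on its own 1-vs-rest sub-problem; the K trainings are mutually
# independent, so they can be dispatched to parallel workers.
import numpy as np
from concurrent.futures import ThreadPoolExecutor

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

class OutputModule:
    """One sub-network responsible for a single output class (hypothetical name)."""
    def __init__(self, n_in, n_hidden, rng):
        self.W1 = rng.normal(0.0, 0.5, (n_in, n_hidden))
        self.W2 = rng.normal(0.0, 0.5, (n_hidden, 1))

    def forward(self, X):
        self.h = sigmoid(X @ self.W1)          # hidden activations, cached for backprop
        return sigmoid(self.h @ self.W2)       # single output unit for this class

    def train(self, X, y, lr=0.5, epochs=500):
        # Plain batch error back-propagation on this module's binary sub-problem.
        for _ in range(epochs):
            out = self.forward(X)                           # (n, 1)
            d2 = (out - y) * out * (1 - out)                # output-layer delta
            d1 = (d2 @ self.W2.T) * self.h * (1 - self.h)   # hidden-layer delta
            self.W2 -= lr * self.h.T @ d2 / len(X)
            self.W1 -= lr * X.T @ d1 / len(X)
        return self

def train_output_parallel(X, labels, n_classes, n_hidden=8, workers=4):
    rng = np.random.default_rng(0)
    modules = [OutputModule(X.shape[1], n_hidden, rng) for _ in range(n_classes)]
    # 1-vs-rest target for each class; modules share nothing, so each
    # (module, target) pair can be trained by a separate worker.
    targets = [(labels == k).astype(float).reshape(-1, 1) for k in range(n_classes)]
    with ThreadPoolExecutor(max_workers=workers) as pool:
        list(pool.map(lambda mt: mt[0].train(X, mt[1]), zip(modules, targets)))
    return modules

def predict(modules, X):
    # Combine module outputs: the class whose module fires strongest wins.
    scores = np.hstack([m.forward(X) for m in modules])
    return scores.argmax(axis=1)

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    # Toy 3-class data: three Gaussian blobs in 2-D.
    X = np.vstack([rng.normal(c, 0.3, (50, 2)) for c in ([0, 0], [2, 0], [1, 2])])
    y = np.repeat(np.arange(3), 50)
    mods = train_output_parallel(X, y, n_classes=3)
    print("training accuracy:", (predict(mods, X) == y).mean())
```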
Keyphrases
- neural network
- training process
- shared memory
- parallel processing
- training algorithm
- parallel computation
- feed-forward neural networks
- backpropagation algorithm
- feedforward neural networks
- parallel execution
- data parallelism
- genetic algorithm
- pattern recognition
- multi-layer perceptron
- back-propagation
- massively parallel
- parallel computing
- training set
- hidden units
- parallel architectures
- artificial neural networks
- neural network training
- parallel implementation
- level parallelism
- recurrent networks
- distributed memory
- input data
- self-organizing maps
- activation function
- parallel algorithm
- hidden layer
- neural network structure
- parallel programming
- coarse-grain
- error back-propagation
- multicore processors
- number of hidden units
- fine-grain
- fuzzy logic
- parallel architecture
- learning rate
- online learning
- test set
- radial basis function
- feed-forward
- rbf network
- recurrent neural networks
- multi-layer