Polynomial differentiation decreases the training time complexity of physics-informed neural networks and strengthens their approximation power.
Juan Esteban Suarez Cardona, Michael Hecht
Published in: Mach. Learn. Sci. Technol. (2023)
Keyphrases
- neural network
- training process
- approximation error
- training algorithm
- feed forward neural networks
- backpropagation algorithm
- neural network training
- computer science
- worst case
- polynomial hierarchy
- back propagation
- randomized approximation
- closed form
- artificial intelligence
- genetic algorithm
- multi layer perceptron
- error bounds
- error tolerance
- increase in computational complexity
- pattern recognition
- computational complexity
- neural network model
- power consumption
- multi layer
- vapnik chervonenkis dimension
- artificial neural networks
- neural network structure
- computational cost
- supervised learning
- training samples
- test set
- decision problems
- feed forward
- recurrent neural networks
- training patterns
- training examples
- fuzzy neural network
- space complexity
- hidden layer
- smooth functions