Adrian Riekert
Publication Activity (10 Years)
Years Active: 2020-2024
Publications (10 Years): 15
Top Topics
Neural Network
Convergence Rate
Convergence Proof
Optimization Method
Top Venues
CoRR
J. Mach. Learn. Res.
J. Complex.
Publications
Arnulf Jentzen, Adrian Riekert: Non-convergence to global minimizers for Adam and stochastic gradient descent optimization and constructions of local minimizers in the training of artificial neural networks. CoRR (2024)
Steffen Dereich, Arnulf Jentzen, Adrian Riekert: Learning rate adaptive stochastic gradient descent optimization methods: numerical simulations for deep learning methods for partial differential equations and convergence analyses. CoRR (2024)
Arnulf Jentzen, Adrian Riekert, Philippe von Wurstemberger: Algorithmically Designed Artificial Neural Networks (ADANNs): Higher order deep operator learning for parametric partial differential equations. CoRR (2023)
Adrian Riekert: Deep neural network approximation of composite functions without the curse of dimensionality. CoRR (2023)
Simon Eberle, Arnulf Jentzen, Adrian Riekert, Georg S. Weiss: Normalized gradient flow optimization in the training of ReLU artificial neural networks. CoRR (2022)
Patrick Cheridito, Arnulf Jentzen, Adrian Riekert, Florian Rossmannek: A proof of convergence for gradient descent in the training of artificial neural networks for constant target functions. J. Complex. 72 (2022)
Arnulf Jentzen, Adrian Riekert: A proof of convergence for the gradient descent optimization method with random initializations in the training of neural networks with ReLU activation for piecewise linear target functions. J. Mach. Learn. Res. 23 (2022)
Arnulf Jentzen, Adrian Riekert: A proof of convergence for stochastic gradient descent in the training of artificial neural networks with ReLU activation for constant target functions. CoRR (2021)
Arnulf Jentzen, Adrian Riekert: On the existence of global minima and convergence analyses for gradient descent methods in the training of deep neural networks. CoRR (2021)
Patrick Cheridito, Arnulf Jentzen, Adrian Riekert, Florian Rossmannek: A proof of convergence for gradient descent in the training of artificial neural networks for constant target functions. CoRR (2021)
Simon Eberle, Arnulf Jentzen, Adrian Riekert, Georg S. Weiss: Existence, uniqueness, and convergence rates for gradient flows in the training of artificial neural networks with ReLU activation. CoRR (2021)
Arnulf Jentzen, Adrian Riekert: Convergence analysis for gradient flows in the training of artificial neural networks with ReLU activation. CoRR (2021)
Martin Hutzenthaler, Arnulf Jentzen, Katharina Pohl, Adrian Riekert, Luca Scarpa: Convergence proof for stochastic gradient descent in the training of deep neural networks with ReLU activation for constant target functions. CoRR (2021)
Arnulf Jentzen, Adrian Riekert: A proof of convergence for the gradient descent optimization method with random initializations in the training of neural networks with ReLU activation for piecewise linear target functions. CoRR (2021)
Arnulf Jentzen, Adrian Riekert: Strong overall error analysis for the training of artificial neural networks via random initializations. CoRR (2020)