Conjugate gradient solvers on Intel Xeon Phi and NVIDIA GPUs.
Olaf Kaczmarek, Christian Schmidt, P. Steinbrecher, M. Wagner. Published in: CoRR (2014)
Keyphrases
- conjugate gradient
- intel xeon
- graphics processing units
- high performance computing
- general purpose
- convergence rate
- massively parallel
- parallel computing
- computing systems
- faster convergence
- computing environments
- conjugate gradient algorithm
- real valued
- least squares
- search space
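The central keyphrase above, the conjugate gradient method, is an iterative solver for linear systems A x = b with a symmetric positive-definite matrix A, and is the workhorse the paper ports to Xeon Phi and GPUs. As a minimal illustration (not the paper's optimized implementation, which targets sparse lattice operators and accelerator memory hierarchies), a textbook dense-matrix CG sketch in plain Python:

```python
def conjugate_gradient(A, b, tol=1e-10, max_iter=100):
    """Solve A x = b for a symmetric positive-definite A.

    A is a dense list-of-lists matrix, b a list; a toy sketch only --
    production solvers use sparse matrix-vector products on accelerators.
    """
    n = len(b)
    x = [0.0] * n
    r = b[:]            # residual r = b - A x; x starts at zero
    p = r[:]            # initial search direction
    rs_old = sum(ri * ri for ri in r)
    for _ in range(max_iter):
        # matrix-vector product A p: the dominant cost, and the part
        # that is parallelized on Xeon Phi / GPU hardware
        Ap = [sum(A[i][j] * p[j] for j in range(n)) for i in range(n)]
        alpha = rs_old / sum(p[i] * Ap[i] for i in range(n))
        x = [x[i] + alpha * p[i] for i in range(n)]
        r = [r[i] - alpha * Ap[i] for i in range(n)]
        rs_new = sum(ri * ri for ri in r)
        if rs_new ** 0.5 < tol:        # converged: residual norm small
            break
        # new direction, conjugate to the previous ones
        p = [r[i] + (rs_new / rs_old) * p[i] for i in range(n)]
        rs_old = rs_new
    return x

A = [[4.0, 1.0], [1.0, 3.0]]
b = [1.0, 2.0]
x = conjugate_gradient(A, b)   # exact solution is [1/11, 7/11]
```

In exact arithmetic CG converges in at most n iterations; in practice the convergence rate depends on the condition number of A, which is why preconditioning and mixed-precision variants matter on accelerators.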