A new hybrid GPU-CPU sparse LDL^T factorization algorithm with GPU and CPU factorizing concurrently.
Yunmou Liu, Hui Du, Zhuogen Li, Pu Chen
Published in: J. Comput. Sci. (2024)
Keyphrases
- floating point
- graphics processing units
- memory bandwidth
- gpu implementation
- graphics processors
- graphics hardware
- sparse matrix
- parallel computation
- compute unified device architecture
- data sets
- parallel architectures
- sparse data
- processing units
- linear combination
- scientific computing
- general purpose
- gpu accelerated
- real time
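For context, LDL^T factorization decomposes a symmetric matrix A as A = L D L^T, with L unit lower triangular and D diagonal. The sketch below is a minimal dense, sequential Python illustration of that decomposition, assuming no pivoting is needed; it shows only the factorization named in the title, not the paper's hybrid GPU-CPU sparse algorithm.

```python
import numpy as np

def ldlt(A):
    """Dense LDL^T factorization of a symmetric matrix A.

    Returns (L, d) with L unit lower triangular and d the diagonal
    of D, so that A == L @ np.diag(d) @ L.T. No pivoting: assumes
    all leading principal minors of A are nonzero.
    """
    A = np.asarray(A, dtype=float)
    n = A.shape[0]
    L = np.eye(n)
    d = np.zeros(n)
    for j in range(n):
        # Diagonal entry: remove contributions of the previous columns.
        d[j] = A[j, j] - (L[j, :j] ** 2) @ d[:j]
        # Entries of column j below the diagonal.
        for i in range(j + 1, n):
            L[i, j] = (A[i, j] - (L[i, :j] * L[j, :j]) @ d[:j]) / d[j]
    return L, d

A = np.array([[4.0, 2.0], [2.0, 3.0]])
L, d = ldlt(A)
assert np.allclose(L @ np.diag(d) @ L.T, A)
```

A sparse solver follows the same column-by-column dependency structure, which is what makes it possible to partition columns between CPU and GPU and factorize the two parts concurrently, as the title describes.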