Toward a Generic Hybrid CPU-GPU Parallelization of Divide-and-Conquer Algorithms.
Alejandro López-Ortiz, Alejandro Salinger, Robert Suderman. Published in: Int. J. Netw. Comput. (2014)
Keyphrases
- learning algorithm
- theoretical analysis
- recently developed
- orders of magnitude
- significant improvement
- computational cost
- data sets
- computational efficiency
- computationally efficient
- machine learning
- worst case
- data streams
- machine learning algorithms
- decision trees
- combinatorial optimization
- real time
- graphics processing units
- limited memory
- gpu implementation
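The title describes a divide-and-conquer scheme in which subproblems are split across CPU threads and handed off to a GPU below some cutoff size. The paper's actual algorithm is not reproduced in this entry, so the following is only a minimal Python sketch of that general pattern, using mergesort as the running example; `hybrid_sort`, `accelerator_base_case`, and `THRESHOLD` are illustrative names, and a sequential sort stands in for a real GPU kernel.

```python
import threading

# Hypothetical cutoff: below this size a subproblem is handed to the
# "accelerator" base case instead of being split further.
THRESHOLD = 8

def accelerator_base_case(xs):
    # Stand-in for a GPU kernel: sorts a small block sequentially.
    return sorted(xs)

def merge(a, b):
    # Standard two-way merge of two sorted lists.
    out, i, j = [], 0, 0
    while i < len(a) and j < len(b):
        if a[i] <= b[j]:
            out.append(a[i]); i += 1
        else:
            out.append(b[j]); j += 1
    return out + a[i:] + b[j:]

def hybrid_sort(xs):
    # Divide-and-conquer: small inputs go to the accelerator stand-in;
    # larger inputs are split, with the left half conquered on a
    # separate CPU thread and the right half on the current thread.
    if len(xs) <= THRESHOLD:
        return accelerator_base_case(xs)
    mid = len(xs) // 2
    left_result = {}
    t = threading.Thread(
        target=lambda: left_result.update(value=hybrid_sort(xs[:mid])))
    t.start()
    right = hybrid_sort(xs[mid:])
    t.join()
    return merge(left_result["value"], right)
```

Spawning one thread per left branch (rather than sharing a bounded pool) keeps the sketch deadlock-free, since no worker ever blocks waiting for a task that cannot be scheduled.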