Conference Paper

Map Reduce for Machine Learning Algorithms on GPUs

GPUs are fast replacing CPUs for data-parallel operations. GPUs have long been used for gaming and rendering applications; their immense computational capability has recently been harnessed for general-purpose applications as well. With the release of CUDA by Nvidia, GPUs are being used for compute-intensive applications like never before. In this paper, we bring together two milestones in parallel and distributed computing, GPGPUs and the map-reduce programming model, to solve machine learning problems. The parallel nature of machine learning algorithms has not been explored much. We propose a map-reduce model, on GPGPUs, for some popular machine learning algorithms that fit the statistical query model. We envision that the extremely parallel nature of GPUs can provide considerable speed-up to these machine learning algorithms.
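As an illustration of the decomposition the abstract describes, here is a minimal sketch of K-means (one of the paper's keyword algorithms) expressed in map-reduce form: the map phase assigns each point to its nearest centroid, and the reduce phase averages the points in each cluster. This is an illustrative CPU-side sketch, not the paper's GPU implementation; all function names are ours, and it assumes Euclidean distance and a fixed iteration count.

```python
from math import dist  # Euclidean distance, Python 3.8+


def kmeans_map(points, centroids):
    """Map phase: emit (nearest-centroid index, point) for each point.

    On a GPU, each point's assignment is an independent data-parallel task.
    """
    for p in points:
        idx = min(range(len(centroids)), key=lambda i: dist(p, centroids[i]))
        yield idx, p


def kmeans_reduce(pairs, centroids):
    """Reduce phase: average the points assigned to each centroid.

    Keeps the old centroid when a cluster receives no points.
    """
    k, dims = len(centroids), len(centroids[0])
    sums = [[0.0] * dims for _ in range(k)]
    counts = [0] * k
    for idx, p in pairs:
        counts[idx] += 1
        for d in range(dims):
            sums[idx][d] += p[d]
    return [tuple(s / counts[i] for s in sums[i]) if counts[i] else centroids[i]
            for i in range(k)]


def kmeans(points, centroids, iterations=10):
    """Alternate map and reduce for a fixed number of iterations."""
    for _ in range(iterations):
        centroids = kmeans_reduce(kmeans_map(points, centroids), centroids)
    return centroids
```

The statistical-query structure is what makes this mapping natural: each iteration only needs sums and counts over the data, which are exactly the associative aggregations a reduce phase computes.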

Map Reduce; GPGPUs; Machine Learning; K-means; CUDA; Phoenix; MARS

Aparna Sasidharan, Harshit Kharbanda

School of Computer Science and Engineering, VIT University, Vellore, India

International Conference

2010 International Conference on Information Security and Artificial Intelligence (ISAI 2010)

Chengdu

English

pp. 970-974

2010-12-17 (date the paper first appeared on the Wanfang platform; not necessarily the publication date)