Graphics Processing Units (GPUs) are becoming popular for accelerating computation-intensive workloads: a GPU offers thousands of cores for floating-point computation, which benefits machine learning algorithms that are both computation-intensive and parallelizable on the Spark platform. Spark's current execution strategy distributes a workload's computation across nodes, but only the CPUs on each node carry it out. If Spark could also use the GPUs on each node, users would benefit from shorter execution times and from needing fewer nodes in a cluster.

Today, application programmers typically write against the DataFrame APIs, while machine learning algorithms work with RDDs, which keep data across nodes for distributed computation on CPU cores. An RDD keeps its data in a Scala collection class in a row-based format, whereas the GPU's computation model achieves high performance on contiguous data in a column-based format. To enable efficient GPU computation on Spark, we present a column-based RDD that keeps data as arrays. When an operation is executed on the GPU, our implementation simply copies the data in the column-based RDD to GPU device memory, where the GPU cores execute operations quickly on the device memory; CPU cores can still execute existing functions on the column-based RDD.

In this session, we will make the following contributions to the Spark community: (1) a brief design overview of GPU exploitation that is transparent to programmers, (2) our APIs for building a GPU-accelerated library using the column-based RDD, together with the performance gains of some programs, and (3) our current work on transparent GPU code generation from the DataFrame APIs. The package for (2) is available at http://github.com/IBMSparkGPU/GPUEnabler
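
To make the column-based layout concrete, the following is a minimal Scala sketch, not the GPUEnabler API: the names Point, PointColumns, and toColumns are illustrative only. It transposes each RDD partition from row-based objects into contiguous primitive arrays, which is the form that can be copied to GPU device memory in a single host-to-device transfer per column, while CPU code can still operate on the same columnar data.

import org.apache.spark.sql.SparkSession
import org.apache.spark.rdd.RDD

// Row-based element, as an ordinary RDD would keep it.
case class Point(x: Double, y: Double)

// Column-based partition: one contiguous array per field.
case class PointColumns(xs: Array[Double], ys: Array[Double])

object ColumnarSketch {
  // Transpose each partition from rows to columns. The resulting arrays are
  // contiguous, so a GPU binding (e.g. a CUDA wrapper for the JVM) could copy
  // each column to device memory with a single transfer.
  def toColumns(rows: RDD[Point]): RDD[PointColumns] =
    rows.mapPartitions { it =>
      val buf = it.toArray
      Iterator(PointColumns(buf.map(_.x), buf.map(_.y)))
    }

  def main(args: Array[String]): Unit = {
    val spark = SparkSession.builder.appName("columnar-sketch").master("local[*]").getOrCreate()
    val points = spark.sparkContext.parallelize(Seq(Point(1.0, 2.0), Point(3.0, 4.0)), 2)

    val columnar = toColumns(points)

    // CPU cores can still run existing functions on the column-based data,
    // e.g. summing the x column across partitions.
    val sumX = columnar.map(_.xs.sum).reduce(_ + _)
    println(s"sum of x = $sumX")

    spark.stop()
  }
}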