Abstract
Apache Spark has become one of the most popular engines for big data processing. By building on the Java platform, Spark provides a platform-independent, high-abstraction programming paradigm for large-scale data processing. While Java gives Spark software portability across machines, it also limits the performance of distributed environments such as Spark. Rewriting a platform like Spark in a faster language may be unrealistic; a more viable way to mitigate the performance penalty is to accelerate the computations while remaining within the Java-based framework. This paper demonstrates the feasibility of incorporating FPGA acceleration into Spark, using a MapReduce implementation of the k-means clustering algorithm to show that acceleration is possible even on a hardware platform that is not well optimized for performance. An important feature of our approach is that the use of FPGAs is completely transparent to the user: acceleration is exposed through library functions, the common way users access functionality in Spark. Power users can develop further accelerated computations using high-level synthesis.
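To make the delivery model concrete, below is a minimal Scala sketch of the user-facing pattern the abstract describes; it is illustrative, not code from the paper. The application invokes an ordinary Spark MLlib library call (`KMeans.train`), and an FPGA-backed drop-in replacement for such a library could offload the map-side distance computations in hardware without any change to this user code. The input path and parameter values are assumptions for the example.

```scala
// Minimal sketch of transparent, library-level acceleration in Spark.
// The standard MLlib KMeans stands in here for an FPGA-backed library
// with the same interface; the caller cannot tell where the work runs.
import org.apache.spark.{SparkConf, SparkContext}
import org.apache.spark.mllib.clustering.KMeans
import org.apache.spark.mllib.linalg.Vectors

object TransparentKMeans {
  def main(args: Array[String]): Unit = {
    val sc = new SparkContext(new SparkConf().setAppName("kmeans-example"))

    // One point per line, space-separated coordinates (path is illustrative).
    val points = sc.textFile("hdfs:///data/points.txt")
      .map(line => Vectors.dense(line.split(' ').map(_.toDouble)))
      .cache()

    // The user sees only this library call; an accelerated implementation
    // could dispatch the per-point distance computations to an FPGA kernel
    // (or fall back to the JVM) inside the library, invisibly to the caller.
    val model = KMeans.train(points, 8, 20) // k = 8, maxIterations = 20

    model.clusterCenters.foreach(println)
    sc.stop()
  }
}
```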