More Information
Big Data processing has gained great popularity in the last several years. A prominent example is MapReduce, a programming model originally developed by Google for processing and generating large data sets over a cluster of commodity machines. Apache Hadoop is an open-source, Java-based project that is considered the de facto standard MapReduce implementation.
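To illustrate the programming model, the sketch below shows the canonical word count application written against Hadoop's Java MapReduce API, closely following the standard Hadoop tutorial example: the mapper emits a (word, 1) pair for every word in its input, and the reducer sums the counts received for each word. Input and output paths are taken from the command line.

```java
import java.io.IOException;
import java.util.StringTokenizer;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.Mapper;
import org.apache.hadoop.mapreduce.Reducer;
import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;

public class WordCount {

  // Mapper: emits a (word, 1) pair for every word in the input line
  public static class TokenizerMapper extends Mapper<Object, Text, Text, IntWritable> {
    private static final IntWritable ONE = new IntWritable(1);
    private final Text word = new Text();

    @Override
    public void map(Object key, Text value, Context context)
        throws IOException, InterruptedException {
      StringTokenizer itr = new StringTokenizer(value.toString());
      while (itr.hasMoreTokens()) {
        word.set(itr.nextToken());
        context.write(word, ONE);
      }
    }
  }

  // Reducer: sums all the counts received for the same word
  public static class IntSumReducer extends Reducer<Text, IntWritable, Text, IntWritable> {
    private final IntWritable result = new IntWritable();

    @Override
    public void reduce(Text key, Iterable<IntWritable> values, Context context)
        throws IOException, InterruptedException {
      int sum = 0;
      for (IntWritable val : values) {
        sum += val.get();
      }
      result.set(sum);
      context.write(key, result);
    }
  }

  public static void main(String[] args) throws Exception {
    Job job = Job.getInstance(new Configuration(), "word count");
    job.setJarByClass(WordCount.class);
    job.setMapperClass(TokenizerMapper.class);
    job.setCombinerClass(IntSumReducer.class);   // local aggregation before the shuffle
    job.setReducerClass(IntSumReducer.class);
    job.setOutputKeyClass(Text.class);
    job.setOutputValueClass(IntWritable.class);
    FileInputFormat.addInputPath(job, new Path(args[0]));    // HDFS input directory
    FileOutputFormat.setOutputPath(job, new Path(args[1]));  // HDFS output directory
    System.exit(job.waitForCompletion(true) ? 0 : 1);
  }
}
```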
HPC-oriented MapReduce frameworks
Hadoop is oriented to commodity hardware clusters, which prevents it from fully leveraging the high-performance resources typically available in High Performance Computing (HPC) systems, such as Solid State Drives (SSDs) or InfiniBand networks. This situation has led to the emergence of many MapReduce frameworks oriented to HPC systems. The suitability of each framework to a particular cluster depends on its design and implementation, the underlying system resources and the type of application to be run. Therefore, choosing among these solutions generally involves running multiple experiments in order to assess their performance, scalability and resource efficiency. These frameworks can be divided into three categories: transparent solutions, modifications of Hadoop and implementations from scratch.
Transparent solutions only involve changing the configuration of Hadoop. One of these approaches modifies the network configuration to use the IP over InfiniBand (IPoIB) interface instead of the Ethernet one. Another transparent solution for Hadoop is Mellanox UDA, a plugin that accelerates the communications between mappers and reducers. Based on the network levitated merge algorithm [1], it optimizes the original overlapping of Hadoop phases and uses Remote Direct Memory Access (RDMA) communications.
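As a rough sketch of the IPoIB approach (not a prescription from the text above), the idea is simply that the endpoints Hadoop clients and daemons connect to resolve to IPoIB addresses instead of Ethernet ones. The hypothetical snippet below shows this from the client side using Hadoop's Configuration API, where node0-ib0 is an assumed hostname bound to the NameNode's IPoIB interface; in a real deployment this is normally set cluster-wide in core-site.xml and the workers file rather than programmatically.

```java
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;

public class IpoibClientExample {
  public static void main(String[] args) throws Exception {
    Configuration conf = new Configuration();
    // "node0-ib0" is a hypothetical hostname that resolves to the NameNode's
    // IPoIB address; HDFS traffic then flows over InfiniBand instead of Ethernet.
    conf.set("fs.defaultFS", "hdfs://node0-ib0:8020");

    // The rest of the client code is unchanged: the same HDFS API is used.
    try (FileSystem fs = FileSystem.get(conf)) {
      System.out.println("HDFS home directory: " + fs.getHomeDirectory());
    }
  }
}
```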
Currently, the most popular modification of Hadoop is RDMA-Hadoop, which is developed within the High Performance Big Data (HiBD) project and includes RDMA-based implementations of Hadoop 1, 2 and 3. RDMA-Hadoop adapts several Hadoop components to use RDMA communications: HDFS [2], MapReduce [3] and Hadoop RPC [4]. The communications between mappers and reducers are redesigned to take full advantage of the RDMA interconnect, while also implementing data prefetching and caching mechanisms.
DataMPI is the most representative framework developed from scratch. It aims to take advantage of the Message Passing Interface (MPI) and to avoid the use of traditional TCP/IP communications. To do so, it implements the MPI-D specification [5], a proposed extension of the MPI standard that uses the (key, value) semantics derived from the MapReduce model.
New Big Data frameworks
New frameworks have been developed to overcome the limitations of Hadoop. By redesigning its architecture and implementation, they are able to increase the performance of the workloads. These new solutions widen the range of operations that can be applied to the data being processed, while still supporting traditional MapReduce algorithms.
Apache Spark is the most representative example of this kind of framework. It outperforms Hadoop in most use cases thanks to its use of in-memory data processing [6]. This kind of processing is especially suited for iterative applications, which perform more than one transformation over the same data set. Spark optimizes the performance of these workloads by avoiding the use of HDFS as an intermediate storage system.
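A minimal sketch of this behaviour with the Spark 2.x+ Java API (file path and iteration count are illustrative): the input is read and tokenized once, kept in memory with cache(), and then reused by several actions without rereading it from HDFS or materializing intermediate results there.

```java
import java.util.Arrays;

import org.apache.spark.SparkConf;
import org.apache.spark.api.java.JavaRDD;
import org.apache.spark.api.java.JavaSparkContext;

public class IterativeExample {
  public static void main(String[] args) {
    SparkConf conf = new SparkConf()
        .setAppName("iterative-example")
        .setMaster("local[*]");  // local run for illustration; on a cluster the master is set by spark-submit
    try (JavaSparkContext sc = new JavaSparkContext(conf)) {
      // Load the input once and keep the tokenized data set in memory.
      JavaRDD<String> words = sc.textFile("hdfs:///data/input")   // illustrative path
          .flatMap(line -> Arrays.asList(line.split("\\s+")).iterator())
          .cache();

      // Each iteration reuses the cached RDD instead of rereading HDFS
      // or writing intermediate results back to it.
      for (int i = 0; i < 5; i++) {
        final int minLength = i;
        long count = words.filter(w -> w.length() > minLength).count();
        System.out.println("Words longer than " + minLength + ": " + count);
      }
    }
  }
}
```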
Another Big Data framework, Apache Flink, goes one step further in the pursuit of performance. Apart from performing in-memory data transformations like Spark, it manages memory usage efficiently and applies query optimization techniques to reduce expensive operations such as shuffles and sorts.
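For comparison, a minimal sketch with Flink's batch DataSet Java API (the input path is illustrative): the job is expressed with declarative operators (flatMap, groupBy, sum), which lets Flink's optimizer choose the execution strategies for grouping and aggregation and run them inside its managed memory.

```java
import org.apache.flink.api.common.functions.FlatMapFunction;
import org.apache.flink.api.java.DataSet;
import org.apache.flink.api.java.ExecutionEnvironment;
import org.apache.flink.api.java.tuple.Tuple2;
import org.apache.flink.util.Collector;

public class FlinkWordCount {
  public static void main(String[] args) throws Exception {
    ExecutionEnvironment env = ExecutionEnvironment.getExecutionEnvironment();

    DataSet<String> text = env.readTextFile("hdfs:///data/input");  // illustrative path

    DataSet<Tuple2<String, Integer>> counts = text
        // split each line into (word, 1) tuples
        .flatMap(new FlatMapFunction<String, Tuple2<String, Integer>>() {
          @Override
          public void flatMap(String line, Collector<Tuple2<String, Integer>> out) {
            for (String word : line.split("\\s+")) {
              out.collect(new Tuple2<>(word, 1));
            }
          }
        })
        // group by the word field and sum the counts; the optimizer selects
        // the grouping strategy and executes it in Flink's managed memory
        .groupBy(0)
        .sum(1);

    counts.print();  // triggers execution and prints the result
  }
}
```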
References
- [1] Yandong Wang, Xinyu Que, Weikuan Yu, Dror Goldenberg, and Dhiraj Sehgal. Hadoop acceleration through network levitated merge. In Proceedings of the International Conference for High Performance Computing, Networking, Storage and Analysis (SC’11), pages 57:1–57:10. Seattle, WA, USA, 2011.
- [2] Nusrat S. Islam, Md Wasi-Ur-Rahman, Jithin Jose, Raghunath Rajachandrasekar, Hao Wang, Hari Subramoni, Chet Murthy, and Dhabaleswar K. Panda. High performance RDMA-based design of HDFS over InfiniBand. In Proceedings of the International Conference for High Performance Computing, Networking, Storage and Analysis (SC’12), pages 35:1–35:12. Salt Lake City, UT, USA, 2012.
- [3] Md Wasi-Ur-Rahman, Nusrat S. Islam, Xiaoyi Lu, Jithin Jose, Hari Subramoni, Hao Wang, and Dhabaleswar K. Panda. High-Performance RDMA-based design of Hadoop MapReduce over InfiniBand. In Proceedings of the 27th IEEE International Parallel and Distributed Processing Symposium Workshops and PhD Forum (IPDPSW’13), pages 1908–1917. Boston, MA, USA, 2013.
- [4] Xiaoyi Lu, Nusrat S. Islam, Md Wasi-Ur-Rahman, Jithin Jose, Hari Subramoni, Hao Wang, and Dhabaleswar K. Panda. High-Performance design of Hadoop RPC with RDMA over InfiniBand. In Proceedings of the 42nd International Conference on Parallel Processing (ICPP’13), pages 641–650. Lyon, France, 2013.
- [5] Xiaoyi Lu, Bing Wang, Li Zha, and Zhiwei Xu. Can MPI benefit Hadoop and MapReduce applications? In Proceedings of the 7th International Workshop on Scheduling and Resource Management for Parallel and Distributed Systems (SRMPDS’11), pages 371–379. Taipei, Taiwan, 2011.
- [6] Matei Zaharia, Mosharaf Chowdhury, Michael J. Franklin, Scott Shenker, and Ion Stoica. Spark: cluster computing with working sets. In Proceedings of the 2nd USENIX Workshop on Hot Topics in Cloud Computing (HotCloud'10), pages 10:1–10:7. Boston, MA, USA, 2010.