19.1 Introduction

Maciej Brodowicz, in High Performance Computing, 2018

MapReduce is a simple programming model for enabling distributed computations, including data processing on very large input datasets, in a highly scalable and fault-tolerant way. The concept was motivated initially by functional programming languages such as LISP, with its map and reduce primitives, and it is also closely related to the message-passing interface (MPI) concepts of scatter and reduce for distributed-memory architectures. However, unlike in MPI programming, the details of the underlying parallelization in MapReduce are hidden from the programmer, making it easier to use. MapReduce algorithms have been shown to scale from single servers all the way to hundreds of thousands of cores while delivering transparent fault tolerance to the end user.
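The map and reduce phases described above can be sketched in a few lines of Python. This is a minimal, single-process illustration of the programming model (the classic word-count example), not a distributed implementation; the function names `map_phase`, `shuffle`, and `reduce_phase` are hypothetical and chosen for clarity.

```python
from collections import defaultdict
from functools import reduce

def map_phase(documents):
    """Map: emit an intermediate (word, 1) pair for every word."""
    for doc in documents:
        for word in doc.split():
            yield (word, 1)

def shuffle(pairs):
    """Shuffle: group intermediate values by key (done by the framework
    in a real MapReduce system)."""
    groups = defaultdict(list)
    for key, value in pairs:
        groups[key].append(value)
    return groups

def reduce_phase(groups):
    """Reduce: combine the grouped values for each key, here by summing."""
    return {word: reduce(lambda a, b: a + b, counts)
            for word, counts in groups.items()}

docs = ["the quick fox", "the lazy dog", "the fox"]
counts = reduce_phase(shuffle(map_phase(docs)))
print(counts["the"])  # 3
```

In a real framework the mapper and reducer run in parallel across many machines, and the shuffle step moves intermediate pairs over the network; only the two user-supplied functions change from this sketch.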