The design philosophy of the Parallel MPI library is very simple: be both convenient
and efficient. MPI is a library built for high-performance applications, but
its FORTRAN-centric, performance-minded design makes it rather inflexible
from the C++ point of view: passing a string from one process to another is
inconvenient, requiring several messages and explicit buffering; passing a
container of strings from one process to another requires an extra level of
manual bookkeeping; and passing a map from strings to containers of strings
is positively infuriating. The Parallel MPI library allows all of these data types to be passed using the same simple send() and recv() primitives. Likewise, collective operations such as reduce() allow arbitrary data types and function objects, much like the C++ Standard Library would.
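To make this concrete, here is a minimal sketch of a two-process program (the ranks, message tags, and data are chosen arbitrarily for illustration) that transmits a string and a map from strings to vectors of strings with the same send()/recv() calls one would use for a plain int:

#include <boost/mpi.hpp>
#include <boost/serialization/map.hpp>
#include <boost/serialization/string.hpp>
#include <boost/serialization/vector.hpp>
#include <iostream>
#include <map>
#include <string>
#include <vector>

namespace mpi = boost::mpi;

int main(int argc, char* argv[])
{
  mpi::environment env(argc, argv);
  mpi::communicator world;

  if (world.rank() == 0) {
    // One send() call per object: no manual buffering or bookkeeping.
    std::string greeting = "Hello from rank 0";
    std::map<std::string, std::vector<std::string> > index;
    index["fruits"].push_back("apple");
    index["fruits"].push_back("pear");
    world.send(1, 0, greeting);
    world.send(1, 1, index);
  } else if (world.rank() == 1) {
    std::string greeting;
    std::map<std::string, std::vector<std::string> > index;
    world.recv(0, 0, greeting);
    world.recv(0, 1, index);
    std::cout << greeting << ": received "
              << index["fruits"].size() << " strings\n";
  }
  return 0;
}

The program is meant to run with at least two processes (e.g. mpirun -np 2); the boost/serialization headers pull in the support needed to transmit each standard container.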
The higher-level abstractions provided for convenience must not have an impact on the performance of the application. For instance, sending an integer via send must be as efficient as a call to MPI_Send, which means that it must be implemented by a simple call to MPI_Send; likewise, an integer reduce() using std::plus<int> must be implemented with a call to MPI_Reduce on integers using the MPI_SUM operation: anything less will impact performance. In essence, this is the "don't pay for what you don't use" principle: if the user is not transmitting strings, they should not pay the overhead associated with strings.
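The following sketch shows this case (the choice of root process 0 and the rank-sum computation are illustrative): each process contributes one integer, and the std::plus<int> function object lets the library lower the whole operation to a single MPI_Reduce on ints with MPI_SUM:

#include <boost/mpi.hpp>
#include <functional>
#include <iostream>

namespace mpi = boost::mpi;

int main(int argc, char* argv[])
{
  mpi::environment env(argc, argv);
  mpi::communicator world;

  // Each process contributes its rank; with std::plus<int> this is
  // equivalent to MPI_Reduce(..., MPI_INT, MPI_SUM, 0, ...).
  int sum = 0;
  mpi::reduce(world, world.rank(), sum, std::plus<int>(), 0);

  if (world.rank() == 0)
    std::cout << "Sum of all ranks: " << sum << "\n";
  return 0;
}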
Sometimes, achieving maximal performance means forgoing convenient abstractions and implementing certain functionality using lower-level primitives. For this reason, it is always possible to extract enough information from the abstractions in Boost.MPI to minimize the amount of effort required to interface between Boost.MPI and the C MPI library.
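As one example of this escape hatch, a communicator converts to the underlying MPI_Comm, so C MPI routines can be applied to it directly; in the sketch below, MPI_Comm_rank is just an arbitrary stand-in for whatever lower-level call is needed:

#include <boost/mpi.hpp>
#include <mpi.h>

namespace mpi = boost::mpi;

int main(int argc, char* argv[])
{
  mpi::environment env(argc, argv);
  mpi::communicator world;

  // boost::mpi::communicator is convertible to MPI_Comm, so the
  // C API can take over wherever the abstractions are too costly.
  int rank;
  MPI_Comm_rank(static_cast<MPI_Comm>(world), &rank);
  return 0;
}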