Welcome to the thread communication with MPI tutorial. If you’re new to MPI, I suggest you go back and read the previous tutorials first. Otherwise, continue on to learn basic thread communication with MPI!
1) Getting Started with MPI using Visual Studio
2) Debugging an MPI application using Visual Studio
Thread communication is a vital topic for virtually all multi-threaded applications. Since MPI is designed specifically for multi-threaded applications, it is essential to understand how threads communicate with each other. Remember that MPI applications are not limited to one machine: a single instance of an application can run across multiple processors on multiple computers. This means that we cannot rely on system memory or global variables for thread communication. Instead, we must rely on MPI functions to move data from thread to thread.
For this tutorial, we’ll be building an application which accepts an integer input by the user. The program will then compute 1 + 2 + 3 + 4 + … + n, with n being the user input. Because addition is commutative, we can assign a different portion of the work to each processor. Then, at the end of the application, each thread can send its result to the master thread, thread 0, and thread 0 will simply add the results computed by itself and all other threads. Please note that I’m aware there is a formula to quickly do this computation without the need for parallel processors, and without the need for n-1 additions. We’re going to take the computationally inefficient way of solving this specific problem, simply because this tutorial is focused on thread communication with MPI, not mathematical tricks.
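For reference, that formula is the closed form n(n+1)/2; for example, n = 5 gives 5·6/2 = 15 = 1+2+3+4+5. If you want to sanity-check the parallel result at the end, a one-line check along these lines would do (the variable name here is my own, for illustration only):

long long expected = (long long)n * (n + 1) / 2;  // closed-form sum 1+2+...+n, handy for verifying the parallel result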
Getting the user input
As in the debug tutorial, we will start off the program by waiting for user input. The threads will not continue until the user has input a valid number on thread 0.
int maxNum;
int nTasks, rank;

MPI_Init(&argc, &argv);
MPI_Comm_size(MPI_COMM_WORLD, &nTasks);
MPI_Comm_rank(MPI_COMM_WORLD, &rank);

if (rank == 0) {
    printf("Number of threads = %d\n", nTasks);
    do {
        cout << "Please input number between 0 and 50000: ";
        cin >> maxNum;
    } while (maxNum < 0 || maxNum > 50000);
}

MPI_Barrier(MPI_COMM_WORLD);
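The MPI_Barrier call at the end is what actually holds the other threads back: every thread blocks inside MPI_Barrier until all threads in the communicator have reached it. Its signature is simply:

MPI_Barrier(MPI_Comm comm);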
Broadcast the user input from thread 0 to all other threads
Next, thread 0 will have to communicate the number to all other threads, so that each thread can determine exactly which range of integers it needs to add. Because one thread needs to send a value to all other threads, the best way to accomplish this is the MPI_Bcast function.
MPI_Bcast(void *buf, int count, MPI_Datatype datatype, int root, MPI_Comm comm);
MPI_Bcast(&maxNum, 1, MPI_INT, 0, MPI_COMM_WORLD);
As you can see in the example above, we pass the address of the user input variable, maxNum, to MPI_Bcast. Because it is a single integer, the count we pass in is one, and we pass MPI_INT as the datatype being sent. The root of the broadcast is thread 0, and the message is delivered to every thread in the communicator, MPI_COMM_WORLD. After calling this function, all threads will have the user input stored in the variable maxNum.
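To appreciate what MPI_Bcast saves us, here is a rough sketch of the equivalent hand-rolled version using only point-to-point calls (for illustration only; we’ll cover MPI_Send and MPI_Recv properly below, and real MPI_Bcast implementations are typically faster anyway, since they can distribute the value in a tree pattern rather than one send at a time):

// Hand-rolled "broadcast" using point-to-point calls -- for illustration only.
if (rank == 0) {
    for (int i = 1; i < nTasks; i++) {
        MPI_Send(&maxNum, 1, MPI_INT, i, 0, MPI_COMM_WORLD);  // send to each thread in turn
    }
} else {
    MPI_Status status;
    MPI_Recv(&maxNum, 1, MPI_INT, 0, 0, MPI_COMM_WORLD, &status);  // receive from thread 0
}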
Now that all threads have the number the user input, each thread can calculate which portion of the numbers it needs to add. Since each thread has a different rank, the rank is what we use to determine which numbers each thread adds. Great care should be taken to ensure that all the numbers are added once, and only once. Failure to carefully plan which threads compute which numbers is the source of many problems in MPI programs.
int portion = maxNum / nTasks;
int startNum = rank * portion;      // calculate the starting number for this thread
int endNum = (rank + 1) * portion;  // calculate the ending number + 1 for this thread

if (rank == nTasks - 1) {
    // If we're the last thread, then we should manually set the ending number.
    // This is because when we divided maxNum by nTasks, there could have been a remainder left off.
    // By manually setting the ending number, we ensure that all numbers are properly computed.
    endNum = maxNum + 1;
}
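To make the arithmetic concrete, here is a hypothetical run with maxNum = 10 and nTasks = 4 (small values chosen purely for illustration):

// portion = 10 / 4 = 2
// rank 0: startNum = 0, endNum = 2   -> adds 0, 1
// rank 1: startNum = 2, endNum = 4   -> adds 2, 3
// rank 2: startNum = 4, endNum = 6   -> adds 4, 5
// rank 3: startNum = 6, endNum = 11  -> adds 6 through 10 (the last rank picks up the remainder)

Every integer from 0 to 10 is covered exactly once, which is exactly the property the previous paragraph warned about.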
Next, each thread can simply perform the calculation.
int total = 0;
for (int i = startNum; i < endNum; i++) {
    total += i;
}
printf("Thread %d computed %d\n", rank, total);
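One caveat worth noting: total is a plain int. That is safe here because the input is capped at 50000, and 50000 × 50001 / 2 = 1,250,025,000 still fits in a signed 32-bit integer. If you wanted to allow larger inputs, you would switch to a 64-bit accumulator and the matching MPI datatype in the sends and receives below, along these lines:

long long total = 0;  // 64-bit accumulator instead of int
// ...and later: MPI_Send(&total, 1, MPI_LONG_LONG, 0, 1, MPI_COMM_WORLD);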
Have all threads send their results to thread 0
After all of the threads, including thread 0, have computed their values, each thread will communicate that value back to thread 0. This can be accomplished with the MPI_Recv function, coupled with the MPI_Send function. Please note that these are blocking functions, meaning that when a thread starts executing one of these functions, it will not continue until the transaction is complete. Thus, it is critically important that sends are exactly matched with receives. For example, if thread 1 sends a value to thread 2, and thread 2 never posts a matching receive, thread 1 can wait forever because the send will never complete. This is a common problem which needs to be debugged during development of MPI programs. Below is the rest of the source code.
if (rank == 0) {
    // The master thread will need to receive the computations from all other threads.
    MPI_Status status;

    // MPI_Recv(void *buf, int count, MPI_Datatype datatype, int source, int tag, MPI_Comm comm, MPI_Status *status)
    // We need to go and receive the data from all other threads.
    // The arbitrary tag we choose is 1, for now.
    for (int i = 1; i < nTasks; i++) {
        int temp;
        MPI_Recv(&temp, 1, MPI_INT, i, 1, MPI_COMM_WORLD, &status);
        //printf("RECEIVED %d from thread %d\n", temp, i);
        total += temp;
    }
} else {
    // We are finished with the results in this thread, and need to send the data to thread 0.
    // MPI_Send(void *buf, int count, MPI_Datatype datatype, int dest, int tag, MPI_Comm comm)
    // The destination is thread 0, and the arbitrary tag we choose for now is 1.
    MPI_Send(&total, 1, MPI_INT, 0, 1, MPI_COMM_WORLD);
}

if (rank == 0) {
    // Display the final calculated value
    printf("The calculated value is %d\n", total);
}

MPI_Finalize();
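As an aside, MPI also provides a collective function, MPI_Reduce, that performs exactly this gather-and-sum pattern in a single call. The manual receive loop above is worth understanding, but once you are comfortable with it, the whole section could be replaced by something like the following sketch (grandTotal is a name I’ve introduced for illustration):

int grandTotal = 0;
// Sum every thread's total into grandTotal on thread 0, in one collective call.
MPI_Reduce(&total, &grandTotal, 1, MPI_INT, MPI_SUM, 0, MPI_COMM_WORLD);
if (rank == 0) {
    printf("The calculated value is %d\n", grandTotal);
}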
Wrapping up
In closing this article, there are two main concepts that you should take home with you. When one thread needs to send data to all other threads, the easiest, and fastest, way to do so is with the MPI_Bcast function. When sending data from one thread to another, the easiest way is to simply use MPI_Recv and MPI_Send. These are blocking functions, which means great care must be taken to match every MPI_Send with a corresponding MPI_Recv. It is very important that you do not rely on global variables or shared memory to communicate between threads. Even if this approach works on your machine, it may not work on other people’s machines. And even if it works for you while compiling in debug mode, it may not work in release mode. So please, when writing an MPI application, always use proper MPI functions to send data to and from threads.
Download the source code here
Next tutorial: Sending large datasets in MPI