Waiting for other processors to finish their tasks in MPI

Using MPI, how do you wait for threads to finish?

For example:

    for (int i = localstart; i < localend; i++) {
        // do stuff that is computationally intensive
    }
    // I need to wait for all other threads to finish here
    if (rank == 0)
        do_something();

Accepted answer


If by threads you meant processes/ranks, then the answer is MPI_Barrier.

But look at the other collective operations too: they might make sense in your application, and offer better performance than hand-coding communication. For example, you could use MPI_Allgather to communicate all data to all ranks, and so on.

If you meant threads (like pthreads), then you'd have to use whatever the threading library offers.
