Commit 014e9dcb authored by Kloss

update mpi tutorial

parent d4e63784
@@ -3,36 +3,63 @@
Parallelization with MPI
========================
Tkwant simulations can become computationally intensive, particularly for
large simulation times and system sizes.
In order to keep the actual computation time manageable,
tkwant is parallelized with the Message Passing Interface (MPI).
With MPI, tkwant can run in parallel, e.g. on the local computer or on a cluster.
Parallel programming, and MPI in particular, is a vast subject
that this tutorial cannot cover; we refer instead to the dedicated material
that can be found on the web.
This tutorial explains, however, the basic concepts that are sufficient to run tkwant
simulations in parallel without deeper knowledge of MPI.
This is possible since the compute-intensive routines are natively MPI parallelized in tkwant,
so that only minor changes to a simulation script are required.
We explain them in the following.

Running code in parallel with MPI
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
As an example, let us focus on a tkwant script named ``1d_wire.py``, e.g. the
one which can be found in the folder ``doc/example/`` of the tkwant repository.
To execute this script on 8 cores, the script
must be called with the following command:
::

    mpirun -n 8 python3 1d_wire.py

Enabling output only from the MPI root rank
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
Calling the script with ``python3 1d_wire.py`` runs the code in the usual serial mode.
Running a script with the prefix ``mpirun -n x`` will execute the entire
script *x* times. While tkwant is designed to benefit from this parallelization,
any output in the script, such as printing, plotting, or writing to a file,
will also be repeated *x* times. This is impractical, as e.g. a ``print()`` call in the simulation
script will print its information *x* times instead of only once.
Moreover, not all of the *x* parallel runs are equivalent:
the result of a calculation from the tkwant solvers is returned
by default only on one MPI rank, the so-called master or root rank, which has the rank index 0.
It is sufficient, however, to add a few additional lines of code that
redirect all plotting and printing to the MPI root rank, so that serial and parallel execution
lead to the same result.
As an example, we look again at the script ``1d_wire.py``
in the folder ``doc/example/`` of the tkwant repository.
In this script, a few additional lines of code
redirect all plotting and printing to the MPI root rank.
For plotting and saving the result, the following block of code can be used:

.. jupyter-execute::
@@ -48,7 +75,7 @@ only from the master rank, the following block of codes can be used:
# plot or save result
pass
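
For reference, a minimal sketch of such a master-rank guard could look as follows.
This is an illustrative snippet, not the verbatim content of ``1d_wire.py``; the helper
name ``am_master()`` is chosen here for illustration, and the snippet assumes the global
communicator returned by ``tkwant.mpi.get_communicator()`` (see the MPI communicator
section below):

::

    import tkwant

    def am_master():
        # the master (root) rank has index 0 by default
        return tkwant.mpi.get_communicator().rank == 0

    if am_master():
        # plot or save result
        pass
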
Quite similarly, printing a message only from the master rank is possible with
the following lines of code:

.. jupyter-execute::
@@ -67,6 +94,8 @@ Note the flush command to prevent buffering of the messages.
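
Along the same lines, a minimal sketch of a root-rank-only print might read as follows
(again an illustrative snippet, not the verbatim tutorial code; the ``flush`` keyword
prevents buffering of the message, as noted above):

::

    import tkwant

    if tkwant.mpi.get_communicator().rank == 0:
        # only the master (root) rank prints; flush=True prevents buffering
        print('time evolution finished', flush=True)
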
MPI communicator
~~~~~~~~~~~~~~~~
The following information is not relevant for tkwant users but is intended for
tkwant developers working with MPI.
Tkwant automatically initializes the MPI communicator if needed.
The function ``mpi.get_communicator()`` returns tkwant's global
MPI communicator, which is used by all routines by default:
@@ -93,6 +122,6 @@ communicator:
my_comm = MPI.COMM_WORLD
tkwant.mpi.communicator_init(my_comm)
Note that the MPI communicator must be set *after* importing the tkwant module and
*before* executing any tkwant code.
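
Putting the two routines together, a minimal sketch that respects this ordering could
look as follows (assuming ``MPI.COMM_WORLD`` from ``mpi4py``, as in the snippet above):

::

    from mpi4py import MPI
    import tkwant

    # set the user-defined communicator *after* importing tkwant ...
    my_comm = MPI.COMM_WORLD
    tkwant.mpi.communicator_init(my_comm)

    # ... and *before* executing any tkwant code
    comm = tkwant.mpi.get_communicator()
    print('rank {} of {} MPI processes'.format(comm.rank, comm.size), flush=True)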