Commit 197bbe42 authored by Kloss

move mpi doc to own section

parent 38940241
......@@ -3,7 +3,7 @@
.. module:: tkwant.mpi
MPI Helper functions
MPI helper functions
--------------------
.. autosummary::
......
Examples
========
Parallelization
---------------
Complex calculations become computationally expensive, particularly for large
simulation times and system sizes.
To keep the actual run time manageable,
tkwant is parallelized with the Message Passing Interface (MPI).
With MPI, tkwant can run in parallel, e.g. on a local computer or on a cluster.
Since compute-intensive tkwant routines are natively MPI parallelized, only
minor changes to a simulation script are required.
The additional MPI commands usually only affect plotting and printing.
They should be added to the simulation script so that it works similarly in
both serial and parallel execution.
An example can be found in the script
:download:`1d_wire.py <1d_wire.py>`. There, a few additional lines of code
redirect all plotting and printing to a single MPI rank that serves as a root.
For example, to execute the simulation script ``1d_wire.py`` on 8 cores, the script
must be called with the following command:
::

    mpirun -n 8 python3 1d_wire.py
Calling the script with ``python3 1d_wire.py`` runs the code in the usual serial mode.
Example problems
----------------
Below are several example problems, highlighting different aspects of tkwant:
:ref:`closed_system`
......
......@@ -8,4 +8,5 @@ Tutorial
manybody_advanced
boundary_condition
logging
mpi
examples
Parallelization with MPI
========================
Complex calculations become computationally expensive, particularly for large
simulation times and system sizes.
To keep the actual run time manageable,
tkwant is parallelized with the Message Passing Interface (MPI).
With MPI, tkwant can run in parallel, e.g. on a local computer or on a cluster.
Since compute-intensive tkwant routines are natively MPI parallelized, only
minor changes to a simulation script are required.
The additional MPI commands usually only affect plotting and printing.
They should be added to the simulation script so that it works similarly in
both serial and parallel execution.
An example can be found in the script
:download:`1d_wire.py <1d_wire.py>`. There, a few additional lines of code
redirect all plotting and printing to a single MPI rank that serves as a root.
For example, to execute the simulation script ``1d_wire.py`` on 8 cores, the script
must be called with the following command:
::

    mpirun -n 8 python3 1d_wire.py
Calling the script with ``python3 1d_wire.py`` runs the code in the usual serial mode.
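As an illustration of how such a guard can look, the following sketch restricts
printing to a single rank. It uses mpi4py directly and a trivial per-rank
computation as a stand-in for the actual simulation; this is an assumption for
illustration, not the exact pattern used in ``1d_wire.py``, where tkwant's own
helpers in ``tkwant.mpi`` can serve the same purpose.

::

    # Minimal sketch: restrict printing to a single MPI rank (the root).
    # Assumes mpi4py is installed; the tkwant simulation itself is replaced
    # here by a trivial per-rank computation for illustration.
    from mpi4py import MPI

    comm = MPI.COMM_WORLD

    # Each rank does some work (in a real script, this would be the tkwant
    # simulation, whose compute-intensive routines are MPI parallelized
    # internally and need no changes).
    local_value = comm.rank ** 2

    # Collect the per-rank results on rank 0 and let only that rank print.
    values = comm.gather(local_value, root=0)
    if comm.rank == 0:
        print("gathered values from all ranks:", values)

The same script runs in serial with ``python3`` and in parallel with ``mpirun``;
in both cases only a single line of output is produced.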