Parallelization
Name MeshPartition
Section Execution::Parallelization
Type integer
When using METIS to perform the mesh partitioning, this
variable decides which algorithm is used. By default, graph
partitioning is used for 8 or more partitions, and recursive
coordinate bisection (rcb) for fewer.
Options:
- rcb:
Recursive coordinate bisection partitioning.
- graph:
Graph partitioning (called 'k-way' by METIS).
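For example, to force graph partitioning regardless of the
number of partitions, one could set in the input file:
  MeshPartition = graph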
Name MeshPartitionPackage
Section Execution::Parallelization
Type integer
Decides which library is used to perform the mesh partition.
By default, ParMETIS is used when available; otherwise METIS is used.
Options:
- metis:
METIS library.
- parmetis:
(experimental) Use the ParMETIS library to perform the mesh partition.
Only available if the code was compiled with ParMETIS support.
- part_hilbert:
Use the ordering along the Hilbert curve for partitioning.
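For example, to select the serial METIS library explicitly:
  MeshPartitionPackage = metis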
Name MeshPartitionStencil
Section Execution::Parallelization
Type integer
Default stencil_star
To partition the mesh, it is necessary to calculate the graph
connecting the points. This variable selects which stencil
is used to do this.
Options:
- stencil_star:
An order-one star stencil.
- laplacian:
The stencil used for the Laplacian is used to calculate the
partition. This in principle should give a better partition, but
it is slower and requires more memory.
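For example, to use the Laplacian stencil despite the extra
time and memory cost:
  MeshPartitionStencil = laplacian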
Name MeshPartitionVirtualSize
Section Execution::Parallelization
Type integer
Default size of the mesh MPI group
Gives the possibility to compute the mesh partition for a
number of nodes different from the one actually used. Note
that the code crashes afterwards.
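As an illustration (the value 16 is arbitrary), the partition
could be computed for 16 virtual nodes with:
  MeshPartitionVirtualSize = 16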
Name MeshUseTopology
Section Execution::Parallelization
Type logical
Default false
(experimental) If enabled, Octopus will use an MPI virtual
topology to map the processors. This can improve performance
for certain interconnection systems.
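For example, to try the virtual topology mapping:
  MeshUseTopology = yes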
Name ParallelizationNumberSlaves
Section Execution::Parallelization
Type integer
Default 0
Slaves are nodes used for task parallelization. The number of
such nodes is given by this variable multiplied by the number
of domains used in domain parallelization.
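As a worked example: with 4 domains, the input line
  ParallelizationNumberSlaves = 2
reserves 2 x 4 = 8 nodes as slaves.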
Name ParallelizationOfDerivatives
Section Execution::Parallelization
Type integer
Default non_blocking
This option selects how the communication of mesh boundaries is performed.
Options:
- blocking:
Blocking communication.
- non_blocking:
Communication is based on non-blocking point-to-point communication.
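For example, to fall back to blocking communication:
  ParallelizationOfDerivatives = blocking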
Name ParallelizationPoissonAllNodes
Section Execution::Parallelization
Type logical
Default true
When running in parallel, this variable selects whether the
Poisson solver should divide the work among all nodes or only
among the parallelization-in-domains groups.
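For example, to restrict the Poisson solver to the
parallelization-in-domains groups:
  ParallelizationPoissonAllNodes = no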
Name ParallelXC
Section Execution::Parallelization
Type logical
Default true
When enabled, additional parallelization
will be used for the calculation of the XC functional.
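For example, to disable this additional parallelization:
  ParallelXC = no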
Name ParDomains
Section Execution::Parallelization
Type integer
Default auto
This variable controls the number of processors used for the
parallelization in domains.
The special value auto, the default, lets Octopus
decide how many processors will be assigned for this
strategy. To disable parallelization in domains, you can use
ParDomains = no (or set the number of processors to 1).
The total number of processors required is the product
of the processors assigned to each parallelization strategy.
Options:
- auto:
The number of processors is assigned automatically.
- no:
This parallelization strategy is not used.
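For example, to assign exactly 4 processors to domain
parallelization instead of letting Octopus choose:
  ParDomains = 4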
Name ParKPoints
Section Execution::Parallelization
Type integer
Default auto
This variable controls the number of processors used for the
parallelization in K-Points and/or spin.
The special value auto lets Octopus decide how many processors will be
assigned for this strategy. To disable parallelization in
KPoints, you can use ParKPoints = no (or set the
number of processors to 1).
The total number of processors required is the product
of the processors assigned to each parallelization strategy.
Options:
- auto:
The number of processors is assigned automatically.
- no:
This parallelization strategy is not used.
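For example, to assign 2 processors to k-point/spin
parallelization:
  ParKPoints = 2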
Name ParOther
Section Execution::Parallelization
Type integer
Default auto
This variable controls the number of processors used for the
'other' parallelization mode, which depends on the
CalculationMode. For CalculationMode = casida, it means
parallelization in electron-hole pairs.
The special value auto, the default, lets Octopus decide how
many processors will be assigned for this strategy. To disable
parallelization in Other, you can use ParOther = no (or set
the number of processors to 1).
The total number of processors required is the product
of the processors assigned to each parallelization strategy.
Options:
- auto:
The number of processors is assigned automatically.
- no:
This parallelization strategy is not used.
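For example, in a casida run the electron-hole pairs could be
distributed over 2 processors (the value is illustrative):
  ParOther = 2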
Name ParStates
Section Execution::Parallelization
Type integer
This variable controls the number of processors used for the
parallelization in states. The special value auto lets
Octopus decide how many processors will be assigned for this
strategy. To disable parallelization in states, you can use
ParStates = no (or set the number of processors to 1).
The default value depends on the CalculationMode. For
CalculationMode = td the default is auto, while for other
modes the default is no.
The total number of processors required is the product
of the processors assigned to each parallelization strategy.
Options:
- auto:
The number of processors is assigned automatically.
- no:
This parallelization strategy is not used.
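As a combined sketch of the product rule, the input lines
  ParDomains = 4
  ParStates = 2
  ParKPoints = 2
require 4 x 2 x 2 = 16 processors in total.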
Name PartitionPrint
Section Execution::Parallelization
Type logical
Default true
(experimental) If disabled, Octopus will neither compute
nor print the partition information, such as local points,
number of neighbours, ghost points, and boundary points.
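For example, to skip this computation and output:
  PartitionPrint = no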
Name ReorderRanks
Section Execution::Parallelization
Type logical
Default no
This variable controls whether the MPI ranks are reordered to
give a more compact distribution for domain parallelization,
which needs to communicate most often. Depending on the system,
this can improve communication speed.
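For example, to enable the reordering:
  ReorderRanks = yes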
Name ScaLAPACKCompatible
Section Execution::Parallelization
Type logical
Whether to use a layout for states parallelization which is compatible with ScaLAPACK.
The default is yes for CalculationMode = gs, unocc, go without k-point parallelization,
and no otherwise. (Setting to other than default is experimental.)
The value must be yes if any ScaLAPACK routines are called in the course of the run;
it must be set by hand for td with TDDynamics = bo.
This variable has no effect unless you are using states parallelization and have linked ScaLAPACK.
Note: currently, use of ScaLAPACK is not compatible with task parallelization (i.e. slaves).
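For example, to enable the ScaLAPACK-compatible layout by hand
for a td run with TDDynamics = bo:
  ScaLAPACKCompatible = yes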