FromScratch
Section: Execution
Type: logical
Default: false
When this variable is set to true, Octopus will perform a
calculation from the beginning, without looking for restart
information.
NOTE: If available, mesh partitioning information will be used to
initialize the calculation, regardless of the value of this variable.
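For example, to ignore any existing restart information:
FromScratch = yes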
AccelBenchmark
Section: Execution::Accel
Type: logical
Default: no
If this variable is set to yes, Octopus will run some
routines to benchmark the performance of the accelerator device.
AccelDevice
Section: Execution::Accel
Type: integer
Default: gpu
This variable selects the OpenCL or CUDA accelerator device
that Octopus will use. You can specify one of the options below
or a numerical id to select a specific device.
Values >= 0 select the device to be used. For MPI-enabled runs,
devices are distributed in a round-robin fashion, starting at this value.
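For example, to start the round-robin device assignment at device id 0:
AccelDevice = 0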
Options:
AccelPlatform
Section: Execution::Accel
Type: integer
Default: 0
This variable selects the OpenCL platform that Octopus will
use. You can give an explicit platform number or use one of
the options that select a particular vendor
implementation. Platform 0 is used by default.
This variable has no effect for CUDA.
Options:
AllowCPUonly
Section: Execution::Accel
Type: logical
In order to prevent waste of resources, the code will normally stop when the GPU is disabled due to
incomplete implementations or incompatibilities. AllowCPUonly = yes overrides this behavior and
allows the code to run in these cases.
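To allow such a run to continue on the CPU instead:
AllowCPUonly = yes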
CudaAwareMPI
Section: Execution::Accel
Type: logical
If Octopus was compiled with both CUDA and MPI support, and if the MPI
implementation is CUDA-aware (i.e., it supports communication using device pointers),
this switch can be set to true to use the CUDA-aware MPI features. The advantage
of this approach is that it can do, e.g., peer-to-peer copies between devices without
going through the host memory.
The default is false, except when the configure switch --enable-cudampi is set, in which
case this variable is set to true.
DisableAccel
Section: Execution::Accel
Type: logical
Default: yes
If Octopus was compiled with OpenCL or CUDA support, it will
try to initialize and use an accelerator device. By setting this
variable to yes you force Octopus not to use an accelerator even if it is available.
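Given the default above, to actually make use of a compiled-in accelerator, set:
DisableAccel = no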
InitializeGPUBuffers
Section: Execution::Accel
Type: logical
Initialize new GPU buffers to zero on creation (use only for debugging, as it has a performance impact!).
Debug
Section: Execution::Debug
Type: flag
Default: no
This variable controls the amount of debugging information
generated by Octopus. You can include more than one option
by combining them with the + operator.
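For example, to combine two flags (assuming the info and trace options exist in your version):
Debug = info + trace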
Options:
DebugTrapSignals
Section: Execution::Debug
Type: logical
Default: yes
If true, trap signals to handle them in Octopus itself and
print a custom backtrace. If false, do not trap signals; then,
core dumps can be produced, or gdb can be used to stop at the
point where a signal was produced (e.g. a segmentation fault).
ExperimentalFeatures
Section: Execution::Debug
Type: logical
Default: no
If true, allows the use of certain parts of the code that are
still under development and are not suitable for production
runs. This should not be used unless you know what you are doing.
See details on the wiki page.
ForceComplex
Section: Execution::Debug
Type: logical
Default: no
Normally, Octopus automatically determines the type necessary
for the wavefunctions. When this variable is set to yes, the
use of complex wavefunctions is forced.
Warning: this variable is designed for testing and
benchmarking; normal users need not use it.
MPIDebugHook
Section: Execution::Debug
Type: logical
Default: no
When debugging the code in parallel, it is usually difficult to find the origin
of race conditions that appear in MPI communications. This variable introduces
a facility to control separate MPI processes. If set to yes, all nodes will
start up, but will get trapped in an endless loop. In every cycle of the loop,
each node sleeps for one second and then checks if a file with the
name node_hook.xxx (where xxx denotes the node number) exists. A given node can
only be released from the loop if the corresponding file is created. This makes
it possible to selectively run, e.g., a compute node first, followed by the master node; or, by
reversing the order in which the node hook files are created, to run the master first, followed
by a compute node.
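A minimal usage sketch: with the setting below, every rank waits in the loop until its hook file appears; creating the file node_hook.000 in the working directory (e.g. with the touch command) then releases rank 0:
MPIDebugHook = yes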
ReportMemory
Section: Execution::Debug
Type: logical
Default: no
If true, after each SCF iteration Octopus will print
information about the memory the code is using. The quantity
reported is an approximation to the size of the heap and
is generally a lower bound to the actual memory Octopus is
using.
MaxwellRestartWriteInterval
Section: Execution::IO
Type: integer
Default: 50
Restart data is written when the iteration number is a multiple of the
MaxwellRestartWriteInterval variable. (Other output is controlled by MaxwellOutputInterval.)
RestartOptions
Section: Execution::IO
Type: block
Octopus usually stores binary information, such as the wavefunctions, to be used
in subsequent calculations. The most common example is the ground-state wavefunctions
that are used to start a time-dependent calculation. This variable allows one to control
where this information is written to or read from. The format of this block is the following:
for each line, the first column indicates the type of data, the second column indicates
the path to the directory that should be used to read and write that restart information, and the
third column, which is optional, allows one to set some flags to modify the way the data
is read or written. For example, if you are running a time-dependent calculation, you can
indicate where Octopus can find the ground-state information in the following way:
%RestartOptions
restart_gs | "gs_restart"
restart_td | "td_restart"
%
The second line of the above example also tells Octopus that the time-dependent restart data
should be read from and written to the "td_restart" directory.
In case you want to change the path of all the restart directories, you can use the restart_all option.
When using the restart_all option, it is still possible to have a different restart directory for specific
data types. For example, when including the following block in your input file:
%RestartOptions
restart_all | "my_restart"
restart_td | "td_restart"
%
the time-dependent restart information will be stored in the "td_restart" directory, while all the remaining
restart information will be stored in the "my_restart" directory.
By default, the name of the "restart_all" directory is set to "restart".
Some CalculationModes also take into account specific flags set in the third column of the RestartOptions
block. These are used to determine if some specific part of the restart data is to be taken into account
or not when reading the restart information. For example, when restarting a ground-state calculation, one can
set the restart_rho flag, so that the density used is not built from the saved wavefunctions, but is
instead read from the restart directory. In this case, the block should look like this:
%RestartOptions
restart_gs | "restart" | restart_rho
%
A list of available flags is given below. Note that the code might ignore some of them
(this will happen if they are not available for that particular calculation) or might assume
some of them to be always present (this will happen in case they are mandatory).
Finally, note that all the restart information of a given data type is always stored in a subdirectory of the
specified path. The name of this subdirectory is fixed and cannot be changed. For example, ground-state information
will always be stored in a subdirectory named "gs". This makes it safe in most situations to use the same path for
all the data types. The name of these subdirectories is indicated in the description of the data types below.
Currently, the available restart data types and flags are the following:
Options:
RestartWallTimePeriod
Section: Execution::IO
Type: float
Default: 120
Period (in minutes) at which the restart file will be written.
If a finite time is specified, the code will write the restart file once every such period.
RestartWrite
Section: Execution::IO
Type: logical
Default: true
If this variable is set to no, restart information is not
written. Note that some run modes will ignore this
option and write some restart information anyway.
RestartWriteInterval
Section: Execution::IO
Type: integer
Default: 50
Restart data is written when the iteration number is a multiple
of the RestartWriteInterval variable. For
time-dependent runs this includes the update of the output
controlled by the TDOutput variable. (Other output is
controlled by OutputInterval.)
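For example, to write restart data only every 100 iterations:
RestartWriteInterval = 100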
RestartWriteTime
Section: Execution::IO
Type: float
Default: 5
The RestartWriteTime (in minutes) will be subtracted from the Walltime to allow time for writing the restart file.
In huge calculations, this value should be increased.
SlakoDir
Section: Execution::IO
Type: string
Default: "./"
Folder containing the Slater-Koster (Slako) files.
Walltime
Section: Execution::IO
Type: float
Default: 0
Time in minutes before which the restart file will be written. This is to make sure that at least one restart
file can be written before the code might be killed due to exceeding the allotted CPU time.
If a finite time (in minutes) is specified, the code will write the restart file when the next
iteration (plus the RestartWriteTime) would exceed the given time.
A value less than 1 second (1/60 minutes) will disable the timer.
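For example, for a job in a four-hour queue (illustrative values):
Walltime = 240          # queue limit in minutes
RestartWriteTime = 10   # margin reserved for writing the restart files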
WorkDir
Section: Execution::IO
Type: string
Default: "."
By default, all files are written and read from the working directory,
i.e. the directory from which the executable was launched. This behavior can
be changed by setting this variable. If you set WorkDir to a name other than ".",
the directories that Octopus writes and reads are placed in that directory instead.
stderr
Section: Execution::IO
Type: string
Default: "-"
The standard error by default goes, well, to standard error. This can
be changed by setting this variable: if you give it a name (other than "-"),
the output stream is printed to that file instead.
stdout
Section: Execution::IO
Type: string
Default: "-"
The standard output by default goes, well, to standard output. This can
be changed by setting this variable: if you give it a name (other than "-"),
the output stream is printed to that file instead.
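For example, to redirect both streams to files (illustrative names):
stdout = "octopus.out"
stderr = "octopus.err"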
HamiltonianApplyPacked
Section: Execution::Optimization
Type: logical
Default: yes
If set to yes (the default), Octopus will 'pack' the
wave-functions when operating with them. This might involve some
additional copying but makes operations more efficient.
See also the related StatesPack variable.
MemoryLimit
Section: Execution::Optimization
Type: integer
Default: -1
If positive, Octopus will stop if more memory than MemoryLimit
(in kb) is requested. Note that this variable only works when
ProfilingMode = prof_memory(_full).
MeshBlockDirection
Section: Execution::Optimization
Type: integer
Determines the direction in which the dimensions are chosen to compute
the blocked index for sorting the mesh points (see MeshBlockSize).
The default is increase_with_dimensions, corresponding to xyz ordering
in 3D.
Options:
MeshBlockSize
Section: Execution::Optimization
Type: block
To improve memory-access locality when calculating derivatives,
Octopus arranges mesh points in blocks. This variable
controls the size of these blocks in the different
directions. The default is selected according to the value of
the StatesBlockSize variable. (This variable only affects the
performance of Octopus and not the results.)
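A minimal sketch for a 3D run (the sizes below are purely illustrative; one column per direction):
%MeshBlockSize
 10 | 10 | 10
%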
MeshLocalBlockDirection
Section: Execution::Optimization
Type: integer
Determines the direction in which the dimensions are chosen to compute
the blocked index for sorting the mesh points (see MeshLocalBlockSize).
The default is increase_with_dimensions, corresponding to xyz ordering
in 3D.
Options:
MeshLocalBlockSize
Section: Execution::Optimization
Type: block
To improve memory-access locality when calculating derivatives,
Octopus arranges mesh points in blocks. This variable
controls the size of these blocks in the different
directions. The default is selected according to the value of
the StatesBlockSize variable. (This variable only affects the
performance of Octopus and not the results.)
MeshLocalOrder
Section: Execution::Optimization
Type: integer
Default: blocks
This variable controls how the grid points are mapped to a
linear array. This influences the performance of the code.
Options:
MeshOrder
Section: Execution::Optimization
Type: integer
This variable controls how the grid points are mapped to a
linear array for global arrays. For runs that are parallel
in domains, the local mesh order may be different (see
MeshLocalOrder).
The default is blocks when serial in domains and cube when
parallel in domains with the local mesh order set to blocks.
Options:
NLOperatorCompactBoundaries
Section: Execution::Optimization
Type: logical
Default: no
(Experimental) When set to yes, for finite systems Octopus will
map boundary points for finite-differences operators to a few
memory locations. This increases performance; however, it is
experimental and has not been thoroughly tested.
OperateAccel
Section: Execution::Optimization
Type: integer
Default: map
This variable selects the subroutine used to apply non-local
operators over the grid when an accelerator device is used.
Options:
OperateComplex
Section: Execution::Optimization
Type: integer
Default: optimized
This variable selects the subroutine used to apply non-local
operators over the grid for complex functions.
Options:
OperateDouble
Section: Execution::Optimization
Type: integer
Default: optimized
This variable selects the subroutine used to apply non-local
operators over the grid for real functions.
Options:
ProfilingAllNodes
Section: Execution::Optimization
Type: logical
Default: no
This variable controls whether all nodes print the time
profiling output. If set to no, the default, only the root node
will write the profile. If set to yes, all nodes will print it.
ProfilingMode
Section: Execution::Optimization
Type: integer
Default: no
Use this variable to run Octopus in profiling mode. In this mode
Octopus records the time spent in certain areas of the code and
the number of times this code is executed. These numbers
are written in ./profiling.NNN/profiling.nnn with nnn being the
node number (000 in serial) and NNN the number of processors.
This is mainly for development purposes. Note, however, that
Octopus should be compiled with --disable-debug to do proper
profiling. Warning: you may encounter strange results with OpenMP.
Options:
ProfilingOutputTree
Section: Execution::Optimization
Type: logical
Default: yes
This variable controls whether the profiling output is additionally
written as a tree.
ProfilingOutputYAML
Section: Execution::Optimization
Type: logical
Default: no
This variable controls whether the profiling output is additionally
written to a YAML file.
StatesBlockSize
Section: Execution::Optimization
Type: integer
Some routines work over blocks of eigenfunctions, which
generally improves performance at the expense of increased
memory consumption. This variable selects the size of the
blocks to be used. If GPUs are used, the default is 32;
otherwise it is 4.
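For example, to set the block size explicitly (illustrative value):
StatesBlockSize = 16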
StatesCLDeviceMemory
Section: Execution::Optimization
Type: float
Default: -512
This variable selects the amount of OpenCL device memory that
will be used by Octopus to store the states.
A positive number smaller than 1 indicates a fraction of the total
device memory. A number larger than 1 indicates an absolute
amount of memory in megabytes. A negative number indicates an
amount of memory in megabytes that would be subtracted from
the total device memory.
StatesPack
Section: Execution::Optimization
Type: logical
When set to yes, states are stored in packed mode, which improves
performance considerably. Not all parts of the code will profit from
this, but they should nevertheless work regardless of how the states are
stored.
If GPUs are used and this variable is set to yes, Octopus
will store the wave-functions in device (GPU) memory. If
there is not enough memory to store all the wave-functions,
execution will stop with an error.
See also the related HamiltonianApplyPacked variable.
The default is yes.
MeshPartition
Section: Execution::Parallelization
Type: integer
When using METIS to perform the mesh partitioning, this variable
decides which algorithm is used. By default, graph partitioning
is used for 8 or more partitions, and rcb for fewer.
Options:
MeshPartitionPackage
Section: Execution::Parallelization
Type: integer
Decides which library to use to perform the mesh partition.
By default ParMETIS is used when available, otherwise METIS is used.
Options:
MeshPartitionStencil
Section: Execution::Parallelization
Type: integer
Default: stencil_star
To partition the mesh, it is necessary to calculate the connection
graph connecting the points. This variable selects which stencil
is used to do this.
Options:
MeshPartitionVirtualSize
Section: Execution::Parallelization
Type: integer
Default: mesh mpi_grp size
Gives the possibility to change the number of partition nodes, e.g. to
test the partitioning; note that the code stops afterward.
MeshUseTopology
Section: Execution::Parallelization
Type: logical
Default: false
(experimental) If enabled, Octopus will use an MPI virtual
topology to map the processors. This can improve performance
for certain interconnection systems.
ParDomains
Section: Execution::Parallelization
Type: integer
Default: auto
This variable controls the number of processors used for the
parallelization in domains.
The special value auto, the default, lets Octopus
decide how many processors will be assigned for this
strategy. To disable parallelization in domains, you can use
ParDomains = no (or set the number of processors to
1).
The total number of processors required is the multiplication
of the processors assigned to each parallelization strategy.
Options:
ParKPoints
Section: Execution::Parallelization
Type: integer
Default: auto
This variable controls the number of processors used for the
parallelization in K-Points and/or spin.
The special value auto lets Octopus decide how many processors will be
assigned for this strategy. To disable parallelization in
KPoints, you can use ParKPoints = no (or set the
number of processors to 1).
The total number of processors required is the multiplication
of the processors assigned to each parallelization strategy.
Options:
ParOther
Section: Execution::Parallelization
Type: integer
Default: auto
This variable controls the number of processors used for the
'other' parallelization mode, which is CalculationMode
dependent. For CalculationMode = casida, it means
parallelization in electron-hole pairs.
The special value auto,
the default, lets Octopus decide how many processors will be
assigned for this strategy. To disable parallelization in
Other, you can use ParOther = no (or set the
number of processors to 1).
The total number of processors required is the multiplication
of the processors assigned to each parallelization strategy.
Options:
ParStates
Section: Execution::Parallelization
Type: integer
This variable controls the number of processors used for the
parallelization in states. The special value auto lets
Octopus decide how many processors will be assigned for this
strategy. To disable parallelization in states, you can use
ParStates = no (or set the number of processors to 1).
The default value depends on the CalculationMode. For
CalculationMode = td the default is auto, while
for other modes the default is no.
The total number of processors required is the multiplication
of the processors assigned to each parallelization strategy.
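For example, the following combination (illustrative values) requires 4 x 2 = 8 MPI processes in total:
ParDomains = 4
ParStates = 2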
Options:
ParallelXC
Section: Execution::Parallelization
Type: logical
Default: true
When enabled, additional parallelization
will be used for the calculation of the XC functional.
ParallelizationNumberSlaves
Section: Execution::Parallelization
Type: integer
Default: 0
Slaves are nodes used for task parallelization. The number of
such nodes is given by this variable multiplied by the number
of domains used in domain parallelization.
ParallelizationOfDerivatives
Section: Execution::Parallelization
Type: integer
Default: non_blocking
This option selects how the communication of mesh boundaries is performed.
Options:
ParallelizationPoissonAllNodes
Section: Execution::Parallelization
Type: logical
Default: true
When running in parallel, this variable selects whether the
Poisson solver should divide the work among all nodes or only
among the parallelization-in-domains groups.
PartitionPrint
Section: Execution::Parallelization
Type: logical
Default: true
(experimental) If disabled, Octopus will neither compute
nor print the partition information, such as local points,
number of neighbours, ghost points, and boundary points.
ReorderRanks
Section: Execution::Parallelization
Type: logical
Default: no
This variable controls whether the ranks are reorganized to give a more
compact distribution with respect to domain parallelization, which needs
to communicate most often. Depending on the system, this can improve
communication speeds.
ScaLAPACKCompatible
Section: Execution::Parallelization
Type: logical
Whether to use a layout for states parallelization which is compatible with ScaLAPACK.
The default is yes for CalculationMode = gs, unocc, go without k-point parallelization,
and no otherwise. (Setting it to anything other than the default is experimental.)
The value must be yes if any ScaLAPACK routines are called in the course of the run;
it must be set by hand for td with TDDynamics = bo.
This variable has no effect unless you are using states parallelization and have linked ScaLAPACK.
Note: currently, use of ScaLAPACK is not compatible with task parallelization (i.e. slaves).
SymmetriesCompute
Section: Execution::Symmetries
Type: logical
If disabled, Octopus will neither compute
nor print the symmetries.
By default, symmetries are computed when running in 3
dimensions for systems with fewer than 100 atoms.
For periodic systems, the default is always true, irrespective of the number of atoms.
SymmetriesTolerance
Section: Execution::Symmetries
Type: float
For periodic systems, this variable controls the tolerance used by the symmetry finder
(spglib) to find the spacegroup and symmetries of the crystal.
Units
Section: Execution::Units
Type: virtual
Default: atomic
(Virtual) These are the units that can be used in the input file.
UnitsOutput
Section: Execution::Units
Type: integer
Default: atomic
This variable selects the units that Octopus uses for output.
Atomic units seem to be the preferred system in the atomic and
molecular physics community. Internally, the code works in
atomic units. However, for output, some people like
to use a system based on electron-Volts (eV) for energies
and Angstroms (Å) for length.
Normally, time units are derived from the energy and length units,
so time is measured in \(\hbar\)/Hartree or
\(\hbar\)/eV.
Warning 1: All files read on input will also be treated using
these units, including XYZ geometry files.
Warning 2: Some values are treated in their most common units,
for example atomic masses (a.m.u.), electron effective masses
(electron mass), vibrational frequencies
(cm\(^{-1}\)), or temperatures (Kelvin). The unit of charge is always
the electronic charge e.
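For example, to obtain output in eV and Angstrom:
UnitsOutput = ev_angstrom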
Options:
UnitsXYZFiles
Section: Execution::Units
Type: integer
Default: angstrom_units
This variable selects the units used for input and output of XYZ
files.
Options: