Accel
Name AccelBenchmark
Section Execution::Accel
Type logical
Default no
If this variable is set to yes, Octopus will run some
routines to benchmark the performance of the accelerator device.
Name AccelDevice
Section Execution::Accel
Type integer
Default gpu
This variable selects the OpenCL or CUDA accelerator device
that Octopus will use. You can specify one of the options below
or a numerical id to select a specific device.
Values >= 0 select the device to be used. In MPI-enabled runs,
devices are distributed in a round-robin fashion, starting at this value.
Options:
- gpu:
If available, Octopus will use a GPU.
- cpu:
If available, Octopus will use a CPU (only for OpenCL).
- accelerator:
If available, Octopus will use an accelerator (only for OpenCL).
- accel_default:
Octopus will use the default device specified by the implementation.
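As an illustrative input-file fragment (the id 0 is only an example; on a node with two GPUs and several MPI ranks, ranks would receive devices 0, 1, 0, 1, ... starting from this value):

```
# Use the first accelerator device; further MPI ranks
# get devices in round-robin order starting here.
AccelDevice = 0
```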
Name AccelPlatform
Section Execution::Accel
Type integer
Default 0
This variable selects the OpenCL platform that Octopus will
use. You can give an explicit platform number or use one of
the options that select a particular vendor
implementation. Platform 0 is used by default.
This variable has no effect for CUDA.
Options:
- amd:
Use the AMD OpenCL platform.
- nvidia:
Use the Nvidia OpenCL platform.
- ati:
Use the ATI (old AMD) OpenCL platform.
- intel:
Use the Intel OpenCL platform.
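For example, a minimal fragment pinning Octopus to a particular OpenCL platform (the vendor chosen here is illustrative; this setting has no effect for CUDA builds):

```
# Select the Nvidia OpenCL platform instead of platform 0.
AccelPlatform = nvidia
```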
Name AllowCPUonly
Section Execution::Accel
Type logical
To avoid wasting resources, the code normally stops when the GPU is disabled because of
incomplete implementations or incompatibilities. Setting AllowCPUonly = yes overrides this
behavior and allows execution to continue in these cases.
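A short example of letting a run proceed on the CPU when the GPU path is unavailable:

```
# Do not abort if the GPU is disabled for this run;
# fall back to CPU execution instead.
AllowCPUonly = yes
```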
Name DisableAccel
Section Execution::Accel
Type logical
Default yes
If Octopus was compiled with OpenCL or CUDA support, it will
try to initialize and use an accelerator device. By setting this
variable to yes, you force Octopus not to use an accelerator even if one is available.
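Since the default is yes, a run that should actually use the accelerator needs this set explicitly; a minimal fragment:

```
# Enable the accelerator device (the default disables it).
DisableAccel = no
```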
Name GPUAwareMPI
Section Execution::Accel
Type logical
If Octopus was compiled with GPU support and MPI support and if the MPI
implementation is GPU-aware (i.e., it supports communication using device pointers),
this switch can be set to true to use the GPU-aware MPI features. The advantage
of this approach is that it can do, e.g., peer-to-peer copies between devices without
going through the host memory.
The default is false, except when the configure switch --enable-cudampi is set, in which
case this variable is set to true.
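A sketch of enabling this feature in the input file (only sensible when the underlying MPI library really is GPU-aware, which Octopus cannot verify for you):

```
# Pass device pointers directly to MPI calls,
# enabling e.g. peer-to-peer copies between GPUs.
GPUAwareMPI = yes
```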
Name InitializeGPUBuffers
Section Execution::Accel
Type logical
Initialize new GPU buffers to zero on creation (use only for debugging, as it has a performance impact!).