The grid is one of the essential components of Octopus. Several objects define the grid, but the most important one is the mesh.
In Octopus terminology a mesh, described by the
mesh_t type from the
mesh_m module, is an array of points used to represent a function. The mesh is composed of three types of points:
- Normal (or inner) points: the points where the functions to be calculated are defined.
- Boundary points: extra points around the grid that are required to calculate differential operators of a function defined on the grid. Their values depend on the boundary conditions: for finite systems they are zero, while for periodic boundary conditions they are copies of the corresponding inner points.
- Ghost points: when domain parallelization is used, these points lie on the border between domains and their values must be copied from other processors.
To define a function on the mesh, declare an ordinary Fortran array of size
mesh_t::np and of type
FLOAT or CMPLX. If you want to apply differential operators to the function, declare it with size
mesh_t::np_part instead, to account for the extra space required by the boundary and ghost points.
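A minimal sketch of such declarations, assuming a mesh_t variable called mesh (FLOAT and CMPLX are the Octopus precision macros mentioned above):

```fortran
FLOAT, allocatable :: rho(:)   ! a real mesh function
CMPLX, allocatable :: psi(:)   ! a complex mesh function

allocate(rho(1:mesh%np))       ! inner points only: enough for local operations
allocate(psi(1:mesh%np_part))  ! inner + boundary + ghost points:
                               ! required before applying differential operators
```

Only the first mesh%np entries of psi hold the function itself; the remaining entries are workspace for the boundary and ghost values.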
Mesh_t contains additional information about the mesh, in particular the type of grid (uniform or curvilinear), the spacing, the volume element of each point, etc. It also contains the
mesh_t::x(1:mesh_t::np, 1:MAX_DIM) array, which gives the real-space coordinates of each point.
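As a sketch, the coordinates in mesh%x can be used to initialize a mesh function point by point (again assuming a mesh_t variable called mesh and a real array rho allocated as above):

```fortran
integer :: ip
FLOAT   :: r2

do ip = 1, mesh%np
  r2 = sum(mesh%x(ip, :)**2)  ! squared distance of point ip from the origin
  rho(ip) = exp(-r2)          ! e.g. a Gaussian centered at the origin
end do
```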
Common operations over mesh functions
There are functions that perform typical operations on functions defined over the mesh. They belong to the
mesh_function_m module. The most important are:
- Dot product: the integral over the mesh of the product of two functions, weighted by the volume element of each point.
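A sketch of typical calls from mesh_function_m; the interfaces shown here are assumptions, following the Octopus convention of a d/z prefix for the real and complex versions of a routine:

```fortran
FLOAT :: overlap, norm, charge

overlap = dmf_dotp(mesh, ff, gg)    ! dot product <ff|gg> over the mesh
norm    = dmf_nrm2(mesh, ff)        ! norm, sqrt(<ff|ff>)
charge  = dmf_integrate(mesh, rho)  ! integral of rho over all space
```

Because these routines know the mesh, they automatically apply the correct volume element for uniform or curvilinear grids.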
One of the parallelization schemes in Octopus is domain parallelization: the mesh is divided into regions, and each region is assigned to a processor. Fortunately, this is almost transparent to developers: all the functions above, as well as all local operations over arrays, work directly with domain parallelization. Only in some special cases will the developer have to write additional code to ensure that it works properly with domain parallelization.
To know whether the code is running with domain parallelization, check the
mesh_t::parallel_in_domains logical variable.
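A sketch of how this flag is typically used to guard code that is not transparent to domain parallelization:

```fortran
if (mesh%parallel_in_domains) then
  ! the mesh is distributed: each processor holds only its own region,
  ! so global quantities require a communication step (for example, the
  ! mesh_function_m routines above already perform the needed reduction)
else
  ! serial in domains: the local arrays hold the whole mesh
end if
```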