Eigensolvers
The ground-state calculations of Octopus require the solution of an eigenvalue problem, in which the eigenstates and eigenvalues of the Kohn-Sham Hamiltonian are obtained. Octopus relies on iterative eigensolvers for this task. Below are some details about the different eigensolvers and the input variables related to each of them.
Overview
Octopus implements several iterative eigensolvers. The eigensolver is selected with the variable Eigensolver. The eigensolvers currently implemented in Octopus are:
- Chebyshev filtering
- Conjugate gradients (CG)
- Residual minimization scheme, direct inversion in the iterative subspace (RMMDIIS)
- Preconditioned Lanczos
- Imaginary time evolution
Each eigensolver has its strengths and weaknesses. Overall, Chebyshev filtering is found to be the most efficient solver (faster, more scalable, and more reliable), although it requires adding some unoccupied states; see below. Currently, the CG algorithm is the default eigensolver.
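For example, an input fragment selecting Chebyshev filtering for a ground-state run might look as follows (the option name chebyshev_filter matches the solver list above; the number of extra states is purely illustrative, see the Chebyshev section below):

```
CalculationMode = gs
Eigensolver = chebyshev_filter
ExtraStates = 20   # Chebyshev filtering needs some unoccupied states
```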
As explained below, some eigensolvers employ a preconditioner, which is used to speed up the convergence of the calculation. This is controlled by the variable Preconditioner.
The eigensolvers support some general variables:
- EigensolverMaxIter: the maximum number of iterations for the iterative eigensolver
- EigensolverTolerance: the tolerance for controlling convergence of the iterative eigensolver
- EigensolverMinimizationIter: RMMDIIS does some minimization steps before the actual RMMDIIS steps; this variable controls how many
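As an illustration, the following fragment tightens these general settings (the values are indicative only, not recommended defaults):

```
EigensolverTolerance = 1e-8   # stricter convergence criterion
EigensolverMaxIter = 50       # maximum iterations per SCF step
```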
At the end of the eigensolver run, Octopus performs a diagonalization in the subspace spanned by the eigenstates found by the eigensolver. This is controlled by the variable SubspaceDiagonalization.
Chebyshev Filtering
Chebyshev filtering is a subspace iteration method, which avoids most of the explicit computation of eigenvectors, and results in a significant speedup over iterative diagonalization methods. This method may be viewed as an approach to solve the original nonlinear Kohn-Sham equation by a nonlinear subspace iteration technique, without emphasizing the intermediate linearized Kohn-Sham eigenvalue problems. This eigensolver requires almost no orthogonalization, so it can be considerably faster than the other options for large systems.
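Schematically, following the usual construction of Zhou et al. (the notation here is generic and not tied to the Octopus source), the filter applied to the trial states is a degree-$m$ Chebyshev polynomial $C_m$ mapped onto the unwanted part of the spectrum $[b_{\mathrm{low}}, b_{\mathrm{up}}]$:

$$ p_m(\hat{H}) = C_m\!\left(\frac{\hat{H} - c}{e}\right), \qquad c = \frac{b_{\mathrm{up}} + b_{\mathrm{low}}}{2}, \qquad e = \frac{b_{\mathrm{up}} - b_{\mathrm{low}}}{2}. $$

Components of a state inside $[b_{\mathrm{low}}, b_{\mathrm{up}}]$ are damped, while components below $b_{\mathrm{low}}$ (the wanted part of the spectrum) are amplified rapidly with increasing $m$, since $|C_m(x)|$ grows exponentially with $m$ outside $[-1, 1]$.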
Choice of ExtraStates
Chebyshev filtering uses estimated spectral bounds to construct an effective filter polynomial, which defines the subspace of eigenstates one solves for. A large number of ExtraStates (around 10-20% of the number of occupied states) is required to make the algorithm efficient, because the topmost eigenstates lie close to the spectral upper bound and will therefore be the slowest to converge.
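For instance, for a spin-unpolarized system with 200 occupied states, this rule of thumb suggests 20-40 extra states:

```
ExtraStates = 30   # roughly 15% of 200 occupied states
```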
Automated Choice of Polynomial Degree
Larger polynomial degrees improve convergence, but at the cost of more operations per SCF step (as the polynomial is applied recursively). Furthermore, one typically does not want to use the same polynomial degree at every SCF step, or even for every batch of states. In order to optimise the convergence, Octopus implements an estimator for the optimal polynomial degree for each batch of states, at each SCF step. This is enabled by default using OptimizeChebyshevFilterDegree. The user can also set the maximum degree used by Octopus with ChebyshevFilterDegree, preventing the estimator from exceeding this value. The input variable description suggests a reasonable range of values for ChebyshevFilterDegree.
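For example, to keep the automatic estimator active while capping the degree it may choose (the values shown are illustrative; consult the variable description for a recommended range):

```
OptimizeChebyshevFilterDegree = yes   # use the degree estimator (default)
ChebyshevFilterDegree = 25            # cap on the estimated degree
```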
Large polynomial degrees also result in a better separation of the subspace, which suggests an inverse relation between the maximum degree required and the total number of empty states used.
The First SCF Step
The first SCF step is treated specially, as it requires an initial estimate of the bounds of the Hamiltonian (its minimum and maximum eigenvalues) in order to construct the polynomial filter. The bounds are estimated using a k-step Lanczos decomposition, which is efficient at finding extremal eigenvalues of a matrix. This is an iterative procedure (as indicated by k); however, since we are only interested in the smallest and largest eigenvalues, and their precision is not essential for the first SCF step, only a few iterations are required. Zhou et al. report that 4 to 10 iterations are sufficient. If the user wishes to change Octopus’s default, it can be set with ChebyshevFilterLanczosOrder.
The lower bound of the spectral window is defined as a linear combination of the smallest and largest eigenvalues found from the k-step Lanczos decomposition. The linear-mixing coefficient can be set by the user with ChebyshevFilterBoundMixing; the default works fine as long as the bound is neither too small, which can cause some occupied states to be missed, nor too large, which may amplify unoccupied states.
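Writing the Lanczos estimates as $\lambda_{\min}$ and $\lambda_{\max}$ and the mixing coefficient as $\beta$ (set by ChebyshevFilterBoundMixing), a linear combination of this kind takes the generic form

$$ b_{\mathrm{low}} = \beta\,\lambda_{\min} + (1 - \beta)\,\lambda_{\max}, $$

where the exact convention used by Octopus is given in the variable description.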
Related variables:
- ChebyshevFilterBoundMixing: Control the mixing of the estimated bounds in the first SCF step.
- ChebyshevFilterDegree: Order of the Chebyshev polynomial used to filter the states.
- ChebyshevFilterLanczosOrder: Number of Lanczos iterations used to estimate the bounds of the Hamiltonian.
- ChebyshevFilterNIter: Maximum number of iterations in the first SCF step.
- OptimizeChebyshevFilterDegree: Controls whether the oracle that estimates the optimal polynomial degree is used.
This eigensolver does not use any preconditioner.
Conjugate gradients (CG)
The conjugate-gradients algorithm follows the canonical work of Payne et al. [M. C. Payne, M. P. Teter, D. C. Allan, T. A. Arias, and J. D. Joannopoulos, Iterative minimization techniques for ab initio total-energy calculations: molecular dynamics and conjugate gradients, Rev. Mod. Phys. 64, 1045–1097 (1992), section V.B (pp. 1072ff)]. The conjugate-gradients minimization is done for each state sequentially, and during the algorithm each state is orthogonalized against the previous states. This makes the algorithm quite slow; moreover, it cannot be run parallel in states.
If very high accuracy is needed, the variable CGEnergyChangeThreshold can be reduced in order to avoid the early-exit condition, which normally helps to speed up the convergence but may sometimes stand in the way of high accuracy.
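A high-accuracy CG setup might therefore look like this (the threshold value is illustrative):

```
Eigensolver = cg
CGEnergyChangeThreshold = 1e-10   # tighten the early-exit condition
```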
Related variables:
- CGAdditionalTerms: Add additional terms during the line minimization, see PTA92, eq. 5.31ff.
- CGDirection: How to find the new search direction.
- CGEnergyChangeThreshold: Early-exit condition: exit if the eigenenergy changes by less than this value.
- CGOrthogonalizeAll: Orthogonalize against all bands or only the lower bands.
RMMDIIS
Residual minimization scheme, direct inversion in the iterative subspace (RMMDIIS) eigensolver, based on the implementation of Kresse and Furthmüller [G. Kresse and J. Furthmüller, Efficient iterative schemes for ab initio total-energy calculations using a plane-wave basis set, Phys. Rev. B 54, 11169 (1996)]. This eigensolver requires almost no orthogonalization, so it can be considerably faster than the other options for large systems, with the notable exception of Chebyshev filtering. To improve its performance, a large number of ExtraStates is required (around 10-20% of the number of occupied states). Note: with unocc, you will need to stop the calculation by hand, since the highest states will probably never converge. Usage with more than one block of states per node is, unfortunately, experimental.
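A typical RMMDIIS setup might look as follows (the values are illustrative):

```
Eigensolver = rmmdiis
ExtraStates = 20                  # around 10-20% of the occupied states
EigensolverMinimizationIter = 5   # initial minimization steps, see above
```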
Preconditioned Lanczos
Octopus implements the preconditioned Lanczos scheme as proposed in [Y. Saad, A. Stathopoulos, J. Chelikowsky, K. Wu, and S. Ogut, Solution of Large Eigenvalue Problems in Electronic Structure Calculations, BIT 36, 1 (1996)].
In practice, a very large number of iterations is required to take advantage of this eigensolver.
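To select this eigensolver, one sets in the input file (the option name plan, short for preconditioned Lanczos, is assumed here; check the Eigensolver variable description):

```
Eigensolver = plan   # preconditioned Lanczos (assumed option name)
```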
Imaginary-time evolution
It is also possible to perform a time evolution along the imaginary time axis: Octopus then performs a normal time evolution with the time multiplied by $i$. The usual variables to control the propagator and the method to evaluate the exponential of the Hamiltonian can be employed to tune the time evolution and make it efficient. This algorithm should always converge to the ground state, but the rate of convergence can be quite slow. Please note that the use of this eigensolver is still marked as experimental.
In order to use this eigensolver, one needs to set in the input file:
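```
Eigensolver = evolution
MixField = none   # deactivate the mixing
```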
As shown in the example, one currently needs to “deactivate” the mixing to use this eigensolver.

Related variables:
- EigensolverImaginaryTime: the time step for the imaginary-time evolution.
This eigensolver does not use any preconditioner.
Preconditioners
Some eigensolvers can use preconditioners to speed up the convergence (e.g. CG, RMMDIIS). The type of preconditioner is controlled by the variable Preconditioner. The available options are:
- No preconditioner
- Filter preconditioner
- Jacobi preconditioner
- Poisson preconditioner
- Multigrid preconditioner
The default preconditioner is the filter preconditioner, which usually leads to the fastest convergence in CPU time. With the multigrid preconditioner, usually fewer SCF iterations are needed to reach convergence, but it is quite costly, so overall it is nevertheless slower than the filter preconditioner. The filter preconditioner can also be understood as applying two Jacobi iterations for the kinetic term of the Hamiltonian, and in this way it can also be applied to non-orthogonal grids. The other preconditioners are usually not very helpful.
Related variables:
- PreconditionerFilterFactor: Weight of the filter preconditioner; needs to be between 0.5 and 1. The best choice for this variable depends on the system. In theory, 0.75 should provide an optimal damping of high and low spatial frequencies.
- PreconditionerIterationsPre, PreconditionerIterationsMiddle, PreconditionerIterationsPost: Number of pre-, middle-, and post-smoothing iterations for the multigrid preconditioner.
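For example, to select the filter preconditioner explicitly and set its weight to the theoretical optimum (the option name pre_filter is assumed here; see the Preconditioner variable description):

```
Preconditioner = pre_filter
PreconditionerFilterFactor = 0.75
```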