Time evolution
In the calculation mode td, the code performs the time propagation of the electronic orbitals and, if required, of the ionic positions. The latter task poses no major algorithmic problems (the usual Verlet-type algorithms deal with it); however, the best way to propagate a Schrödinger-like equation is still unclear, and for this reason we provide a rather extensive selection of possibilities. Before describing the set of variables that specify how the time evolution is performed, it is worth giving a brief introduction to the problem.
We are concerned with a set of Schrödinger-like equations for the electronic orbitals $\varphi_j(\mathbf{r},t)$:

$$ i\frac{\partial}{\partial t}\varphi_j(\mathbf{r},t) = \hat{H}(t)\,\varphi_j(\mathbf{r},t)\,. $$
Because this equation is linear (the time derivative and the Hamiltonian are both linear operators), one may formally define a linear "evolution" operator $\hat{U}(t,0)$, which transforms the initial vector into the solution at time $t$:

$$ \varphi_j(t) = \hat{U}(t,0)\,\varphi_j(0)\,. $$
Moreover, there is a formally exact expression for the evolution operator,

$$ \hat{U}(t,0) = \mathcal{T}\exp\left\{-i\int_0^t d\tau\,\hat{H}(\tau)\right\}, $$

where $\mathcal{T}\exp$ is the time-ordered exponential, which is a short-hand for

$$ \mathcal{T}\exp\left\{-i\int_0^t d\tau\,\hat{H}(\tau)\right\} = \sum_{n=0}^{\infty}\frac{(-i)^n}{n!}\int_0^t dt_1\cdots\int_0^t dt_n\;\mathcal{T}\left[\hat{H}(t_1)\cdots\hat{H}(t_n)\right]. $$
If the Hamiltonian commutes with itself at different times, we can drop the time-ordering product and are left with a simple exponential. If the Hamiltonian is time-independent, which makes it trivially self-commuting, the solution is simply written as

$$ \hat{U}(t,0) = \exp\left\{-i t \hat{H}\right\}. $$
Unfortunately, this is not the case for TDDFT when the system is exposed to external time-dependent perturbations like electric and magnetic fields or pulsed lasers. But even without an external time dependence, there remains the intrinsic time dependence of the Kohn-Sham Hamiltonian, which is built "self-consistently" from the time-varying electronic density.
The first step to tackle this problem is to split the propagation of the long interval $[0,T]$ into $N$ smaller steps by utilizing the group property

$$ \hat{U}(t,0) = \hat{U}(t,t')\,\hat{U}(t',0) $$

of the time-evolution operator. This yields the following time discretization:

$$ \hat{U}(T,0) = \prod_{i=0}^{N-1}\hat{U}(t_i+\Delta t, t_i)\,, $$

where $t_i = i\,\Delta t$, $t_N = T$, $\Delta t = T/N$. So at each time step we are dealing with the problem of performing the short-time propagation

$$ \varphi_j(t+\Delta t) = \hat{U}(t+\Delta t, t)\,\varphi_j(t)\,. $$
In this way, one can monitor the evolution in the interior of $[0,T]$. In fact, the possibility of monitoring the evolution is generally a requirement.
This requirement imposes a natural restriction on the maximum size of $\Delta t$: if $\omega_{\max}$ is the maximum frequency that we want to discern, $\Delta t$ should be no larger than $\pi/\omega_{\max}$. Below this bound, we are free to choose $\Delta t$ considering performance reasons: technically, the reason for the discretization is two-fold: the time dependence of $\hat{H}$ is alleviated within each step, and the norm of the exponential argument is reduced (it grows linearly with the length of the propagated interval).
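The effect of the group property can be illustrated with a toy model. The sketch below is plain Python, not Octopus code; the two-level Hamiltonian $H = \sigma_x$ and all parameters are invented for illustration. It composes $N$ short-time propagators and checks that, for a time-independent Hamiltonian, they reproduce the full propagator over $[0,T]$.

```python
# Illustrative sketch (not Octopus code): splitting a propagation over [0, T]
# into N short steps via the group property U(T,0) = U(T,T-dt) ... U(dt,0).
# For H = sigma_x the exact propagator is exp(-i H t) = cos(t) I - i sin(t) sigma_x.
import math

def matmul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

def u_exact(t):
    c, s = math.cos(t), math.sin(t)
    return [[c, -1j * s], [-1j * s, c]]

T, N = 1.0, 100
dt = T / N

# Compose N short-time propagators U(dt)
U = [[1, 0], [0, 1]]
for _ in range(N):
    U = matmul(u_exact(dt), U)

# The composition reproduces the full propagator (H is time independent)
err = max(abs(U[i][j] - u_exact(T)[i][j]) for i in range(2) for j in range(2))
print(err < 1e-10)  # True
```

For a time-dependent Hamiltonian the short-time propagators are only approximate, which is where the propagators described below come in.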
Since we cannot drop the time-ordering product, the desired algorithm cannot, in principle, be reduced to the calculation of the action of the exponential of an operator on the initial vector. Some algorithms tailored to approximate the evolution operator, in fact, do not even need to perform such operator exponentials. Most of them, however, including the ones used by Octopus, do rely on the calculation of one or more exponentials. This is why we need to specify two different issues: the "evolution method" and the "exponential method". In other words, we need an algorithm to approximate the evolution operator, which is specified by the variable TDPropagator, and, if this algorithm requires it, we will also need an algorithm to approximate the exponential of a matrix or operator, which is specified by the variable TDExponentialMethod.
Propagators for the time-dependent Kohn-Sham equations
In the following, we describe the propagators available in Octopus for the time-dependent Kohn-Sham equations. These propagators solve the problem of approximating the orbitals $\varphi_j(t+\Delta t)$ from the knowledge of $\varphi_j(\tau)$ and $\hat{H}(\tau)$ for $0 \le \tau \le t$. Some methods require the knowledge of the Hamiltonian at some points in time between $t$ and $t+\Delta t$.
This quantity can be approximated by the following procedure:
- Approximate $\hat{H}(t+\Delta t)$ through extrapolation of a polynomial fit to a certain number of previous time steps.
- Propagate $\varphi_j(t)$ to get $\varphi_j(t+\Delta t)$.
- Calculate $\hat{H}(t+\Delta t)$ from the orbitals $\varphi_j(t+\Delta t)$.
- Interpolate the required $\hat{H}(\tau)$, $t < \tau < t+\Delta t$, from $\hat{H}(t)$ and $\hat{H}(t+\Delta t)$.
- Repeat steps 2 to 4 until self-consistency is reached.
In Octopus, however, the above scheme is dropped for performance reasons and only step 1 is implemented, via a second-order extrapolation, except for the first two time steps where the extrapolation obviously cannot be trusted. Instead of iterating to self-consistency, we rely on a sufficiently small $\Delta t$.
The implicit midpoint rule or Crank-Nicolson method (TDPropagator = crank_nicolson) approximates the exponential of the Hamiltonian by a first-order Padé approximation. The Hamiltonian is evaluated at $t+\Delta t/2$ and the integral is dropped:

$$ \hat{U}_{\rm CN}(t+\Delta t, t) = \frac{1 - i\frac{\Delta t}{2}\hat{H}(t+\Delta t/2)}{1 + i\frac{\Delta t}{2}\hat{H}(t+\Delta t/2)}\,. $$

The calculation of the matrix fraction is transformed into the solution of the linear system

$$ \hat{L}\,\varphi_j(t+\Delta t) = b_j $$

with the known quantities

$$ \hat{L} = 1 + i\frac{\Delta t}{2}\hat{H}(t+\Delta t/2)\,, \qquad b_j = \left[1 - i\frac{\Delta t}{2}\hat{H}(t+\Delta t/2)\right]\varphi_j(t)\,. $$
The Crank-Nicolson scheme is unitary and preserves time-reversal symmetry.
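The unitarity of one Crank-Nicolson step can be checked directly on a toy problem. The sketch below (plain Python, not Octopus internals; the 2x2 Hermitian "Hamiltonian" and the time step are invented) solves the linear system for one step and verifies that the norm of the orbital is preserved.

```python
# Sketch: one Crank-Nicolson step for a 2x2 Hermitian H.
# Solve (1 + i dt/2 H) psi_new = (1 - i dt/2 H) psi_old and check that
# the step is unitary (the norm of psi is preserved).
import math

H = [[1.0, 0.5], [0.5, -1.0]]   # an arbitrary Hermitian toy Hamiltonian
dt = 0.01
psi = [1.0 + 0j, 0.0 + 0j]

# Build A = 1 + i dt/2 H and b = (1 - i dt/2 H) psi
A = [[(1 if i == j else 0) + 0.5j * dt * H[i][j] for j in range(2)]
     for i in range(2)]
b = [psi[i] - 0.5j * dt * sum(H[i][j] * psi[j] for j in range(2))
     for i in range(2)]

# Solve the 2x2 linear system A psi_new = b by Cramer's rule
det = A[0][0] * A[1][1] - A[0][1] * A[1][0]
psi_new = [(b[0] * A[1][1] - b[1] * A[0][1]) / det,
           (A[0][0] * b[1] - A[1][0] * b[0]) / det]

norm = math.sqrt(sum(abs(c) ** 2 for c in psi_new))
print(abs(norm - 1.0) < 1e-12)  # True: Crank-Nicolson is exactly unitary
```

In a real calculation the linear system is of course solved iteratively, since $\hat{H}$ is never stored as a dense matrix.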
A similar but simpler scheme is the exponential midpoint rule (TDPropagator = exp_mid), which is also unitary and time-reversal-symmetry-preserving, but has the drawback that it requires fairly small time steps:

$$ \hat{U}_{\rm EM}(t+\Delta t, t) = \exp\left\{-i\,\Delta t\,\hat{H}(t+\Delta t/2)\right\}. $$
The Magnus expansion (TDPropagator = magnus) is the most sophisticated propagator implemented in Octopus. According to Magnus, there exists an operator $\hat{\Omega}(t+\Delta t, t)$ for which

$$ \hat{U}(t+\Delta t, t) = \exp\left\{\hat{\Omega}(t+\Delta t, t)\right\} $$

holds, given by the series

$$ \hat{\Omega}(t+\Delta t, t) = \sum_{k=1}^{\infty}\hat{\Omega}_k(t+\Delta t, t)\,, $$

which converges at least in some neighborhood of $\Delta t = 0$.
The operators $\hat{\Omega}_k$ are generated by a recursive procedure, each term consisting of nested integrals of commutators of the Hamiltonian at different times, with coefficients involving the Bernoulli numbers $B_j$.
In Octopus we have implemented a fourth-order Magnus expansion, which means that the operator series is truncated to second order and the integrals are calculated by a second-order quadrature formula. The resulting operator is

$$ \hat{\Omega}^{(4)}(t+\Delta t, t) = -i\frac{\Delta t}{2}\left[\hat{H}(t_1)+\hat{H}(t_2)\right] + \frac{\sqrt{3}\,\Delta t^2}{12}\left[\hat{H}(t_1),\hat{H}(t_2)\right] $$

with the quadrature sampling points $t_{1,2} = t + \left(\tfrac{1}{2} \mp \tfrac{\sqrt{3}}{6}\right)\Delta t$.
With a modified Hamiltonian

$$ \bar{H}(t+\Delta t/2) = \frac{1}{2}\left[\hat{H}(t_1)+\hat{H}(t_2)\right] + i\frac{\sqrt{3}\,\Delta t}{12}\left[\hat{H}(t_1),\hat{H}(t_2)\right], $$

the propagator takes the form

$$ \hat{U}_{\rm M4}(t+\Delta t, t) = \exp\left\{-i\,\Delta t\,\bar{H}(t+\Delta t/2)\right\}. $$
Note that only the nonlocal components of the Kohn-Sham Hamiltonian contribute to the commutator and, furthermore, that we assume the nonlocal potential does not vary significantly in the interval $[t, t+\Delta t]$. This nonlocal component stems from the ionic pseudopotentials; consequently, its variation is caused by the ionic movement, which is negligible on the electronic time scale that determines $\Delta t$.
Time-reversal-symmetry based propagation
Due to time-reversal symmetry, propagating backwards by $\Delta t/2$ starting from $\varphi_j(t+\Delta t)$ should lead to the same result as propagating forwards by $\Delta t/2$ starting from $\varphi_j(t)$. Using the simplest approximation to the evolution operator, we end up with the condition

$$ \exp\left\{+i\frac{\Delta t}{2}\hat{H}(t+\Delta t)\right\}\varphi_j(t+\Delta t) = \exp\left\{-i\frac{\Delta t}{2}\hat{H}(t)\right\}\varphi_j(t)\,. $$

Rearranging the terms gives an approximation for the propagator:

$$ \hat{U}_{\rm ETRS}(t+\Delta t, t) = \exp\left\{-i\frac{\Delta t}{2}\hat{H}(t+\Delta t)\right\}\exp\left\{-i\frac{\Delta t}{2}\hat{H}(t)\right\}. $$
This enforced time-reversal symmetry method is available in Octopus by setting TDPropagator = etrs.
It is worthwhile to give a side note on the approximation of $\hat{H}(t+\Delta t)$: for the etrs method the Hamiltonian at time $t+\Delta t$ is not extrapolated, but rather built from the density $n(t+\Delta t)$, which, in turn, is calculated from the orbital estimate

$$ \tilde{\varphi}_j(t+\Delta t) = \exp\left\{-i\,\Delta t\,\hat{H}(t)\right\}\varphi_j(t)\,. $$
There exists, however, also a variant aetrs (approximated enforced time-reversal symmetry) that performs a second-order polynomial extrapolation of $\hat{H}(t-2\Delta t)$, $\hat{H}(t-\Delta t)$, $\hat{H}(t)$ to get $\hat{H}(t+\Delta t)$, and which is about 40 % faster than etrs.
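The time-reversibility of the ETRS recipe can be verified on a toy model. The sketch below is plain Python, not Octopus code; the time-dependent coupling $f(t)$ and all parameters are invented. Because $H(t) = f(t)\sigma_x$ commutes with itself at all times, each half-step exponential is known in closed form, and the backward step (with $\Delta t \to -\Delta t$) undoes the forward step exactly.

```python
# Sketch (toy model): the ETRS step
#   psi(t+dt) = exp(-i dt/2 H(t+dt)) exp(-i dt/2 H(t)) psi(t)
# applied forward and then backward returns the initial state.
import math

def expmx(theta):
    # exp(-i theta sigma_x) = cos(theta) I - i sin(theta) sigma_x
    c, s = math.cos(theta), math.sin(theta)
    return [[c, -1j * s], [-1j * s, c]]

def apply(M, v):
    return [M[0][0] * v[0] + M[0][1] * v[1],
            M[1][0] * v[0] + M[1][1] * v[1]]

f = lambda t: 1.0 + 0.3 * t   # invented time-dependent coupling
t, dt = 0.0, 0.05
psi0 = [1.0 + 0j, 0.0 + 0j]

# Forward ETRS step from t to t+dt
psi = apply(expmx(0.5 * dt * f(t)), psi0)
psi = apply(expmx(0.5 * dt * f(t + dt)), psi)

# Backward ETRS step from t+dt to t (replace dt by -dt)
psi = apply(expmx(-0.5 * dt * f(t + dt)), psi)
psi = apply(expmx(-0.5 * dt * f(t)), psi)

err = max(abs(psi[i] - psi0[i]) for i in range(2))
print(err < 1e-12)  # True: the scheme is exactly time reversible
```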
Approximations to the exponential of an operator
Most of the evolution methods described in the previous section require the calculation of the exponential of a matrix or operator. Before going into the details of the methods that Octopus provides for this purpose, one general difficulty deserves mentioning.
In principle, we would like to calculate $\exp\{\hat{A}\}$ by first building the full matrix of the exponential and then applying it to any vector $v$. Unfortunately, the dimension of our Hamiltonian matrix equals the number of mesh points, which is typically far too large to allow its full storage in matrix form. Apart from that, methods to calculate the exponential of a matrix explicitly are limited to a few thousand matrix elements. For this reason, we have to turn to iterative solutions that calculate $\exp\{\hat{A}\}v$ for one particular vector $v$.
The simplest possibility is to use the definition of the exponential of a matrix,

$$ \exp\{\hat{A}\} = \sum_{k=0}^{\infty}\frac{\hat{A}^k}{k!}\,, $$

truncated at order $k$ (TDExponentialMethod = taylor). This amounts to the expansion of the exponential in the standard polynomial basis $\{v, \hat{A}v, \hat{A}^2 v, \ldots\}$.
Experience shows that the choice $k = 4$ gives particularly good results for our TDDFT implementation.
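A fourth-order Taylor step only ever applies $\hat{A}$ to a vector, which is the whole point of the iterative approach. The sketch below (plain Python, invented toy Hamiltonian, not Octopus code) builds the order-4 approximation of $\exp\{-i\hat{H}\Delta t\}v$ and compares it against the exact result for $H = \sigma_x$.

```python
# Sketch: fourth-order Taylor approximation of exp(-i H dt) v, built from
# repeated applications of H (no matrix exponential is ever stored).
import math

H = [[0.0, 1.0], [1.0, 0.0]]    # sigma_x, so exp(-i H t) is known exactly
dt = 0.01

def hpsi(v):
    return [H[0][0] * v[0] + H[0][1] * v[1],
            H[1][0] * v[0] + H[1][1] * v[1]]

def taylor_step(v, order=4):
    result = list(v)
    term = list(v)
    for k in range(1, order + 1):
        term = [(-1j * dt / k) * c for c in hpsi(term)]
        result = [r + t for r, t in zip(result, term)]
    return result

psi = taylor_step([1.0 + 0j, 0.0 + 0j])
exact = [math.cos(dt) + 0j, -1j * math.sin(dt)]   # exp(-i dt sigma_x) |0>
err = max(abs(psi[i] - exact[i]) for i in range(2))
print(err < 1e-11)  # error is O(dt^5), far below this threshold for dt = 0.01
```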
The standard polynomial basis is not the only possibility; Octopus also implements the expansion of the exponential in the Chebyshev basis (TDExponentialMethod = chebyshev),

$$ \exp\{\hat{A}\}\,v \approx \sum_{k=0}^{k_{\max}} c_k\, T_k(\hat{A})\,v\,, $$

with $T_k$ being the Chebyshev polynomial of order $k$.
The truncation order for both these expansions can be set by the input variable TDExpOrder.
The Krylov subspace of order $k$ for a given operator $\hat{A}$ and vector $v$ is defined as

$$ \mathcal{K}_k(\hat{A}, v) = {\rm span}\left\{v, \hat{A}v, \hat{A}^2 v, \ldots, \hat{A}^{k-1}v\right\}. $$

The idea is to find the element of $\mathcal{K}_k$ that optimally approximates $\exp\{\hat{A}\}v$ in a least-squares sense. It can be proven that this is given by

$$ V_k V_k^{\dagger}\exp\{\hat{A}\}\,v\,, $$

with the columns of $V_k$ forming an orthonormal basis of $\mathcal{K}_k$ calculated with the Lanczos procedure. To get rid of the exponential of the full operator, the approximation

$$ \exp\{\hat{A}\}\,V_k \approx V_k \exp\{H_k\}\,, \qquad H_k = V_k^{\dagger}\hat{A}\,V_k\,, $$

is introduced. The object $H_k$ is also a result of the Lanczos procedure and of the small dimension $k\times k$, which allows for a direct calculation of the remaining exponential. So, we have

$$ \exp\{\hat{A}\}\,v \approx V_k \exp\{H_k\}\,V_k^{\dagger}\,v = \|v\|\;V_k \exp\{H_k\}\,e_1\,. $$
This method can be selected by setting TDExponentialMethod = lanczos. When actually calculating the exponential, Octopus recursively generates the quantities $V_k$ and $H_k$ until a convergence criterion, controlled by TDLanczosTol, is met, or until the order exceeds the maximum allowed order, controlled by TDExpOrder. In the latter case, a warning is written. Care should be taken to choose the convergence parameter small enough to avoid faulty evolutions.
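The Lanczos idea can be sketched on a tiny symmetric matrix. The code below is plain Python with invented toy sizes, not the Octopus implementation: it builds the orthonormal Krylov basis and the tridiagonal projection, exponentiates the small matrix by a plain Taylor series, and compares against a direct series evaluation. With the subspace order equal to the full dimension, the two must agree.

```python
# Sketch: approximate exp(-i dt H) v in a Krylov subspace built with the
# Lanczos recursion; here m = n, so the Krylov result is essentially exact.
import math

H = [[2.0, 1.0, 0.0, 0.0],
     [1.0, 1.0, 0.5, 0.0],
     [0.0, 0.5, -1.0, 0.3],
     [0.0, 0.0, 0.3, 0.5]]
n, m, dt = 4, 4, 0.05
v = [1.0, 0.0, 0.0, 0.0]        # already normalized

def matvec(A, w):
    return [sum(A[i][j] * w[j] for j in range(len(w))) for i in range(len(A))]

def expm_apply(A, w, terms=30):
    # exp(-i dt A) w by a plain Taylor series (fine here: |dt A| is small)
    result = [complex(c) for c in w]
    term = [complex(c) for c in w]
    for k in range(1, terms):
        term = [(-1j * dt / k) * c for c in matvec(A, term)]
        result = [r + t for r, t in zip(result, term)]
    return result

# Lanczos: orthonormal basis V of K_m(H, v) and tridiagonal coefficients
V, alpha, beta = [list(v)], [], []
for j in range(m):
    z = matvec(H, V[j])
    if j > 0:
        z = [z[i] - beta[j - 1] * V[j - 1][i] for i in range(n)]
    a = sum(z[i] * V[j][i] for i in range(n))
    alpha.append(a)
    z = [z[i] - a * V[j][i] for i in range(n)]
    for q in V:                  # full reorthogonalization for robustness
        c = sum(z[i] * q[i] for i in range(n))
        z = [z[i] - c * q[i] for i in range(n)]
    if j < m - 1:
        b = math.sqrt(sum(c * c for c in z))
        beta.append(b)
        V.append([c / b for c in z])

Hm = [[alpha[i] if i == j else
       (beta[min(i, j)] if abs(i - j) == 1 else 0.0)
       for j in range(m)] for i in range(m)]

# psi ~ ||v|| V exp(-i dt Hm) e1   (here ||v|| = 1)
y = expm_apply(Hm, [1.0] + [0.0] * (m - 1))
psi_krylov = [sum(V[k][i] * y[k] for k in range(m)) for i in range(n)]
psi_ref = expm_apply(H, v)

err = max(abs(psi_krylov[i] - psi_ref[i]) for i in range(n))
print(err < 1e-10)  # True: with m = n the Krylov result matches
```

In practice the payoff comes from choosing $m$ much smaller than the full dimension, which is why the convergence tolerance matters.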
Performing a time-propagation
To actually perform a time-propagation, a necessary prerequisite is to have the ground-state density (see Ground State) of the system under consideration.
Besides the already mentioned input variables concerning the propagation method, two more important parameters have to be set: TDTimeStep and TDMaximumIter. The first one sets $\Delta t$; the second sets the number of propagation steps. It is convenient to define these two variables like this:
T = 0.1              # Length of propagation.
dt = 0.002           # Length of time step.
TDMaximumIter = T/dt
TDTimeStep = dt
In the absence of a perturbation, the system's total energy should be constant. Checking that it is indeed conserved is a convenient way to determine a reasonable time step, and should be done before any other calculation.
The relevant parts of Octopus' output might look like
Iter           Time        Energy   Elapsed Time
   1       0.002000   -541.542073          4.800
   2       0.004000   -541.542073          4.560
   3       0.006000   -541.542073          3.880
 ...
  48       0.096000   -541.542073          7.420
  49       0.098000   -541.542073          7.620
  50       0.100000   -541.542073          7.490
The energy is printed in the third column and should remain constant. In general, larger time steps are desirable to shorten the computational propagation time, but keep in mind that the size of the time step limits the maximum frequency you will be able to observe, and, consequently, also the maximum frequency of any external perturbation to which you might wish to expose the system.
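The frequency limit follows the sampling argument made earlier. As a quick back-of-the-envelope check (assuming the Nyquist-type bound $\omega_{\max} = \pi/\Delta t$ in atomic units; the numbers are just the ones from the example run above):

```python
# Sketch: largest frequency resolvable with a given time step,
# assuming the Nyquist-type bound omega_max = pi / dt (atomic units).
import math

dt = 0.002                    # time step from the example run above
omega_max = math.pi / dt      # highest observable frequency
print(1500 < omega_max < 1600)  # True: about 1570 for this dt
```

For spectra in the usual optical range this is far more headroom than needed, which is why the time step is in practice limited by the propagator's stability rather than by frequency resolution.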
Delta kick: Calculating an absorption spectrum
To obtain the linear optical absorption spectrum of the system, we follow the scheme proposed by Yabana and Bertsch and excite all frequencies of the system by giving a small momentum $\kappa$ to the electrons. This is achieved by transforming the ground-state wavefunctions according to

$$ \varphi_j(\mathbf{r}, \delta t) = e^{i\kappa z}\,\varphi_j(\mathbf{r}, 0) $$
and then propagating these wavefunctions for some (finite) time. The spectrum can then be obtained from the expression for the dipole strength function $S(\omega)$,

$$ S(\omega) = \frac{2\omega}{\pi}\,{\rm Im}\,\alpha(\omega)\,, $$

where the dynamical polarizability $\alpha(\omega)$ is essentially the Fourier transform of the dipole moment of the system $\langle z\rangle(t)$:

$$ \alpha(\omega) = \frac{1}{\kappa}\int dt\; e^{i\omega t}\left[\langle z\rangle(t) - \langle z\rangle(0)\right]. $$

With this definition, the Thomas-Reiche-Kuhn f-sum rule for the number of electrons $N$ is given by the integral

$$ N = \int_0^{\infty} d\omega\; S(\omega)\,. $$
This sum rule can be used to check the quality of the calculations. Another check is energy conservation, which TDDFT respects when no external field is applied.
To obtain a spectrum with such a recipe in Octopus one has to follow the steps:
- Choose a system of linearly independent (possibly non-orthogonal) axes, defined by the variable TDPolarization. The defaults are the 3 Cartesian axes.
- Add an electric kick using the variable TDDeltaStrength. Its value should be small so that we remain in the linear regime, but large enough to avoid numerical errors.
- Run a time-dependent simulation for the 3 independent directions. This involves running Octopus 3 times, changing the value of TDPolarizationDirection each time. After each run, move the file td.general/multipoles to td.general/multipoles.X, where X is 1, 2, or 3.
- Run the utility oct-propagation_spectrum.
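The Fourier analysis underlying this post-processing can be sketched with toy numbers (plain Python, invented frequency, damping, and grid; this is not the actual algorithm of oct-propagation_spectrum): the dipole signal of a kicked system oscillates at the excitation energies, so a Fourier transform of the damped signal shows peaks at those frequencies.

```python
# Sketch: a delta kick excites all frequencies; the imaginary part of the
# Fourier transform of the dipole signal peaks at the excitation energy.
import cmath, math

omega0, eta, dt, nsteps = 2.0, 0.1, 0.05, 2000   # invented toy parameters
dipole = [math.sin(omega0 * n * dt) * math.exp(-eta * n * dt)
          for n in range(nsteps)]

def spectrum(omega):
    # imaginary part of the Fourier transform of the dipole signal
    return sum(d * cmath.exp(1j * omega * n * dt)
               for n, d in enumerate(dipole)).imag * dt

omegas = [0.05 * k for k in range(1, 100)]   # scan 0.05 ... 4.95
values = [spectrum(w) for w in omegas]
peak = omegas[values.index(max(values))]
print(abs(peak - omega0) <= 0.05)  # peak sits at the excitation frequency
```

The finite propagation time plays the role of the damping here: the longer the run, the sharper the peaks.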
To calculate non-linear optical properties, we follow the evolution of the system under the influence of a laser field that is treated in the dipole approximation (although this constraint can be removed). The harmonic emission spectrum $H(\omega)$ can then be calculated from the acceleration of the dipole moment:

$$ H(\omega) \propto \left|\int dt\; e^{i\omega t}\,\frac{d^2}{dt^2}\langle \mathbf{D}\rangle(t)\right|^2. $$
During the propagation, charge density is absorbed at the boundaries of the simulation region, either by an imaginary absorbing potential or by a mask function. In the first case, we add to the Kohn-Sham potential a term

$$ -i\,V_{\rm abs}(\mathbf{r})\,, $$

where $V_{\rm abs}$ is zero in the inner region of the simulation box and rises smoothly towards the edges. By adjusting both the height and the shape of this potential, we can select which momenta are absorbed and prevent unwanted reflections at the boundary. When using a mask, the wavefunction is multiplied in each time step by a function $M(\mathbf{r})$ which is 1 in the inner simulation region and gradually goes to 0 at the borders:

$$ \varphi_j(\mathbf{r}, t) \to M(\mathbf{r})\,\varphi_j(\mathbf{r}, t)\,. $$
The absorbed charge can be interpreted as an ionization probability and can be used to estimate the photo-electron spectra. The box size has to be big enough so that the physical system is not perturbed by the absorbing boundaries. Note that the wavefunctions are no longer normalized as the system slowly gets charged.
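A minimal 1D sketch of the mask mechanism (plain Python; the box size, grid, mask shape, and wavefunction are all invented for illustration):

```python
# Sketch: multiplying the wavefunction by a mask that drops smoothly from 1
# to 0 near the box edges removes the density reaching the boundary; the
# lost norm can be read as an absorbed (ionized) fraction.
import math

L, npts = 20.0, 401                     # box [-10, 10] with 401 points
x = [-L / 2 + L * i / (npts - 1) for i in range(npts)]
dx = L / (npts - 1)

def mask(xi, width=4.0):
    # 1 in the interior, smooth cos^2 decay over the last `width` of the box
    edge = L / 2 - width
    if abs(xi) <= edge:
        return 1.0
    return math.cos(0.5 * math.pi * (abs(xi) - edge) / width) ** 2

# A broad Gaussian whose tails already reach the absorbing region
psi = [math.exp(-0.5 * (xi / 4.0) ** 2) for xi in x]
norm0 = sum(p * p for p in psi) * dx
psi = [mask(xi) * p for xi, p in zip(x, psi)]
norm1 = sum(p * p for p in psi) * dx

print(norm1 < norm0)  # True: the mask absorbs outgoing density
```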
The dynamic polarizability (trivially related to optical absorption) is, in its most general form, a 3x3 tensor. The reason is that we can shine light on the system polarized along any of the three Cartesian axes and, for each of these three cases, measure how the dipole of the molecule oscillates along the three Cartesian axes. This means that to obtain the full dynamic polarizability of the molecule we usually need to apply 3 different perturbations, by setting TDPolarizationDirection to 1, 2, or 3.
However, if the molecule has some symmetries, it is in general possible to reduce the total number of calculations from 3 to 2, or even 1. To use this formalism in Octopus, you can use the variables TDPolarization, TDPolarizationDirection, TDPolarizationEquivAxes, and TDPolarizationWprime. The TDPolarization block defines a basis (not necessarily orthogonal) chosen to maximize the number of equivalent axes, while TDPolarizationDirection and TDPolarizationWprime are vectors specified in that basis. Let us give a couple of examples.
The methane molecule has Td symmetry, which means that the response is identical for all directions, so we only need one propagation to obtain the whole tensor. This propagation can be performed along any direction we wish, so we could use the input
%TDPolarization
 1 | 0 | 0
 0 | 1 | 0
 0 | 0 | 1
%
TDPolarizationDirection = 1
TDPolarizationEquivAxes = 3
%TDPolarizationWprime
 0 | 0 | 1
%
Note that we could have omitted the blocks TDPolarization and TDPolarizationWprime in the previous input file, as these are their default values.
Now let us look at a linear molecule. In this case, you might think that we need two calculations to obtain the whole tensor: one for the direction along the axis of the molecule, and another for an axis perpendicular to it. In fact we need only one, in a specially chosen direction, such that our field has components both along the axis of the molecule and perpendicular to it. Let us assume that the axis of the molecule is oriented along the x-axis. Then we can use
%TDPolarization
 1/sqrt(2) | -1/sqrt(2) | 0
 1/sqrt(2) |  1/sqrt(2) | 0
 1/sqrt(2) |  0         | 1/sqrt(2)
%
TDPolarizationDirection = 1
TDPolarizationEquivAxes = 3
%TDPolarizationWprime
 0 | 0 | 1
%
You should try to convince yourself that the three axes are indeed equivalent and linearly independent.
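The check suggested above can also be done numerically. The sketch below (plain Python, assuming the molecule lies along the x-axis as in the example) verifies that the three axes are linearly independent and that they all make the same angle with the molecular axis, which is what "equivalent" means here.

```python
# Sketch: verify the linear-molecule polarization basis is linearly
# independent (nonzero determinant) and equivalent (equal projection
# onto the molecular x-axis).
import math

s = 1.0 / math.sqrt(2.0)
axes = [[s, -s, 0.0],
        [s,  s, 0.0],
        [s, 0.0,  s]]

# Linear independence: determinant of the basis must be nonzero
det = (axes[0][0] * (axes[1][1] * axes[2][2] - axes[1][2] * axes[2][1])
     - axes[0][1] * (axes[1][0] * axes[2][2] - axes[1][2] * axes[2][0])
     + axes[0][2] * (axes[1][0] * axes[2][1] - axes[1][1] * axes[2][0]))
print(abs(det) > 1e-12)  # True

# Equivalence: equal projection onto the molecular (x) axis
projections = [a[0] for a in axes]
print(max(projections) - min(projections) < 1e-12)  # True
```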
Finally, let us look at a general planar molecule (in the xy plane). In principle we need only two calculations (reduced to one if more symmetries are present, as for benzene). In this case we choose one of the polarization axes in the plane, and the other two rotated 45 degrees out of plane:
%TDPolarization
 1/sqrt(2) | 0 |  1/sqrt(2)
 1/sqrt(2) | 0 | -1/sqrt(2)
 0         | 1 |  0
%
TDPolarizationEquivAxes = 2
In this case, we need two runs, one with TDPolarizationDirection equal to 1, and another with it equal to 3. Note that if there are fewer than 3 equivalent axes, TDPolarizationWprime is irrelevant.