FAQ

From OctopusWiki

Latest revision as of 18:50, 22 April 2019

This is a collection of frequently asked questions about octopus. For other queries, please subscribe to the octopus-users mailing list and ask there.

Compilation

Could not find library...

This is probably the most common error you can get. octopus uses several different libraries, the most important of which are gsl, fftw, and blas/lapack. We assume that you have already installed these libraries but, for some reason, you were not able to compile the code. So, what went wrong?

  • Did you pass the correct --with-XXXX (where XXXX is gsl, fftw or lapack in lowercase) to the configure script? If your libraries are installed in a non-standard directory (like /opt/lapack), you will have to pass the script the location of the library (in this example, you could try ./configure --with-lapack='-L/opt/lapack -llapack').
  • If you are working on an Alpha workstation, do not forget that the CXML library includes BLAS and LAPACK, so it can be used by octopus. If needed, just set the correct path with --with-lapack.
  • If the configuration script cannot find FFTW, it is probable that you did not compile FFTW with the same Fortran compiler or with the same compiler options. The basic problem is that Fortran sometimes converts the function names to uppercase, at other times to lowercase, and it can add an "_" to them, or even two. Obviously all libraries and the program have to use the same convention, so the best is to compile everything with the same Fortran compiler/options. If you are a power user, you can check the convention used by your compiler using the command nm <library>.
  • Unfortunately, libraries compiled with one Fortran compiler are very often not compatible with files compiled with another Fortran compiler. In order to avoid problems, please make sure that all libraries are compiled using the same Fortran compiler.
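The name-mangling check mentioned above can be scripted. Here is a minimal sketch, assuming gfortran is your Fortran compiler (substitute your own): it compiles a one-line subroutine and inspects how the symbol name is mangled.

```shell
# Inspect the Fortran name-mangling convention of your compiler
# (assumes gfortran; substitute your own compiler name).
command -v gfortran >/dev/null || { echo "gfortran not found; skipping"; exit 0; }
cat > conftest.f90 <<'EOF'
subroutine my_test_sub()
end subroutine my_test_sub
EOF
gfortran -c conftest.f90
# Extract the mangled symbol, e.g. "my_test_sub_" (trailing underscore).
sym=$(nm conftest.o | grep -io 'my_test_sub_*')
echo "mangled name: $sym"
rm -f conftest.f90 conftest.o
```

Every library linked into octopus must show the same convention for its Fortran symbols, otherwise the linker will report undefined references.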

To compile the parallel version of the code, you will also need MPI (mpich or LAM work just fine).

Error while loading shared libraries

Sometimes, when you run octopus, you stumble upon the following error:

octopus: error while loading shared libraries: libXXX.so: cannot open
shared object file: No such file or directory

This is a classical problem of the dynamic linker. Octopus was compiled dynamically, but the dynamic libraries (the .so files) are in a place that the dynamic loader does not search. So, the solution is to tell the dynamic linker where the library is.

The first thing you should do is to find where the library is located on your system. Try doing a locate libXXX.so, for example. Let us imagine that the library is in the directory /opt/intel/compiler70/ia32/lib/ (this is where the ifc7 libraries are by default). Now, if you do not have root access to the machine, just type (using the bash shell)

> export LD_LIBRARY_PATH=/opt/intel/compiler70/ia32/lib/:$LD_LIBRARY_PATH
> octopus

or

> LD_LIBRARY_PATH=/opt/intel/fc/9.0/lib/:$LD_LIBRARY_PATH octopus

If you have root control over the machine, you can use a more permanent alternative. Just add a line containing /opt/intel/compiler70/ia32/lib/ to the file /etc/ld.so.conf. This file tells the dynamic linker where to find the .so libraries. Then you have to update the cache of the linker by typing

> ldconfig

A third solution is to compile octopus statically. This is quite simple on some systems (just add -static to LDFLAGS, or something like that), but not on others (with my home ifc7 setup it's a real mess!).

What is METIS?

When running parallel in "domains", octopus divides the simulation region (the box) into separate regions (domains) and assigns each of these to a different processor. This lets you not only speed up the calculation, but also divide the memory among the different processors. The first step of this process, the splitting of the box, is in general a very complicated process. Note that we are talking about a simulation box of an almost arbitrary shape, and of an arbitrary number of processors. Furthermore, as the communication between processors grows proportionally to the surface of the domains, one should use an algorithm that divides the box such that each domain has the same number of points, and at the same time minimizes the total area of the domains. The METIS library does just that. If you want to run octopus parallel in domains, you are required to use it (or the parallel version, ParMETIS). Currently METIS is included in the Octopus distribution and is compiled by default when MPI is enabled.
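As a toy illustration of the load-balancing goal (not of METIS itself, which partitions irregular 3D meshes), here is how 62 grid points would be spread as evenly as possible over 8 domains; METIS additionally minimizes the surface area between the domains.

```shell
# Spread N grid points over P domains as evenly as possible:
# the first N % P domains receive one extra point.
N=62; P=8; total=0
for p in $(seq 0 $((P - 1))); do
  n=$(( N / P + (p < N % P ? 1 : 0) ))
  echo "domain $p: $n points"
  total=$(( total + n ))
done
echo "total: $total points"   # prints "total: 62 points"
```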

Running

Sometimes I get a segmentation fault when running Octopus

Segmentation faults are most typically caused by a limited stack size. Some compilers, especially the Intel one, use the stack to create temporary arrays, and when running large calculations the default stack size might not be enough. The solution is to remove the limit on the stack size by running the command

ulimit -s unlimited

Segmentation faults can also be caused by other problems, like a wrong compilation (for example, linking with libraries compiled with a different Fortran compiler) or a bug in the code.
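A minimal check of the current limit before lifting it (bash/sh syntax; the value reported is system-dependent):

```shell
# Show the current soft stack limit (usually in kB, or "unlimited").
before=$(ulimit -s)
echo "stack limit before: $before"
# Lift it; this may be refused if the administrator set a hard limit.
ulimit -s unlimited 2>/dev/null || echo "hard limit prevents 'unlimited'"
echo "stack limit now: $(ulimit -s)"
```

Note that the setting only affects the current shell and the processes started from it, so run it in the same shell or job script that launches octopus.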

How do I run parallel in domains?

First of all, you must have a version of octopus compiled with support for the METIS library. This is a very useful library that takes care of the division of the space into domains. Then you just have to run octopus in parallel (this step depends on your system; you may have to use mpirun or mpiexec to accomplish it).

In some run modes (e.g., td), you can use multi-level parallelization, i.e., run in parallel in more than one way at the same time. In the td case, you can run parallel in states and in domains at the same time. In order to fine-tune this behavior, please take a look at the variables ParallelizationStrategy and ParallelizationGroupRanks. In order to check if everything is OK, take a look at the output of octopus in section "Parallelization". This is an example:

************************** Parallelization ***************************
Octopus will run in *parallel*
Info: Number of nodes in par_states  group:     8 (      62)
Info: Octopus will waste at least  9.68% of computer time
**********************************************************************

In this case, octopus runs in parallel only in states, using 8 processors (for 62 states). Furthermore, some of the processors will be idle for 9.68% of the time (this is not so great, so maybe a different number of processors would be better in this case).
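In practice the launch line is system-dependent; here is a hypothetical sketch (the launcher name mpirun and the -np flag are assumptions — your system may use mpiexec or srun instead):

```shell
# Hypothetical launch of octopus on 8 MPI processes; the launcher name
# and its flags depend on your MPI installation (mpiexec, srun, ...).
NPROCS=8
launch="mpirun -np $NPROCS octopus"
echo "$launch"
```

After the run, check the "Parallelization" section of the output, as shown above, to see how the processors were actually assigned.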

How to center the molecules?

See Manual:External utilities:oct-center-geom.

How do I visualize 3D stuff?

Our preferred visualization tool is openDX. This is perhaps the most powerful 3D visualization tool for scientific data, and it is highly versatile and sophisticated. However, this does not come for free: openDX is notoriously difficult to learn and to use. Still, in our opinion, its advantages clearly compensate for this drawback.

The good news is that we have done most of the dirty work for you! We have developed a small dx application that takes care of most of the details. If you want to try it, start by installing openDX. This is simpler on some machines than on others. For example, on my Fedora Core 6 system, I simply have to type

 yum install dx dx-devel dx-samples

Note that we also need the development package. Next we have to install the chemical extensions to openDX. You can find some instructions here.

Now generate some files for visualization. These can be either in .dx or .ncdf format (see Output and OutputHow). Next copy the files [prefix]/octopus/share/util/mf.net and [prefix]/octopus/share/util/mf.cfg to your working directory and type

dx mf.net

Then in the dx menus, choose Windows>Open Control Panel by Name>Main. You will see a dialog box with some options. Play with it!

Also see Manual:Visualization.

Out of memory?

Q: From time to time, I obtain the following error when I perform some huge memory-demanding calculations:

**************************** FATAL ERROR *****************************
*** Fatal Error (description follows)
*--------------------------------------------------------------------
* Failed to allocate     185115 Kb in file 'gs.F90' line    62
*--------------------------------------------------------------------
* Stack:
**********************************************************************

Could it be related to the fact that the calculation demands more memory than available in the computer?

A: Octopus doesn't allocate memory in one big piece; it allocates small chunks as needed. So when the calculation asks for more memory than is available, an allocation can fail even for an innocently small amount. So yes, if you see this it is likely that you are running out of memory.

If you compiled on a 32-bit machine, you will be limited to a little more than 2 GB.
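To see how much memory is actually available before launching a big run, you can check the kernel's accounting and the per-process limit (Linux-specific; /proc/meminfo does not exist on other systems):

```shell
# Report available memory and the per-process virtual-memory limit (Linux).
avail=$(grep MemAvailable /proc/meminfo)
echo "$avail"
echo "virtual memory limit: $(ulimit -v)"
```

If the "Failed to allocate" amount plus the memory already in use exceeds MemAvailable, the run simply does not fit on this machine.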

Varia

How do the version numbers in Octopus work?

Each stable release is identified by two numbers (for example x.y). The first number indicates a particular version of Octopus and the second one the release. Before the 6.0 release (2016) Octopus used a three-number scheme, with the first two numbers being the version and the third one the release.

An increase in the revision number indicates that this release contains bug fixes and minor changes over the previous version, but that no new features were added (it may happen that we disable some features that are found to contain serious bugs).

Development versions (which you can get from the git repository) do not have a version number; they only have a code name. For code names we use the scientific names of Octopus species. These are the code names that we have used so far (this scheme was started after the release of version 3.2):

  • Octopus selene (moon octopus): current development version.
  • Octopus australis (hammer octopus): 9.x
  • Octopus wolfi (star-sucker pygmy octopus): 8.x
  • Octopus mimus (Gould octopus): 7.x
  • Octopus tetricus (common Sydney octopus or gloomy octopus): 6.x
  • Octopus superciliosus (frilled pygmy octopus): 5.0.x
  • Octopus nocturnus: 4.1.x
  • Octopus vulgaris (common octopus): 4.0.x

Units

See Units

How do I cite octopus?

See Citing Octopus.