# FAQ

## Compilation

### Could not find library...

This is probably the most common error you can get. octopus uses several different libraries, the most important of which are GSL, FFTW, and BLAS/LAPACK. We assume that you have already installed these libraries but that, for some reason, you were not able to compile the code. So, what went wrong?

• Did you pass the correct --with-XXXX option (where XXXX is gsl, fftw, or lapack, in lowercase) to the configure script? If your libraries are installed in a non-standard directory (like /opt/lapack), you will have to pass the script the location of the library (in this example, you could try ./configure --with-lapack='-L/opt/lapack -llapack').
• If you are working on an alpha station, do not forget that the CXML library includes BLAS and LAPACK, so it can be used by octopus. If needed, just set the correct path with --with-lapack.
• If the configuration script cannot find FFTW, it is probable that you did not compile FFTW with the same Fortran compiler or with the same compiler options. The basic problem is that Fortran sometimes converts function names to uppercase, at other times to lowercase, and it may append an underscore ("_") to them, or even two. Obviously all libraries and the program have to use the same convention, so it is best to compile everything with the same Fortran compiler/options. If you are a power user, you can check the convention used by your compiler with the command nm <library>.
• Unfortunately, libraries compiled with one Fortran compiler are very often not compatible with files compiled with another Fortran compiler. In order to avoid problems, please make sure that all libraries are compiled with the same Fortran compiler.
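The points above can be condensed into a sketch. The paths below are examples, not defaults, and the exact --with-XXXX flag names may differ between versions, so check ./configure --help first:

```shell
# Example configure invocation for libraries in non-standard locations.
# All paths are placeholders; adjust them to your system.
./configure --with-lapack='-L/opt/lapack -llapack' \
            --with-fftw='-L/opt/fftw/lib -lfftw3'

# Check the Fortran name-mangling convention used inside a library:
# look for trailing underscores (e.g. dgemm_, dgemm__) or uppercase (DGEMM).
nm /opt/lapack/liblapack.a | grep -i dgemm
```

If the symbols printed by nm do not match what your Fortran compiler emits, recompile the library with that compiler.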

To compile the parallel version of the code, you will also need MPI (mpich or LAM work just fine).

Sometimes, when you run octopus, you stumble upon the following error:

octopus: error while loading shared libraries: libXXX.so: cannot open
shared object file: No such file or directory


This is a classic problem with the dynamic linker. Octopus was compiled dynamically, but the dynamic libraries (the .so files) are in a place that is not recognized by the dynamic loader. So, the solution is to tell the dynamic linker where the library is.

The first thing you should do is find where the library is located on your system. Try doing a locate libXXX.so, for example. Let us imagine that the library is in the directory /opt/intel/compiler70/ia32/lib/ (this is where the ifc7 libraries are by default). Now, if you do not have root access to the machine, just type (using the bash shell)

> export LD_LIBRARY_PATH=/opt/intel/compiler70/ia32/lib/:$LD_LIBRARY_PATH
> octopus

or

> LD_LIBRARY_PATH=/opt/intel/compiler70/ia32/lib/:$LD_LIBRARY_PATH octopus

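Put together, a minimal sketch of the whole procedure looks like this (the directory is the same example as above; substitute whatever locate reports on your machine):

```shell
# Suppose 'locate libXXX.so' reported the library in this directory
# (an example path; use the one from your own system):
libdir=/opt/intel/compiler70/ia32/lib
# Prepend it so the dynamic loader searches it first; the parameter
# expansion avoids a trailing ':' when LD_LIBRARY_PATH was empty.
export LD_LIBRARY_PATH=$libdir${LD_LIBRARY_PATH:+:$LD_LIBRARY_PATH}
echo "$LD_LIBRARY_PATH"
```

After this, running octopus in the same shell should find the library.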

If you have root control over the machine, you can use a more permanent alternative. Just add a line containing /opt/intel/compiler70/ia32/lib/ to the file /etc/ld.so.conf. This file tells the dynamic linker where to find the .so libraries. Then you have to update the cache of the linker by typing

> ldconfig


A third solution is to compile octopus statically. This is quite simple on some systems (just add -static to LDFLAGS, or something like that), but not on others (with my home ifc7 setup it's a real mess!)

### What is METIS?

When running parallel in "domains", octopus divides the simulation region (the box) into separate regions (domains) and assigns each of them to a different processor. This not only speeds up the calculation, but also divides the memory among the different processors. The first step of this process, the splitting of the box, is in general a very complicated problem: we are talking about a simulation box of almost arbitrary shape, and an arbitrary number of processors. Furthermore, as the communication between processors grows proportionally to the surface of the domains, one should use an algorithm that divides the box such that each domain has the same number of points while at the same time minimizing the total surface area of the domains. The METIS library does just that. If you want to run octopus parallel in domains, you are required to use it (or its parallel version, ParMETIS). Currently METIS is included in the Octopus distribution and is compiled by default when MPI is enabled.

## Running

### Sometimes I get a segmentation fault when running Octopus

The most typical cause of segmentation faults is a limited stack size. Some compilers, especially the Intel one, use the stack to create temporary arrays, and when running large calculations the default stack size might not be enough. The solution is to remove the limit on the stack size by running the command

ulimit -s unlimited


Segmentation faults can also be caused by other problems, like a faulty compilation (linking against libraries compiled with a different Fortran compiler, for example) or a bug in the code.
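A quick way to inspect and lift the limit from the shell (ulimit is a built-in of bash and other POSIX shells; raising the limit may be refused if the administrator set a lower hard limit):

```shell
ulimit -s                                  # show the current stack limit (in kB, or "unlimited")
ulimit -s unlimited 2>/dev/null || true    # try to lift it; may be refused by a hard limit
ulimit -s                                  # verify the new setting
```

Put these lines in your shell startup file (or your batch-job script) so the limit is lifted every time octopus runs.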

### How do I run parallel in domains?

First of all, you must have a version of octopus compiled with support for the METIS library. This is a very useful library that takes care of the division of the space into domains. Then you just have to run octopus in parallel (this step depends on your actual system; you may have to use mpirun or mpiexec to accomplish it).
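As an illustration only (the launcher name and its flags depend on your MPI installation and batch system; the process count is an arbitrary example):

```shell
# Launch an 8-process run; with some MPI installations the launcher is
# mpiexec and the flag is -n instead of -np.
mpirun -np 8 octopus
```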

In some run modes (e.g., td), you can use multi-level parallelization, i.e., run in parallel in more than one way at the same time. In the td case, you can run parallel in states and in domains at the same time. In order to fine-tune this behavior, please take a look at the variables ParallelizationStrategy and ParallelizationGroupRanks. In order to check if everything is OK, take a look at the output of octopus in section "Parallelization". This is an example:

************************** Parallelization ***************************
Octopus will run in *parallel*
Info: Number of nodes in par_states  group:     8 (      62)
Info: Octopus will waste at least  9.68% of computer time
**********************************************************************


In this case, octopus runs in parallel only in states, using 8 processors (for 62 states). Furthermore, some of the processors will be idle 9.68% of the time (this is not so great, so maybe a different number of processors would be better in this case).

### How do I visualize 3D stuff?

Our preferred visualization tool is openDX. This is perhaps the most powerful 3D visualization tool for scientific data, and is highly versatile and sophisticated. However, this does not come for free: openDX is notoriously difficult to learn and to use. Anyway, in our opinion, its advantages clearly compensate for this problem.

The good news is that we have done most of the dirty work for you! We have developed a small dx application that takes care of most of the details. If you want to try it, start by installing openDX. This is simpler on some machines than on others. For example, on my Fedora Core 6 box, I simply have to type

 yum install dx dx-devel dx-samples


Note that we also need the development package. Next we have to install the chemical extensions to openDX. You can find some instructions here.

Now generate some files for visualization. These can be either in .dx or .ncdf format (see Output and OutputHow). Next copy the files [prefix]/octopus/share/util/mf.net and [prefix]/octopus/share/util/mf.cfg to your working directory and type

dx mf.net


Then in the dx menus, choose Windows>Open Control Panel by Name>Main. You will see a dialog box with some options. Play with it!

Also see Manual:Visualization.

### Out of memory?

Q: From time to time, I obtain the following error when I perform some huge memory-demanding calculations:

**************************** FATAL ERROR *****************************
*** Fatal Error (description follows)
*--------------------------------------------------------------------
* Failed to allocate     185115 Kb in file 'gs.F90' line    62
*--------------------------------------------------------------------
* Stack:
**********************************************************************


Could it be related to the fact that the calculation demands more memory than available in the computer?

A: Octopus doesn't allocate memory in one big piece; it allocates small chunks as needed. So when the machine is out of memory, an allocation can fail even when asking for an innocent amount of memory. So yes, if you see this error it is likely that you are running out of memory.

If you compiled on a 32-bit machine, you will be limited to a little more than 2 GB of address space per process.
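To check whether your environment is 32- or 64-bit, the POSIX getconf utility reports the word size:

```shell
getconf LONG_BIT   # prints 32 or 64
```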

## Varia

### How do the version numbers in Octopus work?

Each stable release is identified by two numbers (for example x.y). The first number indicates a particular version of Octopus and the second one the release. Before the 6.0 release (2016) Octopus used a three-number scheme, with the first two numbers being the version and the third one the release.

An increase in the release number indicates that this release contains bug fixes and minor changes over the previous version, but that no new features were added (it may happen that we disable some features that are found to contain serious bugs).

Development versions (that you can get from the git repository) do not have a version number, they only have a code name. For code names we use the scientific names of Octopus species. These are the code names that we have used so far (this scheme was started after the release of version 3.2):

• Octopus selene (moon octopus): current development version.
• Octopus australis (hammer octopus): 9.x
• Octopus wolfi (star-sucker pygmy octopus): 8.x
• Octopus mimus (Gould octopus): 7.x
• Octopus tetricus (common Sydney octopus or gloomy octopus): 6.x
• Octopus superciliosus (frilled pygmy octopus): 5.0.x
• Octopus nocturnus: 4.1.x
• Octopus vulgaris (common octopus): 4.0.x

For the units used by Octopus, see Units.

### How do I cite octopus?

See Citing Octopus.