HPC doc.



On Baobab, many applications are already installed. Not every application is listed below; to get a complete list, use Module - lmod once logged in. Example:

module spider


TensorFlow

Please see TensorFlow on gitlab for some examples and scripts related to TensorFlow.


Keras

Please see Keras on gitlab for some examples and scripts related to Keras.

Ansys Fluent

You can use Fluent on Baobab. To do so, please see Fluent on gitlab and adapt the sbatch script.


Fall3D

You can use Fall3D on Baobab. To do so, please see Fall3d on gitlab and adapt the sbatch script.


Meep

See Meep on gitlab for an example of how to use Meep.


Matlab

Matlab is available on Baobab in versions 2013, 2014, 2016 and 2016b.

Keep in mind that it’s a licensed program and that the licenses are shared with the whole university. To be fair to the other users, we have set up a limit on the number of licenses you can use. We kindly ask you to specify in your sbatch file that you are using Matlab, in order to keep the limit effective. If you are using a licensed toolbox, such as Wavelet_Toolbox, you need to specify it as well. If you don’t, your job may be killed without further notice in case we run out of licenses.

Example to specify that you need Matlab:
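The example line itself is missing from this copy. A minimal sketch using SLURM's generic --licenses option; the license name matlab is an assumption, so check `scontrol show lic` for the exact names on Baobab:

```shell
# request one Matlab license (license name is an assumption; verify with: scontrol show lic)
#SBATCH --licenses=matlab
```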


Example to specify that you need the Wavelet_Toolbox:
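The example line itself is missing from this copy. A minimal sketch, again using SLURM's --licenses option with a comma-separated list; both license names are assumptions, so verify them with `scontrol show lic`:

```shell
# request the Matlab license plus the Wavelet_Toolbox license
# (license names are assumptions; verify with: scontrol show lic)
#SBATCH --licenses=matlab,Wavelet_Toolbox
```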


See the licenses available on Baobab:

scontrol show lic

If you need a license not listed here, please ask us at


You need to specify at LEAST the Matlab license, and zero or more toolboxes.

To run Matlab in batch mode, you can create a batch file like this one:

#!/bin/env bash
#SBATCH --cpus-per-task=1
#SBATCH --ntasks=1

module load matlab

# name of the .m file to run, without the extension
BASE_MFILE_NAME=hello

echo "Running ${BASE_MFILE_NAME}.m on $(hostname)"

srun matlab -nodesktop -nosplash -nodisplay -r ${BASE_MFILE_NAME}

In this example, you need to have your code in the file hello.m.

You submit the Matlab job like a normal sbatch SLURM job:

sbatch ./yourBatch

Parallel with Matlab

Since version 2014, you are no longer limited to 12 CPU cores.

Please see Matlab on gitlab for some examples and scripts related to Matlab parallel

As we are talking about parallel and not distributed Matlab, you can consider Matlab a multithreaded application.

See Multithreaded jobs to learn how to submit a multithreaded job.


If you are a parallel or distributed Matlab specialist and you have some hints, you are very welcome to contact us!

Pass sbatch arguments to Matlab

You can pass arguments from sbatch to Matlab as described below.

Example of an sbatch file:

[sbatch part as usual]



module load matlab

# name of the .m file to run, without the extension
BASE_MFILE_NAME=test

# the variable you want to pass to Matlab; here we assume it comes from a SLURM job array
job_array_index=${SLURM_ARRAY_TASK_ID}

echo "Starting at $(date)"
echo "Running ${BASE_MFILE_NAME}.m on $(hostname)"

# we call the Matlab function (note the parentheses around the argument);
# the argument will be passed as an integer
srun matlab -nodesktop -nosplash -nodisplay -r "${BASE_MFILE_NAME}($job_array_index)"
echo "Finished at $(date)"

Example of Matlab file (test.m)

function test(job_array_index)

fprintf('array index: %d\n', job_array_index)
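To see exactly what string Matlab will evaluate, you can reproduce the shell expansion locally; this is just a sketch of the quoting, with an arbitrary index value:

```shell
# simulate the variables used in the sbatch script
BASE_MFILE_NAME=test
job_array_index=3

# this is the string passed to matlab -r
MATLAB_CMD="${BASE_MFILE_NAME}($job_array_index)"
echo "$MATLAB_CMD"
```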

See arguments with Matlab on gitlab for more examples.

Compile your Matlab code

Thanks to Philippe Esling for his contribution to this procedure.

The idea of compiling Matlab code is to save on licenses usage. Indeed, once compiled, a Matlab code can be run without using any license.

The Matlab compiler is named mcc.

First load the needed modules:

module load foss/2016a matlab/2016b

Let’s say you want to compile this .m file:

function hello(name)
   strcat({'Hello '}, name)

This operation compiles it (this takes some time):

DISPLAY="" mcc -m -v -R '-nojvm, -nodisplay' -o hello hello.m

If you have some other .m files that you need to include, you need to explicitly specify their location as follows:

mcc -m -v -R '-nojvm, -nodisplay' -I /path/to/functions/ -I /path/to/other/functions/ [...]

The resulting files are a text file named readme.txt, a launcher script, and an executable named hello.

You can then launch the executable hello like any other executable using a sbatch script:


#SBATCH --partition=debug
#SBATCH --ntasks=1

module load foss/2016a matlab/2016b

srun ./hello Philippe

In this case, Philippe is an argument for the function hello. Be careful, arguments are always passed to Matlab as strings.
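For example, if the function takes a numeric argument, it must convert it explicitly when compiled. This is a hypothetical sketch, not from the original, extending the hello function with a count argument:

```matlab
function hello(name, count)
   % arguments passed to a compiled Matlab executable arrive as strings,
   % so numeric arguments must be converted before use
   if ischar(count)
       count = str2double(count);
   end
   for i = 1:count
       strcat({'Hello '}, name)
   end
```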


You do NOT need to specify the Matlab license once compiled, and you are not restricted by the number of available licenses.

Please see compile Matlab on gitlab for some examples and scripts related to Matlab compilation.


Wavelab

To use the Wavelab library with Matlab, load Matlab 2014.

module load matlab/2014b

Launch Matlab as usual and type these commands to go to the Wavelab library:

cd /opt/wavelab/Wavelab850/
Wavepath (when prompted, answer /opt/wavelab/Wavelab850)

User mode linux

If you need a very exotic configuration to run your job and it’s single threaded, a solution could be to launch your job in a virtual machine.

User mode linux (UML) is a good candidate for that purpose, because you can do everything in userspace.


To use UML you need at least a linux kernel and a distribution file system. See resources below to download them.

If you need networking in your virtual machine (VM), you can use slirp (it’s available in /opt/slirp/bin/slirp-fullbolt). You can also download and compile a new version yourself.

Run a virtual machine (with network on eth0):

./kernel64-3.10.0 ubda=./CentOS6.x-AMD64-root_fs mem=512m eth0=slirp,,/opt/slirp/bin/slirp-fullbolt

This will launch CentOS 6.x with 512MB of RAM and eth0 using slirp as transport. When the VM finishes booting, you’ll be prompted with a standard login. The user is root, without password (there is no security concern). You can do whatever you want in the user-mode machine as root; the filesystem is read/write.


Since you have started the VM with network support, you need to configure it in the machine. The IP of your VM should be

ifconfig eth0
route add default dev eth0

If you just want to use the same DNS as the host, there is a specific IP for that. Edit the file /etc/resolv.conf and put nameserver

If you want to reach the host from the VM, there is likewise a shortcut IP that always maps to the host IP.

With this configuration, you should be able to use IP connections as if you were behind masquerading NAT.


It’s possible to access the host filesystem from the VM. Check that your kernel supports it:

$ cat /proc/filesystems | grep hostfs

If yes, you can for example mount the host /home/youruser to the VM /mnt/home:

# mkdir /mnt/home
# mount none /mnt/home -t hostfs -o /home/sagon



OpenCL

You can use OpenCL on CPU. To compile your software, please proceed as follows:

gcc  -I/opt/intel/opencl-1.2-sdk- -L/opt/intel/opencl-1.2-sdk- -Wl,-rpath,/opt/intel/opencl/lib64/ -lOpenCL -o hello hello.c

Nvidia CUDA

Currently on Baobab there are several nodes equipped with GPUs. To request a GPU, it’s not enough to specify a partition with GPU nodes; you must also specify how many GPUs you need and, optionally, their type.

To specify how many GPUs you request, use the option --gres=gpu:n, with n between 1 and the maximum given in the table below.

You can also specify the type of GPU you want: tesla, titan or pascal.

Example to request three titan cards: --gres=gpu:titan:3.

In the following table you can see which type of GPU is available on Baobab.

GPU model         Compute Capability  SLURM resource  number per node  nodes         partition
T20A              2.0                 tesla           4                gpu001        shared-gpu
titan x (pascal)  6.1                 titan           3                gpu[002-003]  shared-gpu, dpnc-gpu
p100              6.0                 pascal          5                gpu[004-005]  shared-gpu, dpt-gpu, kruse-gpu
p100              6.0                 pascal          8                gpu[006]      shared-gpu, kruse-gpu


The GPU nodes are reachable through the following partitions:

  • shared-gpu with a max execution time of 12h, tesla, titan and pascal cards
  • dpt-gpu with a max execution time of 2 days, pascal GPU cards
  • dpnc-gpu with a max execution time of 2 days, titan GPU cards
  • kruse-gpu with a max execution time of 2 days, pascal GPU cards

Example of script:

#!/bin/env bash

#SBATCH --partition=shared-gpu
#SBATCH --time=01:00:00
#SBATCH --gres=gpu:titan:1

module load CUDA

# see here for more samples:
# /opt/cudasample/NVIDIA_CUDA-8.0_Samples/bin/x86_64/linux/release/

# if you need to know the allocated CUDA device, you can obtain it here:

srun deviceQuery

If you want to see what GPUs are in use in a given node:

scontrol -d show node gpu002

The output shows, in this case, that node gpu002 has three Titan cards and that all of them are allocated.

If you just need a GPU and you don’t care about the type, don’t specify it:

#SBATCH --gres=gpu:1

It’s not possible to put two types in the GRES request, but you can ask for a specific compute capability using a node feature constraint. For example, if you don’t want a compute capability smaller than 6.0 (the feature names below follow the cluster’s convention and should be verified):

#SBATCH --gres=gpu:1
#SBATCH --constraint="COMPUTE_CAPABILITY_6_0|COMPUTE_CAPABILITY_6_1"


Xeon PHI

The Intel compiler on Baobab supports the generation of an executable using offload pragmas. This means that you can submit your Phi job without having to first copy your binary to the Phi using ssh.

To compile a C++ program with OpenMP 4.0:

icpc -openmp /home/common/phi/reduction.cpp

To launch it using one MIC:

#!/bin/env bash
#SBATCH --cpus-per-task=1
#SBATCH --job-name=test-phi
#SBATCH --ntasks=1
#SBATCH --time=00:00:10
#SBATCH --partition=cui-phi
#SBATCH --clusters=baobab
#SBATCH --output=slurm-%J.out
#SBATCH --gres=mic:1

# if you want to see some debug information

echo "Running on device: " $OFFLOAD_DEVICES

srun ./a.out


Stata

Stata version 13 MP16 and version 14 MP24 are available on the cluster.

To use it, you need to add the stata path to your PATH:

module load stata/13mp16


module load stata/14mp24

The Stata binary is stata-mp; use xstata-mp for the graphical interface.

If you need a graphical interactive session, please proceed as follows:

salloc -n1 -c 16 --partition=debug --time=15:00 srun -n1 -N1 --pty $SHELL

Doing so will launch a graphical Stata on a debug node with 16 cores for a 15 minute session. See elsewhere in this document for other partition/time limits.


Please keep in mind that the cluster may be full and that you may have to wait until the resources are allocated to you. It’s best to launch Stata in batch mode.

To launch Stata in batch mode, see Multithreaded jobs and specify that you want one task and n cores.

Please see here for an sbatch example with Stata.
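A minimal sketch of such an sbatch file (the .do file name, module version and core count are placeholders; stata-mp’s -b flag runs a do-file in batch mode):

```shell
#!/bin/env bash
#SBATCH --ntasks=1
#SBATCH --cpus-per-task=16
#SBATCH --time=01:00:00

module load stata/14mp24

# -b runs the do-file in batch mode; output goes to myanalysis.log
srun stata-mp -b do myanalysis.do
```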

R project and Rstudio

The latest version of R is installed on the cluster (if it’s not the latest, you can ask us to install it).

Please see here for an sbatch example with R.


Rstudio is obsolete on Baobab

On the cluster you can also use Rstudio, which is more user friendly than plain R. To do so, please connect to the cluster using ssh -Y from a machine with an X server. You can create an interactive Rstudio session like this:

srun -n 1 -c 16 /opt/rstudio/bin/rstudio

Doing so, you will get 16 cores on one node of the debug partition for a max time of 15 minutes. Specify the appropriate duration, partition, etc. as you would for a normal job.

R packages

You can install R packages as a user. Just follow the steps below once:

Create a file named .Rprofile (note the dot in front of the file) in your home directory with the following content:

cat(".Rprofile: Setting Switzerland repository\n")

r = getOption("repos") # hard code the Switzerland repo for CRAN
r["CRAN"] = ""
options(repos = r)

Create a file named .Renviron (note the dot in front of the file) in your home directory with the following content:
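The file content is not shown in this copy; given the package directory created below, it presumably points R_LIBS_USER at it (an assumption):

```
R_LIBS_USER=~/Rpackages
```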


Create a directory where to store the installed R packages:

mkdir ~/Rpackages

Once done, you can install an R package from an R command line:


Use your newly installed package:
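The install and usage examples are not shown in this copy; a minimal sketch using the ggplot2 package as a stand-in (any CRAN package works the same way):

```r
# from an R prompt: install into ~/Rpackages (first run only)
install.packages("ggplot2")

# then, in your scripts, load it as usual
library(ggplot2)
```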









Gaussian

You can use Gaussian g09 on one node of the cluster. Please note that you must use “module” (see Module - lmod) in your sbatch script to set the variables correctly. The module sets the variable GAUSS_SCRDIR to /scratch on the local hard disk of the allocated node. This should lower the calculation time as well as the load on the shared filesystem. See below for other optimizations.

There are two versions of g09 on the cluster. Revision c01 and d01.

Please see Gaussian example on gitlab for some examples and scripts related to Gaussian

To optimize the run, you can add some lines to your job file. If you need more than 190 GB of scratch space, you should add this line (adapt /home/yourusername to your own path):


You may also specify how much memory you want to use. By default, Gaussian uses 250MB of RAM. You can try with 50GB, for example:


You also need to specify how many CPU cores you want to use:
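The three example lines above (scratch path, memory, CPU cores) are missing from this copy; a hedged sketch with illustrative values. GAUSS_SCRDIR is an environment variable set in the sbatch script, while %Mem and %NProcShared are Link 0 directives placed at the top of the Gaussian input file:

```shell
# in the sbatch script: point Gaussian's scratch to your own space (adapt the path)
export GAUSS_SCRDIR=/home/yourusername/scratch
```

And in the Gaussian input file:

```
%Mem=50GB
%NProcShared=16
```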



SCM ADF

You can use SCM ADF on one or more nodes of the cluster. Please note that you must use “module” (see Module - lmod) in your sbatch script to set the variables correctly.

Sbatch example script:

#SBATCH --output=%J-test_adf.log
#SBATCH --error=%J-test_adf.err
#SBATCH --job-name=testadf
#SBATCH --cpus-per-task=1
#SBATCH --tasks=32
#SBATCH --time=0:15:0
#SBATCH --partition=debug

module load adf/201301d

mkdir -p ${SIMULATION_DIR}

srun mkdir -p ${SCRATCH_DIR}





Do not launch ADF using srun. ADF is a wrapper which uses srun internally.


ADF needs a fast local scratch space. On Baobab, the local scratch of each node is only about 180GB. If you need more space, you need to find another solution (do the calculation on more nodes, do not use local scratch, buy us new hard disks)





Palabos

Download the latest Palabos version, then extract it:

tar xzf palabos-v1.3r0.tgz
cd palabos-v1.3r0/examples/benchmarks/cavity3d

If you want to use more cores to compile it, edit the Makefile of your project and change the line

Cons     = $(palabosRoot)/scons/ -j 2 -f $(palabosRoot)/SConstruct

Replace the number 2 with the number of cores you have at your disposal.

For example, to use a full 16-core node, you can compile Palabos like this:

srun --cpus-per-task 16 --ntasks=1 --exclusive make

Distant Paraview

Thanks to Orestis for this tutorial.


  1. Do not use X11 forwarding at any point; it will be done by ParaView itself.
  2. You must have the SAME version of ParaView on your local machine and on Baobab (5.3.0).

Baobab connection:


Get some resources in interactive mode (4 cores here; you can add a lot more options if you want). You can even ask for GPU nodes (more on this afterwards):

salloc -n 4

Determine on which node your resources are allocated (here node001):



In another terminal (on your PC), open an ssh tunnel to your node (in this case node001):

ssh -L 11150:node001:11111

In the first terminal, load ParaView (if not loaded by default) and launch pvserver:

module load foss/2016b ParaView/5.3.0-mpi
srun pvserver --server-port=11111

If you asked for GPU nodes the command is slightly different:

srun pvserver --server-port=11111 -display :0.0 --use-offscreen-rendering

On your local machine, launch ParaView and click on Connect. There you should find the menu to add a server. Put the name you want in Name, leave Client/Server as Server Type, and localhost in Host. The only very important setting is the port, which should be 11150 (the same number as in 11150:node001 from before). Save the configuration (click on Configure and Save, leave Startup as Manual) and then click on Connect. The remote ParaView session should start immediately. There will be an error message: “Display is not accessible on the server side. Remote rendering will be disabled.” This message is normal.


Python

The default Python version on the cluster is 2.6.6.

If you need a more modern version, you can use these:

To load python 2.7 environment:

module load foss/2016a Python/2.7.11

To load python 3.x environment for intel toolchain:

module load intel/2017a Python/3.6.1

To load python 3.x environment for GCC toolchain:

module load foss/2016b Python/3.5.2

Custom Python lib

If you need to install a python library or a different version of the ones already installed, virtualenv is the solution.

Python-virtualenv is installed on Baobab.

Begin by loading a version of Python using module (see above).

Create a new virtualenv if it doesn’t already exist (put it where you want and name it as you like):

virtualenv --no-site-packages ~/baobab_python_env

This will create a directory named baobab_python_env in your home directory.

Install all the needed packages in the environment:

~/baobab_python_env/bin/pip install mpi4py

Use your new environment:
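The example is not shown in this copy; following the pattern above, you run the interpreter from the environment (the script name is a placeholder):

```shell
# run your script with the environment's own python
~/baobab_python_env/bin/python yourscript.py
```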


You can also put it in your path like this:

. ~/baobab_python_env/bin/activate


Git

To use git on the cluster you need to do the following.

Add this to your ${HOME}/.gitconfig:

[core]
    createObject = rename

To invoke git:

git clone --no-hardlinks