FVCOM: ideal estuary model

This tutorial gives an end-to-end example of how to install and then use FVCOM-ERSEM with an ideal estuary model on a high performance computing (HPC) machine. We have used PML's in-house machine, CETO. Unlike the other tutorials in this section, we go through setting up FVCOM-ERSEM on the HPC machine and then running the model and plotting the results.

This tutorial is based on the scripts found in ERSEM's setups repository. The individual scripts can be found in the ideal_estuary folder.
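If you want to follow along locally, a minimal sketch for fetching the setup scripts is shown below; the repository URL is an assumption based on the description above, so substitute the actual address of ERSEM's setups repository if it differs.

# Clone the setups repository (URL assumed) and move into the ideal estuary setup
git clone git@github.com:pmlmodelling/ersem-setups.git
cd ersem-setups/ideal_estuary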

Note

You will need to get access to the UK-FVCOM GitLab repo.

Building and running FVCOM-ERSEM

The three key packages required to run this tutorial are:

  • UK-FVCOM

  • FABM

  • ERSEM

Both ERSEM and FABM are freely available on GitHub; however, for UK-FVCOM you will have to register for the code – see the note above.

An example build script is as follows:

#!/usr/bin/env bash

module load intel-mpi/5.1.2 netcdf-intelmpi/default intel/intel-2016 hdf5-intelmpi/default

SCRIPT_DIR="$( cd "$( dirname "${BASH_SOURCE[0]}" )" &> /dev/null && pwd )"
CODE_DIR=$SCRIPT_DIR/code
INSTALL_DIR=$SCRIPT_DIR/model

num_cpu=$(nproc)

website=("git@github.com:UK-FVCOM-Usergroup/uk-fvcom.git" "git@github.com:pmlmodelling/ersem.git" "git@github.com:fabm-model/fabm.git")
name=("uk-fvcom" "ersem" "fabm")
branch=("FVCOM-FABM" "master" "master")

mkdir $CODE_DIR
cd $CODE_DIR

echo "Obtaining source code"
for i in 0 1 2
do
    git clone ${website[i]} ${name[i]}
    cd ${name[i]}
    git checkout ${branch[i]}
    cd $CODE_DIR
done

cd $SCRIPT_DIR

FABM=$CODE_DIR/fabm/src
ERSEM=$CODE_DIR/ersem
FABM_INSTALL=$INSTALL_DIR/FABM-ERSEM
FC=$(which mpiifort)

mkdir -p $FABM_INSTALL

cd $FABM
mkdir build
cd build
# Production config:
cmake $FABM -DFABM_HOST=fvcom -DFABM_ERSEM_BASE=$ERSEM -DCMAKE_Fortran_COMPILER=$FC -DCMAKE_INSTALL_PREFIX=$FABM_INSTALL
make install -j $num_cpu

cd $SCRIPT_DIR

sed -i 's|BASE_SETUP_DIR|'"$SCRIPT_DIR"'|g' make_ideal_estuary.inc
ln -s $SCRIPT_DIR/make_ideal_estuary.inc $SCRIPT_DIR/code/uk-fvcom/FVCOM_source/make.inc

# Installing FVCOM additional packages (METIS, Proj, etc)
cd $SCRIPT_DIR/code/uk-fvcom/FVCOM_source/libs
mv makefile makefile_
ln -s makefile.CETO makefile
make -j $num_cpu

# Building FVCOM
cd ..
make -j $num_cpu
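Once you have adapted the module and compiler settings described below, the script can be run and checked as follows. This is only a usage sketch: the script filename and the name of the FVCOM executable are assumptions, and the output paths follow the CODE_DIR and INSTALL_DIR layout defined in the script.

# Run the build script, keeping a log of the build (filename is hypothetical)
bash build_fvcom_ersem.sh 2>&1 | tee build.log

# Check that the FABM-ERSEM library and the FVCOM executable were produced
ls model/FABM-ERSEM/lib
ls code/uk-fvcom/FVCOM_source/fvcom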

You will have to adapt this script to ensure you are using the right HPC modules and the corresponding compilers. The important compiler is the Fortran one, which is set via the FC variable.
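Before editing the module load line and FC, it can help to confirm what your machine provides; a minimal sketch, assuming an Intel MPI based environment module system, is:

# List the available modules and inspect the MPI Fortran wrapper the build will use
module avail 2>&1 | less
module list
which mpiifort
mpiifort --version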

Another key file to change is the make.inc file; here again you will need to update the compilers and library paths to reflect the modules you are using on the HPC machine. For example, lines 74 and 75 set IOLIBS and IOINCS; these need to be changed to point at the I/O (netCDF) libraries provided by the modules on the machine you are using.

The key lines to change are:

  • 74–75

  • 174

  • 458–465
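As a quick sanity check before building, you can print the lines listed above and confirm where your netCDF module is installed. The nf-config calls below are an assumption (they require the netCDF-Fortran tools to be on your path) and simply show the paths that IOLIBS and IOINCS should point at.

# Print the make.inc lines that typically need editing
sed -n '74,75p;174p;458,465p' make_ideal_estuary.inc

# Locate the netCDF-Fortran installation provided by your modules (assumes nf-config is available)
nf-config --prefix
nf-config --flibs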

After building FVCOM-ERSEM, we suggest you use an HPC scheduler, for example SLURM, to run the example. The SLURM script used here is given below:

#!/bin/bash --login

#SBATCH --nodes=4
#SBATCH --ntasks-per-node=20
#SBATCH --threads-per-core=1
#SBATCH --job-name=estuary
#SBATCH --partition=all
#SBATCH --time=48:00:00
##SBATCH --mail-type=ALL
##SBATCH --mail-user=your_mail@pml.ac.uk

# Set the number of processes based on the number of nodes we have `select'ed.
np=$SLURM_NTASKS

# Export the libraries to LD_LIBRARY_PATH
export LD_LIBRARY_PATH=$(readlink -f $WORKDIR/install/lib):$LD_LIBRARY_PATH

set -eu
ulimit -s unlimited

# Number of months to skip
skip=0

# d53f5083 = FVCOM v3.2.2, M-Y, HEATING_ON, ceto
binary=bin/fvcom
grid=${grid:-"estuary"}
casename="${grid}"

# Make sure any symbolic links are resolved to absolute path
export WORKDIR=$(readlink -f $(pwd))

# Set the number of threads to 1
#   This prevents any system libraries from automatically
#   using threading.
export OMP_NUM_THREADS=1

# Magic stuff from the Atos scripts.
export I_MPI_PIN_PROCS=0-19
export I_MPI_EXTRA_FILESYSTEM=on
export I_MPI_EXTRA_FILESYSTEM_LIST=gpfs
export I_MPI_PMI_LIBRARY=/usr/lib64/libpmi.so

# Change to the directory from which the job was submitted. This should be the
# project "run" directory as all the paths are assumed relative to that.
cd $WORKDIR

if [ ! -d ./logs/slurm ]; then
    mkdir -p ./logs/slurm
fi
mv *.out logs/slurm || true
if [ -f ./core ]; then
    rm core
fi

if [ ! -d ./output ]; then
    mkdir -p ./output
fi

# Iterate over the months in the year

    # Launch the parallel job
    srun -n $np $binary --casename=$casename --dbg=0 > logs/${casename}-$SLURM_JOBID.log


    # Check if we crashed and if so, exit the script, bypassing the restart
    # file creation.
    if grep -q NaN logs/${casename}-$SLURM_JOBID.log; then
        echo "NaNs in the output. Halting run."
        exit 2
    fi
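Assuming the script above is saved as launch_estuary.slurm (a hypothetical name) in the run directory, alongside the bin/fvcom executable and the model input files, the job can be submitted and monitored as follows:

# Submit the job and check its status in the queue
sbatch launch_estuary.slurm
squeue -u $USER

# Follow the model log once the job starts (replace <jobid> with the SLURM job ID)
tail -f logs/estuary-<jobid>.log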

Example output from FVCOM-ERSEM

We provide two Python scripts to demonstrate how to visualise both the input and the output files. The plotting uses PyFVCOM; we suggest you ask for access here, though a version of the code is also available on GitHub and can be installed with pip.
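If you take the pip route, a minimal sketch of installing PyFVCOM into its own virtual environment is shown below (the PyPI package name is assumed to be PyFVCOM):

# Create an isolated environment and install PyFVCOM from PyPI
python3 -m venv pyfvcom-env
source pyfvcom-env/bin/activate
pip install PyFVCOM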

Using the input Python plotting script, we generate the model domain as follows:

Figure: bathymetry of the ideal estuary model domain.

The following videos are produced from the plots generated by the Python output plotting script.
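The commands below are only a sketch of this workflow: the script names are placeholders for the input and output plotting scripts in the ideal_estuary folder, and the ffmpeg call is one common way to stitch sequentially numbered PNG frames into a video, assuming the output script writes frames in that form.

# Run the plotting scripts (names are placeholders; use the scripts in the ideal_estuary folder)
python plot_input.py
python plot_output.py

# One way to turn a sequence of numbered frames into a video
ffmpeg -framerate 5 -i frame_%04d.png -c:v libx264 -pix_fmt yuv420p estuary_output.mp4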