LUMI (CSC)

The LUMI cluster is located at CSC (Finland).

Introduction

If you are new to this system, please see the LUMI documentation: https://docs.lumi-supercomputer.eu

Installation

Use the following command to download the WarpX source code:

git clone https://github.com/ECP-WarpX/WarpX.git $HOME/src/warpx
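
By default, this checks out WarpX's development branch. Releases are tagged by year and month (YY.MM); to build a fixed release instead, check out its tag (a sketch, using the 22.08 tag as an example):

cd $HOME/src/warpx
git checkout 22.08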

We use the following modules and environments on the system ($HOME/lumi_warpx.profile).

You can copy this file from Tools/machines/lumi-csc/lumi_warpx.profile.example.
# please set your project account
#export proj=<yourProject>

# optional: just an additional text editor
module load nano

# required dependencies
module load LUMI/22.08  partition/G
module load buildtools
module load craype-accel-amd-gfx90a
module load rocm/5.0.2
module load cray-mpich
module load cce/14.0.2

# optional: faster re-builds
#module load ccache

# optional: for PSATD in RZ geometry support
# TODO: BLAS++, LAPACK++

# optional: for QED lookup table generation support
# TODO: BOOST

# optional: for openPMD support
# TODO: HDF5, ADIOS2

# optional: Ascent in situ support
# TODO

# optional: for Python bindings or libEnsemble
# TODO

if [ -d "$HOME/sw/venvs/warpx-lumi" ]
then
  source $HOME/sw/venvs/warpx-lumi/bin/activate
fi

# an alias to request an interactive batch node for two hours
#   for parallel execution, start on the batch node: srun <command>
#alias getNode="..."
# an alias to run a command on a batch node for up to 30min
#   usage: runNode <command>
#alias runNode="..."

# optimize ROCm compilation for MI250X
export AMREX_AMD_ARCH=gfx90a

# compiler environment hints
export CC=$(which cc)
export CXX=$(which CC)
export FC=$(which ftn)

We recommend storing the above lines in a file, such as $HOME/lumi_warpx.profile, and loading it into your shell after login:

source $HOME/lumi_warpx.profile
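
To load the profile automatically at every login, one option is to source it from your shell startup file (a sketch, assuming bash and the file path used above):

echo 'source $HOME/lumi_warpx.profile' >> $HOME/.bashrc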

Then, cd into the directory $HOME/src/warpx and use the following commands to compile:

cd $HOME/src/warpx
rm -rf build

cmake -S . -B build -DWarpX_DIMS=3 -DWarpX_COMPUTE=HIP
cmake --build build -j 6

The general cmake compile-time options apply as usual. The resulting executable is placed in build/bin/.
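
For example, a 2D debug build changes only the configure step; WarpX_DIMS and CMAKE_BUILD_TYPE are standard options, and the values here are illustrative:

cmake -S . -B build -DWarpX_DIMS=2 -DWarpX_COMPUTE=HIP -DCMAKE_BUILD_TYPE=Debug
cmake --build build -j 6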

Running

MI250X GPUs (2x64 GB)

Note

TODO: Add batch script template.
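
Until the official template is added, the following minimal Slurm batch script may serve as a starting point. It is a sketch, not a verified template: the partition name, GPU counts, and executable/input names are assumptions to check against the LUMI documentation.

#!/bin/bash
#SBATCH --account=<yourProject>     # your LUMI project account
#SBATCH --partition=standard-g     # LUMI-G GPU partition (verify the current name)
#SBATCH --nodes=1
#SBATCH --gpus-per-node=8          # 4 MI250X per node, seen as 8 GCDs
#SBATCH --ntasks-per-node=8        # one MPI rank per GCD
#SBATCH --time=00:30:00

# load the WarpX environment set up above
source $HOME/lumi_warpx.profile

# enable GPU-aware MPI in Cray MPICH (verify for your setup)
export MPICH_GPU_SUPPORT_ENABLED=1

# executable and input file names are illustrative
srun ./warpx.3d inputs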

Post-Processing

Note

TODO: Document any Jupyter or data services.