Karolina (IT4I)
The Karolina cluster is located at IT4Innovations (IT4I) at the Technical University of Ostrava.
Introduction
If you are new to this system, please see the following resources:
- Batch system: SLURM
- Jupyter service: not provided/documented (yet)
- $HOME: per-user directory, use only for inputs, source and scripts; backed up (25 GB default quota)
- /scratch/: production directory; very fast for parallel jobs (10 TB default quota)
- /mnt/proj<N>/<proj>: per-project work directory, used for long term data storage (20 TB default quota)
Installation
We show how to install all of WarpX's dependencies from scratch using Spack.
For size reasons, it is not advisable to install WarpX in the $HOME directory; it should be installed in the “work directory” instead. For this purpose, we set an environment variable $WORK with the path to the “work directory”.
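For example, assuming your project's work directory follows the /mnt/proj<N>/<proj> layout listed above, a minimal sketch (adjust <N> and <proj> to your project) would be:

# example only: point $WORK to your per-project work directory
export WORK=/mnt/proj<N>/<proj>
# directory where the WarpX sources will live, as used in the commands below
mkdir -p $WORK/src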
On Karolina, you can run either on GPU nodes with fast A100 GPUs (recommended) or on CPU nodes.
Profile file
One can use the pre-prepared karolina_warpx.profile script below, which you can copy to ${HOME}/karolina_warpx.profile, edit as required and then source.
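A minimal sketch of that one-time setup, assuming the profile template ships with the WarpX sources under Tools/machines/karolina-it4i/:

# copy the template, edit it as required (e.g. your project), then activate it
cp $WORK/src/warpx/Tools/machines/karolina-it4i/karolina_warpx.profile $HOME/karolina_warpx.profile
source $HOME/karolina_warpx.profile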
To have the environment activated on every login, add the following line to ${HOME}/.bashrc:
source $HOME/karolina_warpx.profile
To install the spack environment and Python packages:
bash $WORK/src/warpx/Tools/machines/karolina-it4i/install_dependencies.sh
Compilation
Use the following cmake commands to compile the application executable:
cd $WORK/src/warpx
rm -rf build_gpu
cmake -S . -B build_gpu -DWarpX_COMPUTE=CUDA -DWarpX_FFT=ON -DWarpX_QED_TABLE_GEN=ON -DWarpX_DIMS="1;2;RZ;3"
cmake --build build_gpu -j 48
The WarpX application executables are now in $WORK/src/warpx/build_gpu/bin/.
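If you want to target the CPU nodes mentioned above instead, a separate build without CUDA is a reasonable sketch; the CMake options below are generic WarpX flags and the exact configuration recommended for Karolina's CPU nodes may differ:

cd $WORK/src/warpx
rm -rf build_cpu
cmake -S . -B build_cpu -DWarpX_COMPUTE=OMP -DWarpX_FFT=ON -DWarpX_QED_TABLE_GEN=ON -DWarpX_DIMS="1;2;RZ;3"
cmake --build build_cpu -j 48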
Additionally, the following commands will install WarpX as a Python module:
cd $WORK/src/warpx
rm -rf build_gpu_py
cmake -S . -B build_gpu_py -DWarpX_COMPUTE=CUDA -DWarpX_FFT=ON -DWarpX_QED_TABLE_GEN=ON -DWarpX_APP=OFF -DWarpX_PYTHON=ON -DWarpX_DIMS="1;2;RZ;3"
cmake --build build_gpu_py -j 48 --target pip_install
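As an optional sanity check that the pip_install target succeeded, you can try importing the module (pywarpx is the Python package name WarpX installs):

python3 -c "import pywarpx; print(pywarpx.__file__)"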
Now, you can submit Karolina compute jobs for WarpX Python (PICMI) scripts (example scripts).
Or, you can use the WarpX executables to submit Karolina jobs (example inputs).
For executables, you can reference their location in your job script or copy them to a location in /scratch/.
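For example, you might stage the RZ executable next to your inputs on /scratch/; the path below is illustrative and matches the --chdir directory used in the batch script in the next section:

# create the run directory and copy the executable there (adjust <proj> and <user>)
mkdir -p /scratch/project/<proj>/it4i-<user>/runs/warpx
cp $WORK/src/warpx/build_gpu/bin/warpx.rz /scratch/project/<proj>/it4i-<user>/runs/warpx/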
Running
The batch script below can be used to run a WarpX simulation on multiple GPU nodes (change #SBATCH --nodes=
accordingly) on the supercomputer Karolina at IT4I.
The qgpu partition has up to 72 nodes.
Every node has 8x A100 (40GB) GPUs and 2x AMD EPYC 7763, 64-core, 2.45 GHz processors.
Replace descriptions between chevrons <> by relevant values, for instance <proj> could be DD-23-83.
Note that we run one MPI rank per GPU, i.e. 8 ranks per node, each with 16 CPU cores (the two 64-core EPYCs provide 128 cores per node, divided among 8 GPUs).
$WORK/src/warpx/Tools/machines/karolina-it4i/karolina_gpu.sbatch
#!/bin/bash -l
# Copyright 2023 The WarpX Community
#
# This file is part of WarpX.
#
# Authors: Axel Huebl, Andrei Berceanu
# License: BSD-3-Clause-LBNL
#SBATCH --account=<proj>
#SBATCH --partition=qgpu
#SBATCH --time=00:10:00
#SBATCH --job-name=WarpX
#SBATCH --nodes=2
#SBATCH --ntasks-per-node=8
#SBATCH --cpus-per-task=16
#SBATCH --gpus-per-node=8
#SBATCH --gpu-bind=single:1
#SBATCH --mail-type=ALL
# change me!
#SBATCH --mail-user=someone@example.com
#SBATCH --chdir=/scratch/project/<proj>/it4i-<user>/runs/warpx
#SBATCH -o stdout_%j
#SBATCH -e stderr_%j
# set user rights to u=rwx;g=r-x;o=---
umask 0027
# OpenMP threads per MPI rank
export OMP_NUM_THREADS=${SLURM_CPUS_PER_TASK}
# executable & inputs file or python interpreter & PICMI script here
EXE=./warpx.rz
INPUTS=./inputs_rz
# run
srun -K1 ${EXE} ${INPUTS}
To run a simulation, copy the lines above to a file karolina_gpu.sbatch and run sbatch karolina_gpu.sbatch to submit the job.
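To follow the job, the usual SLURM commands apply, for example:

squeue -u $USER          # list your queued and running jobs
scancel <jobid>          # cancel a job if needed
tail -f stdout_<jobid>   # follow the standard output file written by the batch script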
Post-Processing
Note
This section has not yet been written. Usually, we document here how to use a Jupyter service.