JUWELS (JSC)

The JUWELS supercomputer is located at JSC (Jülich Supercomputing Centre).

See the JSC documentation for a quick introduction to the system.

  • Batch system: Slurm

  • Production directories:

    • $SCRATCH: Scratch filesystem for temporary data (90-day purge)

    • $FASTDATA/: Storage location for large data (backed up)

    • Note that the $HOME directory is not designed for simulation runs; producing output there will degrade performance. Run simulations from $SCRATCH instead, e.g. as sketched below.
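
For example, a fresh run directory on $SCRATCH could be prepared as follows. This is a minimal sketch: the directory name is arbitrary, and the executable path corresponds to the build described below.

# prepare a run directory on the purged scratch filesystem
mkdir -p $SCRATCH/my_warpx_run
cd $SCRATCH/my_warpx_run

# place the compiled executable and your inputs file here before submitting a job;
# copy results you want to keep to $FASTDATA/, since $SCRATCH is purged after 90 days
cp $HOME/src/warpx/Bin/main3d.gnu.TPROF.MPI.CUDA.ex .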

Installation

Use the following commands to download the WarpX source code and switch to the correct branch:

mkdir ~/src
cd ~/src

git clone https://github.com/ECP-WarpX/WarpX.git warpx
git clone --branch development https://github.com/ECP-WarpX/picsar.git
git clone --branch development https://github.com/AMReX-Codes/amrex.git

We use the following modules and environment settings on the system.

# please set your project account
export proj=<yourProject>

# required dependencies
module load GCC
module load OpenMPI
module load CUDA

# JUWELS' job scheduler may not map ranks to GPUs,
# so we give a hint to AMReX about the node layout.
# This is usually done in Make.<supercomputing center> files in AMReX
# but there is no such file for JSC yet.
export GPUS_PER_SOCKET=2
export GPUS_PER_NODE=4

Note that, for now, WarpX must rely on OpenMPI instead of MVAPICH2, the MPI implementation recommended on this platform.

We recommend storing the above lines in a file, such as $HOME/warpx.profile, and loading it into your shell after each login:

source $HOME/warpx.profile
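
If you prefer the profile to be loaded automatically at every login, you could append that line to your ~/.bashrc (optional; shown here only as a convenience):

# optional: load the WarpX profile automatically on login
echo 'source $HOME/warpx.profile' >> $HOME/.bashrc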

Then, cd into the directory $HOME/src/warpx and use the following command to compile:

make -j 16 COMP=gcc USE_GPU=TRUE

The other general compile-time options apply as usual.
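
For instance, a 2D or a debug build would be requested with the usual AMReX GNUmake flags; the lines below are illustrative only, see the general build documentation for the full list of options:

# illustrative examples of additional compile-time options
make -j 16 COMP=gcc USE_GPU=TRUE DIM=2       # 2D instead of 3D
make -j 16 COMP=gcc USE_GPU=TRUE DEBUG=TRUE  # debug build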

Running

An example submission script reads

#!/bin/bash -l

# note: Slurm does not expand shell variables in #SBATCH lines;
# set your project account here explicitly or pass it via `sbatch -A $proj`
#SBATCH -A <yourProject>
#SBATCH --partition=gpus
#SBATCH --nodes=2
#SBATCH --ntasks=8
#SBATCH --ntasks-per-node=4
#SBATCH --gres=gpu:4
#SBATCH --time=00:05:00
#SBATCH --job-name=warpx
#SBATCH --output=warpx-%j-%N.txt
#SBATCH --error=warpx-%j-%N.err

export OMP_NUM_THREADS=1

module load GCC
module load OpenMPI
module load CUDA

srun -n 8 --cpu_bind=sockets $HOME/src/warpx/Bin/main3d.gnu.TPROF.MPI.CUDA.ex inputs
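
Assuming the script above is saved as juwels.sbatch (the file name is arbitrary), it can be submitted and monitored with the standard Slurm commands:

sbatch juwels.sbatch   # submit the job
squeue -u $USER        # check its status in the queue
scancel <jobid>        # cancel it if necessary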

See Visualization with yt (for plotfiles) for more information on how to visualize the simulation results.