Essentials of running GTC
Bashrc | Job-Script | Makefile - to run GTC (v4.6U) on Param Pravega and the PTG cluster
Files for PTG Cluster

Below I have listed the bashrc, jobscript, and Makefile needed to run the non-GPU version of the GTC code for stellarators on the PTG cluster.

GTC v4.6U - f73abb


Bashrc

Add the code below to the .bashrc file in your HOME directory on the PTG cluster.

# .bashrc

# Source global definitions
if [ -f /etc/bashrc ]; then
        . /etc/bashrc
fi

# User specific environment
if ! [[ "$PATH" =~ "$HOME/.local/bin:$HOME/bin:" ]]
then
    PATH="$HOME/.local/bin:$HOME/bin:$PATH"
fi
export PATH

# Uncomment the following line if you don't like systemctl's auto-paging feature:
# export SYSTEMD_PAGER=

# User specific aliases and functions


# Set up Spack and load the MPI stack GTC was built against
. /exports/apps/installed/spack/share/spack/setup-env.sh

spack load openmpi@4.1.3 fabrics=psm2,ofi

# Make the PETSc and NetCDF-Fortran runtime libraries visible to the gtc binary
export LD_LIBRARY_PATH=/exports/apps/installed/spack/opt/spack/linux-rocky8-zen/gcc-8.5.0/petsc-3.18.1-xeuns4w5kms5jertyhfmlqwtmlmyhw3m/lib:/exports/apps/installed/spack/opt/spack/linux-rocky8-zen/gcc-8.5.0/netcdf-fortran-4.6.0-65d4q7c6xulphalt2lpacg3ytactn7g3/lib:$LD_LIBRARY_PATH
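
After re-sourcing the file (or logging in again), it is worth checking that the toolchain resolved correctly. A minimal sanity check; the expected results in the comments assume the Spack installation above:

source ~/.bashrc
which mpirun        # should resolve inside the Spack openmpi-4.1.3 tree
mpirun --version    # should report Open MPI 4.1.3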


Job-Script

Below is a list of the different GTC simulations one can submit on the cluster, each with its own jobscript; a generic submission example follows the lists.

Tokamak

  1. Tokamak: Linear Adiabatic Electron Response Jobscript (Electrostatic)
  2. (To be Updated) Tokamak: Non-Linear Adiabatic Electron Response Jobscript (Electrostatic)
  3. (To be Updated) Tokamak: Kinetic Electron Response Jobscript (Electrostatic)

Stellarator

  1. Stellarator: Linear Adiabatic Electron Response Jobscript (Electrostatic)
  2. (To be Updated) Stellarator: Non-Linear Adiabatic Electron Response Jobscript (Electrostatic)
  3. (To be Updated) Stellarator: Kinetic Electron Response Jobscript (Electrostatic)
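
Each jobscript is submitted with sbatch from the run directory. A minimal sketch, assuming the script is saved under the illustrative name job_gtc.sh next to the gtc executable and its input file:

sbatch job_gtc.sh     # submit; prints the assigned job ID
squeue -u $USER       # monitor your queued/running jobs
scancel <jobid>       # cancel a job if needed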

Tokamak:


Tokamak: Linear Adiabatic Electron Response Jobscript (Electrostatic)

For Tokamaks,

#!/bin/bash
#SBATCH -N 5
#SBATCH --ntasks-per-node=64
#SBATCH --time=100:00:00
#SBATCH --job-name=gtc_1.1
#SBATCH --error=%J.error
#SBATCH --output=%J.output
#SBATCH --partition=debug
#SBATCH -c 1
# 5 nodes x 64 tasks/node = 320 MPI tasks (a multiple of mtoroidal = 32; see note below)
#SBATCH -n 320

cd $SLURM_SUBMIT_DIR

# Load the libraries GTC was built against
spack load petsc
spack load netcdf-fortran
spack load openmpi@4.1.3 fabrics=psm2,ofi

# Make the PETSc and NetCDF-Fortran runtime libraries visible to the gtc binary
export LD_LIBRARY_PATH=/exports/apps/installed/spack/opt/spack/linux-rocky8-zen/gcc-8.5.0/petsc-3.18.1-xeuns4w5kms5jertyhfmlqwtmlmyhw3m/lib:/exports/apps/installed/spack/opt/spack/linux-rocky8-zen/gcc-8.5.0/netcdf-fortran-4.6.0-65d4q7c6xulphalt2lpacg3ytactn7g3/lib:$LD_LIBRARY_PATH

echo "Number of Nodes Allocated      = $SLURM_JOB_NUM_NODES"
echo "Number of Tasks Allocated      = $SLURM_NTASKS"
echo "Number of Cores/Task Allocated = $SLURM_CPUS_PER_TASK"

# Intel MPI settings (no effect under OpenMPI; retained from an Intel MPI setup)
export I_MPI_FALLBACK=disable
export I_MPI_FABRICS=shm:dapl
export I_MPI_DEBUG=9

# Note: with -c 1 above (one CPU per task), these 16 OpenMP threads would share a single core
export OMP_NUM_THREADS=16

# Directories GTC writes into: restart checkpoints, potential data, and tracked particles
mkdir -p restart_dir1
mkdir -p restart_dir2
mkdir -p phi_dir
mkdir -p trackp_dir


mpirun -n 320 --bind-to core --mca btl_tcp_if_include ib0 --mca orte_base_help_aggregate 0 ./gtc

In the above, 320 MPI processes are launched because mtoroidal = 32 (i.e., the whole computational domain is divided into 32 toroidal sections). The number of MPI processes must be a multiple of mtoroidal so that each toroidal section holds an integer number of particle domains (here 320/32 = 10).
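
Since this divisibility requirement is easy to break when changing the task count, a small pre-submission check can catch it early; a sketch (variable names are illustrative), suitable for the top of the jobscript:

# Abort early if the MPI task count is not a multiple of mtoroidal
ntasks=320
mtoroidal=32
if (( ntasks % mtoroidal != 0 )); then
    echo "ntasks ($ntasks) must be a multiple of mtoroidal ($mtoroidal)" >&2
    exit 1
fi
echo "Particle domains per toroidal section: $(( ntasks / mtoroidal ))"   # 10 here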


Tokamak: Non-Linear Adiabatic Electron Response Jobscript (Electrostatic)


Tokamak: Kinetic Electron Response Jobscript (Electrostatic)



Stellarator:


Stellarator: Linear Adiabatic Electron Response Jobscript (Electrostatic)

For Stellarators,

#!/bin/bash
#SBATCH -N 10
#SBATCH --ntasks-per-node=64
#SBATCH --time=100:00:00
#SBATCH --job-name=gtc_1.1
#SBATCH --error=%J.error
#SBATCH --output=%J.output
#SBATCH --partition=debug
#SBATCH -c 1
# 639 MPI tasks (a multiple of mtoroidal = 9 for W7-X; see note below) fit on 10 nodes
#SBATCH -n 639

cd $SLURM_SUBMIT_DIR

# Load the libraries GTC was built against
spack load petsc
spack load netcdf-fortran
spack load openmpi@4.1.3 fabrics=psm2,ofi

# Make the PETSc and NetCDF-Fortran runtime libraries visible to the gtc binary
export LD_LIBRARY_PATH=/exports/apps/installed/spack/opt/spack/linux-rocky8-zen/gcc-8.5.0/petsc-3.18.1-xeuns4w5kms5jertyhfmlqwtmlmyhw3m/lib:/exports/apps/installed/spack/opt/spack/linux-rocky8-zen/gcc-8.5.0/netcdf-fortran-4.6.0-65d4q7c6xulphalt2lpacg3ytactn7g3/lib:$LD_LIBRARY_PATH

echo "Number of Nodes Allocated      = $SLURM_JOB_NUM_NODES"
echo "Number of Tasks Allocated      = $SLURM_NTASKS"
echo "Number of Cores/Task Allocated = $SLURM_CPUS_PER_TASK"

# Intel MPI settings (no effect under OpenMPI; retained from an Intel MPI setup)
export I_MPI_FALLBACK=disable
export I_MPI_FABRICS=shm:dapl
export I_MPI_DEBUG=9

# Note: with -c 1 above (one CPU per task), these 16 OpenMP threads would share a single core
export OMP_NUM_THREADS=16

# Directories GTC writes into: restart checkpoints, potential data, and tracked particles
mkdir -p restart_dir1
mkdir -p restart_dir2
mkdir -p phi_dir
mkdir -p trackp_dir

mpirun -n 639 --bind-to core --mca btl_tcp_if_include ib0 --mca orte_base_help_aggregate 0 ./gtc

In the above, 639 MPI processes are launched because mtoroidal = 9 (for W7-X); the number of MPI processes must again be a multiple of mtoroidal, giving 639/9 = 71 particle domains per toroidal section.
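
The node request follows from the task count: at 64 tasks per node, 639 tasks need ceil(639/64) = 10 nodes, which matches -N 10 above. The same arithmetic as a one-off shell check (names illustrative):

ntasks=639; per_node=64
echo "Nodes needed: $(( (ntasks + per_node - 1) / per_node ))"   # prints 10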


Stellarator: Non-Linear Adiabatic Electron Response Jobscript (Electrostatic)


Stellarator: Kinetic Electron Response Jobscript (Electrostatic)



Makefile PTG

Follow the link to get the Makefile_PTG for building GTC v4.6U with ICONFIG = TOROIDAL3D (stellarators).
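
Pending the linked file, a hedged sketch of the build flow, assuming Makefile_PTG is dropped into the GTC v4.6U source tree and that ICONFIG = TOROIDAL3D is set inside it (verify both against the linked Makefile_PTG):

cp Makefile_PTG Makefile      # adopt the PTG-specific Makefile
# confirm ICONFIG = TOROIDAL3D inside the Makefile for stellarator builds
make clean && make            # should produce the ./gtc executable used by the jobscripts above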


Last Updated: 30 June 2023