Running Codes on WPI Machines

This page describes how to run various codes on the WPI clusters (Ace and Turing). Jobs are typically submitted to the SLURM scheduler through a bash submission script.
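
Most of the scripts below follow the same pattern: a block of #SBATCH lines describing the requested resources, any needed module loads, and then the MPI launch of the code. A rough sketch of that pattern is shown here; the partition, module, and executable names are placeholders only, so use the cluster-specific values given in the sections below.

#!/bin/bash
#SBATCH -p some_partition        # placeholder partition name
#SBATCH -J myjob                 # job name
#SBATCH -N 1                     # number of nodes
#SBATCH --time=01:00:00          # wall-clock limit
#SBATCH --ntasks-per-node=16     # MPI ranks per node

module load some/module          # placeholder; see the code-specific sections below
mpirun -n 16 some_code > out     # placeholder executable

Submit a script with "sbatch script_name", check your jobs with "squeue -u $USER", and cancel a job with "scancel <jobid>".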

Ace

VASP

CPU Version

GPU Version

Add these settings to the INCAR file:

NCORE = 1
LREAL = .TRUE.  or LREAL = A
ALGO = Normal, Fast, or Very Fast

Then load the VASP module and submit with the svasp wrapper:

module load vasp/5.4.4
svasp -N 1 -n 4 -g 2 -t 24:00 -i INCAR

CP2K

Version 5.1 is installed. Submit the following script with "sbatch sub-ace.cp2k":

#!/bin/bash
#SBATCH -p normal
#SBATCH -J hmd-cl3b
#SBATCH -N 1
#SBATCH --time=12:00:00
#SBATCH --mem 40000
#SBATCH --ntasks-per-node=16
#SBATCH --cpus-per-task=1
#SBATCH --gres=gpu:0

export OMP_NUM_THREADS=1
export LD_LIBRARY_PATH=$LD_LIBRARY_PATH:/usr/lib64   # directory containing libint2-beta.so.2

cd /home/nadeskins/blah/blah/blah
orterun -np 16 cp2k.popt -i CuO.inp -o CuO.out
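
One optional tweak, not part of the script above: because the job asks for a single node, SLURM sets SLURM_NTASKS_PER_NODE to the value from the #SBATCH --ntasks-per-node line, so the launcher can pick up the rank count automatically instead of it being hard-coded in two places:

orterun -np $SLURM_NTASKS_PER_NODE cp2k.popt -i CuO.inp -o CuO.out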


NWChem

Load the module and submit with the snwchem wrapper:

module load nwchem/gcc-4.8.5/6.6
snwchem -n 8 -m 40000 -t 06:00 -i input.nw -l output.out -e error.err

DL_POLY Classic

Gaussian 09

Turing

VASP

Version 5.3.5

Submission script. Run "sbatch sub.vasp".

#!/bin/bash
#SBATCH -p short
#SBATCH -J sub
#SBATCH -N 1
#SBATCH --time=16:00:00
#SBATCH --mem 48000
#SBATCH --ntasks-per-node=18
#SBATCH --cpus-per-task=1
#SBATCH --gres=gpu:0
#SBATCH -x compute-0-[02,03,04,26]

export OMP_NUM_THREADS=1
module unload cp2k/gcc-4.8.5/2.6.0
module load vasp/5.3
export SLURM_TMPDIR=/local
mpirun -n 18 vasp >> out
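
An optional check after the job ends (not part of the submission script): a VASP run that exits normally prints a timing summary at the end of OUTCAR, so grepping for it is a quick way to confirm the calculation finished rather than being killed at the wall-clock limit.

grep "General timing" OUTCAR && echo "VASP exited normally"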

Submission of the GPU version: run "sbatch sub.vasp.gpu".

#!/bin/bash
#SBATCH -p short
#SBATCH -J cd
#SBATCH -N 1
#SBATCH -o INCAR.out
#SBATCH -e INCAR.error
#SBATCH -C K20|K40|K80
#SBATCH --time=12:00:00
#SBATCH --mem 128000
#SBATCH --ntasks-per-node=4
#SBATCH --cpus-per-task=1
#SBATCH --sockets-per-node=2
#SBATCH --ntasks-per-socket=2
#SBATCH --gres=gpu:2

export OMP_NUM_THREADS=1
module load vasp/5.4.4
mpirun -n 4 -genv I_MPI_DEBUG=4 -env I_MPI_PIN_PROCESSOR_LIST=allcores:map=scatter vasp_gpu >> INCAR.out
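
Optionally, you can log which GPUs the job actually received by adding a call to nvidia-smi (the standard NVIDIA utility, assumed to be available on the GPU nodes) ahead of the mpirun line; its output then lands in the same INCAR.out file:

nvidia-smi >> INCAR.out   # record the allocated GPUs for later reference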

Alternatively, you can run the svasp wrapper:

svasp -n 2 -N 1 -t 24:00 -C K40 -g 2 -i INCAR -l out

CP2K

"sbatch sub.cp2k"

#!/bin/bash
#SBATCH -p short
#SBATCH -J test
#SBATCH -N 1
#SBATCH --time=10:00:00
#SBATCH --mem 40000
#SBATCH --ntasks-per-node=18
#SBATCH --cpus-per-task=1
#SBATCH --gres=gpu:0
# Exclude specific nodes if needed:
#SBATCH -x compute-0-39

export OMP_NUM_THREADS=1
export LD_LIBRARY_PATH=$LD_LIBRARY_PATH:/usr/lib64   # directory containing libint2-beta.so.2
module load cp2k/gcc-4.8.5/2.6.0

mpiexec -np 18 cp2k.popt -i in -o out

NWChem

"sbatch sub.nwchem"

#!/bin/bash
#SBATCH -p short
#SBATCH -J small
#SBATCH -N 1
#SBATCH -e error
#SBATCH --time=24:00:00
#SBATCH --mem 64000
#SBATCH --ntasks-per-node=48

export NWCHEM_BASIS_LIBRARY=/cm/shared/spack/opt/spack/linux-rhel7-x86_64/gcc-4.8.5/nwchem-6.6-5dyyf2abkzpjy6nagbrcme4q45eezic5/share/nwchem/libraries/

module load nwchem/gcc-4.8.5/6.6
mpirun -n 48 nwchem in.nw >> in.out

Quantum Espresso

"sbatch sub.qe"

#!/bin/bash
#SBATCH -p short
#SBATCH -J qe_co2
#SBATCH -N 1
#SBATCH --time=01:00:00
#SBATCH --mem 40000
#SBATCH --ntasks-per-node=18
#SBATCH --cpus-per-task=1
#SBATCH --gres=gpu:0

module load espresso/gcc-4.8.5/6.1.0
module load netlib-scalapack/gcc-4.8.5/2.0.2
module load openmpi/gcc-4.8.5/2.1.1
module load openblas/gcc-4.8.5/0.2.19
module load elpa/gcc-4.8.5/2016.05.004
module load fftw/gcc-4.8.5/3.3.6-pl2

export OMP_NUM_THREADS=1
#export PSEUDO_DIR=`pwd`

mpiexec -np 18 pw.x < qe.in > qe.out
echo "QE done in `pwd`" >> ~/log
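
If the stdin redirection ("pw.x < qe.in") misbehaves under the MPI launcher (some MPI setups do not forward standard input to every rank), pw.x also accepts the input file as a command-line option. The following is an equivalent alternative to the mpiexec line above, not a required change:

mpiexec -np 18 pw.x -in qe.in > qe.out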

DL_POLY Classic

Gaussian 09