PBS/SGE commands vs Slurm
Other HPC clusters may use schedulers other than Slurm. This section gives a brief comparison of PBS, SGE and Slurm input parameters. Please note that in some cases there is no direct equivalent between the different systems.
Basic Job Options
| Comment | PBS | SGE | Slurm | 
|---|---|---|---|
| Give the job a name. | #PBS -N JobName | #$ -N JobName | #SBATCH --job-name=JobName or #SBATCH -J JobName Note: the job name appears in the queue listing but, unlike in PBS and SGE, is not used to name the output files. |
| Redirect standard output of job to this file. | #PBS -o path | #$ -o path | #SBATCH --output=path/file-%j.ext1  or #SBATCH -o path/file-%j.ext1 Note: %j is replaced by the job number  | 
| Redirect standard error of job to this file. | #PBS -e path | #$ -e path | #SBATCH --error=path/file-%j.ext2 or #SBATCH -e path/file-%j.ext2 Note: %j is replaced by the job number.  | 
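Putting these options together, a minimal Slurm script header might look like the sketch below (the job name, log directory and command are placeholders):

```bash
#!/bin/bash
#SBATCH --job-name=JobName              # PBS: #PBS -N JobName; SGE: #$ -N JobName
#SBATCH --output=logs/JobName-%j.out    # %j expands to the Slurm job ID
#SBATCH --error=logs/JobName-%j.err     # the logs/ directory must already exist

# The job name and ID are available inside the job as environment variables.
echo "Job $SLURM_JOB_ID ($SLURM_JOB_NAME) running on $(hostname)"
```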
Specify accounts, queues and working directories.
| Comment | PBS | SGE | Slurm | 
|---|---|---|---|
| Account to charge quota (if so set up) | #PBS -A AccountName | #$ -A AccountName | #SBATCH --account=AccountName or #SBATCH -A AccountName |
| Walltime | #PBS -l walltime=2:23:59:59 | #$ -l h_rt=hh:mm:ss e.g. #$ -l h_rt=96:00:00 | #SBATCH --time=2-23:59:59 or #SBATCH -t 2-23:59:59 Note the '-' between day(s) and hours in the Slurm format. |
| Change to the directory that the script was launched from | cd $PBS_O_WORKDIR | #$ -cwd | This is the default for Slurm. | 
| Specify a queue (partition) | #PBS -q batch | #$ -q QueueName | #SBATCH --partition=main or #SBATCH -p main Note: in Slurm a queue is called a partition, and the default is 'batch'. |
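As a sketch, the account, walltime and partition options combine in a script header like this (AccountName and main are placeholders for site-specific values):

```bash
#!/bin/bash
#SBATCH --account=AccountName   # only needed where accounting is set up
#SBATCH --time=2-23:59:59       # days-hours:minutes:seconds
#SBATCH --partition=main        # a Slurm "queue"; omit to use the default

# No 'cd' is required: Slurm starts the job in the submission directory.
pwd
```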
How to request nodes, sockets and cores.
| Comment | PBS | SGE | Slurm | 
|---|---|---|---|
| The number of compute cores to ask for | #PBS -l nodes=1:ppn=12 Asks for 12 CPU cores, which is all the cores on a MASSIVE node. Use "nodes=1" for a single-core job, or "nodes=1:ppn=4" for four CPU cores on one node (typically for multithreaded, SMP or OpenMP jobs). | #$ -pe smp 12 or #$ -pe orte_adv 12 MCC SGE did not implement running jobs across machines, due to limitations of the interconnect hardware. | #SBATCH --nodes=1 --ntasks=12 or #SBATCH -N1 -n12 --ntasks is not used in isolation but combined with other options such as --nodes=1. |
| The number of tasks per socket | | | --ntasks-per-socket=Number Request the maximum ntasks be invoked on each socket. Meant to be used with the --ntasks option. Related to --ntasks-per-node, except at the socket level instead of the node level. |
| Cores per task (for use with OpenMP) | | | --cpus-per-task=ncpus or -c ncpus Request that ncpus be allocated per process. The default is one CPU per process. |
| Specify per-core memory | #PBS -l pmem=4000MB Specifies how much memory you need per CPU core (1000MB if not specified). | No equivalent; SGE specifies memory per process. | #SBATCH --mem-per-cpu=24576 (per core) or #SBATCH --mem=24576 (per node) The Slurm default unit is MB. |
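For example, a single-node multithreaded (OpenMP-style) job combining these options could be sketched as follows (the executable name is hypothetical and the core and memory figures are illustrative):

```bash
#!/bin/bash
#SBATCH --nodes=1               # one node...
#SBATCH --ntasks=1              # ...running a single process
#SBATCH --cpus-per-task=12      # with 12 cores for its threads
#SBATCH --mem-per-cpu=4000      # MB per core (use --mem for a per-node limit)

# Tell OpenMP to use exactly the cores Slurm allocated.
export OMP_NUM_THREADS=$SLURM_CPUS_PER_TASK
./my_threaded_program           # hypothetical executable
```

For an MPI-style job you would instead raise --ntasks and launch the program with srun.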
Notify job progress via email.
| Comment | PBS | SGE | Slurm | 
|---|---|---|---|
| Email notification when: job fails. | #PBS -m a | #$ -m a | #SBATCH --mail-type=FAIL | 
| Email notification when: job begins. | #PBS -m b | #$ -m b | #SBATCH --mail-type=BEGIN | 
| Email notification when: job stops. | #PBS -m e | #$ -m e | #SBATCH --mail-type=END | 
| Email notification for ALL events | #PBS -m abe | #$ -m abe | #SBATCH --mail-type=ALL |
| e-mail address to send information to | #PBS -M name@email.address | #$ -M name@email.address | #SBATCH --mail-user=name@email.address |
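For example, to be emailed when a job starts, ends or fails (the address is a placeholder; --mail-type also accepts a comma-separated list):

```bash
#!/bin/bash
#SBATCH --mail-type=BEGIN,END,FAIL      # or simply: --mail-type=ALL
#SBATCH --mail-user=name@email.address

sleep 60   # stand-in for real work, long enough to see the BEGIN and END mails
```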