## Useful Slurm commands
| **Command** | **Function** | **Usage** | **Example** |
| --- | --- | --- | --- |
| squeue | list jobs in the queue | squeue | `$ squeue` |
| srun | run a job interactively | srun -N <#nodes> -n <#cores> <command> | `$ srun -N 1 -n 1 hostname` |
| sbatch | submit a batch job script | sbatch <script> | `$ sbatch script.sh` |
| scancel | cancel a job | scancel <job_id> | `$ scancel 14157` |
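On success, `sbatch` prints `Submitted batch job <id>`, so the job ID can be captured in a script and reused later (e.g. for `scancel`). A minimal sketch; the actual `sbatch` call needs a cluster, so its expected output is used as a stand-in here:

```shell
# Stand-in for: submit_output=$(sbatch script.sh)
submit_output="Submitted batch job 14157"

# Strip everything up to the last space to keep only the numeric job ID
job_id=${submit_output##* }
echo "$job_id"

# The captured ID could then be used, e.g.: scancel "$job_id"
```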
## sbatch example
```bash
#!/bin/bash
#SBATCH -n 1 # Number of cores
#SBATCH -N 1 # Number of nodes
#SBATCH -t 02:00:00 # Runtime
#SBATCH --mem=64G # Memory pool for all cores (see also --mem-per-cpu)
#SBATCH --mail-user=user@domain.edu
#SBATCH --mail-type=ALL
#SBATCH --output=/path/to/output.log
# commands go below
printf "1,000,000 integers"
out=$(srun bin/parallel -c -s 1000000 -e 123)  # run the binary under srun, capture its output
printf '%s,' "$out" >> ./output/parallel/parallel.csv && printf ' %s s\n' "$out"
# ...
```
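The append-to-CSV pattern above can be tried without a cluster by substituting a stub value for the `srun` output (the `0.42` below is a hypothetical timing, not real data):

```shell
# Stand-in for: out=$(srun bin/parallel -c -s 1000000 -e 123)
out="0.42"

# Ensure the output directory exists before appending
mkdir -p ./output/parallel

# Append the value as a CSV field, then echo it to the terminal with units
printf '%s,' "$out" >> ./output/parallel/parallel.csv && printf '%s s\n' "$out"
```

Quoting the variable and using a fixed `printf` format string avoids surprises if the captured output ever contains `%` or whitespace.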