# CME Atlas Cluster
## Setup
### Head Node
The head node will need an OS, a running DHCP/DNS server, a running TFTP server, and a few NFS exports.
I've used Ubuntu Server 16.04.
#### Some basics
Install a couple of packages the rest of the setup relies on:
```
sudo apt update
sudo apt install openssh-server ansible
```
#### Network
The server being used has four network ports, two embedded and two on a PCI expansion card. For this
cluster, port 0 (enp32s0) will be used to connect to the UNR network, and port 1 (enp34s0) will be used to
connect to a local network switch. This can be accomplished by editing `/etc/network/interfaces`.
In this case, enp32s0 is set to dhcp to get an IP address from the UNR network, and enp34s0 is set to static with an IP of 10.0.0.1.
```
# /etc/network/interfaces
# This file describes the network interfaces available on your system
# and how to activate them. For more information, see interfaces(5).
source /etc/network/interfaces.d/*
# The loopback network interface
auto lo
iface lo inet loopback
auto enp32s0
iface enp32s0 inet dhcp
auto enp34s0
iface enp34s0 inet static
address 10.0.0.1
netmask 255.255.255.0
network 10.0.0.0
```
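As a quick sanity check of the static addressing above, the network address implied by an IP/netmask pair can be computed in plain POSIX shell. This is just an illustrative sketch; the `network_of` helper is not a standard utility.

```shell
# network_of: AND each IP octet with the corresponding netmask octet
# to recover the network address. Illustrative helper, not a standard tool.
network_of() {
  oldifs=$IFS
  IFS=.
  set -- $1 $2            # split "a.b.c.d m1.m2.m3.m4" into 8 fields
  IFS=$oldifs
  echo "$(($1 & $5)).$(($2 & $6)).$(($3 & $7)).$(($4 & $8))"
}

network_of 10.0.0.1 255.255.255.0   # the static config above -> 10.0.0.0
```

If this prints anything other than 10.0.0.0 for your chosen address and mask, the `network` line in `/etc/network/interfaces` won't match the interface's actual subnet.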
#### DHCP/DNS
I've used `dnsmasq` for the DHCP/DNS server, and it is fairly straightforward to set up.
First, install the appropriate packages:
```
sudo apt update
sudo apt install dnsmasq
```
Once installed, the config file for dnsmasq is located at `/etc/dnsmasq.conf`. Below is an example config file.
This config file specifies an interface for dnsmasq to listen on, in this case enp34s0 (port 1), which ensures
DHCP is only served on the local network (10.0.0.0), and not the UNR network (134.197.0.0).
The `dhcp-option=3,10.0.0.1` line advertises the head node as the default gateway for DHCP clients, and the
`dhcp-boot` lines tell DHCP clients which PXE file to boot with.
```
# /etc/dnsmasq.conf
interface=enp34s0
dhcp-range=10.0.0.100,10.0.0.254,12h
dhcp-option=3,10.0.0.1
dhcp-authoritative
dhcp-boot=pxelinux.0
dhcp-boot=net:normalarch,pxelinux.0
#Optionally define MAC/IP for specific nodes
#dhcp-host=xx:xx:xx:xx:xx:xx,compute-1-01,10.0.0.101
#dhcp-host=xx:xx:xx:xx:xx:xx,compute-1-02,10.0.0.102
```
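Writing the commented `dhcp-host` reservation lines by hand gets tedious for many nodes. A small shell sketch can generate them from a list of MAC addresses; the `gen_dhcp_hosts` helper and the MACs below are made up for illustration.

```shell
# gen_dhcp_hosts: emit one dnsmasq dhcp-host reservation per MAC address,
# naming nodes compute-1-01, compute-1-02, ... with IPs starting at 10.0.0.101.
gen_dhcp_hosts() {
  i=1
  for mac in "$@"; do
    printf 'dhcp-host=%s,compute-1-%02d,10.0.0.%d\n' "$mac" "$i" $((100 + i))
    i=$((i + 1))
  done
}

# Placeholder MACs -- substitute the real ones from your nodes:
gen_dhcp_hosts aa:bb:cc:00:00:01 aa:bb:cc:00:00:02
```

Append the output to `/etc/dnsmasq.conf` and restart dnsmasq to pin each node to a predictable address.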
Make the dnsmasq service start on boot, and restart it to ensure all changes are live.
```
sudo update-rc.d dnsmasq defaults
sudo service dnsmasq restart
```
#### TFTP
For PXE clients to boot, they need boot files provided by the head node. To accomplish this, a TFTP server must be configured; in this case `tftpd-hpa` was used. Install the appropriate packages:
```
sudo apt update
sudo apt install tftpd-hpa
```
The configuration file for tftpd-hpa is located at `/etc/default/tftpd-hpa`. Below is an example config file. This config file specifies some options for the tftpd-hpa service, as well as specifying the root directory of the tftp server, in this case `/tftp`.
```
# /etc/default/tftpd-hpa
TFTP_USERNAME="tftp"
TFTP_DIRECTORY="/tftp"
TFTP_ADDRESS=":69"
TFTP_OPTIONS="--secure"
RUN_DAEMON="yes"
OPTIONS="-l -s /tftp"
```
Once configured, the `/tftp` directory will need to be created and populated. For PXE clients to boot, the following files and directories need to be in the `/tftp` directory:
```
boot/ images/ pxelinux.0 pxelinux.cfg/
```
Most of the files can be populated from these commands:
```
sudo mkdir /tftp
sudo cp /usr/lib/PXELINUX/pxelinux.0 /tftp
sudo mkdir -p /tftp/boot
sudo cp -r /usr/lib/syslinux/modules/bios /tftp/boot/isolinux
sudo mkdir -p /tftp/pxelinux.cfg
sudo mkdir -p /tftp/images
sudo touch /tftp/pxelinux.cfg/default
```
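To double-check that nothing was missed, here is a sketch of a layout check for the tftp root; the `check_tftp_root` function is purely illustrative.

```shell
# check_tftp_root: confirm the files and directories PXE clients expect
# are present under a given tftp root (e.g. /tftp).
check_tftp_root() {
  rc=0
  for p in pxelinux.0 boot images pxelinux.cfg pxelinux.cfg/default; do
    if [ ! -e "$1/$p" ]; then
      echo "missing: $1/$p"
      rc=1
    fi
  done
  [ "$rc" -eq 0 ] && echo "tftp root $1 looks complete"
  return "$rc"
}
```

On the head node, `check_tftp_root /tftp` should report the root complete once the commands above have been run.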
Make the tftpd-hpa service start on boot, and restart it to ensure all changes are live.
```
sudo update-rc.d tftpd-hpa defaults
sudo service tftpd-hpa restart
```
#### PXE
The menu file for PXE is now located at `/tftp/pxelinux.cfg/default`. This can be configured to your liking, but here is a basic menu that will get the job done. The most important part to keep consistent if the menu is changed is the boot option for the NFSRoot label. This tells the PXE booting client to use the kernel located in the TFTP root, and to mount its root filesystem from 10.0.0.1:/exports/xenial (which will be created later) as read-only.
```
# /tftp/pxelinux.cfg/default
default menu.c32
prompt 0
timeout 30
ONTIMEOUT AtlasNFSRoot
MENU TITLE PXE Boot Menu
LABEL AtlasNFSRoot
MENU LABEL Atlas NFS Root
KERNEL /images/ubuntu-1604/linux
APPEND root=/dev/nfs initrd=/images/ubuntu-1604/initrd.img nfsroot=10.0.0.1:/exports/xenial ip=dhcp ro
```
#### Exports
We will use `/exports/` as our exporting directory, so it will need to be created.
```
sudo mkdir /exports
```
Now, add this export to `/etc/exports`, and sync the changes with `sudo exportfs -arv`.
```
# /etc/exports: the access control list for filesystems which may be exported
# to NFS clients. See exports(5).
/exports/xenial 10.0.0.0/24(ro,async,no_root_squash,no_subtree_check,insecure)
```
#### Creating the Filesystem
Then we use `debootstrap` to create a filesystem for the booting nodes to mount, which can be installed via
```
sudo apt update
sudo apt install debootstrap
```
Once installed, use debootstrap to create a filesystem with a specified architecture, distribution, and mirror; in our case amd64, xenial, and archive.ubuntu.com.
```
sudo debootstrap --arch amd64 xenial /exports/xenial http://archive.ubuntu.com/ubuntu
```
After debootstrap is finished, a few things will need to be configured within the created filesystem.
First, copy over the current apt sources from the head node:
```
sudo cp /etc/apt/sources.list /exports/xenial/etc/apt/sources.list
```
Now use `chroot` to enter the filesystem and install packages and make configuration changes.
```
sudo chroot /exports/xenial/
```
Some essential packages to install within the filesystem are:
```
sudo apt install linux-firmware nano build-essential openssh-server munge slurm-llnl ntp
```
For clients to boot from this nfsroot, some changes to fstab will need to be made. The nfs option tells fstab to mount a directory via NFS, and the tmpfs option mounts a directory in memory.
Other NFS mounts are included for mirror synchronicity across compute nodes (you'll see it all as we go).
Within the chroot environment, replace `/etc/fstab` with this:
```
# /etc/fstab
proc /proc proc defaults 0 0
/dev/nfs / nfs defaults,ro 1 1
none /tmp tmpfs defaults 0 0
none /var/tmp tmpfs defaults 0 0
none /var/log tmpfs defaults 0 0
none /var/lib/lightdm-data tmpfs defaults 0 0
none /var/lib/ubuntu-drivers-common tmpfs defaults 0 0
none /var/lib/pbis tmpfs defaults 0 0
#none /var/lib/lightdm tmpfs defaults 0 0
none /usr/local/home/cse-admin tmpfs defaults 0 0
none /var/lib/dhcp tmpfs defaults 0 0
none /var/spool/slurm tmpfs defaults,uid=slurm,gid=slurm 0 0
10.0.0.1:/opt /opt nfs defaults,ro,nolock 0 0
10.0.0.1:/usr /usr nfs defaults,ro,nolock 0 0
10.0.0.1:/home /home nfs defaults,rw,nolock 0 0
10.0.0.1:/scratch /scratch nfs defaults,rw,nolock 0 0
10.0.0.1:/etc/slurm-llnl /etc/slurm-llnl nfs defaults,ro,nolock 0 0
```
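A quick way to eyeball an fstab like the one above is to list which filesystem type backs each mount point. The `fstab_types` helper below is just an illustrative sketch using awk.

```shell
# fstab_types: print "fstype mountpoint" for each non-comment fstab line read
# from stdin, so tmpfs vs nfs mounts can be reviewed at a glance.
fstab_types() {
  awk '$1 !~ /^#/ && NF >= 3 {print $3, $2}'
}

# Demonstration on a small fragment; on a real node you would run
# `fstab_types < /exports/xenial/etc/fstab` instead.
fstab_types <<'EOF'
proc /proc proc defaults 0 0
/dev/nfs / nfs defaults,ro 1 1
none /tmp tmpfs defaults 0 0
10.0.0.1:/home /home nfs defaults,rw,nolock 0 0
EOF
```

Anything that compute nodes must be able to write, but that is not listed as tmpfs or as an rw NFS mount, will fail at boot because the nfsroot itself is read-only.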
#### Updating network configuration
To ensure booting does not hang when an interface is not yet up, enable hotplugging of the interface.
Edit `/etc/network/interfaces` and ensure `allow-hotplug` is set for the primary PXE boot interface.
```
#/etc/network/interfaces
source-directory /etc/network/interfaces.d
allow-hotplug enp34s0
iface enp34s0 inet dhcp
```
Exit the chroot environment with `exit` when finished.
#### SSH Access Setup
Now we will generate an ssh key that will be distributed to each node and allow seamless ssh access.
```
ssh-keygen
sudo mkdir -p /exports/xenial/root/.ssh/
sudo cp ~/.ssh/id_rsa.pub /exports/xenial/root/.ssh/authorized_keys
```
(If the last command doesn't work, just copy the public key into the authorized_keys file manually.)
We can also put our key in our own authorized keys file, allowing other nodes to be accessed easily.
```
cat ~/.ssh/id_rsa.pub >> ~/.ssh/authorized_keys
```
#### Management setup
Next we need to install a few things on both the head node and PXE root filesystem.
Management software such as Slurm requires packages such as munge (credentials) and ntp (synced time) to work correctly.
Ensure they are installed on the head node:
```
sudo apt update
sudo apt install ntp munge slurm-llnl
```
##### NTP and munge
Configure ntp as a local-net timeserver by adding the following lines to the end of `/etc/ntp.conf`:
```
server 127.127.1.0
fudge 127.127.1.0 stratum 10
```
Similarly, we edit the ntp configuration of the PXE root filesystem by editing `/exports/xenial/etc/ntp.conf`
and adding the line:
```
server head-01 iburst
```
Where head-01 is the hostname of the head node.
Restart the ntp service and check munge on the head node:
```
sudo service ntp restart
munge -n | unmunge
```
##### Slurm
Make a directory for slurm in `/var/spool` on the head node and PXE root:
```
sudo mkdir -p /var/spool/slurm
sudo mkdir -p /exports/xenial/var/spool/slurm
```
Edit the `/usr/lib/tmpfiles.d/[munge, slurmd, slurmctld].conf` files on the head node. These are mounted
directly in the PXE boot too, so only one set is needed.
```
#munge.conf
d /var/run/munge 0755 munge munge -
d /var/log/munge 0700 munge munge -
d /var/lib/munge 0711 munge munge -
```
```
#slurmd.conf
d /var/run/slurm-llnl 0755 slurm slurm - -
```
```
#slurmctld.conf
d /var/run/slurm-llnl 0755 slurm slurm - -
```
Now edit `/etc/slurm-llnl/slurm.conf` on the head node, changing the node configurations
as needed for your system. This is not a complete conf file; you must make one by following the link below.
https://slurm.schedmd.com/configurator.easy.html
Edit as needed. (Tip: running `slurmd -C` on a node prints a NodeName line describing its detected hardware, which can be pasted into the node list.)
```
ControlMachine=head-01
ControlAddr=10.0.0.1
SlurmctldPidFile=/var/run/slurm-llnl/slurmctld.pid
SlurmctldPort=6817
SlurmdPidFile=/var/run/slurm-llnl/slurmd.pid
SlurmdPort=6818
SlurmdSpoolDir=/var/spool/slurm
SlurmUser=slurm
StateSaveLocation=/var/spool/slurm-state
SlurmctldTimeout=10
SlurmdTimeout=10
ClusterName=cme_atlas
# COMPUTE NODES
NodeName=compute-1-01 Sockets=1 CPUs=4 RealMemory=3500 CoresPerSocket=4 ThreadsPerCore=1 State=IDLE
NodeName=compute-1-02 Sockets=1 CPUs=4 RealMemory=7900 CoresPerSocket=4 ThreadsPerCore=1 State=IDLE
NodeName=compute-1-03 Sockets=1 CPUs=4 RealMemory=7900 CoresPerSocket=4 ThreadsPerCore=1 State=IDLE
NodeName=head-01 Sockets=1 CPUs=4 RealMemory=7900 CoresPerSocket=4 ThreadsPerCore=1 State=UNKNOWN
PartitionName=debug Nodes=head-01 Default=NO MaxTime=INFINITE State=UP
PartitionName=comp Nodes=head-01,compute-1-[01-03] Default=YES MaxTime=INFINITE State=UP
```
#### Exportfs
Finally, we export all this glory over NFS.
Add our needed exports to `/etc/exports`, and sync the changes with `sudo exportfs -arv`.
```
# /etc/exports: the access control list for filesystems which may be exported
# to NFS clients. See exports(5).
/exports/xenial 10.0.0.0/24(ro,async,no_root_squash,no_subtree_check,insecure)
/opt 10.0.0.0/24(ro,async,no_root_squash,no_subtree_check,insecure)
/home 10.0.0.0/24(rw,sync,no_root_squash,no_subtree_check,insecure)
/scratch 10.0.0.0/24(rw,sync,no_subtree_check,insecure)
/usr 10.0.0.0/24(ro,async,no_root_squash,no_subtree_check,insecure)
/etc/slurm-llnl 10.0.0.0/24(ro,async,no_root_squash,no_subtree_check)
```
#### Generating initramfs
Last but not least, you will need to generate a new kernel and initramfs in order for it to support nfsroot arguments. This can be done with `initramfs-tools`:
```
sudo apt update
sudo apt install initramfs-tools
```
Edit `/etc/initramfs-tools/initramfs.conf`, and change the entries BOOT to nfs, MODULES to most, and NFSROOT to auto.
```
# initramfs.conf
# Configuration file for mkinitramfs(8). See initramfs.conf(5).
#
# Note that configuration options from this file can be overridden
# by config files in the /etc/initramfs-tools/conf.d directory.
BOOT=nfs
MODULES=most
BUSYBOX=auto
COMPRESS=gzip
NFSROOT=auto
```
Now, generate the initramfs, and copy it and the kernel to the tftp directory:
```
sudo mkdir -p /tftp/images/ubuntu-1604
sudo mkinitramfs -o /tftp/images/ubuntu-1604/initrd.img
sudo cp /boot/vmlinuz-$(uname -r) /tftp/images/ubuntu-1604/linux
```
IMPORTANT:
Edit your `/etc/initramfs-tools/initramfs.conf` and comment out the `BOOT=nfs` line.
This prevents update-initramfs from turning the head node into a PXE boot machine later on!
### Booting
At this point, you should have a bootable system. Add another node to the local network switch, turn it on, and enable PXE booting in the BIOS. The machine should come up with a PXE menu, and boot from the AtlasNFSRoot.
After booting, attempt to connect to each compute node via ssh. Ensure munge, slurm, and other tools are operating normally.
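Checking nodes one by one gets old quickly. Here is a sketch that expands the compute-1-NN naming scheme used above into a hostname list for scripted checks; the `expand_nodes` helper is illustrative, not a standard tool.

```shell
# expand_nodes: print prefixNN hostnames for a zero-padded index range,
# matching the compute-1-[01-03] style used in slurm.conf.
expand_nodes() {
  n=$2
  while [ "$n" -le "$3" ]; do
    printf '%s%02d\n' "$1" "$n"
    n=$((n + 1))
  done
}

# Example: loop over the nodes and test ssh reachability (uncomment to use).
# for host in $(expand_nodes compute-1- 1 3); do
#   ssh -o ConnectTimeout=5 "$host" true && echo "$host: ok" || echo "$host: DOWN"
# done
expand_nodes compute-1- 1 3
```

The same list feeds naturally into `munge -n | ssh "$host" unmunge` round-trips or any other per-node health check.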