Emulating Slurm on Ubuntu 16.04

Installing/emulating SLURM on an Ubuntu 16.04 desktop: slurmd fails to start

The value in front of ControlMachine has to match the output of hostname -s on the machine where slurmctld starts. The same holds for NodeName; it has to match the output of hostname -s on the machine where slurmd starts. As you only have one machine, and it appears to be called Haggunenon, the relevant lines in slurm.conf should be:

ControlMachine=Haggunenon
[...]
NodeName=Haggunenon CPUs=1 State=UNKNOWN
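To double-check that the names match, something like this works (the check_name helper is made up for illustration; the conf path is the Ubuntu package default and may differ on your system):

```shell
# Hypothetical sanity check: does ControlMachine in slurm.conf match
# the short hostname of this machine?
check_name() {  # usage: check_name /path/to/slurm.conf expected-hostname
    if grep -q "^ControlMachine=$2$" "$1" 2>/dev/null
    then echo match
    else echo mismatch
    fi
}

check_name /etc/slurm-llnl/slurm.conf "$(hostname -s)"
```

If this prints mismatch, edit slurm.conf rather than renaming the machine.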

If you want to start several slurmd daemons to emulate a larger cluster, you will need to start each one with the -N option (but that requires that Slurm be built with the --enable-multiple-slurmd configure option).
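That multiple-slurmd setup would look roughly like the sketch below. The node names are made up, and the run wrapper only echoes the commands (a dry run); remove it to execute them for real:

```shell
# Dry-run sketch of building Slurm with multiple-slurmd support and
# starting one slurmd per emulated node. Node names node1..node3 are
# assumptions; slurm.conf would also need one NodeName entry per
# emulated node, each with a distinct port.
run() { echo "+ $*"; }   # drop 'run' to actually execute

run ./configure --enable-multiple-slurmd
run make
run sudo make install

for n in node1 node2 node3; do
    run sudo slurmd -N "$n"
done
```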


UPDATE. Here is a walkthrough. I set up a VM with Vagrant and VirtualBox (vagrant init ubuntu/xenial64 ; vagrant up) and then, after vagrant ssh, ran the following:

ubuntu@ubuntu-xenial:~$ lsb_release -a
No LSB modules are available.
Distributor ID: Ubuntu
Description: Ubuntu 16.04.3 LTS
Release: 16.04
Codename: xenial
ubuntu@ubuntu-xenial:~$ sudo apt-get update
Hit:1 http://archive.ubuntu.com/ubuntu xenial InRelease
Get:2 http://security.ubuntu.com/ubuntu xenial-security InRelease [102 kB]
[...]
Get:35 http://archive.ubuntu.com/ubuntu xenial-backports/universe Translation-en [3,060 B]
Fetched 23.6 MB in 4s (4,783 kB/s)
Reading package lists... Done
ubuntu@ubuntu-xenial:~$ sudo apt-get install munge libmunge2
Reading package lists... Done
Building dependency tree
Reading state information... Done
The following NEW packages will be installed:
libmunge2 munge
0 upgraded, 2 newly installed, 0 to remove and 0 not upgraded.
Need to get 102 kB of archives.
After this operation, 351 kB of additional disk space will be used.
Get:1 http://archive.ubuntu.com/ubuntu xenial-updates/universe amd64 libmunge2 amd64 0.5.11-3ubuntu0.1 [18.4 kB]
Get:2 http://archive.ubuntu.com/ubuntu xenial-updates/universe amd64 munge amd64 0.5.11-3ubuntu0.1 [83.9 kB]
Fetched 102 kB in 0s (290 kB/s)
Selecting previously unselected package libmunge2.
(Reading database ... 57914 files and directories currently installed.)
Preparing to unpack .../libmunge2_0.5.11-3ubuntu0.1_amd64.deb ...
Unpacking libmunge2 (0.5.11-3ubuntu0.1) ...
Selecting previously unselected package munge.
Preparing to unpack .../munge_0.5.11-3ubuntu0.1_amd64.deb ...
Unpacking munge (0.5.11-3ubuntu0.1) ...
Processing triggers for libc-bin (2.23-0ubuntu9) ...
Processing triggers for man-db (2.7.5-1) ...
Processing triggers for systemd (229-4ubuntu21) ...
Processing triggers for ureadahead (0.100.0-19) ...
Setting up libmunge2 (0.5.11-3ubuntu0.1) ...
Setting up munge (0.5.11-3ubuntu0.1) ...
Generating a pseudo-random key using /dev/urandom completed.
Please refer to /usr/share/doc/munge/README.Debian for instructions to generate more secure key.
Processing triggers for libc-bin (2.23-0ubuntu9) ...
Processing triggers for systemd (229-4ubuntu21) ...
Processing triggers for ureadahead (0.100.0-19) ...
ubuntu@ubuntu-xenial:~$ sudo apt-get install slurm-wlm slurm-wlm-basic-plugins
Reading package lists... Done
Building dependency tree
Reading state information... Done
The following additional packages will be installed:
fontconfig fontconfig-config fonts-dejavu-core freeipmi-common libcairo2 libdatrie1 libdbi1 libfontconfig1 libfreeipmi16 libgraphite2-3
[...]
python-minimal python2.7 python2.7-minimal slurm-client slurm-wlm slurm-wlm-basic-plugins slurmctld slurmd
0 upgraded, 43 newly installed, 0 to remove and 0 not upgraded.
Need to get 20.8 MB of archives.
After this operation, 87.3 MB of additional disk space will be used.
Do you want to continue? [Y/n] y
Get:1 http://archive.ubuntu.com/ubuntu xenial/main amd64 fonts-dejavu-core all 2.35-1 [1,039 kB]
[...]
Get:43 http://archive.ubuntu.com/ubuntu xenial/universe amd64 slurm-wlm amd64 15.08.7-1build1 [6,482 B]
Fetched 20.8 MB in 3s (5,274 kB/s)
Extracting templates from packages: 100%
Selecting previously unselected package fonts-dejavu-core.
(Reading database ... 57952 files and directories currently installed.)
[...]
Processing triggers for libc-bin (2.23-0ubuntu9) ...
Processing triggers for systemd (229-4ubuntu21) ...
Processing triggers for ureadahead (0.100.0-19) ...
ubuntu@ubuntu-xenial:~$ sudo vim /etc/slurm-llnl/slurm.conf
ubuntu@ubuntu-xenial:~$ grep -v \# /etc/slurm-llnl/slurm.conf
ControlMachine=ubuntu-xenial
AuthType=auth/munge
CacheGroups=0
CryptoType=crypto/munge
MpiDefault=none
ProctrackType=proctrack/pgid
ReturnToService=1
SlurmctldPidFile=/var/run/slurm-llnl/slurmctld.pid
SlurmctldPort=6817
SlurmdPidFile=/var/run/slurm-llnl/slurmd.pid
SlurmdPort=6818
SlurmdSpoolDir=/var/lib/slurm-llnl/slurmd
SlurmUser=ubuntu
StateSaveLocation=/var/lib/slurm-llnl/slurmctld
SwitchType=switch/none
TaskPlugin=task/none
InactiveLimit=0
KillWait=30
MinJobAge=300
SlurmctldTimeout=120
SlurmdTimeout=300
Waittime=0
FastSchedule=1
SchedulerType=sched/backfill
SchedulerPort=7321
SelectType=select/linear
AccountingStorageType=accounting_storage/none
AccountingStoreJobComment=YES
ClusterName=cluster
JobCompType=jobcomp/none
JobAcctGatherFrequency=30
JobAcctGatherType=jobacct_gather/none
SlurmctldDebug=3
SlurmctldLogFile=/var/log/slurm-llnl/slurmctld.log
SlurmdDebug=3
SlurmdLogFile=/var/log/slurm-llnl/slurmd.log
NodeName=ubuntu-xenial CPUs=1 State=UNKNOWN
PartitionName=debug Nodes=ubuntu-xenial Default=YES MaxTime=INFINITE State=UP
ubuntu@ubuntu-xenial:~$ sudo chown ubuntu /var/log/slurm-llnl
ubuntu@ubuntu-xenial:~$ sudo chown ubuntu /var/lib/slurm-llnl/slurmctld
ubuntu@ubuntu-xenial:~$ sudo chown ubuntu /var/run/slurm-llnl
ubuntu@ubuntu-xenial:~$ sudo /etc/init.d/slurmctld start
[ ok ] Starting slurmctld (via systemctl): slurmctld.service.
ubuntu@ubuntu-xenial:~$ sudo /etc/init.d/slurmd start
[ ok ] Starting slurmd (via systemctl): slurmd.service.

And in the end, it gives me the expected result:

ubuntu@ubuntu-xenial:~$ sinfo
PARTITION AVAIL TIMELIMIT NODES STATE NODELIST
debug* up infinite 1 idle ubuntu-xenial
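Once sinfo reports the node idle, a quick smoke test is submitting a trivial job. Sketched as a dry run below (run only echoes the commands; drop it to execute for real):

```shell
# Dry-run sketch of a first test job on the single-node setup above.
run() { echo "+ $*"; }

run srun -N1 hostname           # should print the node name, ubuntu-xenial
run sbatch --wrap="sleep 30"    # batch job; watch it with: squeue
```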

If following the exact steps here does not help, try running:

sudo slurmctld -Dvvv
sudo slurmd -Dvvv

The messages should be explicit enough.
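If the daemons themselves look healthy but sinfo still reports the node down (common after an unclean restart), these two scontrol commands usually help. Sketched as a dry run, with the node name taken from the walkthrough above:

```shell
run() { echo "+ $*"; }   # dry run; drop 'run' to execute

# Return a DOWN/DRAINED node to service:
run sudo scontrol update NodeName=ubuntu-xenial State=RESUME
# Make the running daemons reread slurm.conf after edits:
run sudo scontrol reconfigure
```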

Emulating SLURM on Ubuntu 16.04

So ... we have an existing cluster here but it runs an older Ubuntu version which does not mesh well with my workstation running 17.04.

So on my workstation, I just made sure I had slurmctld (the backend) and slurmd installed, and then set up a trivial slurm.conf with

ControlMachine=mybox
# ...
NodeName=DEFAULT CPUs=4 RealMemory=4000 TmpDisk=50000 State=UNKNOWN
NodeName=mybox CPUs=4 RealMemory=16000

after which I restarted slurmctld and then slurmd. Now all is fine:

root@mybox:/etc/slurm-llnl$ sinfo
PARTITION AVAIL TIMELIMIT NODES STATE NODELIST
demo up infinite 1 idle mybox
root@mybox:/etc/slurm-llnl$

This is a degenerate setup; our real one has a mix of dev and prod machines and appropriate partitions. But this should answer your "can the backend really be a client" question. Also, my machine is not really called mybox, but that is not pertinent to the question in any case.

Using Ubuntu 17.04, all stock, with munge to communicate (which is the default anyway).

Edit: To wit:

me@mybox:~$ COLUMNS=90 dpkg -l '*slurm*' | grep ^ii
ii slurm-client 16.05.9-1ubun amd64 SLURM client side commands
ii slurm-wlm-basic- 16.05.9-1ubun amd64 SLURM basic plugins
ii slurmctld 16.05.9-1ubun amd64 SLURM central management daemon
ii slurmd 16.05.9-1ubun amd64 SLURM compute node daemon
me@mybox:~$

Running multiple worker daemons SLURM

As your intention seems to be just testing the behavior of Slurm, I would recommend using front-end mode, which lets you create dummy compute nodes on a single machine.

The Slurm FAQ has more details, but basically you must build your installation with this mode enabled:

./configure --enable-front-end  

and then configure the nodes in slurm.conf:

NodeName=test[1-100] NodeHostName=localhost

That guide also explains how to launch more than one real daemon on the same node by changing the ports, but for my testing purposes it was not necessary.
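For reference, a front-end slurm.conf might contain something like the fragment below. The node names, counts, and CPU figure are made up; FrontendName is only meaningful on a build configured with --enable-front-end:

```
# Hypothetical fragment for a front-end build (names and counts assumed)
FrontendName=localhost
NodeName=test[1-100] NodeHostName=localhost CPUs=1 State=UNKNOWN
PartitionName=debug Nodes=test[1-100] Default=YES MaxTime=INFINITE State=UP
```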

Good luck!

Ubuntu Server 16.04 with dropbear

Managed to figure it out in the end. I ended up writing a blog post about it: Unlocking Ubuntu Server 16 encrypted LUKS using Dropbear SSH. The post is very heavily based on the answer I found in SSH to decrypt encrypted LVM during headless server boot?; all I did was change the version-16-specific parts.

cheers
alexis


