SLURM/ClusterStatus
Revision as of 15:45, 7 May 2021
Cluster Status
SLURM offers a variety of tools to check the general status of nodes/partitions in a cluster.
sinfo
The sinfo command will show you the status of partitions in the cluster. Passing the -N flag will show each node individually.
<pre>
username@opensub00:sinfo
PARTITION AVAIL  TIMELIMIT  NODES  STATE NODELIST
dpart*       up   infinite      8   idle openlab[00-07]
gpu          up   infinite      2   idle openlab08
</pre>
<pre>
username@opensub00:sinfo -N
NODELIST   NODES PARTITION STATE
openlab00      1    dpart* idle
openlab01      1    dpart* idle
openlab02      1    dpart* idle
openlab03      1    dpart* idle
openlab04      1    dpart* idle
openlab05      1    dpart* idle
openlab06      1    dpart* idle
openlab07      1    dpart* idle
openlab08      1       gpu idle
</pre>
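If you need to consume this output programmatically, the whitespace-separated columns of <code>sinfo -N</code> are easy to parse. A minimal sketch in Python; the sample text is hard-coded here for illustration, whereas in practice you would capture it with <code>subprocess.run(["sinfo", "-N"], ...)</code>:

```python
# Count nodes per (partition, state) from `sinfo -N` output.
# The sample below is a hard-coded excerpt of the output shown above;
# in a real script, capture it via subprocess instead.
from collections import Counter

sample = """\
NODELIST   NODES PARTITION STATE
openlab00      1    dpart* idle
openlab01      1    dpart* idle
openlab08      1       gpu idle
"""

counts = Counter()
for line in sample.splitlines()[1:]:  # skip the header row
    nodelist, nodes, partition, state = line.split()
    counts[(partition, state)] += int(nodes)

print(counts[("dpart*", "idle")])  # 2
print(counts[("gpu", "idle")])     # 1
```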
scontrol
The scontrol command can be used to view the status/configuration of the nodes in the cluster. If passed specific node name(s), only information about those node(s) will be displayed; otherwise all nodes will be listed. To specify multiple nodes, separate each node name with a comma (no spaces).
<pre>
$ scontrol show nodes openlab00,openlab08
NodeName=openlab00 Arch=x86_64 CoresPerSocket=4
   CPUAlloc=8 CPUErr=0 CPUTot=8 CPULoad=7.10
   AvailableFeatures=(null)
   ActiveFeatures=(null)
   Gres=(null)
   NodeAddr=openlab00 NodeHostName=openlab00 Version=16.05
   OS=Linux RealMemory=7822 AllocMem=7822 FreeMem=149 Sockets=2 Boards=1
   State=ALLOCATED ThreadsPerCore=1 TmpDisk=49975 Weight=1 Owner=N/A MCS_label=N/A
   BootTime=2017-01-17T14:46:59 SlurmdStartTime=2017-01-17T14:47:43
   CapWatts=n/a
   CurrentWatts=0 LowestJoules=0 ConsumedJoules=0
   ExtSensorsJoules=n/s ExtSensorsWatts=0 ExtSensorsTemp=n/s

NodeName=openlab08 Arch=x86_64 CoresPerSocket=8
   CPUAlloc=1 CPUErr=0 CPUTot=16 CPULoad=1.19
   AvailableFeatures=(null)
   ActiveFeatures=(null)
   Gres=gpu:3
   NodeAddr=openlab08 NodeHostName=openlab08 Version=16.05
   OS=Linux RealMemory=128722 AllocMem=1024 FreeMem=395 Sockets=2 Boards=1
   State=MIXED ThreadsPerCore=1 TmpDisk=49975 Weight=1 Owner=N/A MCS_label=N/A
   BootTime=2016-12-22T20:26:52 SlurmdStartTime=2016-12-22T20:33:21
   CapWatts=n/a
   CurrentWatts=0 LowestJoules=0 ConsumedJoules=0
   ExtSensorsJoules=n/s ExtSensorsWatts=0 ExtSensorsTemp=n/s
</pre>
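The Key=Value layout of scontrol output is also straightforward to parse. A minimal sketch, using a hard-coded excerpt of the output above (a real script would capture it via <code>subprocess</code>; note that fields whose values contain spaces would need more careful handling than this simple split):

```python
# Parse scontrol's Key=Value fields into a dict.
# Sample is a hard-coded excerpt of the `scontrol show nodes` output above.
sample = ("NodeName=openlab00 Arch=x86_64 CoresPerSocket=4 "
          "CPUAlloc=8 CPUTot=8 State=ALLOCATED")

# Split on whitespace, then split each token once on "=".
fields = dict(tok.split("=", 1) for tok in sample.split())

print(fields["NodeName"])  # openlab00
print(fields["State"])     # ALLOCATED
```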
sacctmgr
The sacctmgr command shows cluster accounting information. One helpful use is listing the available QOS (Quality of Service) levels, which correspond to queues in systems like PBS/Torque.
<pre>
$ sacctmgr list qos format=Name,Priority,MaxWall,MaxJobsPU
      Name   Priority     MaxWall MaxJobsPU
---------- ---------- ----------- ---------
    normal          0
     dpart          0  2-00:00:00         8
       gpu          0    08:00:00         2
</pre>
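Time limits such as 2-00:00:00 above use SLURM's days-hours:minutes:seconds layout. A small helper (hypothetical name, handling only the D-HH:MM:SS and HH:MM:SS forms shown in the table) to convert them to seconds:

```python
def slurm_walltime_seconds(limit: str) -> int:
    """Convert a SLURM D-HH:MM:SS or HH:MM:SS time limit to seconds."""
    days = 0
    if "-" in limit:
        # A leading "D-" prefix gives the number of days.
        d, limit = limit.split("-", 1)
        days = int(d)
    h, m, s = (int(x) for x in limit.split(":"))
    return ((days * 24 + h) * 60 + m) * 60 + s

print(slurm_walltime_seconds("2-00:00:00"))  # 172800
print(slurm_walltime_seconds("08:00:00"))    # 28800
```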