Nexus/MBRC

The Nexus scheduler houses MBRC's new computational partition. Only MBRC lab members are able to run non-interruptible jobs on these nodes.

Submission Nodes

There are two submission nodes for Nexus that are exclusively available to MBRC users:

  • nexusmbrc00.umiacs.umd.edu
  • nexusmbrc01.umiacs.umd.edu
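
You can log in to either node with SSH; for example (username below is a placeholder for your UMIACS username):

ssh username@nexusmbrc00.umiacs.umd.edu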

Resources

The MBRC partition has nodes brought over from the previous standalone MBRC Slurm scheduler. The compute nodes are named mbrc##.
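
To list the mbrc## nodes and their current state, you can query the partition from a submission node with the standard Slurm sinfo command:

[username@nexusmbrc00:~ ] $ sinfo --partition=mbrc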

QoS

MBRC users have access to all of the standard QoSes in the mbrc partition when using the mbrc account.

The additional QoS available specifically for the MBRC partition is:

  • huge-long: Allows for longer jobs using higher overall resources.
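
To confirm which QoSes your account can use, you can query your Slurm associations with the standard sacctmgr command:

[username@nexusmbrc00:~ ] $ sacctmgr show assoc user=$USER account=mbrc format=Account,QOS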

Please note that the partition has a GrpTRES limit of 100% of the available cores/RAM on the partition-specific nodes plus 50% of the available cores/RAM on legacy## nodes, so your job may need to wait if all available cores/RAM (or GPUs) are in use.
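
You can view the partition's configuration, including its node list and total trackable resources (TRES), with scontrol:

[username@nexusmbrc00:~ ] $ scontrol show partition mbrc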

Jobs

You will need to specify --partition=mbrc, --account=mbrc, and a specific --qos to be able to submit jobs to the MBRC partition.

[username@nexusmbrc00:~ ] $ srun --pty --ntasks=4 --mem=8G --qos=default --partition=mbrc --account=mbrc --time 1-00:00:00 bash
srun: job 218874 queued and waiting for resources
srun: job 218874 has been allocated resources
[username@mbrc00:~ ] $ scontrol show job 218874
JobId=218874 JobName=bash
   UserId=username(1000) GroupId=username(21000) MCS_label=N/A
   Priority=897 Nice=0 Account=mbrc QOS=default
   JobState=RUNNING Reason=None Dependency=(null)
   Requeue=1 Restarts=0 BatchFlag=0 Reboot=0 ExitCode=0:0
   RunTime=00:00:06 TimeLimit=1-00:00:00 TimeMin=N/A
   SubmitTime=2022-11-18T11:13:56 EligibleTime=2022-11-18T11:13:56
   AccrueTime=2022-11-18T11:13:56
   StartTime=2022-11-18T11:13:56 EndTime=2022-11-19T11:13:56 Deadline=N/A
   PreemptEligibleTime=2022-11-18T11:13:56 PreemptTime=None
   SuspendTime=None SecsPreSuspend=0 LastSchedEval=2022-11-18T11:13:56 Scheduler=Main
   Partition=mbrc AllocNode:Sid=nexusmbrc00:25443
   ReqNodeList=(null) ExcNodeList=(null)
   NodeList=mbrc00
   BatchHost=mbrc00
   NumNodes=1 NumCPUs=4 NumTasks=4 CPUs/Task=1 ReqB:S:C:T=0:0:*:*
   TRES=cpu=4,mem=8G,node=1,billing=2266
   Socks/Node=* NtasksPerN:B:S:C=0:0:*:* CoreSpec=*
   MinCPUsNode=1 MinMemoryNode=8G MinTmpDiskNode=0
   Features=(null) DelayBoot=00:00:00
   OverSubscribe=OK Contiguous=0 Licenses=(null) Network=(null)
   Command=bash
   WorkDir=/nfshomes/username
   Power=
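
The example above runs an interactive job with srun. Batch jobs submitted with sbatch need the same three options; a minimal sketch of a submission script (the job name, output pattern, and resource values below are placeholders) looks like:

#!/bin/bash
#SBATCH --job-name=mbrc-example   # placeholder job name
#SBATCH --partition=mbrc          # required
#SBATCH --account=mbrc            # required
#SBATCH --qos=default             # required: pick an appropriate QoS
#SBATCH --ntasks=4
#SBATCH --mem=8G
#SBATCH --time=1-00:00:00
#SBATCH --output=%x-%j.out        # placeholder output file pattern

hostname

Submit it from a submission node with sbatch, e.g. sbatch submit.sh (submit.sh is a placeholder filename).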

Storage

All data filesystems that were available in the standalone MBRC cluster are also available in Nexus.

MBRC users can also request Nexus project allocations.