Nexus/MC2
The Nexus scheduler houses MC2's new computational partition.
Submission Nodes
There are two Nexus submission nodes available exclusively to MC2 users:
nexusmc200.umiacs.umd.edu
nexusmc201.umiacs.umd.edu
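You can log into either submission node over SSH with your UMIACS account; the username below is a placeholder.

$ ssh username@nexusmc200.umiacs.umd.edu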
Resources
The MC2 partition has nodes brought over from the previous standalone MC2 Slurm scheduler. The compute nodes are named twist##.
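If you want to check which twist## nodes are currently in the partition and what resources they provide, you can query Slurm from a submission node. These are standard Slurm commands; the exact node list and output will vary.

$ sinfo --partition=mc2 --Node --long
$ scontrol show node twist00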
QoS
MC2 users have access to all of the standard QoSes in the mc2 partition when using the mc2 account.
$ show_qos
        Name     MaxWall MaxJobs                        MaxTRES                      MaxTRESPU              GrpTRES
------------ ----------- ------- ------------------------------ ------------------------------ --------------------
      normal
   scavenger  2-00:00:00             cpu=64,gres/gpu=8,mem=256G   cpu=192,gres/gpu=24,mem=768G
      medium  2-00:00:00               cpu=8,gres/gpu=2,mem=64G
        high  1-00:00:00             cpu=16,gres/gpu=4,mem=128G
     default  3-00:00:00               cpu=4,gres/gpu=1,mem=32G
        tron                                                         cpu=32,gres/gpu=4,mem=256G
   huge-long 10-00:00:00             cpu=32,gres/gpu=8,mem=256G
        clip                                                                                        cpu=339,mem=2926G
       class                                                         cpu=32,gres/gpu=4,mem=256G
       gamma                                                                                        cpu=179,mem=1511G
         mc2                                                                                        cpu=307,mem=1896G
        cbcb                                                                                       cpu=913,mem=46931G
     highmem 21-00:00:00                       cpu=32,mem=2000G
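The show_qos listing above summarizes the Slurm QoS definitions. If you prefer to query the same limits directly, you can use sacctmgr, a standard Slurm command; the field selection below is an illustrative subset.

$ sacctmgr show qos format=Name,MaxWall,MaxTRES,MaxTRESPU,GrpTRES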
Jobs
You will need to specify --partition=mc2, --account=mc2, and a specific --qos to be able to submit jobs to the MC2 partition.
[username@nexusmc200:~ ] $ srun --pty --ntasks=4 --mem=8G --qos=default --partition=mc2 --account=mc2 --time 1-00:00:00 bash
srun: job 218874 queued and waiting for resources
srun: job 218874 has been allocated resources
[username@twist00:~ ] $ scontrol show job 218874
JobId=218874 JobName=bash
   UserId=username(1000) GroupId=username(21000) MCS_label=N/A
   Priority=897 Nice=0 Account=mc2 QOS=default
   JobState=RUNNING Reason=None Dependency=(null)
   Requeue=1 Restarts=0 BatchFlag=0 Reboot=0 ExitCode=0:0
   RunTime=00:00:06 TimeLimit=1-00:00:00 TimeMin=N/A
   SubmitTime=2022-11-18T11:13:56 EligibleTime=2022-11-18T11:13:56
   AccrueTime=2022-11-18T11:13:56
   StartTime=2022-11-18T11:13:56 EndTime=2022-11-19T11:13:56 Deadline=N/A
   PreemptEligibleTime=2022-11-18T11:13:56 PreemptTime=None
   SuspendTime=None SecsPreSuspend=0 LastSchedEval=2022-11-18T11:13:56 Scheduler=Main
   Partition=mc2 AllocNode:Sid=nexusmc200:25443
   ReqNodeList=(null) ExcNodeList=(null)
   NodeList=twist00
   BatchHost=twist00
   NumNodes=1 NumCPUs=4 NumTasks=4 CPUs/Task=1 ReqB:S:C:T=0:0:*:*
   TRES=cpu=4,mem=8G,node=1,billing=13
   Socks/Node=* NtasksPerN:B:S:C=0:0:*:* CoreSpec=*
   MinCPUsNode=1 MinMemoryNode=8G MinTmpDiskNode=0
   Features=(null) DelayBoot=00:00:00
   OverSubscribe=OK Contiguous=0 Licenses=(null) Network=(null)
   Command=bash
   WorkDir=/nfshomes/username
   Power=
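The same options can be used in a batch script. Below is a minimal sbatch sketch using the flags from this section; the job name, script file name, and the hostname command standing in for a real workload are illustrative.

#!/bin/bash
#SBATCH --job-name=mc2-example   # illustrative name
#SBATCH --partition=mc2
#SBATCH --account=mc2
#SBATCH --qos=default
#SBATCH --ntasks=4
#SBATCH --mem=8G
#SBATCH --time=1-00:00:00

hostname   # replace with your actual workload; this runs on one of the twist## nodes

Submit it from a submission node with sbatch:

[username@nexusmc200:~ ] $ sbatch mc2-example.sh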
Storage
All data filesystems that were available in the standalone MC2 cluster are also available in Nexus.
MC2 users can also request Nexus project allocations.