The Nexus cluster already has a large pool of compute resources made possible through leftover funding for the Brendan Iribe Center. Details on common nodes already in the cluster (Tron partition) can be found here.
The Vulcan cluster's standalone submission node vulcansub01.umiacs.umd.edu was retired on Thursday, September 21st, 2023 during that month's maintenance window (5-8pm). Please use nexusvulcan01.umiacs.umd.edu for any general purpose Vulcan compute needs. Please contact staff with any questions or concerns.
The Nexus cluster submission nodes that are allocated to Vulcan include nexusvulcan01.umiacs.umd.edu.
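For example, you can reach a Vulcan-allocated submission node over SSH (a minimal sketch, with <username> as a placeholder for your UMIACS username):

$ ssh <username>@nexusvulcan01.umiacs.umd.edu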
All partitions, QoSes, and account names from the standalone Vulcan cluster have been moved over to Nexus. However, please note that vulcan- is prepended to all of the values that were present in the standalone Vulcan cluster to distinguish them from existing values in Nexus. The lone exception is the base account, which was named vulcan in the standalone cluster and is also named just vulcan in Nexus.
Here are some before/after examples of job submission with various parameters (standalone Vulcan cluster submission command vs. Nexus cluster submission command).
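As an illustration (the resource requests here are hypothetical; only the renaming pattern is taken from the text above), an interactive job submitted on the standalone Vulcan cluster as:

$ srun --partition=dpart --account=vulcan --qos=medium --gres=gpu:1 --pty bash

would be submitted on Nexus with vulcan- prepended to the partition and QoS names (the base account name vulcan is unchanged):

$ srun --partition=vulcan-dpart --account=vulcan --qos=vulcan-medium --gres=gpu:1 --pty bash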
Vulcan users (exclusively) can schedule non-interruptible jobs on Vulcan nodes with any non-scavenger job parameters. Please note that the vulcan-dpart partition has a GrpTRES limit of 100% of the available cores/RAM on all vulcan## nodes in aggregate plus 50% of the available cores/RAM on all legacy## nodes in aggregate, so your job may need to wait if all available cores/RAM (or GPUs) are in use. It also has a maximum submission limit of 500 jobs per user simultaneously so as not to overload the cluster. This is codified by the partition QoS named vulcan.
Please note that the Vulcan compute nodes are also in the institute-wide scavenger partition in Nexus. Vulcan users still have scavenging priority over these nodes via the vulcan-scavenger partition: all vulcan- partition jobs other than vulcan-scavenger can preempt both vulcan-scavenger and scavenger partition jobs, and vulcan-scavenger partition jobs can preempt scavenger partition jobs.
There are currently 45 GPU nodes available running a mixture of NVIDIA RTX A6000, NVIDIA RTX A5000, NVIDIA RTX A4000, NVIDIA Quadro P6000, NVIDIA GeForce GTX 1080 Ti, NVIDIA GeForce RTX 2080 Ti, and NVIDIA Tesla P100 cards. There are also 2 CPU-only nodes available.
All nodes are scheduled with the SLURM resource manager.
There are three partitions available to general Vulcan SLURM users. You must specify a partition when submitting your job.
- vulcan-dpart - This is the default partition. Job allocations are guaranteed.
- vulcan-scavenger - This is the alternate partition that allows jobs longer run times and more resources, but jobs are preemptable when jobs in other vulcan- partitions are ready to be scheduled.
- vulcan-cpu - This partition is for CPU focused jobs. Job allocations are guaranteed.
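For example, the partition can be selected in a batch script's #SBATCH header; a minimal sketch, in which the script name, resource requests, and workload command are illustrative:

$ cat my_vulcan_job.sh
#!/bin/bash
#SBATCH --partition=vulcan-dpart    # or vulcan-scavenger / vulcan-cpu
#SBATCH --account=vulcan
#SBATCH --qos=vulcan-default
#SBATCH --gres=gpu:1
#SBATCH --time=1-00:00:00
python train.py                     # illustrative workload
$ sbatch my_vulcan_job.sh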
There are a few additional partitions available to subsets of Vulcan users based on specific requirements.
Vulcan has a base SLURM account vulcan which has a modest number of guaranteed billing resources available to all cluster users at any given time. Faculty who have invested in Vulcan compute infrastructure have an additional account provided to their sponsored accounts on the cluster, which provides a number of guaranteed billing resources corresponding to the amount that they invested.
If you do not specify an account when submitting your job, you will receive the vulcan account. If your faculty sponsor has their own account, it is recommended to use that account for job submission.
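For example, to submit against a faculty account (using vulcan-abhinav from the listing below purely as an example; a minimal sketch):

$ srun --partition=vulcan-dpart --account=vulcan-abhinav --qos=vulcan-default --gres=gpu:1 --pty bash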
The current faculty accounts are:
$ sacctmgr show account format=account%20,description%30,organization%10
             Account                          Descr        Org
-------------------- ------------------------------ ----------
                 ...                            ...        ...
              vulcan                         vulcan     vulcan
      vulcan-abhinav   vulcan - abhinav shrivastava     vulcan
      vulcan-djacobs          vulcan - david jacobs     vulcan
      vulcan-jbhuang         vulcan - jia-bin huang     vulcan
          vulcan-lsd           vulcan - larry davis     vulcan
      vulcan-metzler         vulcan - chris metzler     vulcan
         vulcan-rama        vulcan - rama chellappa     vulcan
       vulcan-ramani    vulcan - ramani duraiswami     vulcan
        vulcan-yaser          vulcan - yaser yacoob     vulcan
      vulcan-zwicker      vulcan - matthias zwicker     vulcan
                 ...                            ...        ...
Faculty can manage this list of users via our Directory application in the Security Groups section. The security group that controls access has the prefix vulcan_ and then the faculty username. It will also list slurm://nexusctl.umiacs.umd.edu as the associated URI.
You can check your account associations by running the show_assoc command. Please contact staff and include your faculty member in the conversation if you do not see the appropriate association.
$ show_assoc
      User          Account  MaxJobs       GrpTRES  QOS
---------- ---------------- -------- ------------- --------------------------------------------------------------------------------
       ...              ...      ...           ...  ...
   abhinav          abhinav       48                vulcan-cpu,vulcan-default,vulcan-high,vulcan-medium,vulcan-scavenger
   abhinav           vulcan       48                vulcan-cpu,vulcan-default,vulcan-medium,vulcan-scavenger
       ...              ...      ...           ...  ...
You can also see the total number of Trackable Resources (TRES) allowed for each account by running the following command. Please make sure you specify the account that you are looking for. As shown below, there is a concurrent limit of 64 total GPUs for all users not in a contributing faculty group.
$ sacctmgr show assoc account=vulcan format=user,account,qos,grptres
      User    Account                  QOS       GrpTRES
---------- ---------- -------------------- -------------
               vulcan                        gres/gpu=64
       ...        ...
You need to decide which QoS to submit with, as it will set a certain number of restrictions on your job. If you do not specify a QoS when submitting your job using the --qos parameter, you will receive the vulcan-default QoS, assuming you are using a Vulcan account.
The sacctmgr command will list the current QoSes. Either the vulcan-default, vulcan-medium, or vulcan-high QoS is required for the vulcan-dpart partition. Please note that only faculty accounts (see above) have access to the vulcan-high QoS.
The following example shows the current limits that each QoS has. The output is truncated to show only the relevant Vulcan QoSes.
$ show_qos
                Name     MaxWall                        MaxTRES MaxJobsPU                      MaxTRESPU
-------------------- ----------- ------------------------------ --------- ------------------------------
                 ...
          vulcan-cpu  2-00:00:00                cpu=1024,mem=4T         4
      vulcan-default  7-00:00:00       cpu=4,gres/gpu=1,mem=32G         2
       vulcan-exempt  7-00:00:00     cpu=32,gres/gpu=8,mem=256G         2
         vulcan-high  1-12:00:00     cpu=16,gres/gpu=4,mem=128G         2
        vulcan-janus  3-00:00:00    cpu=32,gres/gpu=10,mem=256G
       vulcan-medium  3-00:00:00       cpu=8,gres/gpu=2,mem=64G         2
       vulcan-sailon  3-00:00:00     cpu=32,gres/gpu=8,mem=256G                          gres/gpu=48
    vulcan-scavenger  3-00:00:00     cpu=32,gres/gpu=8,mem=256G
                 ...

$ show_partition_qos
                Name MaxSubmitPU                      MaxTRESPU              GrpTRES
-------------------- ----------- ------------------------------ --------------------
                 ...
              vulcan         500                                 cpu=1760,mem=15824G
    vulcan-scavenger         500
                 ...
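For example, to stay within the vulcan-medium limits shown above (3 days, up to 8 CPUs, 2 GPUs, and 64G of memory per job), a request might look like the following; a minimal sketch, where the exact amounts are up to you as long as they fit within the QoS limits:

$ srun --partition=vulcan-dpart --account=vulcan --qos=vulcan-medium --gres=gpu:2 --cpus-per-task=8 --mem=64G --time=3-00:00:00 --pty bash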
Vulcan has the following storage available. Please also review UMIACS Local Data Storage policies including any volume that is labeled as scratch.
Vulcan users can also request Nexus project allocations.
Home directories are intended to store personal or configuration files only. We encourage users to not share any data in their home directory.
Scratch data has no data protection including no snapshots and the data is not backed up. There are two types of scratch directories in the Vulcan compute infrastructure:
- Network scratch directory
- Local scratch directories
Network Scratch Directory
You are allocated 300GB of scratch space via NFS from /vulcanscratch/$username. It is not backed up or protected in any way. This directory is automounted, so you will need to cd into the directory or request/specify a fully qualified file path to access it.
You may request a temporary increase of up to 500GB total space for a maximum of 120 days without any faculty approval by contacting email@example.com. Once the temporary increase period is over, you will be contacted and given a one-week window of opportunity to clean and secure your data before staff will forcibly remove data to get your space back under 300GB. If you need space beyond 500GB or for longer than 120 days, you will need faculty approval and/or a project directory.
This file system is available on all submission, data management, and computational nodes within the cluster.
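Because the directory is automounted, it may not appear in a plain listing of /vulcanscratch until you access it by its full path; a minimal sketch (with $USER standing in for your username):

$ cd /vulcanscratch/$USER        # accessing the full path triggers the automount
$ df -h /vulcanscratch/$USER     # the mount is now visible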
Local Scratch Directories
Each computational node that you can schedule compute jobs on has one or more local scratch directories. These are always named /scratch0, /scratch1, etc. These are almost always more performant than any other storage available to the job. However, you must stage your data within the confines of your job and stage the data out before the end of your job.
These local scratch directories have a tmpwatch job which will delete unaccessed data after 90 days, scheduled via maintenance jobs to run once a month at 1am. Different nodes will run the maintenance jobs on different days of the month to ensure the cluster is still highly available at all times. Please make sure you secure any data you write to these directories at the end of your job.
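A batch job that uses local scratch typically stages data in at the start and copies results back out before exiting; a minimal sketch, in which the dataset path, output directory, and training command are illustrative:

#!/bin/bash
#SBATCH --partition=vulcan-dpart
#SBATCH --account=vulcan
#SBATCH --qos=vulcan-default
#SBATCH --gres=gpu:1

# Stage input data into local scratch (typically the fastest storage on the node)
mkdir -p /scratch1/$USER/$SLURM_JOB_ID
cp -r /vulcanscratch/$USER/my_dataset /scratch1/$USER/$SLURM_JOB_ID/

# Run the workload against the local copy (illustrative command)
python train.py --data /scratch1/$USER/$SLURM_JOB_ID/my_dataset --output-dir /scratch1/$USER/$SLURM_JOB_ID/outputs

# Stage results back out before the job ends; local scratch is periodically cleaned
cp -r /scratch1/$USER/$SLURM_JOB_ID/outputs /vulcanscratch/$USER/
rm -rf /scratch1/$USER/$SLURM_JOB_ID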
We have read-only dataset storage available at /fs/vulcan-datasets. If there are datasets that you would like to see curated and available, please see this page.
The list of Vulcan datasets we currently host can be viewed here.
Users within the Vulcan compute infrastructure can request project based allocations for up to 10TB for up to 180 days by contacting staff with approval from the Vulcan faculty manager (Dr. Shrivastava). These allocations will be available from /fs/vulcan-projects under a name that you provide when you request the allocation. Near the end of the allocation period, staff will contact you and ask if you would like to renew the allocation for up to another 180 days (requires re-approval from Dr. Shrivastava). If you are no longer in need of the storage allocation, you will need to relocate all desired data within two weeks of the end of the allocation period. Staff will then remove the allocation. If you do not respond to staff's request by the end of the allocation period, staff will make the allocation temporarily inaccessible. If you do respond asking for renewal but the original faculty approver does not respond within two weeks of the end of the allocation period, staff will also make the allocation temporarily inaccessible. If one month from the end of the allocation period is reached without both you and the faculty approver responding, staff will remove the allocation.
This data, by default, will be backed up nightly and have a limited snapshot schedule (1 daily snapshot). Upon request, staff can both exclude the data from backups and/or disable snapshots on the project storage volume. We currently have 100TB total to support these projects which includes the snapshot data for this volume.
All Vulcan users can request project allocations in the UMIACS Object Store. Please email firstname.lastname@example.org with a short project name and the amount of storage you will need to get started.
The Nexus uses NFShomes home directories. If your UMIACS account was created before February 22nd, 2023, you were using /cfarhomes/<username> as your home directory on the standalone Vulcan cluster. While /cfarhomes is available on Nexus, your shell initialization scripts from it will not automatically load. Please copy over anything you need to your /nfshomes/<username> directory at your earliest convenience, as /cfarhomes will be retired in a two-phase process:
- Fri 11/17/2023, 5pm: cfarhomes directories are made read-only
- Thu 12/21/2023, 5-8pm (monthly maintenance window): cfarhomes directories are taken offline
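A minimal sketch of copying files over before the read-only date (with <username> as a placeholder and .bashrc/.bash_profile purely as examples of shell initialization files you may want to keep):

$ rsync -av /cfarhomes/<username>/.bashrc /cfarhomes/<username>/.bash_profile /nfshomes/<username>/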