Nexus/GAMMA

Revision as of 18:45, 12 July 2022
The GAMMA lab has a partition of GPU nodes in the Nexus cluster that is only available to GAMMA lab members.
Access
You can always find out which hosts you have access to submit from via the Nexus#Access page. However, the GAMMA lab has a special submission host that has additional local storage available:
nexusgamma00.umiacs.umd.edu
Quality of Service
The following QoS are available in this partition. Please run the show_qos command on a submission host to show the limits for these QoS.
- default
- medium
- high
- huge-long
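If you prefer SLURM's standard tooling, the same QoS limits can also be queried with sacctmgr. This is a sketch, not site-specific documentation: the QoS name shown is from the list above, and the exact format field names can vary slightly by SLURM version.

```shell
# Query the limits for one QoS in this partition's list (e.g. huge-long).
# Field names such as MaxTRESPU may differ between SLURM versions.
sacctmgr show qos where name=huge-long format=Name,MaxWall,MaxTRESPU
```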
Hardware
Nodenames | Type | Quantity | CPUs (per node) | Memory (per node) | GPUs (per node)
---|---|---|---|---|---
gammagpu[01-03] | A5000 GPU Node | 3 | 32 | 256GB | 8
**Total** | | 3 | 96 | 768GB | 24
Storage
In addition to the default Nexus storage allocation(s), GAMMA has invested in a 20TB NVMe scratch file system on the nexusgamma00.umiacs.umd.edu host, mounted as /scratch1. To use this space, users will need to copy data to and from it over SSH from a compute node. To make this easier, users may want to set up SSH keys so that data can be copied without being prompted for a password.

This file system is not available over NFS, and there are no backups or snapshots available for it. Please refer to our UNIX Local Storage page for more information.
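The key setup and copy steps above might look like the following sketch. The username and /scratch1 paths here are placeholders; substitute your own UMIACS username and directories.

```shell
# One-time setup: generate a key pair if you do not already have one.
# An empty passphrase allows unattended copies; weigh that against your
# security needs.
ssh-keygen -t ed25519

# Install the public key on the submission host so copies will not
# prompt for a password ("username" is a placeholder).
ssh-copy-id username@nexusgamma00.umiacs.umd.edu

# From a compute node: pull input data from /scratch1 into the job's
# working directory (paths are illustrative)...
scp -r username@nexusgamma00.umiacs.umd.edu:/scratch1/username/dataset ./dataset

# ...and push results back to /scratch1 when the job finishes.
scp -r ./results username@nexusgamma00.umiacs.umd.edu:/scratch1/username/
```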
Example
From nexusgamma00.umiacs.umd.edu you can run the following example to submit an interactive job. Please note that you need to specify --account, --partition, and --qos. Please refer to our SLURM documentation for how to further customize your submissions, including making a batch submission.
$ srun --pty --gres=gpu:8 --account=gamma --partition=gamma --qos=huge-long bash
$ hostname
gammagpu01.umiacs.umd.edu
$ nvidia-smi -L
GPU 0: NVIDIA RTX A5000 (UUID: GPU-cdfb2e0c-d69f-354b-02f4-15161dc7fa66)
GPU 1: NVIDIA RTX A5000 (UUID: GPU-be53e7a1-b8fd-7089-3cac-7a2fbf4ec7dd)
GPU 2: NVIDIA RTX A5000 (UUID: GPU-774efbb1-d7ec-a0bb-e992-da9d1fa6b193)
GPU 3: NVIDIA RTX A5000 (UUID: GPU-d1692181-c7de-e273-5f95-53ad381614c3)
GPU 4: NVIDIA RTX A5000 (UUID: GPU-ba51fd6c-37bf-1b95-5f68-987c18a6292a)
GPU 5: NVIDIA RTX A5000 (UUID: GPU-c1224a2a-4a3b-ff16-0308-4f36205b9859)
GPU 6: NVIDIA RTX A5000 (UUID: GPU-8d20d6cd-abf5-2630-ab88-6bba438c55fe)
GPU 7: NVIDIA RTX A5000 (UUID: GPU-93170910-5d94-6da5-8a24-f561d7da1e2d)
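The same submission can be made non-interactively as a batch job. The script below is a sketch: the job name, GPU count, and time limit are placeholders to adjust against the limits of the QoS you choose (shown by show_qos); --account, --partition, and --qos match the interactive example above.

```shell
#!/bin/bash
#SBATCH --job-name=gamma-example   # placeholder job name
#SBATCH --account=gamma
#SBATCH --partition=gamma
#SBATCH --qos=default
#SBATCH --gres=gpu:1               # request one GPU; raise as needed
#SBATCH --time=01:00:00            # placeholder; must fit the QoS limit

# Report where the job landed and which GPUs were allocated.
hostname
nvidia-smi -L
```

Save this as a file (e.g. job.sh) and submit it from the submission host with: sbatch job.sh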