Nexus/GAMMA

Latest revision as of 17:43, 22 November 2024

The GAMMA lab has a partition of GPU nodes available in the Nexus. Only GAMMA lab members are able to run non-interruptible jobs on these nodes.

Access

You can always find out which hosts you have submission access to via the Nexus#Access page. The GAMMA lab in particular has a special submission node that has additional local storage available.

  • nexusgamma00.umiacs.umd.edu

Please do not run anything on the submission node. Always allocate yourself machines on the compute nodes (see instructions below) to run any job.

Quality of Service

GAMMA users have access to all of the standard job QoSes in the gamma partition using the gamma account.

The additional job QoSes for the GAMMA partition specifically are:

  • huge-long: Allows for longer jobs using higher overall resources.

Please note that the partition has a GrpTRES limit of 100% of the available cores/RAM on the partition-specific nodes in aggregate plus 50% of the available cores/RAM on legacy## nodes in aggregate, so your job may need to wait if all available cores/RAM (or GPUs) are in use.
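
If your jobs are pending because of these limits, you can inspect current partition load with standard Slurm commands. This is a sketch; the sinfo format string below is one reasonable choice of fields, not a site requirement:

```shell
# Show per-node CPU state (allocated/idle/other/total) and GPU resources in gamma
$ sinfo -p gamma -o "%n %C %G"
# Show all jobs currently queued or running in the gamma partition
$ squeue -p gamma
```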

Hardware

Nodenames              Type            Quantity  CPU cores per node  Memory per node  GPUs per node
gammagpu[00-04,06-09]  A5000 GPU Node  9         32                  256GB            8
gammagpu05             A4000 GPU Node  1         32                  256GB            8
Total                                  10        320                 2560GB           80

Example

From nexusgamma00.umiacs.umd.edu you can run the following example to submit an interactive job. Please note that you need to specify the --partition and --account. Please refer to our SLURM documentation about how to further customize your submissions, including making a batch submission. The following command will allocate 8 GPUs for 2 days in an interactive session. Change parameters according to your needs. We discourage use of srun and promote use of sbatch for fair use of GPUs.

$ srun --pty --gres=gpu:8 --account=gamma --partition=gamma --qos=huge-long bash
$ hostname
gammagpu01.umiacs.umd.edu
$ nvidia-smi -L
GPU 0: NVIDIA RTX A5000 (UUID: GPU-cdfb2e0c-d69f-354b-02f4-15161dc7fa66)
GPU 1: NVIDIA RTX A5000 (UUID: GPU-be53e7a1-b8fd-7089-3cac-7a2fbf4ec7dd)
GPU 2: NVIDIA RTX A5000 (UUID: GPU-774efbb1-d7ec-a0bb-e992-da9d1fa6b193)
GPU 3: NVIDIA RTX A5000 (UUID: GPU-d1692181-c7de-e273-5f95-53ad381614c3)
GPU 4: NVIDIA RTX A5000 (UUID: GPU-ba51fd6c-37bf-1b95-5f68-987c18a6292a)
GPU 5: NVIDIA RTX A5000 (UUID: GPU-c1224a2a-4a3b-ff16-0308-4f36205b9859)
GPU 6: NVIDIA RTX A5000 (UUID: GPU-8d20d6cd-abf5-2630-ab88-6bba438c55fe)
GPU 7: NVIDIA RTX A5000 (UUID: GPU-93170910-5d94-6da5-8a24-f561d7da1e2d)

You can also use sbatch to submit your job. Here are two examples of how to do that.

$ sbatch --gres=gpu:8 --account=gamma --partition=gamma --qos=huge-long --time=1-23:00:00 script.sh

OR

$ sbatch script.sh

Contents of script.sh:

#!/bin/bash
#SBATCH --gres=gpu:8
#SBATCH --account=gamma
#SBATCH --partition=gamma
#SBATCH --qos=huge-long
#SBATCH --time=1-23:00:00

python your_file.py
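
After submitting with sbatch, you can track the job with standard Slurm commands; the job ID 12345 below is just a placeholder:

```shell
$ squeue -u $USER      # list your queued and running jobs
$ sacct -j 12345       # show accounting/state info for a job, including after it ends
$ scancel 12345        # cancel the job if you no longer need it
```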

Storage

There are three types of user storage available in GAMMA:

  • Home directories
  • Project directories
  • Scratch directories

There is also read-only storage available for Dataset directories.

GAMMA users can also request Nexus project allocations.

Home Directories

You have 30GB of home directory storage available at /nfshomes/<username>. It has both Snapshots and Backups enabled.

Home directories are intended to store personal or configuration files only. We encourage you not to share any data from your home directory; instead, please use our GitLab infrastructure to host your code repositories.

NOTE: To check your quota on this directory, use the command df -h ~.

Project Directories

You can request project-based allocations of up to 8TB for up to 180 days with approval from a GAMMA faculty member.

To request an allocation, please contact staff with the faculty member(s) that approved the project in the conversation. Please include the following details:

  • Project Name (short)
  • Description
  • Size (1TB, 2TB, etc.)
  • Length in days (30 days, 90 days, etc.)
  • Other user(s) that need to access the allocation, if any

These allocations will be available from /fs/gamma-projects under a name that you provide when you request the allocation. Near the end of the allocation period, staff will contact you and ask if you would like to renew the allocation (requires re-approval from a GAMMA faculty member).

  • If you are no longer in need of the storage allocation, you will need to relocate all desired data within two weeks of the end of the allocation period. Staff will then remove the allocation.
  • If you do not respond to staff's request by the end of the allocation period, staff will make the allocation temporarily inaccessible.
    • If you do respond asking for renewal but the original faculty approver does not respond within two weeks of the end of the allocation period, staff will also make the allocation temporarily inaccessible.
    • If one month from the end of the allocation period is reached without both you and the faculty approver responding, staff will remove the allocation.

This data is backed up nightly.

Scratch Directories

Scratch data has no data protection: there are no snapshots, and the data is not backed up. There are two types of scratch directories:

  • Network scratch directory
  • Local scratch directories

Network Scratch Directory

You are allocated 100GB of scratch space via NFS from /gammascratch/$username. It is not backed up or protected in any way.

This directory is automounted, so you may not see your directory if you run ls /gammascratch, but it will be mounted when you cd into your /gammascratch directory.

You may request a permanent increase of up to 200GB total space without any faculty approval by contacting staff. If you need space beyond 200GB, you will need faculty approval.

This file system is available on all submission and computational nodes within the cluster.

Local Scratch Directories

These file systems are not available over NFS and there are no backups or snapshots available for these file systems.

  • Each computational node that you can schedule compute jobs on has one or more local scratch directories. These are always named /scratch0, /scratch1, etc. These directories are local to each node, i.e., the /scratch0 directories on two different nodes are completely separate.
    • These directories are almost always more performant than any other storage available to the job. However, you must stage data to these directories within the confines of your jobs and stage the data out before the end of your jobs.
    • These local scratch directories have a tmpwatch job which will delete unaccessed data after 90 days, scheduled via maintenance jobs to run once a month during our monthly maintenance windows. Again, please make sure you secure any data you write to these directories at the end of your job.
  • GAMMA has invested in a 20TB NVMe scratch file system on nexusgamma00.umiacs.umd.edu that is available as /scratch1. To utilize this space, you will need to copy data to/from it over SSH from a compute node. To make this easier, you may want to set up SSH keys that will allow you to copy data without being prompted for passwords.
    • The /scratch1 directory on nexusgamma00.umiacs.umd.edu doesn't have a tmpwatch. The files in this directory need to be manually removed once they are no longer needed.
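
The staging workflow described in the bullets above might look like the following sketch; the input/output paths are hypothetical, and rsync is just one reasonable copy tool:

```shell
# One-time setup: create an SSH key and install it so copies don't prompt for a password
$ ssh-keygen -t ed25519
$ ssh-copy-id nexusgamma00.umiacs.umd.edu

# From inside a compute job: stage input from /scratch1 to fast node-local scratch...
$ rsync -a nexusgamma00.umiacs.umd.edu:/scratch1/$USER/input/ /scratch0/$USER/input/
# ...run your computation against the local copy, then stage results back out
# before the job ends (node-local scratch is per-node and cleaned periodically)
$ rsync -a /scratch0/$USER/output/ nexusgamma00.umiacs.umd.edu:/scratch1/$USER/output/
```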

Datasets

We have read-only dataset storage available at /fs/gamma-datasets. If there are datasets that you would like to see curated and made available, please see the Datasets page.

The list of GAMMA datasets we currently host can be viewed at https://info.umiacs.umd.edu/datasets/list/?q=GAMMA.