CML

The Center for Machine Learning (CML) at the University of Maryland is located within the Institute for Advanced Computer Studies. The CML has a cluster of computational (CPU/GPU) resources that are available to be scheduled.

Compute Infrastructure

Each of the UMIACS cluster computational infrastructures is accessed through a submission node. Users will need to submit jobs through the SLURM resource manager once they have logged into the submission node. Each cluster in UMIACS has different quality of service (QoS) levels that need to be selected upon submission of a job.

The current submission node(s) for CML are:

  • cmlsub00.umiacs.umd.edu
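
As a rough sketch of the submission workflow (the QoS name "default" below is an assumption; substitute a QoS you are actually authorized to use):

  # log into the CML submission node
  ssh USERNAME@cmlsub00.umiacs.umd.edu

  # launch a short interactive job through SLURM under a chosen QoS
  srun --qos=default --time=00:30:00 --mem=4gb --pty bash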

GPUs

Jobs that require GPU resources need to explicitly request the resources within their job submission.
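
For instance, a hedged sketch using SLURM's generic resource (GRES) syntax; the QoS name and GPU counts are placeholders:

  # request a single GPU for an interactive session
  srun --qos=default --gres=gpu:1 --pty bash

  # or, inside a batch script, request two GPUs
  #SBATCH --gres=gpu:2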

Data Storage

Until the final storage investment arrives, we have made available a temporary allocation of storage. There are 3 types of storage available to users in the CML: home directories, project directories, and scratch directories.

Home Directories

Home directories in the CML computational infrastructure are available from the Institute's NFShomes as /nfshomes/USERNAME, where USERNAME is your username. These home directories have very limited storage and are intended for your personal files, configuration, and source code. Your home directory is not intended for data sets or other large-scale data holdings. Users are encouraged to utilize our GitLab infrastructure to host their code repositories.

NOTE: To check your quota on this directory you will need to use the quota -s command.
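
For example, run from any CML node where your home directory is mounted:

  # show usage and limits in human-readable sizes
  quota -s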

Your home directory data is fully protected: it has snapshots and is backed up nightly.

Project Directories

Users within the CML compute infrastructure can request project-based allocations of up to 1TB for up to 120 days from staff@umiacs.umd.edu, with approval from a CML faculty member and the director. These allocations will be available under /fs/cml-projects under a name that you provide when you request the allocation. Once the allocation period is over, the user will be contacted and given a window of opportunity to clean up and secure their data before staff remove the allocation.

This data is backed up nightly.

Scratch Directories

There are two types of scratch directories in the CML compute infrastructure: network and local scratch directories. Scratch data has no data protection: there are no snapshots and the data is not backed up.

Network Scratch Directory

Users granted access to the CML compute infrastructure are each allocated 200GB of network-attached scratch. This is available as /cmlscratch/USERNAME, where USERNAME is your username. This directory is automounted, so you will need to cd into the directory or specify a fully qualified file path to access it.
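
For example (a minimal illustration; substitute your own username):

  # /cmlscratch/USERNAME may not appear in a listing of /cmlscratch until it is mounted,
  # so cd into the full path (or reference files by their full path) to trigger the automount
  cd /cmlscratch/USERNAME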

Local Scratch Directory

Each computational node that a user can schedule compute jobs on has one or more local scratch directories. These are always named /scratch0, /scratch1, etc. These are almost always more performant than any other storage available to the job. However, users must stage their data in within the confines of their job and stage the data back out before the end of their job.
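
As a rough sketch of that staging pattern (the QoS, paths, and script name are assumptions, not a prescribed workflow):

  #!/bin/bash
  #SBATCH --qos=default
  #SBATCH --time=04:00:00

  # stage input data from network scratch to the faster local scratch
  mkdir -p /scratch0/$USER/job_$SLURM_JOB_ID
  cp -r /cmlscratch/$USER/dataset /scratch0/$USER/job_$SLURM_JOB_ID/

  # run the workload against the local copy (train.py is a placeholder)
  python train.py --data /scratch0/$USER/job_$SLURM_JOB_ID/dataset

  # stage results back out and clean up before the job ends
  cp -r /scratch0/$USER/job_$SLURM_JOB_ID/results /cmlscratch/$USER/
  rm -rf /scratch0/$USER/job_$SLURM_JOB_ID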