<?xml version="1.0"?>
<feed xmlns="http://www.w3.org/2005/Atom" xml:lang="en">
	<id>https://wiki.umiacs.umd.edu/umiacs/api.php?action=feedcontributions&amp;feedformat=atom&amp;user=Ncaple</id>
	<title>UMIACS - User contributions [en]</title>
	<link rel="self" type="application/atom+xml" href="https://wiki.umiacs.umd.edu/umiacs/api.php?action=feedcontributions&amp;feedformat=atom&amp;user=Ncaple"/>
	<link rel="alternate" type="text/html" href="https://wiki.umiacs.umd.edu/umiacs/index.php/Special:Contributions/Ncaple"/>
	<updated>2026-05-09T18:33:14Z</updated>
	<subtitle>User contributions</subtitle>
	<generator>MediaWiki 1.43.7</generator>
	<entry>
		<id>https://wiki.umiacs.umd.edu/umiacs/index.php?title=Aim&amp;diff=12775</id>
		<title>Aim</title>
		<link rel="alternate" type="text/html" href="https://wiki.umiacs.umd.edu/umiacs/index.php?title=Aim&amp;diff=12775"/>
		<updated>2025-08-06T17:49:42Z</updated>

		<summary type="html">&lt;p&gt;Ncaple: Redirected page to AIM Artificial Intelligence Interdisciplinary Institute at Maryland&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;#REDIRECT [[AIM Artificial Intelligence Interdisciplinary Institute at Maryland]]&lt;/div&gt;</summary>
		<author><name>Ncaple</name></author>
	</entry>
	<entry>
		<id>https://wiki.umiacs.umd.edu/umiacs/index.php?title=AIM_Artificial_Intelligence_Interdisciplinary_Institute_at_Maryland&amp;diff=12773</id>
		<title>AIM Artificial Intelligence Interdisciplinary Institute at Maryland</title>
		<link rel="alternate" type="text/html" href="https://wiki.umiacs.umd.edu/umiacs/index.php?title=AIM_Artificial_Intelligence_Interdisciplinary_Institute_at_Maryland&amp;diff=12773"/>
		<updated>2025-08-06T17:15:25Z</updated>

		<summary type="html">&lt;p&gt;Ncaple: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;AIM, or the &amp;lt;b&amp;gt;Artificial Intelligence Interdisciplinary Institute at Maryland&amp;lt;/b&amp;gt;, is an interdisciplinary research institute that focuses on the responsible, ethical development and use of AI to advance the public good in industry, government, and society.&lt;br /&gt;
&lt;br /&gt;
For more information, please see AIM&#039;s [https://aim.umd.edu/about website].&lt;br /&gt;
&lt;br /&gt;
=Support=&lt;br /&gt;
&lt;br /&gt;
AIM&#039;s IT support needs are handled by the &amp;lt;b&amp;gt;UMIACS Help Desk&amp;lt;/b&amp;gt;. Whether you require assistance with computing resources, account access, email setup, networking, or troubleshooting technical issues, the UMIACS Help Desk is your primary point of contact.&lt;br /&gt;
&lt;br /&gt;
==Contact Information==&lt;br /&gt;
&lt;br /&gt;
&amp;lt;b&amp;gt;Email:&amp;lt;/b&amp;gt; staff@umiacs.umd.edu&lt;br /&gt;
&lt;br /&gt;
&amp;lt;b&amp;gt;Phone:&amp;lt;/b&amp;gt; 301-405-1775&lt;br /&gt;
&lt;br /&gt;
&amp;lt;b&amp;gt;Location:&amp;lt;/b&amp;gt;&lt;br /&gt;
&lt;br /&gt;
: UMIACS Help Desk&lt;br /&gt;
&lt;br /&gt;
: Room 3109, Iribe Center&lt;br /&gt;
&lt;br /&gt;
: 8125 Paint Branch Drive&lt;br /&gt;
&lt;br /&gt;
: College Park, MD 20742&lt;/div&gt;</summary>
		<author><name>Ncaple</name></author>
	</entry>
	<entry>
		<id>https://wiki.umiacs.umd.edu/umiacs/index.php?title=AIM_Artificial_Intelligence_Interdisciplinary_Institute_at_Maryland&amp;diff=12772</id>
		<title>AIM Artificial Intelligence Interdisciplinary Institute at Maryland</title>
		<link rel="alternate" type="text/html" href="https://wiki.umiacs.umd.edu/umiacs/index.php?title=AIM_Artificial_Intelligence_Interdisciplinary_Institute_at_Maryland&amp;diff=12772"/>
		<updated>2025-08-06T17:12:14Z</updated>

		<summary type="html">&lt;p&gt;Ncaple: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;AIM, or the &amp;lt;b&amp;gt;Artificial Intelligence Interdisciplinary Institute at Maryland&amp;lt;/b&amp;gt;, is an interdisciplinary research institute that focuses on the responsible, ethical development and use of AI to advance the public good in industry, government, and society.&lt;br /&gt;
&lt;br /&gt;
For more information, please see AIM&#039;s [https://aim.umd.edu/about website].&lt;br /&gt;
&lt;br /&gt;
=Support=&lt;br /&gt;
&lt;br /&gt;
AIM&#039;s IT support needs are handled by the &amp;lt;b&amp;gt;UMIACS Help Desk&amp;lt;/b&amp;gt;. Whether you require assistance with computing resources, account access, email setup, networking, or troubleshooting technical issues, the UMIACS Help Desk is your primary point of contact.&lt;br /&gt;
&lt;br /&gt;
==Contact Information==&lt;br /&gt;
&lt;br /&gt;
&amp;lt;b&amp;gt;Email:&amp;lt;/b&amp;gt; staff@umiacs.umd.edu&lt;br /&gt;
&lt;br /&gt;
&amp;lt;b&amp;gt;Phone:&amp;lt;/b&amp;gt; 301-405-1775&lt;br /&gt;
&lt;br /&gt;
&amp;lt;b&amp;gt;Location:&amp;lt;/b&amp;gt;&lt;br /&gt;
&lt;br /&gt;
UMIACS Help Desk&lt;br /&gt;
&lt;br /&gt;
Room 3109, Iribe Center&lt;br /&gt;
&lt;br /&gt;
8125 Paint Branch Drive&lt;br /&gt;
&lt;br /&gt;
College Park, MD 20742&lt;/div&gt;</summary>
		<author><name>Ncaple</name></author>
	</entry>
	<entry>
		<id>https://wiki.umiacs.umd.edu/umiacs/index.php?title=AIM_Artificial_Intelligence_Interdisciplinary_Institute_at_Maryland&amp;diff=12771</id>
		<title>AIM Artificial Intelligence Interdisciplinary Institute at Maryland</title>
		<link rel="alternate" type="text/html" href="https://wiki.umiacs.umd.edu/umiacs/index.php?title=AIM_Artificial_Intelligence_Interdisciplinary_Institute_at_Maryland&amp;diff=12771"/>
		<updated>2025-08-06T17:08:20Z</updated>

		<summary type="html">&lt;p&gt;Ncaple: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;AIM, or the &amp;lt;b&amp;gt;Artificial Intelligence Interdisciplinary Institute at Maryland&amp;lt;/b&amp;gt;, is an interdisciplinary research institute that focuses on the responsible, ethical development and use of AI to advance the public good in industry, government, and society.&lt;br /&gt;
&lt;br /&gt;
For more information, please see AIM&#039;s [https://aim.umd.edu/about website].&lt;br /&gt;
=Support=&lt;/div&gt;</summary>
		<author><name>Ncaple</name></author>
	</entry>
	<entry>
		<id>https://wiki.umiacs.umd.edu/umiacs/index.php?title=AIM_Artificial_Intelligence_Interdisciplinary_Institute_at_Maryland&amp;diff=12770</id>
		<title>AIM Artificial Intelligence Interdisciplinary Institute at Maryland</title>
		<link rel="alternate" type="text/html" href="https://wiki.umiacs.umd.edu/umiacs/index.php?title=AIM_Artificial_Intelligence_Interdisciplinary_Institute_at_Maryland&amp;diff=12770"/>
		<updated>2025-08-06T17:05:03Z</updated>

		<summary type="html">&lt;p&gt;Ncaple: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;AIM, or the &amp;lt;b&amp;gt;Artificial Intelligence Interdisciplinary Institute at Maryland&amp;lt;/b&amp;gt;, is an interdisciplinary research institute that focuses on the responsible, ethical development and use of AI to advance the public good in industry, government, and society.&lt;br /&gt;
&lt;br /&gt;
=Support=&lt;/div&gt;</summary>
		<author><name>Ncaple</name></author>
	</entry>
	<entry>
		<id>https://wiki.umiacs.umd.edu/umiacs/index.php?title=AIM_Artificial_Intelligence_Interdisciplinary_Institute_at_Maryland&amp;diff=12769</id>
		<title>AIM Artificial Intelligence Interdisciplinary Institute at Maryland</title>
		<link rel="alternate" type="text/html" href="https://wiki.umiacs.umd.edu/umiacs/index.php?title=AIM_Artificial_Intelligence_Interdisciplinary_Institute_at_Maryland&amp;diff=12769"/>
		<updated>2025-08-06T17:04:53Z</updated>

		<summary type="html">&lt;p&gt;Ncaple: Created page with &amp;quot;AIM, or &amp;lt;b&amp;gt;Artificial Intelligence Interdisciplinary Institute at Maryland&amp;lt;/b&amp;gt; is a interdisciplinary research institution that focuses on responsible, ethical development and use of AI to advance public good in industry, government and society.&amp;quot;&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;AIM, or the &amp;lt;b&amp;gt;Artificial Intelligence Interdisciplinary Institute at Maryland&amp;lt;/b&amp;gt;, is an interdisciplinary research institute that focuses on the responsible, ethical development and use of AI to advance the public good in industry, government, and society.&lt;/div&gt;</summary>
		<author><name>Ncaple</name></author>
	</entry>
	<entry>
		<id>https://wiki.umiacs.umd.edu/umiacs/index.php?title=LabFacilities&amp;diff=12768</id>
		<title>LabFacilities</title>
		<link rel="alternate" type="text/html" href="https://wiki.umiacs.umd.edu/umiacs/index.php?title=LabFacilities&amp;diff=12768"/>
		<updated>2025-08-06T16:44:58Z</updated>

		<summary type="html">&lt;p&gt;Ncaple: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;* [[CBCB | Center for Bioinformatics and Computational Biology]] ([http://www.cbcb.umd.edu CBCB]) &lt;br /&gt;
* Center for Automation Research ([http://www.cfar.umd.edu/ CfAR])&lt;br /&gt;
* Center for Health-related Informatics and Bioimaging ([http://www.chib.umd.edu/ CHIB]) &lt;br /&gt;
* Computational Linguistics and Information Processing ([https://wiki.umiacs.umd.edu/clip/index.php/Main_Page CLIP]) &lt;br /&gt;
* Center for Machine Learning ([https://ml.umd.edu CML])&lt;br /&gt;
* Computer Vision Laboratory ([http://www.cfar.umd.edu/cvl/ CVL]) &lt;br /&gt;
* Distributed Systems Software Laboratory ([http://www.cs.umd.edu/projects/dssl DSSL]) &lt;br /&gt;
* Fraunhofer Center at Maryland ([https://www.cese.fraunhofer.org/ FCMD])&lt;br /&gt;
* Human Computer Interaction Laboratory ([http://hcil.umd.edu/ HCIL]) &lt;br /&gt;
* Graphics and Visual Informatics Laboratory ([http://www.cs.umd.edu/gvil/ GVIL])&lt;br /&gt;
* Language and Media Processing Laboratory ([http://lamp.cfar.umd.edu/ LAMP])&lt;br /&gt;
* Laboratory for Parallel and Distributed Computing ([http://www.umiacs.umd.edu/labs/parallel/index.htm LPDC])&lt;br /&gt;
* Laboratory for Telecommunication Sciences ([http://www.ltsnet.net/ LTS])&lt;br /&gt;
* Lab for Broadband Mobile Communications ([http://www.umiacs.umd.edu/research/maxwell/ MAXWell])&lt;br /&gt;
* Maryland Cybersecurity Center ([http://cyber.umd.edu/ MC2])&lt;br /&gt;
* [[QuICS | Joint Center for Quantum Information and Computer Science]] ([https://quics.umd.edu/ QuICS])&lt;br /&gt;
* National Socio-Environmental Synthesis Center ([https://www.sesync.org/ SESYNC])&lt;br /&gt;
* [[AIM Artificial Intelligence Interdisciplinary Institute at Maryland]] ([https://aim.umd.edu/ AIM])&lt;/div&gt;</summary>
		<author><name>Ncaple</name></author>
	</entry>
	<entry>
		<id>https://wiki.umiacs.umd.edu/umiacs/index.php?title=VS_Code&amp;diff=11272</id>
		<title>VS Code</title>
		<link rel="alternate" type="text/html" href="https://wiki.umiacs.umd.edu/umiacs/index.php?title=VS_Code&amp;diff=11272"/>
		<updated>2023-09-07T17:20:39Z</updated>

		<summary type="html">&lt;p&gt;Ncaple: Created blank page&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;&lt;/div&gt;</summary>
		<author><name>Ncaple</name></author>
	</entry>
	<entry>
		<id>https://wiki.umiacs.umd.edu/umiacs/index.php?title=Adobe&amp;diff=11061</id>
		<title>Adobe</title>
		<link rel="alternate" type="text/html" href="https://wiki.umiacs.umd.edu/umiacs/index.php?title=Adobe&amp;diff=11061"/>
		<updated>2023-07-03T23:21:12Z</updated>

		<summary type="html">&lt;p&gt;Ncaple: adobe instruction link was out of date&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;Adobe is a software development company best known for its multimedia software products and Flash Player. The most commonly deployed Adobe product at UMIACS is the [https://www.adobe.com/creativecloud.html Creative Cloud Suite], which contains products such as Acrobat, Photoshop, and more.&lt;br /&gt;
&lt;br /&gt;
==Installation and Licensing==&lt;br /&gt;
&#039;&#039;&#039;Please note that UMD&#039;s license only allows for two concurrent logons (on two computing devices) to use any Creative Cloud product. If you are already logged onto two devices and try to log on to a third, it will prompt you to confirm that you are OK with being signed out of one or more of the other devices in order to use Creative Cloud products on that new device.&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
===UMIACS-supported desktop machines===&lt;br /&gt;
Please [[HelpDesk | contact staff]] if you would like one or more Creative Cloud products installed on a supported [[Windows]] or macOS desktop machine. Staff will install the Creative Cloud desktop app and the applications that you want.&lt;br /&gt;
&lt;br /&gt;
You will still need to sign in with your UMD account to use the apps, just as you do on laptops or other personal machines, by following the steps [https://umd.service-now.com/itsupport?id=kb_article&amp;amp;article=KB0013664 here].&lt;br /&gt;
&lt;br /&gt;
===All other machines===&lt;br /&gt;
Creative Cloud products can be installed on laptops or personal machines following the instructions provided by UMD on Terpware for [https://terpware.umd.edu/Windows/List/244 Windows] or [https://terpware.umd.edu/Mac/List/244 macOS]. Faculty and staff should choose the Faculty and Staff Enterprise &#039;&#039;&#039;for Individuals&#039;&#039;&#039; option.&lt;br /&gt;
&lt;br /&gt;
You will be prompted to sign in when using any Creative Cloud product installed this way. Licensing is handled through your UMD account. Instructions for signing in can be found [https://umd.service-now.com/itsupport?id=kb_article_view&amp;amp;sysparm_article=KB0013664 here].&lt;/div&gt;</summary>
		<author><name>Ncaple</name></author>
	</entry>
	<entry>
		<id>https://wiki.umiacs.umd.edu/umiacs/index.php?title=CML&amp;diff=11014</id>
		<title>CML</title>
		<link rel="alternate" type="text/html" href="https://wiki.umiacs.umd.edu/umiacs/index.php?title=CML&amp;diff=11014"/>
		<updated>2023-06-12T13:49:28Z</updated>

		<summary type="html">&lt;p&gt;Ncaple: /* Project Directories */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;The Center for Machine Learning ([https://ml.umd.edu CML]) at the University of Maryland is located within the Institute for Advanced Computer Studies.  The CML has a cluster of computational (CPU/GPU) resources that are available to be scheduled.&lt;br /&gt;
&lt;br /&gt;
=Compute Infrastructure=&lt;br /&gt;
Each UMIACS cluster&#039;s computational infrastructure is accessed through its submission node(s).  Once logged into a submission node, users will need to submit jobs through the [[SLURM]] resource manager.  Each cluster in UMIACS has different quality of service (QoS) levels, one of which is &#039;&#039;&#039;required&#039;&#039;&#039; to be selected upon submission of a job. Many clusters, including this one, also have other resources, such as GPUs, that need to be requested for a job.  &lt;br /&gt;
&lt;br /&gt;
The current submission node(s) for &#039;&#039;&#039;CML&#039;&#039;&#039; are:&lt;br /&gt;
* &amp;lt;code&amp;gt;cmlsub00.umiacs.umd.edu&amp;lt;/code&amp;gt;&lt;br /&gt;
&lt;br /&gt;
The Center for Machine Learning&#039;s GPU resources come from a small investment of base Center funds and a number of investments by individual faculty members.  The scheduler&#039;s resources are modeled around this concept, which means there are additional SLURM accounts that users will need to be aware of if they are submitting to a non-scavenger partition.&lt;br /&gt;
&lt;br /&gt;
==Partitions==&lt;br /&gt;
There are three partitions to the CML [[SLURM]] computational infrastructure.  If you do not specify a partition when submitting your job, you will receive the &#039;&#039;&#039;dpart&#039;&#039;&#039; partition.&lt;br /&gt;
&lt;br /&gt;
* &#039;&#039;&#039;dpart&#039;&#039;&#039; - This is the default partition. Job allocations are guaranteed.&lt;br /&gt;
* &#039;&#039;&#039;scavenger&#039;&#039;&#039; - This is the alternate partition that allows jobs longer run times and more resources but is preemptable when jobs in other partitions are ready to be scheduled.&lt;br /&gt;
* &#039;&#039;&#039;cpu&#039;&#039;&#039; - This partition is for CPU focused jobs. Job allocations are guaranteed.&lt;br /&gt;
&lt;br /&gt;
==Accounts==&lt;br /&gt;
The Center has a base SLURM account &amp;lt;code&amp;gt;cml&amp;lt;/code&amp;gt; which has a modest number of guaranteed billing resources available to all cluster users at any given time.  Other faculty who have invested in the cluster have an additional account provided to their sponsored accounts on the cluster, which provides a number of guaranteed billing resources corresponding to the amount that they invested.  If you do not specify an account when submitting your job, you will receive the &amp;lt;code&amp;gt;cml&amp;lt;/code&amp;gt; account.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
$ sacctmgr show accounts&lt;br /&gt;
   Account                Descr                  Org&lt;br /&gt;
---------- -------------------- --------------------&lt;br /&gt;
   abhinav  abhinav shrivastava                  cml&lt;br /&gt;
       cml                  cml                  cml&lt;br /&gt;
   furongh         furong huang                  cml&lt;br /&gt;
  hajiagha  mohammad hajiaghayi                  cml&lt;br /&gt;
      john       john dickerson                  cml&lt;br /&gt;
    ramani    ramani duraiswami                  cml&lt;br /&gt;
      root default root account                 root&lt;br /&gt;
 scavenger            scavenger            scavenger&lt;br /&gt;
    sfeizi         soheil feizi                  cml&lt;br /&gt;
   tokekar       pratap tokekar                  cml&lt;br /&gt;
      tomg        tom goldstein                  cml&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
You can check your account associations by running the &#039;&#039;&#039;show_assoc&#039;&#039;&#039; command.  Please [[HelpDesk | contact staff]] and include your faculty member in the conversation if you do not see the appropriate association. &lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
$ show_assoc&lt;br /&gt;
      User    Account   Def Acct   Def QOS                                  QOS&lt;br /&gt;
---------- ---------- ---------- --------- ------------------------------------&lt;br /&gt;
      tomg       tomg                                       default,high,medium&lt;br /&gt;
      tomg        cml                                        cpu,default,medium&lt;br /&gt;
      tomg  scavenger                                                 scavenger&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
You can also see the total number of Trackable Resources (TRES) allowed for each account by running the following command.  Please make sure to specify the account you are looking for.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
$ sacctmgr show assoc account=tomg format=user,account,qos,grptres&lt;br /&gt;
      User    Account                  QOS       GrpTRES&lt;br /&gt;
---------- ---------- -------------------- -------------&lt;br /&gt;
                 tomg                       billing=8107&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==QoS==&lt;br /&gt;
CML currently has 5 QoSes for the &#039;&#039;&#039;dpart&#039;&#039;&#039; partition (though &amp;lt;code&amp;gt;high_long&amp;lt;/code&amp;gt; and &amp;lt;code&amp;gt;very_high&amp;lt;/code&amp;gt; may not be available to all faculty accounts), 1 QoS for the &#039;&#039;&#039;scavenger&#039;&#039;&#039; partition, and 1 QoS for the &#039;&#039;&#039;cpu&#039;&#039;&#039; partition.  You are &#039;&#039;&#039;required&#039;&#039;&#039; to specify a QoS when submitting your job.  The important difference is that each QoS has its own maximum wall time, total number of jobs that can run at once, and maximum number of trackable resources (TRES) per job.  The scavenger QoS has one additional constraint: the total number of TRES per user (over multiple jobs) is also restricted. &lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
$ show_qos&lt;br /&gt;
        Name     MaxWall MaxJobs                        MaxTRES                      MaxTRESPU              GrpTRES&lt;br /&gt;
------------ ----------- ------- ------------------------------ ------------------------------ --------------------&lt;br /&gt;
      medium  3-00:00:00       1       cpu=8,gres/gpu=2,mem=64G&lt;br /&gt;
     default  7-00:00:00       2       cpu=4,gres/gpu=1,mem=32G&lt;br /&gt;
        high  1-12:00:00       2     cpu=16,gres/gpu=4,mem=128G&lt;br /&gt;
   scavenger  3-00:00:00                                                           gres/gpu=24&lt;br /&gt;
      normal&lt;br /&gt;
         cpu  7-00:00:00       8&lt;br /&gt;
   very_high  1-12:00:00       8     cpu=32,gres/gpu=8,mem=256G                    gres/gpu=12&lt;br /&gt;
   high_long 14-00:00:00       8              cpu=32,gres/gpu=8                     gres/gpu=8&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==GPUs==&lt;br /&gt;
Jobs that require GPU resources need to explicitly request the resources within their job submission.  This is done through generic resource (GRES) scheduling.  Users may use the most generic identifier (in this case &#039;&#039;&#039;gpu&#039;&#039;&#039;), a colon, and a number to select a GPU count without explicitly naming the type of GPU (e.g. &amp;lt;code&amp;gt;--gres=gpu:4&amp;lt;/code&amp;gt; for 4 GPUs of any type).&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
$ sinfo -o &amp;quot;%20N %10c %10m %25f %40G&amp;quot;&lt;br /&gt;
NODELIST             CPUS       MEMORY     AVAIL_FEATURES            GRES&lt;br /&gt;
cmlgrad[02,05]       32         385421     Xeon,4216                 gpu:rtx2080ti:7,gpu:rtx3070:1&lt;br /&gt;
cml[00-11,13-16],cml 32         353924+    Xeon,4216                 gpu:rtx2080ti:8&lt;br /&gt;
cmlcpu[01-04]        20         386675     Xeon,E5-2660              (null)&lt;br /&gt;
cmlcpu[00,06-07]     24         386675+    Xeon,E5-2680              (null)&lt;br /&gt;
cml12                32         385429     Xeon,4216                 gpu:rtx2080ti:7,gpu:rtxa4000:1&lt;br /&gt;
cml[17-29]           32         257654     Zen,EPYC-7282             gpu:rtxa4000:8&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
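To request a specific GPU type from the listing above rather than any type, name the type in the GRES string.  The following is a minimal sketch (the GPU type, count, QoS, and resource values are illustrative; adjust them to your needs):&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
srun --pty --gres=gpu:rtxa4000:2 --qos=medium --mem=32G --time=02:00:00 bash&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;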
==Job Submission and Management==&lt;br /&gt;
Users should review our [[SLURM]] [[SLURM/JobSubmission | job submission]] and [[SLURM/JobStatus | job management]] documentation.  &lt;br /&gt;
&lt;br /&gt;
For a very quick start, run the following on the submission node to get an interactive shell.  This will allocate 1 GPU with 16GB of memory (system RAM) in the &amp;lt;code&amp;gt;default&amp;lt;/code&amp;gt; QoS for a maximum time of 4 hours.  If the job goes beyond these limits (either the memory allocation or the maximum time), it will be terminated immediately. &lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
srun --pty --gres=gpu:1 --mem=16G --qos=default --time=04:00:00 bash&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
[username@cmlsub00:~ ] $ srun --pty --gres=gpu:1 --mem=16G --qos=default --time=04:00:00 bash&lt;br /&gt;
[username@cml00:~ ] $ nvidia-smi -L&lt;br /&gt;
GPU 0: GeForce RTX 2080 Ti (UUID: GPU-20846848-e66d-866c-ecbe-89f2623f3b9a)&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
If you are going to run in a faculty account instead of the default &amp;lt;code&amp;gt;cml&amp;lt;/code&amp;gt; account, you will need to specify the &amp;lt;code&amp;gt;--account=&amp;lt;/code&amp;gt; flag.&lt;br /&gt;
&lt;br /&gt;
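For illustration only, the following sketch uses the &amp;lt;code&amp;gt;tomg&amp;lt;/code&amp;gt; account from the association listing above; substitute an account you are actually associated with:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
srun --pty --account=tomg --qos=medium --gres=gpu:2 --mem=64G --time=08:00:00 bash&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;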
Here is a quick example of running a job in the cpu partition.  The cpu partition uses the default account &amp;lt;code&amp;gt;cml&amp;lt;/code&amp;gt;.&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
-bash-4.2$ srun --partition=cpu --qos=cpu bash -c &#039;echo &amp;quot;Hello World from&amp;quot; `hostname`&#039;&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=Data Storage=&lt;br /&gt;
Until the final storage investment arrives we have made available a temporary allocation of storage.  This section is subject to change.  There are 3 types of storage available to users in the CML:&lt;br /&gt;
* Home directories&lt;br /&gt;
* Project directories&lt;br /&gt;
* Scratch directories&lt;br /&gt;
&lt;br /&gt;
==Home Directories==&lt;br /&gt;
Home directories in the CML computational infrastructure are available from the Institute&#039;s [[NFShomes]] as &amp;lt;code&amp;gt;/nfshomes/USERNAME&amp;lt;/code&amp;gt; where USERNAME is your username.  These home directories have very limited storage (20GB, cannot be increased) and are intended for your personal files, configuration, and source code.  Your home directory is &#039;&#039;&#039;not&#039;&#039;&#039; intended for data sets or other large-scale data holdings.  Users are encouraged to utilize our [[GitLab]] infrastructure to host their code repositories.&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;NOTE&#039;&#039;&#039;: To check your quota on this directory you will need to use the &amp;lt;code&amp;gt;quota -s&amp;lt;/code&amp;gt; command.&lt;br /&gt;
&lt;br /&gt;
Your home directory data is fully protected and has both [[Snapshots | snapshots]] and is [[NightlyBackups | backed up nightly]].&lt;br /&gt;
&lt;br /&gt;
==Project Directories==&lt;br /&gt;
You can request project-based allocations of up to 6TB for up to 120 days with approval from a CML faculty member and the director of CML.  &lt;br /&gt;
&lt;br /&gt;
To request an allocation, please [[HelpDesk | contact staff]] with your account sponsor involved in the conversation.  Please include the following details:&lt;br /&gt;
* Project Name (short)&lt;br /&gt;
* Description&lt;br /&gt;
* Size (1TB, 2TB, etc.)&lt;br /&gt;
* Length in days (30 days, 90 days, etc.)&lt;br /&gt;
* Other user(s) that need to access the allocation, if any&lt;br /&gt;
&lt;br /&gt;
These allocations will be available from &#039;&#039;&#039;/fs/cml-projects&#039;&#039;&#039; under a name that you provide when you request the allocation.  Near the end of the allocation period, staff will contact you and ask if you would like to renew the allocation for up to another 120 days (requires re-approval from a CML faculty member and the director of CML).  If you do not want to renew or do not get approval for renewal, you will need to relocate all desired data within 14 days of the end of the allocation period.  Staff will then remove the allocation.&lt;br /&gt;
&lt;br /&gt;
This data is backed up nightly.&lt;br /&gt;
&lt;br /&gt;
==Scratch Directories==&lt;br /&gt;
Scratch data has no data protection: it has no snapshots and is not backed up. There are two types of scratch directories in the CML compute infrastructure:&lt;br /&gt;
* Network scratch directory&lt;br /&gt;
* Local scratch directories&lt;br /&gt;
&lt;br /&gt;
===Network Scratch Directory===&lt;br /&gt;
You are allocated 400GB of scratch space via NFS from &amp;lt;code&amp;gt;/cmlscratch/$username&amp;lt;/code&amp;gt;.  &#039;&#039;&#039;It is not backed up or protected in any way.&#039;&#039;&#039;  This directory is &#039;&#039;&#039;automounted&#039;&#039;&#039; so you will need to &amp;lt;code&amp;gt;cd&amp;lt;/code&amp;gt; into the directory or request/specify a fully qualified file path to access this.&lt;br /&gt;
&lt;br /&gt;
You may request a permanent increase of up to 800GB total space without any faculty approval by [[HelpDesk | contacting staff]].  If you need space beyond 800GB, you will need faculty approval and/or a project directory. Space increases beyond 800GB also have a maximum request period of 120 days (as with project directories), after which they will need to be renewed with re-approval from a CML faculty member and the director of CML.&lt;br /&gt;
&lt;br /&gt;
This file system is available on all submission, data management, and computational nodes within the cluster.&lt;br /&gt;
&lt;br /&gt;
===Local Scratch Directories===&lt;br /&gt;
Each computational node that you can schedule compute jobs on has one or more local scratch directories.  These are always named &amp;lt;code&amp;gt;/scratch0&amp;lt;/code&amp;gt;, &amp;lt;code&amp;gt;/scratch1&amp;lt;/code&amp;gt;, etc.  These are almost always more performant than any other storage available to the job.  However, you must stage data to these directories within the confines of your jobs and stage the data out before the end of your jobs.&lt;br /&gt;
&lt;br /&gt;
These local scratch directories have a tmpwatch job which will &#039;&#039;&#039;delete unaccessed data after 90 days&#039;&#039;&#039;, scheduled via maintenance jobs to run once a month during our monthly maintenance windows.  Again, please make sure you secure any data you write to these directories at the end of your job.&lt;br /&gt;
&lt;br /&gt;
==Datasets==&lt;br /&gt;
We have read-only dataset storage available at &amp;lt;code&amp;gt;/fs/cml-datasets&amp;lt;/code&amp;gt;.  If there are datasets that you would like to see curated and available, please see [[Datasets | this page]].&lt;br /&gt;
&lt;br /&gt;
The following is the list of datasets available:&lt;br /&gt;
{| class=&amp;quot;wikitable&amp;quot;&lt;br /&gt;
! Dataset&lt;br /&gt;
! Path&lt;br /&gt;
|-&lt;br /&gt;
| CelebA&lt;br /&gt;
| /fs/cml-datasets/CelebA&lt;br /&gt;
|-&lt;br /&gt;
| CelebA-HQ&lt;br /&gt;
| /fs/cml-datasets/CelebA-HQ&lt;br /&gt;
|-&lt;br /&gt;
| CelebAMask-HQ&lt;br /&gt;
| /fs/cml-datasets/CelebAMask-HQ&lt;br /&gt;
|-&lt;br /&gt;
| Charades&lt;br /&gt;
| /fs/cml-datasets/Charades&lt;br /&gt;
|-&lt;br /&gt;
| Cityscapes&lt;br /&gt;
| /fs/cml-datasets/cityscapes&lt;br /&gt;
|-&lt;br /&gt;
| COCO&lt;br /&gt;
| /fs/cml-datasets/coco&lt;br /&gt;
|-&lt;br /&gt;
| Diversity in Faces [1]&lt;br /&gt;
| /fs/cml-datasets/diversity_in_faces&lt;br /&gt;
|-&lt;br /&gt;
| FFHQ&lt;br /&gt;
| /fs/cml-datasets/FFHQ&lt;br /&gt;
|-&lt;br /&gt;
| ImageNet ILSVRC2012&lt;br /&gt;
| /fs/cml-datasets/ImageNet/ILSVRC2012&lt;br /&gt;
|-&lt;br /&gt;
| LFW&lt;br /&gt;
| /fs/cml-datasets/facial_test_data&lt;br /&gt;
|-&lt;br /&gt;
| LibriSpeech&lt;br /&gt;
| /fs/cml-datasets/LibriSpeech&lt;br /&gt;
|-&lt;br /&gt;
| LSUN&lt;br /&gt;
| /fs/cml-datasets/LSUN&lt;br /&gt;
|-&lt;br /&gt;
| MAG240M&lt;br /&gt;
| /fs/cml-datasets/OGB/MAG240M&lt;br /&gt;
|-&lt;br /&gt;
| MegaFace&lt;br /&gt;
| /fs/cml-datasets/megaface&lt;br /&gt;
|-&lt;br /&gt;
| MS-Celeb-1M&lt;br /&gt;
| /fs/cml-datasets/MS_Celeb_aligned_112&lt;br /&gt;
|-&lt;br /&gt;
| OC20&lt;br /&gt;
| /fs/cml-datasets/OC20&lt;br /&gt;
|-&lt;br /&gt;
| ogbn-papers100M&lt;br /&gt;
| /fs/cml-datasets/OGB/ogbn-papers100M&lt;br /&gt;
|-&lt;br /&gt;
| roberta&lt;br /&gt;
| /fs/cml-datasets/roberta&lt;br /&gt;
|-&lt;br /&gt;
| Salient ImageNet&lt;br /&gt;
| /fs/cml-datasets/Salient-ImageNet&lt;br /&gt;
|-&lt;br /&gt;
| ShapeNetCore.v2&lt;br /&gt;
| /fs/cml-datasets/ShapeNetCore.v2&lt;br /&gt;
|-&lt;br /&gt;
| Tiny ImageNet&lt;br /&gt;
| /fs/cml-datasets/tiny_imagenet&lt;br /&gt;
|-&lt;br /&gt;
| WikiKG90M&lt;br /&gt;
| /fs/cml-datasets/OGB/WikiKG90M&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
[1] - This dataset has restricted access. Please [[HelpDesk | contact staff]] if you are looking to use this dataset.&lt;/div&gt;</summary>
		<author><name>Ncaple</name></author>
	</entry>
	<entry>
		<id>https://wiki.umiacs.umd.edu/umiacs/index.php?title=Nexus&amp;diff=10956</id>
		<title>Nexus</title>
		<link rel="alternate" type="text/html" href="https://wiki.umiacs.umd.edu/umiacs/index.php?title=Nexus&amp;diff=10956"/>
		<updated>2023-05-26T15:35:40Z</updated>

		<summary type="html">&lt;p&gt;Ncaple: /* Network Scratch Directories */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;The Nexus is the combined scheduler of resources in UMIACS.  Many of our existing computational clusters that have discrete schedulers will be folded into this scheduler in the future (see [[#Migrations | below]]).  The resource manager for Nexus (as with our other existing computational clusters) is [[SLURM]].  Resources are arranged into partitions where users are able to schedule computational jobs.  Users are arranged into a number of SLURM accounts based on faculty, lab, or center investments.&lt;br /&gt;
&lt;br /&gt;
= Getting Started =&lt;br /&gt;
All accounts in UMIACS are sponsored.  If you don&#039;t already have a UMIACS account, please see [[Accounts]] for information on getting one.  You need a full UMIACS account (not a [[Accounts/Collaborator | collaborator account]]) in order to access Nexus.&lt;br /&gt;
&lt;br /&gt;
== Access ==&lt;br /&gt;
Your access to submission nodes for Nexus computational resources is determined by your account sponsor&#039;s department, center, or lab affiliation.  You can log into the [https://intranet.umiacs.umd.edu/directory/cr/ UMIACS Directory CR application] and select the Computational Resource (CR) in the list that has the prefix &amp;lt;code&amp;gt;nexus&amp;lt;/code&amp;gt;.  The Hosts section lists your available submission nodes, generally a pair of nodes of the format &amp;lt;tt&amp;gt;nexus&amp;lt;department, lab, or center abbreviation&amp;gt;[00,01]&amp;lt;/tt&amp;gt;, e.g., &amp;lt;tt&amp;gt;nexuscfar00&amp;lt;/tt&amp;gt; and &amp;lt;tt&amp;gt;nexuscfar01&amp;lt;/tt&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Note&#039;&#039;&#039; - UMIACS requires multi-factor authentication through our [[Duo]] instance.  This is completely separate from both UMD&#039;s and CSD&#039;s Duo instances.  You will need to enroll one or more devices to access resources in UMIACS, and will be prompted to enroll when you log into the Directory application for the first time.&lt;br /&gt;
&lt;br /&gt;
Once you have identified your submission nodes, you can [[SSH]] directly into them, as shown in the example below.  From there, you are able to submit to the cluster via our [[SLURM]] workload manager.  You need to make sure that your submitted jobs have the correct account, partition, and QoS.&lt;br /&gt;
&lt;br /&gt;
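For example, to connect to the &amp;lt;tt&amp;gt;nexuscfar00&amp;lt;/tt&amp;gt; submission node mentioned above (a sketch; &amp;lt;tt&amp;gt;username&amp;lt;/tt&amp;gt; is a placeholder, and the fully qualified hostname is assumed to follow the same pattern as other UMIACS submission nodes):&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
ssh username@nexuscfar00.umiacs.umd.edu&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;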
== Jobs ==&lt;br /&gt;
[[SLURM]] jobs are submitted with either &amp;lt;code&amp;gt;srun&amp;lt;/code&amp;gt; or &amp;lt;code&amp;gt;sbatch&amp;lt;/code&amp;gt;, depending on whether you are running an interactive or batch job, respectively.  You need to provide the where/how/who to run the job and specify the resources you need to run with.&lt;br /&gt;
&lt;br /&gt;
For the where/how/who, you may be required to specify &amp;lt;code&amp;gt;--partition&amp;lt;/code&amp;gt;, &amp;lt;code&amp;gt;--qos&amp;lt;/code&amp;gt;, and/or &amp;lt;code&amp;gt;--account&amp;lt;/code&amp;gt; (respectively) to be able to adequately submit jobs to the Nexus.&lt;br /&gt;
&lt;br /&gt;
For resources, you may need to specify &amp;lt;code&amp;gt;--time&amp;lt;/code&amp;gt; for time, &amp;lt;code&amp;gt;--ntasks&amp;lt;/code&amp;gt; for CPUs, &amp;lt;code&amp;gt;--mem&amp;lt;/code&amp;gt; for RAM, and &amp;lt;code&amp;gt;--gres=gpu&amp;lt;/code&amp;gt; for GPUs in your submission arguments to meet your requirements.  There are defaults for all four, so if you don&#039;t specify something, you may be scheduled with a very minimal set of time and resources (e.g., by default, NO GPUs are included if you do not specify &amp;lt;code&amp;gt;--gres=gpu&amp;lt;/code&amp;gt;).  For more information about submission flags for GPU resources, see [[SLURM/JobSubmission#Requesting_GPUs]].  You can also run &amp;lt;code&amp;gt;man srun&amp;lt;/code&amp;gt; on your submission node for a complete list of available submission arguments.&lt;br /&gt;
&lt;br /&gt;
=== Interactive ===&lt;br /&gt;
Once logged into a submission node, you can run simple interactive jobs.  If your session is interrupted from the submission node, the job will be killed.  As such, we encourage use of a terminal multiplexer such as [[Tmux]].&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
$ srun --pty --ntasks 4 --mem=2gb --gres=gpu:1 nvidia-smi -L&lt;br /&gt;
GPU 0: NVIDIA RTX A4000 (UUID: GPU-ae5dc1f5-c266-5b9f-58d5-7976e62b3ca1)&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== Batch ===&lt;br /&gt;
Batch jobs are scheduled with a script file with an optional ability to embed job scheduling parameters via variables that are defined by &amp;lt;code&amp;gt;#SBATCH&amp;lt;/code&amp;gt; lines at the top of the file.  You can find some examples in our [[SLURM/JobSubmission]] documentation.&lt;br /&gt;
&lt;br /&gt;
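As a minimal sketch (the job name, script body, and resource values are illustrative placeholders chosen to fit within the default QoS limits described below):&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
#!/bin/bash&lt;br /&gt;
#SBATCH --job-name=example&lt;br /&gt;
#SBATCH --partition=tron&lt;br /&gt;
#SBATCH --qos=default&lt;br /&gt;
#SBATCH --ntasks=4&lt;br /&gt;
#SBATCH --mem=2gb&lt;br /&gt;
#SBATCH --gres=gpu:1&lt;br /&gt;
#SBATCH --time=04:00:00&lt;br /&gt;
&lt;br /&gt;
nvidia-smi -L&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Saved as, e.g., &amp;lt;code&amp;gt;myjob.sh&amp;lt;/code&amp;gt; (a hypothetical file name), the script would be submitted with &amp;lt;code&amp;gt;sbatch myjob.sh&amp;lt;/code&amp;gt;.&lt;br /&gt;
&lt;br /&gt;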
= Partitions = &lt;br /&gt;
The SLURM resource manager uses partitions to act as job queues, which can enforce size, time, and user limits.  The Nexus has a number of different partitions of resources.  Different Centers, Labs, and Faculty are able to invest in computational resources that are restricted to approved users through these partitions.&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Partitions usable by all non-[[ClassAccounts |class account]] users:&#039;&#039;&#039;&lt;br /&gt;
* [[Nexus/Tron]] - Pool of resources available to all UMIACS and CSD faculty and graduate students.&lt;br /&gt;
* Scavenger - [https://slurm.schedmd.com/preempt.html Preemption] partition that supports nodes from multiple other partitions.  More resources are available to schedule simultaneously than in other partitions; however, jobs are subject to preemption rules.  You are responsible for ensuring your jobs handle this preemption correctly.  The SLURM scheduler will simply restart a preempted job with the same submission arguments when it is available to run again.&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Partitions usable by [[ClassAccounts]]:&#039;&#039;&#039;&lt;br /&gt;
* [[ClassAccounts | Class]] - Pool available for UMIACS class accounts sponsored by either UMIACS or CSD faculty.&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Partitions usable by specific lab/center users:&#039;&#039;&#039;&lt;br /&gt;
* [[Nexus/CBCB]] - CBCB lab pool available for CBCB lab members.&lt;br /&gt;
* [[Nexus/CLIP]] - CLIP lab pool available for CLIP lab members.&lt;br /&gt;
* [[Nexus/Gamma]] - GAMMA lab pool available for GAMMA lab members.&lt;br /&gt;
* [[Nexus/MC2]] - MC2 lab pool available for MC2 lab members.&lt;br /&gt;
&lt;br /&gt;
= Quality of Service (QoS) =&lt;br /&gt;
SLURM uses QoSes to provide limits on job sizes to users.  Note that you should still try to allocate only the minimum resources for your jobs, as the resources that each of your jobs schedules are counted against your [https://slurm.schedmd.com/fair_tree.html FairShare priority] in the future.&lt;br /&gt;
* default - Default QoS. Limited to 4 cores, 32GB RAM, and 1 GPU per job.  The maximum wall time per job is 3 days.&lt;br /&gt;
* medium - Limited to 8 cores, 64GB RAM, and 2 GPUs per job.  The maximum wall time per job is 2 days.&lt;br /&gt;
* high - Limited to 16 cores, 128GB RAM, and 4 GPUs per job.  The maximum wall time per job is 1 day.&lt;br /&gt;
* scavenger - Limited to 64 cores, 256GB RAM, and 8 GPUs per job.  The maximum wall time per job is 2 days.  Only 192 total cores, 768GB total RAM, and 24 total GPUs are permitted simultaneously across all of your jobs running in this QoS.  This QoS is only available in the scavenger partition, and it is the only QoS available in that partition. To use this QoS, include &amp;lt;code&amp;gt;--partition=scavenger&amp;lt;/code&amp;gt; and &amp;lt;code&amp;gt;--account=scavenger&amp;lt;/code&amp;gt; in your submission arguments. Do not include any QoS argument other than &amp;lt;code&amp;gt;--qos=scavenger&amp;lt;/code&amp;gt; (optional) or submission will fail.&lt;br /&gt;
&lt;br /&gt;
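For example, resubmitting the hypothetical &amp;lt;code&amp;gt;myjob.sh&amp;lt;/code&amp;gt; script from the batch sketch above to the scavenger partition would look like the following (command-line arguments override the corresponding &amp;lt;code&amp;gt;#SBATCH&amp;lt;/code&amp;gt; lines in the script):&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
sbatch --partition=scavenger --account=scavenger --qos=scavenger myjob.sh&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;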
You can display these QoSes from the command line using the &amp;lt;code&amp;gt;show_qos&amp;lt;/code&amp;gt; command. Other partition-specific, lab- or group-specific, or reserved QoSes may also appear in the listing. The above four QoSes are the ones that everyone can submit to.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
# show_qos&lt;br /&gt;
        Name     MaxWall MaxJobs                        MaxTRES                      MaxTRESPU              GrpTRES&lt;br /&gt;
------------ ----------- ------- ------------------------------ ------------------------------ --------------------&lt;br /&gt;
      normal&lt;br /&gt;
   scavenger  2-00:00:00             cpu=64,gres/gpu=8,mem=256G   cpu=192,gres/gpu=24,mem=768G&lt;br /&gt;
      medium  2-00:00:00               cpu=8,gres/gpu=2,mem=64G&lt;br /&gt;
        high  1-00:00:00             cpu=16,gres/gpu=4,mem=128G&lt;br /&gt;
     default  3-00:00:00               cpu=4,gres/gpu=1,mem=32G&lt;br /&gt;
        tron                                                        cpu=32,gres/gpu=4,mem=256G&lt;br /&gt;
   huge-long 10-00:00:00             cpu=32,gres/gpu=8,mem=256G&lt;br /&gt;
        clip                                                                                      cpu=339,mem=2926G&lt;br /&gt;
       class                                                        cpu=32,gres/gpu=4,mem=256G&lt;br /&gt;
       gamma                                                                                      cpu=179,mem=1511G&lt;br /&gt;
         mc2                                                                                      cpu=307,mem=1896G&lt;br /&gt;
        cbcb                                                                                     cpu=913,mem=46931G&lt;br /&gt;
     highmem 21-00:00:00                       cpu=32,mem=2000G&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Please note that in the default non-preemption partition (&amp;lt;tt&amp;gt;tron&amp;lt;/tt&amp;gt;), you will be restricted to 32 total cores, 256GB total RAM, and 4 total GPUs at once across all jobs you have running in the QoSes allowed by that partition.  This is codified by the reserved QoS also named &amp;lt;tt&amp;gt;tron&amp;lt;/tt&amp;gt; in the output above.&lt;br /&gt;
&lt;br /&gt;
Lab/group-specific partitions may also have similar restrictions across all users in that lab/group that are using the partition (codified by &amp;lt;tt&amp;gt;GrpTRES&amp;lt;/tt&amp;gt; in the output above for the QoS name that matches the lab/group partition). Note that the exact values above for TRES are not fixed and may fluctuate as more resources are added to various partitions.&lt;br /&gt;
&lt;br /&gt;
To find out what accounts and partitions you have access to, first use the &amp;lt;code&amp;gt;show_assoc&amp;lt;/code&amp;gt; command to show your account/QoS combinations. Then, use the &amp;lt;code&amp;gt;scontrol show partition&amp;lt;/code&amp;gt; command and note the &amp;lt;tt&amp;gt;AllowAccounts&amp;lt;/tt&amp;gt; entry for each listed partition. You are able to submit to any partition that allows an account that you have. If you need to use an account other than the default account &amp;lt;tt&amp;gt;nexus&amp;lt;/tt&amp;gt;, you will need to specify an account via the &amp;lt;code&amp;gt;--account&amp;lt;/code&amp;gt; submission argument.&lt;br /&gt;
&lt;br /&gt;
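For example (a sketch using the &amp;lt;tt&amp;gt;tron&amp;lt;/tt&amp;gt; partition; output is omitted here since it varies by user):&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
$ show_assoc&lt;br /&gt;
$ scontrol show partition tron | grep AllowAccounts&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;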
= Storage =&lt;br /&gt;
All storage available in Nexus is currently [[NFS]] based.  We will be introducing some changes for Phase 2 to support high performance GPUDirect Storage (GDS).  These storage allocation procedures will be revised and approved by a joint UMIACS and CSD faculty committee by the launch of Phase 2.&lt;br /&gt;
&lt;br /&gt;
== Home Directories ==&lt;br /&gt;
Home directories in the Nexus computational infrastructure are available from the Institute&#039;s [[NFShomes]] as &amp;lt;code&amp;gt;/nfshomes/USERNAME&amp;lt;/code&amp;gt; where USERNAME is your username.  These home directories have very limited storage (20GB, cannot be increased) and are intended for your personal files, configuration, and source code.  Your home directory is &#039;&#039;&#039;not&#039;&#039;&#039; intended for data sets or other large-scale data holdings.  Users are encouraged to utilize our [[GitLab]] infrastructure to host their code repositories.&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;NOTE&#039;&#039;&#039;: To check your quota on this directory you will need to use the &amp;lt;code&amp;gt;quota -s&amp;lt;/code&amp;gt; command.&lt;br /&gt;
&lt;br /&gt;
Your home directory data is fully protected and has both [[Snapshots | snapshots]] and is [[NightlyBackups | backed up nightly]].&lt;br /&gt;
&lt;br /&gt;
Other standalone compute clusters have begun to fold into partitions in Nexus.  The corresponding home directories used by these clusters (if not &amp;lt;code&amp;gt;/nfshomes&amp;lt;/code&amp;gt;) will be gradually phased out in favor of the &amp;lt;code&amp;gt;/nfshomes&amp;lt;/code&amp;gt; home directories.&lt;br /&gt;
&lt;br /&gt;
== Scratch Directories ==&lt;br /&gt;
Scratch data has no data protection: it has no snapshots and is not backed up. There are two types of scratch directories in the Nexus compute infrastructure:&lt;br /&gt;
* Network scratch directories&lt;br /&gt;
* Local scratch directories&lt;br /&gt;
&lt;br /&gt;
Please note that [[ClassAccounts | class accounts]] do not have network scratch directories.&lt;br /&gt;
&lt;br /&gt;
=== Network Scratch Directories ===&lt;br /&gt;
You are allocated 200GB of scratch space via NFS from &amp;lt;code&amp;gt;/fs/nexus-scratch/$username&amp;lt;/code&amp;gt;.  &#039;&#039;&#039;It is not backed up or protected in any way.&#039;&#039;&#039;  This directory is &#039;&#039;&#039;automounted&#039;&#039;&#039; so you will need to &amp;lt;code&amp;gt;cd&amp;lt;/code&amp;gt; into the directory or request/specify a fully qualified file path to access this.&lt;br /&gt;
&lt;br /&gt;
You can view your quota usage by running &amp;lt;code&amp;gt;df -h /fs/nexus-scratch/$username&amp;lt;/code&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
You may request a permanent increase of up to 400GB total space without any faculty approval by [[HelpDesk | contacting staff]].  If you need space beyond 400GB, you will need faculty approval and/or a [[#Project_Allocations | project allocation]]. If you choose to increase your scratch space beyond 400GB, the increased space is also subject to the 270 TB days limit described in the project allocation section below, after which we will check back in about renewal. For example, if you request 1.4TB total space, you may have this for 270 days (1TB beyond the 400GB permanent increase).&lt;br /&gt;
&lt;br /&gt;
This file system is available on all submission, data management, and computational nodes within the cluster.&lt;br /&gt;
&lt;br /&gt;
=== Local Scratch Directories ===&lt;br /&gt;
Each computational node that you can schedule compute jobs on also has one or more local scratch directories.  These are always named &amp;lt;code&amp;gt;/scratch0&amp;lt;/code&amp;gt;, &amp;lt;code&amp;gt;/scratch1&amp;lt;/code&amp;gt;, etc.  These are almost always more performant than any other storage available to the job.  However, you must stage your data into these directories within the confines of your job and stage the data out before the end of your job.&lt;br /&gt;
&lt;br /&gt;
These local scratch directories have a tmpwatch job which will &#039;&#039;&#039;delete unaccessed data after 90 days&#039;&#039;&#039;, scheduled via maintenance jobs to run once a month during our monthly maintenance windows.  Please make sure you secure any data you write to these directories at the end of your job.&lt;br /&gt;
&lt;br /&gt;
== Faculty Allocations ==&lt;br /&gt;
Each faculty member can be allocated 1TB of lab space upon request.  We can also support grouping these individual allocations together into larger center, lab, or research group allocations if desired by the faculty.  Please [[HelpDesk | contact staff]] to inquire.&lt;br /&gt;
&lt;br /&gt;
This lab space does not have [[Snapshots | snapshots]] by default (they are available if requested), but it is [[NightlyBackups | backed up]].&lt;br /&gt;
&lt;br /&gt;
== Project Allocations ==&lt;br /&gt;
Project allocations are available per user for 270 TB days; you can have a 1TB allocation for up to 270 days, a 3TB allocation for 90 days, etc.  A single faculty member cannot have more than 20 TB of sponsored account project allocations active at any point. &lt;br /&gt;
&lt;br /&gt;
The minimum storage space you can request (maximum length) is 500GB (540 days) and the minimum allocation length you can request (maximum storage) is 30 days (9TB).&lt;br /&gt;
&lt;br /&gt;
To request an allocation, please [[HelpDesk | contact staff]] with your account sponsor involved in the conversation.  Please include the following details:&lt;br /&gt;
* Project Name (short)&lt;br /&gt;
* Description&lt;br /&gt;
* Size (1TB, 2TB, etc.)&lt;br /&gt;
* Length in days (270 days, 135 days, etc.)&lt;br /&gt;
* Other user(s) that need to access the allocation, if any&lt;br /&gt;
&lt;br /&gt;
These allocations are available via &amp;lt;code&amp;gt;/fs/nexus-projects/$project_name&amp;lt;/code&amp;gt;.  &#039;&#039;&#039;Renewal is not guaranteed to be available due to limits on the amount of total storage.&#039;&#039;&#039;  Near the end of the allocation period, staff will contact you and ask if you are still in need of the storage allocation.  If you are no longer in need of the storage allocation, you will need to relocate all desired data within 14 days of the end of the allocation period.  Staff will then remove the allocation.&lt;br /&gt;
&lt;br /&gt;
== Datasets ==&lt;br /&gt;
We have read-only dataset storage available at &amp;lt;code&amp;gt;/fs/nexus-datasets&amp;lt;/code&amp;gt;.  If there are datasets that you would like to see curated and available, please see [[Datasets | this page]].&lt;br /&gt;
&lt;br /&gt;
We will have a more formal process to approve datasets by Phase 2 of Nexus.&lt;br /&gt;
&lt;br /&gt;
= Migrations =&lt;br /&gt;
If you are a user of an existing cluster that is in the process of being folded into Nexus now or in the near future, your cluster-specific migration information will be listed here.&lt;br /&gt;
* (n/a)&lt;/div&gt;</summary>
		<author><name>Ncaple</name></author>
	</entry>
	<entry>
		<id>https://wiki.umiacs.umd.edu/umiacs/index.php?title=Iribe/ConferenceRooms/AutoAccept&amp;diff=10677</id>
		<title>Iribe/ConferenceRooms/AutoAccept</title>
		<link rel="alternate" type="text/html" href="https://wiki.umiacs.umd.edu/umiacs/index.php?title=Iribe/ConferenceRooms/AutoAccept&amp;diff=10677"/>
		<updated>2022-09-28T17:12:48Z</updated>

		<summary type="html">&lt;p&gt;Ncaple: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;Auto-accept [[Iribe/ConferenceRooms | rooms]] will allow users to schedule them from the panel (12 hours in advance for up to 2 hours at a time) or through UMD&#039;s Google Calendar interface.  The room will auto-accept the reservation if there is no conflict on the calendar.&lt;br /&gt;
&lt;br /&gt;
Instructions on reserving a room are [[Iribe/ConferenceRooms/Reserve | here]].&lt;br /&gt;
&lt;br /&gt;
{| class=&amp;quot;wikitable&amp;quot;&lt;br /&gt;
! Room&lt;br /&gt;
! Occupancy&lt;br /&gt;
! Notes&lt;br /&gt;
|-&lt;br /&gt;
| IRB-1119&lt;br /&gt;
| 6&lt;br /&gt;
|&lt;br /&gt;
|-&lt;br /&gt;
| IRB-1134&lt;br /&gt;
| 6&lt;br /&gt;
|&lt;br /&gt;
|-&lt;br /&gt;
| IRB-2119&lt;br /&gt;
| 6&lt;br /&gt;
|&lt;br /&gt;
|-&lt;br /&gt;
| IRB-2143&lt;br /&gt;
| 6&lt;br /&gt;
|&lt;br /&gt;
|-&lt;br /&gt;
| IRB-3119&lt;br /&gt;
| 6&lt;br /&gt;
|&lt;br /&gt;
|-&lt;br /&gt;
| IRB-4119&lt;br /&gt;
| 6&lt;br /&gt;
|&lt;br /&gt;
|-&lt;br /&gt;
| IRB-4145&lt;br /&gt;
| 6&lt;br /&gt;
|&lt;br /&gt;
|-&lt;br /&gt;
| IRB-5107&lt;br /&gt;
| 12&lt;br /&gt;
|&lt;br /&gt;
|-&lt;br /&gt;
| IRB-5111&lt;br /&gt;
| 6&lt;br /&gt;
|&lt;br /&gt;
|-&lt;br /&gt;
| IRB-5119&lt;br /&gt;
| 6&lt;br /&gt;
|&lt;br /&gt;
|-&lt;br /&gt;
| IRB-5161&lt;br /&gt;
| 12&lt;br /&gt;
| &lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
=== Large Conference Room Capabilities ===&lt;br /&gt;
Rooms &#039;&#039;&#039;5107&#039;&#039;&#039; and &#039;&#039;&#039;5161&#039;&#039;&#039; have the following setup.&lt;br /&gt;
&lt;br /&gt;
* Single Display (LCD)&lt;br /&gt;
* Single camera conferencing via room PC&lt;br /&gt;
* Laptop presentation via HDMI or Mersive Solstice&lt;br /&gt;
&lt;br /&gt;
=== Small Conference Room Capabilities ===&lt;br /&gt;
&lt;br /&gt;
* Single Display (LCD)&lt;br /&gt;
* Software Conferencing via room PC&lt;br /&gt;
* Laptop presentation via HDMI or Mersive Solstice&lt;/div&gt;</summary>
		<author><name>Ncaple</name></author>
	</entry>
</feed>