<?xml version="1.0"?>
<feed xmlns="http://www.w3.org/2005/Atom" xml:lang="en">
	<id>https://wiki.umiacs.umd.edu/umiacs/api.php?action=feedcontributions&amp;feedformat=atom&amp;user=Ekr597</id>
	<title>UMIACS - User contributions [en]</title>
	<link rel="self" type="application/atom+xml" href="https://wiki.umiacs.umd.edu/umiacs/api.php?action=feedcontributions&amp;feedformat=atom&amp;user=Ekr597"/>
	<link rel="alternate" type="text/html" href="https://wiki.umiacs.umd.edu/umiacs/index.php/Special:Contributions/Ekr597"/>
	<updated>2026-04-07T22:42:45Z</updated>
	<subtitle>User contributions</subtitle>
	<generator>MediaWiki 1.43.7</generator>
	<entry>
		<id>https://wiki.umiacs.umd.edu/umiacs/index.php?title=WebSpace&amp;diff=12893</id>
		<title>WebSpace</title>
		<link rel="alternate" type="text/html" href="https://wiki.umiacs.umd.edu/umiacs/index.php?title=WebSpace&amp;diff=12893"/>
		<updated>2025-10-30T20:32:51Z</updated>

		<summary type="html">&lt;p&gt;Ekr597: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;UMIACS provides static web space hosting for research/lab pages and user pages.&lt;br /&gt;
&lt;br /&gt;
== &#039;&#039;&#039;Hosting websites in UMIACS Object Store &#039;&#039;(preferred method)&#039;&#039;&#039;&#039;&#039; ==&lt;br /&gt;
Please refer to the section &amp;quot;Hosting a Website in your Bucket&amp;quot; on the [https://obj.umiacs.umd.edu/obj/help UMIACS Object Store Help Page] or visit [[OBJ/WebHosting]]. This is currently our most updated and reliable method for hosting websites.&lt;br /&gt;
&lt;br /&gt;
==Main Website and Lab Pages==&lt;br /&gt;
&amp;lt;pre&amp;gt;http://www.umiacs.umd.edu&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
You can access the main website and lab sites for editing in two ways:&lt;br /&gt;
* From &amp;lt;b&amp;gt;Unix&amp;lt;/b&amp;gt; at /fs/www, which can be remotely accessed via [[SFTP]] to a supported Unix host (e.g., [[Nexus]]).&lt;br /&gt;
* From &amp;lt;b&amp;gt;Windows&amp;lt;/b&amp;gt; using [[WinSCP]].&lt;br /&gt;
&lt;br /&gt;
Faculty members and authorized users can modify their own public profiles on the main UMIACS homepage. For instructions, see [[ContentManagement]].&lt;br /&gt;
&lt;br /&gt;
==Personal Web Space==&lt;br /&gt;
Your personal website URL at UMIACS is&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;http://www.umiacs.umd.edu/~username&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
where &#039;&#039;&#039;username&#039;&#039;&#039; is your UMIACS username.  You can set this page to redirect to any page of your choice by setting the &#039;&#039;&#039;Home Page&#039;&#039;&#039; attribute in your UMIACS [https://intranet.umiacs.umd.edu/directory/info/ directory entry].&lt;br /&gt;
&lt;br /&gt;
In general, large files or directories distributed as part of a lab&#039;s research should go into that lab&#039;s web tree, not your individual web tree.  Remember that your webpage is not retained indefinitely after your departure from UMIACS.&lt;br /&gt;
&lt;br /&gt;
UMIACS currently supports hosting a personal website on the Object Store.&lt;br /&gt;
&lt;br /&gt;
===UMIACS Object Store===&lt;br /&gt;
This is the preferred method of hosting a personal website at UMIACS. Please see the [https://obj.umiacs.umd.edu/obj/help UMIACS Object Store (OBJ) Help Page] for more information on creating a website within OBJ. Once you create your website in OBJ, you will need to set your directory &#039;&#039;&#039;Home Page&#039;&#039;&#039; to the bucket&#039;s URL (the URL that ends in &amp;lt;code&amp;gt;umiacs.io&amp;lt;/code&amp;gt;).&lt;br /&gt;
&lt;br /&gt;
===Nexus File Space===&lt;br /&gt;
{{Note|&#039;&#039;&#039;&#039;&#039;This service has been deprecated.&#039;&#039;&#039;&#039;&#039;}}&lt;br /&gt;
&lt;br /&gt;
This is primarily a legacy method for users who already have their websites configured this way. If you believe that your circumstances require your personal website to be hosted on this file space, please contact the [[HelpDesk | Help Desk]]. (This does not affect existing users who already have websites hosted on the Nexus file space.)&lt;br /&gt;
&lt;br /&gt;
You will need to set your directory &#039;&#039;&#039;Home Page&#039;&#039;&#039; attribute to &amp;lt;code&amp;gt;http://users.umiacs.umd.edu/~username&amp;lt;/code&amp;gt;, where &#039;&#039;&#039;username&#039;&#039;&#039; is your UMIACS username (similar to your personal URL above). You can access your website for editing in two ways:&lt;br /&gt;
&lt;br /&gt;
* From &amp;lt;b&amp;gt;Unix&amp;lt;/b&amp;gt; at /fs/www-users/username, which can be remotely accessed via [[SFTP]] to a supported Unix host (e.g., [[Nexus]]).&lt;br /&gt;
* From &amp;lt;b&amp;gt;Windows&amp;lt;/b&amp;gt; as \\umiacs-webftp\www-users\username&lt;br /&gt;
&lt;br /&gt;
==Adding A Password Protected Folder To Your Web Space==&lt;br /&gt;
{{Note|&#039;&#039;&#039;&#039;&#039;This method will NOT work in the UMIACS Object Store.&#039;&#039;&#039;&#039;&#039;}}&lt;br /&gt;
&lt;br /&gt;
1) Create the directory you want to password protect or &amp;lt;tt&amp;gt;cd&amp;lt;/tt&amp;gt; into the directory you want to password protect.&lt;br /&gt;
&lt;br /&gt;
2) Create a file called &#039;&#039;.htaccess&#039;&#039; (&amp;lt;tt&amp;gt;vi .htaccess&amp;lt;/tt&amp;gt;) in the directory you wish to password protect.&lt;br /&gt;
&lt;br /&gt;
3) In the file you just created, add the following lines:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
AuthUserFile /your/directory/here/.htpasswd&lt;br /&gt;
AuthName &amp;quot;Secure Document&amp;quot;&lt;br /&gt;
AuthType Basic&lt;br /&gt;
require user username&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
For example, if you were going to protect the &amp;lt;tt&amp;gt;/fs/www-users/username/private&amp;lt;/tt&amp;gt; directory and wanted the required username to be &amp;lt;tt&amp;gt;class239&amp;lt;/tt&amp;gt;, your file would look like this:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
AuthUserFile /fs/www-users/username/private/.htpasswd&lt;br /&gt;
AuthName &amp;quot;Secure Document&amp;quot;&lt;br /&gt;
AuthType Basic&lt;br /&gt;
require user class239&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
4) Create a file called &#039;&#039;.htpasswd&#039;&#039; in the same directory as &#039;&#039;.htaccess&#039;&#039;. Create it by running &amp;lt;tt&amp;gt;htpasswd -c .htpasswd &#039;&#039;username&#039;&#039;&amp;lt;/tt&amp;gt; in the directory to be protected.&lt;br /&gt;
&lt;br /&gt;
In the example above, the username is &amp;lt;tt&amp;gt;class239&amp;lt;/tt&amp;gt; so you would type &amp;lt;tt&amp;gt;htpasswd -c .htpasswd class239&amp;lt;/tt&amp;gt;&lt;br /&gt;
&lt;br /&gt;
You will be prompted to enter the password you want. The &#039;&#039;.htpasswd&#039;&#039; file will be created in the current directory and will contain a hashed version of the password.&lt;br /&gt;
&lt;br /&gt;
To change the username later, edit the username in the &#039;&#039;.htaccess&#039;&#039; file. To change the password later, rerun the command from step 4 and enter the new password at the prompt.&lt;br /&gt;
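&lt;br /&gt;
Putting the steps above together, a minimal shell sketch (using the same hypothetical directory &amp;lt;tt&amp;gt;/fs/www-users/username/private&amp;lt;/tt&amp;gt; and username &amp;lt;tt&amp;gt;class239&amp;lt;/tt&amp;gt; as the example) would be:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
cd /fs/www-users/username/private&lt;br /&gt;
cat &amp;gt; .htaccess &amp;lt;&amp;lt;&#039;EOF&#039;&lt;br /&gt;
AuthUserFile /fs/www-users/username/private/.htpasswd&lt;br /&gt;
AuthName &amp;quot;Secure Document&amp;quot;&lt;br /&gt;
AuthType Basic&lt;br /&gt;
require user class239&lt;br /&gt;
EOF&lt;br /&gt;
# prompts for the password and writes the hashed entry&lt;br /&gt;
htpasswd -c .htpasswd class239&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;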
&lt;br /&gt;
==Restricting Content based on IP address==&lt;br /&gt;
It is possible to make pages on your webspace accessible only to clients connecting from certain IP addresses. To accomplish this, cd into the directory you wish to restrict and edit your &#039;&#039;.htaccess&#039;&#039; or &#039;&#039;httpd.conf&#039;&#039; file. The example below shows how to make content viewable only to clients connecting from the UMD WiFi in Apache 2.2.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre style=&amp;quot;white-space: pre-wrap; &lt;br /&gt;
white-space: -moz-pre-wrap; &lt;br /&gt;
white-space: -pre-wrap; &lt;br /&gt;
white-space: -o-pre-wrap; &lt;br /&gt;
word-wrap: break-word;&amp;quot;&amp;gt;SetEnvIF X-Forwarded-For &amp;quot;^128\.8\.\d+\.\d+$&amp;quot; UMD_NETWORK&lt;br /&gt;
SetEnvIF X-Forwarded-For &amp;quot;^129\.2\.\d+\.\d+$&amp;quot; UMD_NETWORK&lt;br /&gt;
SetEnvIF X-Forwarded-For &amp;quot;^192\.168\.\d+\.\d+$&amp;quot; UMD_NETWORK&lt;br /&gt;
SetEnvIF X-Forwarded-For &amp;quot;^206\.196\.(?:1[6-9][0-9]|2[0-5][0-9])\.\d+$&amp;quot; UMD_NETWORK&lt;br /&gt;
SetEnvIF X-Forwarded-For &amp;quot;^10\.\d+\.\d+\.\d+$&amp;quot; UMD_NETWORK&lt;br /&gt;
Order Deny,Allow&lt;br /&gt;
Deny from all&lt;br /&gt;
Allow from env=UMD_NETWORK&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
The SetEnvIf directive sets an environment variable when the specified request attribute matches the provided regular expression. In this example, requests forwarded from an IP within UMD&#039;s IP space are tagged with UMD_NETWORK. Then, all traffic to the example directory is blocked unless it carries the UMD_NETWORK tag. See the following pages for a more in-depth explanation of the directives used.&lt;br /&gt;
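&lt;br /&gt;
Note that the &amp;lt;tt&amp;gt;Order&amp;lt;/tt&amp;gt;/&amp;lt;tt&amp;gt;Deny&amp;lt;/tt&amp;gt;/&amp;lt;tt&amp;gt;Allow&amp;lt;/tt&amp;gt; directives shown above are Apache 2.2 syntax. If the server runs Apache 2.4 or later, a rough equivalent (verify your server&#039;s Apache version before relying on this) uses &amp;lt;tt&amp;gt;Require&amp;lt;/tt&amp;gt; instead:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
SetEnvIF X-Forwarded-For &amp;quot;^128\.8\.\d+\.\d+$&amp;quot; UMD_NETWORK&lt;br /&gt;
Require env UMD_NETWORK&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;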
&lt;br /&gt;
[https://httpd.apache.org/docs/2.2/howto/htaccess.html .htaccess], [https://httpd.apache.org/docs/2.2/mod/mod_setenvif.html#setenvif SetEnvIf], [https://httpd.apache.org/docs/2.2/mod/mod_authz_host.html#order Order], [https://httpd.apache.org/docs/2.2/mod/mod_authz_host.html#deny Deny], [https://httpd.apache.org/docs/2.2/mod/mod_authz_host.html#allow Allow]&lt;/div&gt;</summary>
		<author><name>Ekr597</name></author>
	</entry>
	<entry>
		<id>https://wiki.umiacs.umd.edu/umiacs/index.php?title=FilesystemDataStorage&amp;diff=12834</id>
		<title>FilesystemDataStorage</title>
		<link rel="alternate" type="text/html" href="https://wiki.umiacs.umd.edu/umiacs/index.php?title=FilesystemDataStorage&amp;diff=12834"/>
		<updated>2025-09-24T19:28:55Z</updated>

		<summary type="html">&lt;p&gt;Ekr597: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;Filesystem [[Data Storage | (data) storage]] refers to all data that is stored physically at UMIACS, i.e., on hard drives either in servers in datacenters managed by [[HelpDesk | UMIACS staff]], or in UMIACS-supported workstations. The opposite of this is [[CloudDataStorage | cloud storage]], which resides on third-party providers&#039; data hosting platforms.&lt;br /&gt;
&lt;br /&gt;
The sections below outline the different categories of filesystem storage available at UMIACS. Although it is technically filesystem storage by the above definition, the UMIACS-hosted [[OBJ | Object Store]] is documented outside the scope of this page.&lt;br /&gt;
&lt;br /&gt;
==Network Home Directory Filesystem Storage==&lt;br /&gt;
We provide network home directory filesystem storage to each of our users through [[NFShomes]] home directories.&lt;br /&gt;
&lt;br /&gt;
This home directory can be accessed via &amp;lt;code&amp;gt;/nfshomes/&amp;lt;username&amp;gt;&amp;lt;/code&amp;gt; on supported UNIX machines or by using [[WinSCP]] on Windows machines.&lt;br /&gt;
&lt;br /&gt;
Each home directory is backed up nightly using the Institute&#039;s [[TSM]] backup system. It also has [[Snapshots]] enabled for easy user restores.&lt;br /&gt;
&lt;br /&gt;
Users are given a 30GB, non-expandable [[Quota]]. You will need to use either platform-specific filesystem storage, directly-attached filesystem storage, or other network-attached filesystem storage for increased space.&lt;br /&gt;
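&lt;br /&gt;
For example, to check your current usage against this quota from a supported UNIX host, you can run:&lt;br /&gt;
&amp;lt;pre&amp;gt;quota -s&amp;lt;/pre&amp;gt;&lt;br /&gt;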
&lt;br /&gt;
On user account closure, the account&#039;s NFShomes home directory goes into our [[Archives]].&lt;br /&gt;
&lt;br /&gt;
==UNIX Filesystem Storage==&lt;br /&gt;
UNIX hosts use redundant, backed-up network file shares for user home directories ([[#Network Home Directory Filesystem Storage |above section]]). Research data storage ([[#Network-Attached Filesystem Storage |below section]]) is also stored on redundant (though not always backed-up) network file shares and is generally available under /fs/.&lt;br /&gt;
&lt;br /&gt;
All UNIX hosts also have filesystem storage available for transitory use. These directories may be used to store temporary &#039;&#039;&#039;&#039;&#039;COPIES&#039;&#039;&#039;&#039;&#039; of data that is permanently stored elsewhere or as a staging point for output.&lt;br /&gt;
&lt;br /&gt;
These directories may not, &#039;&#039;&#039;&#039;&#039;under any circumstances&#039;&#039;&#039;&#039;&#039;, be used as permanent storage for unique, important data. They are not backed up or archived by UMIACS. UMIACS staff cannot recover damaged or deleted data from these directories and will not be responsible for data loss if they are misused. Additionally, on our [[SLURM]] compute clusters, these volumes may have an automated cleanup routine that will delete unmodified data after some number of days. You can check the page for the specific cluster you are using for more information.&lt;br /&gt;
&lt;br /&gt;
Please note that &#039;&#039;&#039;/tmp&#039;&#039;&#039; in particular is at risk for data loss or corruption as that directory is regularly used by system processes and services for temporary storage.&lt;br /&gt;
&lt;br /&gt;
These directories include:&lt;br /&gt;
&lt;br /&gt;
  - /tmp&lt;br /&gt;
  - /scratch0, /scratch1, ... (/scratch#)&lt;br /&gt;
  - any directory named in whole or in part &amp;quot;tmp&amp;quot;, &amp;quot;temp&amp;quot;, or &amp;quot;scratch&amp;quot;.&lt;br /&gt;
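&lt;br /&gt;
For example, to preview which of your files under a scratch directory have gone unmodified for two weeks (the actual cleanup window varies by cluster), something like the following can be used:&lt;br /&gt;
&amp;lt;pre&amp;gt;find /scratch0 -user &amp;quot;$USER&amp;quot; -type f -mtime +14&amp;lt;/pre&amp;gt;&lt;br /&gt;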
&lt;br /&gt;
==Windows and macOS Filesystem Storage==&lt;br /&gt;
Windows and macOS hosts at UMIACS store user directories on their primary internal drives (&amp;lt;tt&amp;gt;C:\Users&amp;lt;/tt&amp;gt; for Windows, &amp;lt;tt&amp;gt;/Users&amp;lt;/tt&amp;gt; for macOS). Supported, UMIACS-managed hosts automatically back up user data on these drives nightly using the Institute&#039;s [[TSM]] backup system. If you have a supported, UMIACS-managed host that has other internal or external hard drives attached to it, or other partitions on its primary internal drive, please be aware that these drives/partitions &#039;&#039;&#039;are not&#039;&#039;&#039; backed up. Laptops and non-standard hosts are not automatically backed up and should be manually backed up by their users.&lt;br /&gt;
&lt;br /&gt;
On host decommission, user directories go into our [[Archives]].&lt;br /&gt;
&lt;br /&gt;
==Direct-Attached Filesystem Storage==&lt;br /&gt;
Direct-attached filesystem storage refers to devices like USB flash drives and USB hard drives, which are very popular for easily expanding storage capacity on a host. However, these devices are significantly more vulnerable to data loss or theft than internal or networked data storage. In general, UMIACS discourages the use of direct-attached filesystem storage when any other option is available. Please note that these devices are prone to high rates of failure and additional steps should be taken to ensure that the data is backed up and that critical or confidential data is not lost or stolen.&lt;br /&gt;
&lt;br /&gt;
Direct-attached filesystem storage is not backed up or archived by UMIACS.&lt;br /&gt;
&lt;br /&gt;
==Network-Attached Filesystem Storage==&lt;br /&gt;
Some labs have network-attached filesystem storage space dedicated for datasets, models, and project storage. These shares are typically named in the form &amp;lt;tt&amp;gt;/fs/&amp;lt;lab&amp;gt;-&amp;lt;purpose&amp;gt;&amp;lt;/tt&amp;gt; (e.g., &amp;lt;tt&amp;gt;/fs/cml-models&amp;lt;/tt&amp;gt; or &amp;lt;tt&amp;gt;/fs/vulcan-projects&amp;lt;/tt&amp;gt;).&lt;br /&gt;
&lt;br /&gt;
Network-attached filesystem storage may or may not be backed up and/or archived by UMIACS. Details of a specific share&#039;s retention policy should be stated along with the documentation of the share&#039;s access / usage policy. If you find documentation for a network-attached filesystem storage space in this wiki that does not state its retention policy, please [[HelpDesk | contact staff]].&lt;br /&gt;
&lt;br /&gt;
===Network-Attached Filesystem Scratch Storage===&lt;br /&gt;
One specific sub-category of network-attached filesystem storage is network-attached filesystem scratch storage. These shares are named similarly to UNIX filesystem storage, but with the lab&#039;s name included (e.g., &amp;lt;tt&amp;gt;/fs/cbcb-scratch&amp;lt;/tt&amp;gt; or &amp;lt;tt&amp;gt;/gammascratch&amp;lt;/tt&amp;gt;), are intended for scratch/temporary storage, and are subject to the same policies as filesystem scratch/tmp directories, discussed above.&lt;br /&gt;
&lt;br /&gt;
Network-attached filesystem scratch storage is not backed up or archived by UMIACS.&lt;br /&gt;
&lt;br /&gt;
==UNIX Filesystem Storage Commands==&lt;br /&gt;
Below are a few different CLI commands that may prove useful for monitoring your filesystem storage usage and performance. For additional information, run &amp;lt;code&amp;gt;[command] --help&amp;lt;/code&amp;gt; or &amp;lt;code&amp;gt;man [command]&amp;lt;/code&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
df - Shows file system disk space usage&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
Usage: df [OPTION]... [FILE]...&lt;br /&gt;
Show information about the file system on which each FILE resides,&lt;br /&gt;
or all file systems by default.&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
For example, to check how much space is available at a directory:&lt;br /&gt;
&amp;lt;pre&amp;gt;df -h ./&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
du - Shows disk usage of specific files and directories. Use the -d flag to control recursion depth.&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
Usage: du [OPTION]... [FILE]...&lt;br /&gt;
  or:  du [OPTION]... --files0-from=F&lt;br /&gt;
Summarize disk usage of each FILE, recursively for directories.&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
For example, to check how much space each file in a directory takes up:&lt;br /&gt;
&amp;lt;pre&amp;gt;du -ah -d 1 ./&amp;lt;/pre&amp;gt;&lt;br /&gt;
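&lt;br /&gt;
To surface the largest entries first, you can pipe &amp;lt;tt&amp;gt;du&amp;lt;/tt&amp;gt; output through &amp;lt;tt&amp;gt;sort&amp;lt;/tt&amp;gt;, for example:&lt;br /&gt;
&amp;lt;pre&amp;gt;du -h -d 1 ./ | sort -hr | head&amp;lt;/pre&amp;gt;&lt;br /&gt;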
&lt;br /&gt;
free - Shows current memory (RAM) usage. Use the -h flag for a human-readable format.&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
Usage:&lt;br /&gt;
 free [options]&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
quota - Shows quota information; this is useful for viewing per-filesystem limits in places such as your home directory.&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
quota: Usage: quota [-guqvswim] [-l | [-Q | -A]] [-F quotaformat]&lt;br /&gt;
	quota [-qvswim] [-l | [-Q | -A]] [-F quotaformat] -u username ...&lt;br /&gt;
	quota [-qvswim] [-l | [-Q | -A]] [-F quotaformat] -g groupname ...&lt;br /&gt;
	quota [-qvswugQm] [-F quotaformat] -f filesystem ...&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
iostat - Shows drive utilization, along with other system statistics. Pair this with the &amp;lt;code&amp;gt;watch&amp;lt;/code&amp;gt; command for regular updates.&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
Usage: iostat [ options ] [ &amp;lt;interval&amp;gt; [ &amp;lt;count&amp;gt; ] ]&lt;br /&gt;
Options are:&lt;br /&gt;
[ -c ] [ -d ] [ -h ] [ -k | -m ] [ -N ] [ -t ] [ -V ] [ -x ] [ -y ] [ -z ]&lt;br /&gt;
[ -j { ID | LABEL | PATH | UUID | ... } ]&lt;br /&gt;
[ [ -T ] -g &amp;lt;group_name&amp;gt; ] [ -p [ &amp;lt;device&amp;gt; [,...] | ALL ] ]&lt;br /&gt;
[ &amp;lt;device&amp;gt; [...] | ALL ]&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;/div&gt;</summary>
		<author><name>Ekr597</name></author>
	</entry>
	<entry>
		<id>https://wiki.umiacs.umd.edu/umiacs/index.php?title=FileTransferProtocol&amp;diff=12833</id>
		<title>FileTransferProtocol</title>
		<link rel="alternate" type="text/html" href="https://wiki.umiacs.umd.edu/umiacs/index.php?title=FileTransferProtocol&amp;diff=12833"/>
		<updated>2025-09-24T18:33:37Z</updated>

		<summary type="html">&lt;p&gt;Ekr597: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;{{Note|&#039;&#039;&#039;Our FTP service is deprecated in favor of the UMIACS Object Store. Please see [[OBJ]]. Depending on when your account was installed, you may or may not have FTP access.&#039;&#039;&#039;}} &lt;br /&gt;
&lt;br /&gt;
UMIACS provides FTP services for transferring data to external collaborators.  Since the FTP protocol is conducted entirely in plaintext, external users log in to the service as anonymous, and internal users can access the file directories internally.  Users will never authenticate over FTP with their UMIACS account.  Please see [[SFTP]] for more information on a secure file transfer protocol.&lt;br /&gt;
&lt;br /&gt;
==Publishing datasets via FTP==&lt;br /&gt;
Users can place data to be externally accessible in their public FTP space, which the FTP service exposes at&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;ftp://ftp.umiacs.umd.edu/pub/&amp;lt;username&amp;gt;/&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
To upload data to your public site, you can use&lt;br /&gt;
* &#039;&#039;&#039;/fs/ftp/pub/&amp;lt;username&amp;gt;&#039;&#039;&#039; from supported UNIX machines&lt;br /&gt;
* [[WinSCP]] from supported Windows machines&lt;/div&gt;</summary>
		<author><name>Ekr597</name></author>
	</entry>
	<entry>
		<id>https://wiki.umiacs.umd.edu/umiacs/index.php?title=NASProjects&amp;diff=12832</id>
		<title>NASProjects</title>
		<link rel="alternate" type="text/html" href="https://wiki.umiacs.umd.edu/umiacs/index.php?title=NASProjects&amp;diff=12832"/>
		<updated>2025-09-24T18:28:58Z</updated>

		<summary type="html">&lt;p&gt;Ekr597: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;===Project Directories===&lt;br /&gt;
UMIACS labs distribute shared data to workgroups onsite and outside UMIACS using a wide variety of protocols including [[FTP]], [[SFTP]], [[SCP]], and [[NFS]].&lt;br /&gt;
&lt;br /&gt;
On-site clients can access project directories through the Network File System ([[NFS]]) protocol. Supported UNIX workstations map project directories to &lt;br /&gt;
&lt;br /&gt;
    /fs/&lt;br /&gt;
&lt;br /&gt;
Windows workstations can access these directories using a [[SFTP]] client, such as [[WinSCP]].&lt;br /&gt;
&lt;br /&gt;
Project data storage may not be shared by default with every host.  Please send mail to [[HelpDesk | staff]] to configure a new share.&lt;br /&gt;
&lt;br /&gt;
Clients can also access data using authenticated File Transfer Protocol ([[FTP]]), Secure File Transfer Protocol ([[SFTP]]), and Secure Copy ([[SCP]]) through UMIACS file servers.&lt;br /&gt;
&lt;br /&gt;
===Usage Guidelines===&lt;br /&gt;
Project directories using Network-Attached Storage ([[NAS]]) are tuned for your workgroup&#039;s requirements and preferences.&lt;br /&gt;
&lt;br /&gt;
Please avoid storing personal data in project storage allocations. Separating project data from personal data will simplify administration and data management for both researchers and staff.&lt;/div&gt;</summary>
		<author><name>Ekr597</name></author>
	</entry>
	<entry>
		<id>https://wiki.umiacs.umd.edu/umiacs/index.php?title=UMobj/Example&amp;diff=12805</id>
		<title>UMobj/Example</title>
		<link rel="alternate" type="text/html" href="https://wiki.umiacs.umd.edu/umiacs/index.php?title=UMobj/Example&amp;diff=12805"/>
		<updated>2025-09-11T18:47:27Z</updated>

		<summary type="html">&lt;p&gt;Ekr597: Deprecated, see MinIO Client&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;{{Note|&#039;&#039;&#039;UMobj has been deprecated in favor of the MinIO Client. Please see [[MinIO_Client]]&#039;&#039;&#039;}} &lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
In your shell, export the credentials if not already done (substituting in your actual ACCESS_KEY and SECRET_KEY for your personal account or [[OBJ#LabGroups | LabGroup]] in the [https://obj.umiacs.umd.edu/obj/user/ Object Store]).&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
export OBJ_ACCESS_KEY_ID=&amp;quot;&amp;lt;ACCESS_KEY&amp;gt;&amp;quot;&lt;br /&gt;
export OBJ_SECRET_ACCESS_KEY=&amp;quot;&amp;lt;SECRET_KEY&amp;gt;&amp;quot;&lt;br /&gt;
export OBJ_SERVER=&amp;quot;obj.umiacs.umd.edu&amp;quot;&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
We will be uploading this roughly 4GB file.&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
# stat CASIA-WebFace.tar.gz&lt;br /&gt;
  File: `CASIA-WebFace.tar.gz&#039;&lt;br /&gt;
  Size: 4018119666      Blocks: 7847896    IO Block: 262144 regular file&lt;br /&gt;
Device: 13h/19d Inode: 87161       Links: 1&lt;br /&gt;
Access: (0644/-rw-r--r--)  Uid: (10001/username)   Gid: (18001/groupname)&lt;br /&gt;
Access: 2015-11-24 19:34:05.715695000 -0500&lt;br /&gt;
Modify: 2015-11-23 14:23:35.872087000 -0500&lt;br /&gt;
Change: 2015-11-23 16:07:10.062306000 -0500&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
If the bucket does not exist yet, you need to create it first.&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
# mkobj janus_datasets:&lt;br /&gt;
Created bucket janus_datasets.&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Now you can copy the file to the bucket.&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
# cpobj CASIA-WebFace.tar.gz janus_datasets:&lt;br /&gt;
100% |#########################################################################################################################################|&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
To verify that the object store has the data expected, we can compare checksums.  First calculate the md5sum locally, then use cmpobj to stream the data from the object store and compute an md5sum.  The two should match.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
# md5sum CASIA-WebFace.tar.gz&lt;br /&gt;
004c2475e5ed66771e8873be67a93105  CASIA-WebFace.tar.gz&lt;br /&gt;
# cmpobj janus_datasets:CASIA-WebFace.tar.gz&lt;br /&gt;
004c2475e5ed66771e8873be67a93105&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;/div&gt;</summary>
		<author><name>Ekr597</name></author>
	</entry>
	<entry>
		<id>https://wiki.umiacs.umd.edu/umiacs/index.php?title=UMobj&amp;diff=12804</id>
		<title>UMobj</title>
		<link rel="alternate" type="text/html" href="https://wiki.umiacs.umd.edu/umiacs/index.php?title=UMobj&amp;diff=12804"/>
		<updated>2025-09-11T18:46:43Z</updated>

		<summary type="html">&lt;p&gt;Ekr597: Deprecated, see MinIO Client&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;{{Note|&#039;&#039;&#039;UMobj has been deprecated in favor of the MinIO Client. Please see [[MinIO_Client]]&#039;&#039;&#039;}} &lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
The UMobj suite of utilities provides command-line access to the [https://obj.umiacs.umd.edu/obj UMIACS Object Store].  UMobj is preinstalled on all UMIACS-supported RHEL8 machines. For other operating systems or non-UMIACS-supported hosts, we encourage use of one of the many [[S3Clients#Command_Line_Clients | third-party command line clients]] that exist.&lt;br /&gt;
&lt;br /&gt;
==When to use UMobj==&lt;br /&gt;
Use umobj if:&lt;br /&gt;
* you have a large number of files to upload (hundreds or thousands of files)&lt;br /&gt;
* you are uploading large files (files greater than 4GB in size)&lt;br /&gt;
&lt;br /&gt;
==Setup==&lt;br /&gt;
We highly recommend setting a few environment variables containing your credentials for convenience.  When logged into the Object Store web interface, you can find these credentials on the user page, e.g., https://obj.umiacs.umd.edu/obj/user/.&lt;br /&gt;
&lt;br /&gt;
For example, if you use the &amp;lt;tt&amp;gt;bash&amp;lt;/tt&amp;gt; shell, you can add something like the following to your &amp;lt;tt&amp;gt;.bashrc&amp;lt;/tt&amp;gt; or &amp;lt;tt&amp;gt;.bash_profile&amp;lt;/tt&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
export OBJ_ACCESS_KEY_ID=&amp;quot;&amp;lt;ACCESS_KEY&amp;gt;&amp;quot;&lt;br /&gt;
export OBJ_SECRET_ACCESS_KEY=&amp;quot;&amp;lt;SECRET_KEY&amp;gt;&amp;quot;&lt;br /&gt;
export OBJ_SERVER=&amp;quot;obj.umiacs.umd.edu&amp;quot;&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Or, in tcsh, you can do the following or add it into your &amp;lt;tt&amp;gt;.tcshrc&amp;lt;/tt&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
setenv OBJ_ACCESS_KEY_ID &amp;quot;&amp;lt;ACCESS_KEY&amp;gt;&amp;quot;&lt;br /&gt;
setenv OBJ_SECRET_ACCESS_KEY &amp;quot;&amp;lt;SECRET_KEY&amp;gt;&amp;quot;&lt;br /&gt;
setenv OBJ_SERVER &amp;quot;obj.umiacs.umd.edu&amp;quot;&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
(substituting in your actual &amp;lt;ACCESS_KEY&amp;gt; and &amp;lt;SECRET_KEY&amp;gt; for your personal account or [[OBJ#LabGroups | LabGroup]] in the [https://obj.umiacs.umd.edu/obj/user/ Object Store]).&lt;br /&gt;
&lt;br /&gt;
==Detailed Usage==&lt;br /&gt;
For an example of how to use UMobj, please see [[UMobj/Example]].&lt;br /&gt;
&lt;br /&gt;
For complete usage information, please see the [[GitLab]] page for [https://gitlab.umiacs.umd.edu/staff/umobj/blob/master/README.md#umobj umobj].&lt;/div&gt;</summary>
		<author><name>Ekr597</name></author>
	</entry>
	<entry>
		<id>https://wiki.umiacs.umd.edu/umiacs/index.php?title=MinIO_Client&amp;diff=12803</id>
		<title>MinIO Client</title>
		<link rel="alternate" type="text/html" href="https://wiki.umiacs.umd.edu/umiacs/index.php?title=MinIO_Client&amp;diff=12803"/>
		<updated>2025-09-11T18:38:18Z</updated>

		<summary type="html">&lt;p&gt;Ekr597: Created page&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;The MinIO Client is a single-binary command-line client, written in Go, for cloud-based storage services.&lt;br /&gt;
&lt;br /&gt;
==Installing the MinIO Client (mc)==&lt;br /&gt;
mc is available on all UMIACS-supported RHEL machines via our software [[Modules|module tree]].&lt;br /&gt;
:&amp;lt;pre&amp;gt;module add mc&amp;lt;/pre&amp;gt;&lt;br /&gt;
For other operating systems or non-UMIACS-supported hosts, download and installation instructions can be found at https://docs.min.io/enterprise/aistor-object-store/reference/cli/#quickstart.&lt;br /&gt;
&lt;br /&gt;
==Setting Up the MinIO Client (mc)==&lt;br /&gt;
To connect to the Object Store, you&#039;ll need to configure mc by adding a host entry with your ACCESS_KEY and SECRET_KEY.&lt;br /&gt;
* The ACCESS_KEY and SECRET_KEY can either be for your personal account or a [[OBJ#LabGroups | LabGroup]]. You can find these in the [https://obj.umiacs.umd.edu/obj/user/ Object Store].&lt;br /&gt;
&lt;br /&gt;
Run the following command to create the host &amp;quot;obj&amp;quot;.&lt;br /&gt;
:&amp;lt;pre&amp;gt;mc config host add obj http://obj.umiacs.umd.edu &amp;lt;ACCESS_KEY&amp;gt; &amp;lt;SECRET_KEY&amp;gt;&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
You can see what host(s) you have configured with the command &amp;lt;code&amp;gt;mc config host ls&amp;lt;/code&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
$ mc config host ls&lt;br /&gt;
...&lt;br /&gt;
obj&lt;br /&gt;
  URL       : http://obj.umiacs.umd.edu&lt;br /&gt;
  AccessKey : (redacted)&lt;br /&gt;
  SecretKey : (redacted)&lt;br /&gt;
  API       : s3v4&lt;br /&gt;
  Path      : auto&lt;br /&gt;
...&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==Running the MinIO Client (mc)==&lt;br /&gt;
To use most commands, specify the host you created and the bucket name. For example, run the following to list the contents of the bucket &amp;quot;iso&amp;quot;.&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
$ mc ls obj/iso&lt;br /&gt;
[2017-02-10 16:45:04 EST] 3.5GiB rhel-server-7.3-x86_64-dvd.iso&lt;br /&gt;
[2017-02-13 12:21:33 EST] 4.0GiB rhel-workstation-7.3-x86_64-dvd.iso&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
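&lt;br /&gt;
To copy data in or out, &amp;lt;code&amp;gt;mc cp&amp;lt;/code&amp;gt; works much like &amp;lt;code&amp;gt;cp&amp;lt;/code&amp;gt;; for example, with a hypothetical bucket named &amp;quot;mybucket&amp;quot;:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
$ mc cp ./results.tar.gz obj/mybucket/&lt;br /&gt;
$ mc cp obj/mybucket/results.tar.gz ./&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;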
&lt;br /&gt;
The full MinIO Client documentation can be found here: https://docs.min.io/enterprise/aistor-object-store/reference/cli/#command-quick-reference.&lt;br /&gt;
&lt;br /&gt;
==Troubleshooting==&lt;br /&gt;
If you receive an error message, please check if it matches one of the error messages below. If you don&#039;t see your error message below, please reach out to the [[HelpDesk]].&lt;br /&gt;
* &amp;lt;i&amp;gt;The provided &#039;x-amz-content-sha256&#039; header does not match what was computed.&amp;lt;/i&amp;gt;&lt;br /&gt;
** The MinIO Client has issues handling empty files. You can have it use an older API version to handle these files by re-running the config command and specifying the API. &amp;lt;pre&amp;gt;mc config host add obj http://obj.umiacs.umd.edu &amp;lt;ACCESS_KEY&amp;gt; &amp;lt;SECRET_KEY&amp;gt; --api S3v2&amp;lt;/pre&amp;gt;&lt;/div&gt;</summary>
		<author><name>Ekr597</name></author>
	</entry>
	<entry>
		<id>https://wiki.umiacs.umd.edu/umiacs/index.php?title=File:WinSCP_edit_properties.png&amp;diff=12712</id>
		<title>File:WinSCP edit properties.png</title>
		<link rel="alternate" type="text/html" href="https://wiki.umiacs.umd.edu/umiacs/index.php?title=File:WinSCP_edit_properties.png&amp;diff=12712"/>
		<updated>2025-06-25T19:05:03Z</updated>

		<summary type="html">&lt;p&gt;Ekr597: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;&lt;/div&gt;</summary>
		<author><name>Ekr597</name></author>
	</entry>
	<entry>
		<id>https://wiki.umiacs.umd.edu/umiacs/index.php?title=File:WinSCP_remote_properties.png&amp;diff=12711</id>
		<title>File:WinSCP remote properties.png</title>
		<link rel="alternate" type="text/html" href="https://wiki.umiacs.umd.edu/umiacs/index.php?title=File:WinSCP_remote_properties.png&amp;diff=12711"/>
		<updated>2025-06-25T19:03:38Z</updated>

		<summary type="html">&lt;p&gt;Ekr597: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;&lt;/div&gt;</summary>
		<author><name>Ekr597</name></author>
	</entry>
	<entry>
		<id>https://wiki.umiacs.umd.edu/umiacs/index.php?title=WinSCP&amp;diff=12710</id>
		<title>WinSCP</title>
		<link rel="alternate" type="text/html" href="https://wiki.umiacs.umd.edu/umiacs/index.php?title=WinSCP&amp;diff=12710"/>
		<updated>2025-06-25T18:11:25Z</updated>

		<summary type="html">&lt;p&gt;Ekr597: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;WinSCP is a free Windows file transfer application used to copy data to and from a remote host.&lt;br /&gt;
&lt;br /&gt;
WinSCP is installed on all UMIACS-supported Windows workstations. &lt;br /&gt;
&lt;br /&gt;
For all other Windows hosts, WinSCP can be downloaded from https://winscp.net/eng/index.php.&lt;br /&gt;
&lt;br /&gt;
==Logging Into WinSCP==&lt;br /&gt;
When launching WinSCP, it asks you to log into a remote host.&lt;br /&gt;
* If you would like to upload files to [[OBJ]], follow the login instructions on the [[S3Clients#WinSCP | S3 Clients]] page.&lt;br /&gt;
&lt;br /&gt;
To access a UMIACS directory, such as your [[NFShomes]] directory, [[WebSpace | legacy web page directories]], or [[FileTransferProtocol | legacy FTP directories]]:&lt;br /&gt;
* For the File protocol, select SFTP.&lt;br /&gt;
* For the host name, enter your [[Nexus#Access | Nexus submission node]].&lt;br /&gt;
* Enter your UMD username and password.&lt;br /&gt;
&lt;br /&gt;
[[Image:WinSCP_sftp_login.png|500px]]&lt;br /&gt;
&lt;br /&gt;
After logging in, WinSCP shows two directories. &lt;br /&gt;
&lt;br /&gt;
The left side shows a directory on your computer and the right side shows a directory on the remote host.&lt;br /&gt;
&lt;br /&gt;
==Changing Directories==&lt;br /&gt;
===Changing the Local Directory===&lt;br /&gt;
To change the directory on your computer, select &amp;quot;Open Directory&amp;quot; from the &amp;quot;Local&amp;quot; menu.&lt;br /&gt;
&lt;br /&gt;
This controls the section on the left-hand side.&lt;br /&gt;
&lt;br /&gt;
[[Image:WinSCP_change_local_dir.png|500px]]&lt;br /&gt;
&lt;br /&gt;
===Changing the Remote Directory===&lt;br /&gt;
To change the UMIACS directory, select &amp;quot;Open Directory&amp;quot; from the &amp;quot;Remote&amp;quot; menu.&lt;br /&gt;
&lt;br /&gt;
This controls the section on the right-hand side.&lt;br /&gt;
&lt;br /&gt;
[[Image:WinSCP_change_remote_dir.png|600px]]&lt;br /&gt;
&lt;br /&gt;
When changing the remote directory, you must enter the full path to that directory.&lt;br /&gt;
&lt;br /&gt;
Below is a list of frequently used directories and their paths. &lt;br /&gt;
&lt;br /&gt;
If you would like to access a specific directory and you don&#039;t know the path, please contact the [[HelpDesk]].&lt;br /&gt;
&lt;br /&gt;
{| class=&amp;quot;wikitable sortable&amp;quot; cellpadding=&amp;quot;20&amp;quot;&lt;br /&gt;
! Name&lt;br /&gt;
! Path&lt;br /&gt;
|-&lt;br /&gt;
| NFShomes&lt;br /&gt;
| /fs/nfshomes/&amp;lt;username&amp;gt;&lt;br /&gt;
|-&lt;br /&gt;
| Nexus Scratch&lt;br /&gt;
| /fs/nexus-scratch/&amp;lt;username&amp;gt;&lt;br /&gt;
|-&lt;br /&gt;
| www-umiacs&lt;br /&gt;
| /fs/www&lt;br /&gt;
|-&lt;br /&gt;
| www-users&lt;br /&gt;
| /fs/www-users&lt;br /&gt;
|-&lt;br /&gt;
| ftp-umiacs&lt;br /&gt;
| /fs/ftp&lt;br /&gt;
|}&lt;br /&gt;
 &lt;br /&gt;
&lt;br /&gt;
==Transferring Files==&lt;br /&gt;
To transfer files between your computer and the remote directory, simply drag the files from one side to the other.&lt;br /&gt;
&lt;br /&gt;
You can also select the files and use the corresponding button.&lt;br /&gt;
===Uploading Files===&lt;br /&gt;
To upload files from your computer to the remote directory, select one or more files on the left-hand side and then click the Upload button.&lt;br /&gt;
&lt;br /&gt;
[[Image:WinSCP_upload_local_file.png|300px]]&lt;br /&gt;
&lt;br /&gt;
After clicking the Upload button, another window pops up. You can enter a different directory to change where the files are uploaded. Click OK to start the upload. &lt;br /&gt;
&lt;br /&gt;
===Downloading Files===&lt;br /&gt;
To download files from the remote directory to your computer, select one or more files on the right-hand side and then click the Download button.&lt;br /&gt;
&lt;br /&gt;
[[Image:WinSCP_download_remote_file.png|300px]]&lt;br /&gt;
&lt;br /&gt;
After clicking the Download button, another window pops up. You can enter a different directory to change where the files are downloaded. Click OK to start the download.&lt;br /&gt;
&lt;br /&gt;
==Opening Files==&lt;br /&gt;
Double-clicking a file opens it in a text editor. This works for small, simple files, such as .txt and .html files.&lt;br /&gt;
&lt;br /&gt;
To open more complex files, such as .pdf files, select the file and then click &amp;quot;Open&amp;quot; from the &amp;quot;Files&amp;quot; menu.&lt;br /&gt;
&lt;br /&gt;
[[Image:WinSCP_open_file.png|500px]]&lt;br /&gt;
&lt;br /&gt;
==Saving the Workspace==&lt;br /&gt;
If you frequently move files from your computer to a specific directory, you can save the workspace so it will open to that directory when you launch WinSCP.&lt;br /&gt;
&lt;br /&gt;
First, change the local and remote directories to the directories you want using the instructions above.&lt;br /&gt;
&lt;br /&gt;
Then, click &amp;quot;Save Workspace&amp;quot; under the &amp;quot;Tabs&amp;quot; menu.&lt;br /&gt;
&lt;br /&gt;
[[Image:WinSCP_save_workspace.png|300px]]&lt;br /&gt;
&lt;br /&gt;
Another window pops up letting you set the name of the workspace.&lt;br /&gt;
&lt;br /&gt;
You can check the box next to &amp;quot;Create desktop shortcut&amp;quot; to save a desktop icon that opens directly to the saved directories.&lt;br /&gt;
&lt;br /&gt;
[[Image:WinSCP_save_workspace_window.png|300px]]&lt;/div&gt;</summary>
		<author><name>Ekr597</name></author>
	</entry>
	<entry>
		<id>https://wiki.umiacs.umd.edu/umiacs/index.php?title=WinSCP&amp;diff=12708</id>
		<title>WinSCP</title>
		<link rel="alternate" type="text/html" href="https://wiki.umiacs.umd.edu/umiacs/index.php?title=WinSCP&amp;diff=12708"/>
		<updated>2025-06-25T15:08:18Z</updated>

		<summary type="html">&lt;p&gt;Ekr597: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;WinSCP is a free Windows file transfer application used to copy data to and from a remote host.&lt;br /&gt;
&lt;br /&gt;
WinSCP is installed on all UMIACS-supported Windows workstations. &lt;br /&gt;
&lt;br /&gt;
For all other Windows hosts, WinSCP can be downloaded from https://winscp.net/eng/index.php.&lt;br /&gt;
&lt;br /&gt;
==Logging Into WinSCP==&lt;br /&gt;
When launching WinSCP, it asks you to log into a remote host.&lt;br /&gt;
* If you would like to upload files to [[OBJ]], follow the login instructions on the [[S3Clients#WinSCP | S3 Clients]] page.&lt;br /&gt;
&lt;br /&gt;
To access a UMIACS directory, such as /fs/nexus-scratch:&lt;br /&gt;
* For the File protocol, select SFTP.&lt;br /&gt;
* For the host name, enter your [[Nexus#Access | Nexus submission node]].&lt;br /&gt;
* Enter your UMD username and password.&lt;br /&gt;
&lt;br /&gt;
[[Image:WinSCP_sftp_login.png|500px]]&lt;br /&gt;
&lt;br /&gt;
After logging in, WinSCP shows two directories. &lt;br /&gt;
&lt;br /&gt;
The left side shows a directory on your computer and the right side shows a directory on the remote host.&lt;br /&gt;
&lt;br /&gt;
==Changing Directories==&lt;br /&gt;
===Changing the Local Directory===&lt;br /&gt;
To change the directory on your computer, select &amp;quot;Open Directory&amp;quot; from the &amp;quot;Local&amp;quot; menu.&lt;br /&gt;
&lt;br /&gt;
This controls the section on the left-hand side.&lt;br /&gt;
&lt;br /&gt;
[[Image:WinSCP_change_local_dir.png|500px]]&lt;br /&gt;
&lt;br /&gt;
===Changing the Remote Directory===&lt;br /&gt;
To change the UMIACS directory, select &amp;quot;Open Directory&amp;quot; from the &amp;quot;Remote&amp;quot; menu.&lt;br /&gt;
&lt;br /&gt;
This controls the section on the right-hand side.&lt;br /&gt;
&lt;br /&gt;
[[Image:WinSCP_change_remote_dir.png|600px]]&lt;br /&gt;
&lt;br /&gt;
==Transferring Files==&lt;br /&gt;
To transfer files between your computer and the remote directory, simply drag the files from one side to the other.&lt;br /&gt;
&lt;br /&gt;
You can also select the files and use the corresponding button.&lt;br /&gt;
===Uploading Files===&lt;br /&gt;
To upload files from your computer to the remote directory, select one or more files on the left-hand side and then click the Upload button.&lt;br /&gt;
&lt;br /&gt;
[[Image:WinSCP_upload_local_file.png|300px]]&lt;br /&gt;
&lt;br /&gt;
After clicking the Upload button, another window pops up. You can enter a different directory to change where the files are uploaded. Click OK to start the upload. &lt;br /&gt;
&lt;br /&gt;
===Downloading Files===&lt;br /&gt;
To download files from the remote directory to your computer, select one or more files on the right-hand side and then click the Download button.&lt;br /&gt;
&lt;br /&gt;
[[Image:WinSCP_download_remote_file.png|300px]]&lt;br /&gt;
&lt;br /&gt;
After clicking the Download button, another window pops up. You can enter a different directory to change where the files are downloaded. Click OK to start the download.&lt;br /&gt;
&lt;br /&gt;
==Opening Files==&lt;br /&gt;
Double-clicking a file opens it in a text editor. This works for small, simple files, such as .txt and .html files.&lt;br /&gt;
&lt;br /&gt;
To open more complex files, such as .pdf files, select the file and then click &amp;quot;Open&amp;quot; from the &amp;quot;Files&amp;quot; menu.&lt;br /&gt;
&lt;br /&gt;
[[Image:WinSCP_open_file.png|500px]]&lt;br /&gt;
&lt;br /&gt;
==Saving the Workspace==&lt;br /&gt;
If you frequently move files from your computer to a specific directory, you can save the workspace so it will open to that directory when you launch WinSCP.&lt;br /&gt;
&lt;br /&gt;
First, change the local and remote directories to the directories you want using the instructions above.&lt;br /&gt;
&lt;br /&gt;
Then, click &amp;quot;Save Workspace&amp;quot; under the &amp;quot;Tabs&amp;quot; menu.&lt;br /&gt;
&lt;br /&gt;
[[Image:WinSCP_save_workspace.png|300px]]&lt;br /&gt;
&lt;br /&gt;
Another window pops up letting you set the name of the workspace.&lt;br /&gt;
&lt;br /&gt;
You can check the box next to &amp;quot;Create desktop shortcut&amp;quot; so it will save an icon on your desktop that will open up to the saved directories.&lt;br /&gt;
&lt;br /&gt;
[[Image:WinSCP_save_workspace_window.png|300px]]&lt;/div&gt;</summary>
		<author><name>Ekr597</name></author>
	</entry>
	<entry>
		<id>https://wiki.umiacs.umd.edu/umiacs/index.php?title=File:WinSCP_save_workspace_window.png&amp;diff=12707</id>
		<title>File:WinSCP save workspace window.png</title>
		<link rel="alternate" type="text/html" href="https://wiki.umiacs.umd.edu/umiacs/index.php?title=File:WinSCP_save_workspace_window.png&amp;diff=12707"/>
		<updated>2025-06-25T15:04:25Z</updated>

		<summary type="html">&lt;p&gt;Ekr597: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;&lt;/div&gt;</summary>
		<author><name>Ekr597</name></author>
	</entry>
	<entry>
		<id>https://wiki.umiacs.umd.edu/umiacs/index.php?title=File:WinSCP_save_workspace.png&amp;diff=12706</id>
		<title>File:WinSCP save workspace.png</title>
		<link rel="alternate" type="text/html" href="https://wiki.umiacs.umd.edu/umiacs/index.php?title=File:WinSCP_save_workspace.png&amp;diff=12706"/>
		<updated>2025-06-25T14:47:47Z</updated>

		<summary type="html">&lt;p&gt;Ekr597: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;&lt;/div&gt;</summary>
		<author><name>Ekr597</name></author>
	</entry>
	<entry>
		<id>https://wiki.umiacs.umd.edu/umiacs/index.php?title=WinSCP&amp;diff=12705</id>
		<title>WinSCP</title>
		<link rel="alternate" type="text/html" href="https://wiki.umiacs.umd.edu/umiacs/index.php?title=WinSCP&amp;diff=12705"/>
		<updated>2025-06-25T05:32:37Z</updated>

		<summary type="html">&lt;p&gt;Ekr597: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;WinSCP is a free Windows file transfer application used to copy data to and from a remote host.&lt;br /&gt;
&lt;br /&gt;
WinSCP is installed on all UMIACS-supported Windows workstations. &lt;br /&gt;
&lt;br /&gt;
For all other Windows hosts, WinSCP can be downloaded from https://winscp.net/eng/index.php.&lt;br /&gt;
&lt;br /&gt;
==Logging Into WinSCP==&lt;br /&gt;
When launching WinSCP, it asks you to log into a remote host.&lt;br /&gt;
* If you would like to upload files to [[OBJ]], follow the login instructions on the [[S3Clients#WinSCP | S3 Clients]] page.&lt;br /&gt;
&lt;br /&gt;
To access a UMIACS directory, such as /fs/nexus-scratch:&lt;br /&gt;
* For the File protocol, select SFTP.&lt;br /&gt;
* For the host name, enter your [[Nexus#Access | Nexus submission node]].&lt;br /&gt;
* Enter your UMD username and password.&lt;br /&gt;
&lt;br /&gt;
[[Image:WinSCP_sftp_login.png|500px]]&lt;br /&gt;
&lt;br /&gt;
After logging in, WinSCP shows two directories. &lt;br /&gt;
&lt;br /&gt;
The left side shows a directory on your computer and the right side shows a directory on the remote host.&lt;br /&gt;
&lt;br /&gt;
==Changing Directories==&lt;br /&gt;
===Changing the Local Directory===&lt;br /&gt;
To change the directory on your computer, select &amp;quot;Open Directory&amp;quot; from the &amp;quot;Local&amp;quot; menu.&lt;br /&gt;
&lt;br /&gt;
This controls the section on the left-hand side.&lt;br /&gt;
&lt;br /&gt;
[[Image:WinSCP_change_local_dir.png|500px]]&lt;br /&gt;
&lt;br /&gt;
===Changing the Remote Directory===&lt;br /&gt;
To change the UMIACS directory, select &amp;quot;Open Directory&amp;quot; from the &amp;quot;Remote&amp;quot; menu.&lt;br /&gt;
&lt;br /&gt;
This controls the section on the right-hand side.&lt;br /&gt;
&lt;br /&gt;
[[Image:WinSCP_change_remote_dir.png|600px]]&lt;br /&gt;
&lt;br /&gt;
==Transferring Files==&lt;br /&gt;
To transfer files between your computer and the remote directory, simply drag the files from one side to the other.&lt;br /&gt;
&lt;br /&gt;
You can also select the files and use the corresponding button.&lt;br /&gt;
===Uploading Files===&lt;br /&gt;
To upload files from your computer to the remote directory, select one or more files on the left-hand side and then click the Upload button.&lt;br /&gt;
&lt;br /&gt;
[[Image:WinSCP_upload_local_file.png|300px]]&lt;br /&gt;
&lt;br /&gt;
After clicking the Upload button, another window pops up. You can enter a different directory to change where the files are uploaded. Click OK to start the upload. &lt;br /&gt;
&lt;br /&gt;
===Downloading Files===&lt;br /&gt;
To download files from the remote directory to your computer, select one or more files on the right-hand side and then click the Download button.&lt;br /&gt;
&lt;br /&gt;
[[Image:WinSCP_download_remote_file.png|300px]]&lt;br /&gt;
&lt;br /&gt;
After clicking the Download button, another window pops up. You can enter a different directory to change where the files are downloaded. Click OK to start the download.&lt;br /&gt;
&lt;br /&gt;
==Opening Files==&lt;br /&gt;
Double-clicking a file opens it in a text editor. This works for small, simple files, such as .txt and .html files.&lt;br /&gt;
&lt;br /&gt;
To open more complex files, such as .pdf files, select the file and then click &amp;quot;Open&amp;quot; from the &amp;quot;Files&amp;quot; menu.&lt;br /&gt;
&lt;br /&gt;
[[Image:WinSCP_open_file.png|500px]]&lt;/div&gt;</summary>
		<author><name>Ekr597</name></author>
	</entry>
	<entry>
		<id>https://wiki.umiacs.umd.edu/umiacs/index.php?title=File:WinSCP_open_file.png&amp;diff=12704</id>
		<title>File:WinSCP open file.png</title>
		<link rel="alternate" type="text/html" href="https://wiki.umiacs.umd.edu/umiacs/index.php?title=File:WinSCP_open_file.png&amp;diff=12704"/>
		<updated>2025-06-25T05:29:40Z</updated>

		<summary type="html">&lt;p&gt;Ekr597: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;&lt;/div&gt;</summary>
		<author><name>Ekr597</name></author>
	</entry>
	<entry>
		<id>https://wiki.umiacs.umd.edu/umiacs/index.php?title=WinSCP&amp;diff=12703</id>
		<title>WinSCP</title>
		<link rel="alternate" type="text/html" href="https://wiki.umiacs.umd.edu/umiacs/index.php?title=WinSCP&amp;diff=12703"/>
		<updated>2025-06-25T03:55:34Z</updated>

		<summary type="html">&lt;p&gt;Ekr597: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;WinSCP is a free Windows file transfer application used to copy data to and from a remote host.&lt;br /&gt;
&lt;br /&gt;
WinSCP is installed on all UMIACS-supported Windows workstations. &lt;br /&gt;
&lt;br /&gt;
For all other Windows hosts, WinSCP can be downloaded from https://winscp.net/eng/index.php.&lt;br /&gt;
&lt;br /&gt;
==Logging Into WinSCP==&lt;br /&gt;
When launching WinSCP, it will ask you to log in to a remote host.&lt;br /&gt;
* If you would like to upload files to [[OBJ]], follow the login instructions on the [[S3Clients#WinSCP | S3 Clients]] page.&lt;br /&gt;
&lt;br /&gt;
To access a UMIACS directory:&lt;br /&gt;
* For the File protocol, select SFTP.&lt;br /&gt;
* For the host name, enter your [[Nexus#Access | Nexus submission node]].&lt;br /&gt;
* Enter your UMD username and password.&lt;br /&gt;
&lt;br /&gt;
[[Image:WinSCP_sftp_login.png|500px]]&lt;br /&gt;
&lt;br /&gt;
After logging in, WinSCP shows two directories. &lt;br /&gt;
&lt;br /&gt;
The left side shows a directory on the local computer and the right side shows a directory on the remote host.&lt;br /&gt;
&lt;br /&gt;
==Changing Directories==&lt;br /&gt;
===Changing Local Directory===&lt;br /&gt;
To change the directory on your computer, select Open Directory from the Local menu.&lt;br /&gt;
&lt;br /&gt;
These are the files shown on the left-hand side.&lt;br /&gt;
&lt;br /&gt;
[[Image:WinSCP_change_local_dir.png|500px]]&lt;br /&gt;
&lt;br /&gt;
===Changing Remote Directory===&lt;br /&gt;
To change the UMIACS directory, select Open Directory from the Remote menu.&lt;br /&gt;
&lt;br /&gt;
These are the files shown on the right-hand side.&lt;br /&gt;
&lt;br /&gt;
[[Image:WinSCP_change_remote_dir.png|600px]]&lt;br /&gt;
&lt;br /&gt;
==Transferring Files==&lt;br /&gt;
===Uploading Files===&lt;br /&gt;
To upload files from your computer to the remote directory, select one or more files on the left-hand side and then click the Upload button.&lt;br /&gt;
&lt;br /&gt;
[[Image:WinSCP_upload_local_file.png|300px]]&lt;br /&gt;
&lt;br /&gt;
===Downloading Files===&lt;br /&gt;
To download files from the remote directory to your computer, select one or more files on the right-hand side and then click the Download button.&lt;br /&gt;
&lt;br /&gt;
[[Image:WinSCP_download_remote_file.png|300px]]&lt;/div&gt;</summary>
		<author><name>Ekr597</name></author>
	</entry>
	<entry>
		<id>https://wiki.umiacs.umd.edu/umiacs/index.php?title=WinSCP&amp;diff=12702</id>
		<title>WinSCP</title>
		<link rel="alternate" type="text/html" href="https://wiki.umiacs.umd.edu/umiacs/index.php?title=WinSCP&amp;diff=12702"/>
		<updated>2025-06-24T23:43:48Z</updated>

		<summary type="html">&lt;p&gt;Ekr597: Created page with &amp;quot;WinSCP is a free Windows file transfer application used to copy data to and from a remote host.  WinSCP is installed on all UMIACS-supported Windows workstations.   For all other Windows hosts, WinSCP can be downloaded from https://winscp.net/eng/index.php.  ==Using WinSCP== When launching WinSCP, it will ask you to login to a remote host. * If you would like to upload files to OBJ, follow the login instructions on the  S3 Clients page.  To acce...&amp;quot;&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;WinSCP is a free Windows file transfer application used to copy data to and from a remote host.&lt;br /&gt;
&lt;br /&gt;
WinSCP is installed on all UMIACS-supported Windows workstations. &lt;br /&gt;
&lt;br /&gt;
For all other Windows hosts, WinSCP can be downloaded from https://winscp.net/eng/index.php.&lt;br /&gt;
&lt;br /&gt;
==Using WinSCP==&lt;br /&gt;
When launching WinSCP, it will ask you to log in to a remote host.&lt;br /&gt;
* If you would like to upload files to [[OBJ]], follow the login instructions on the [[S3Clients#WinSCP | S3 Clients]] page.&lt;br /&gt;
&lt;br /&gt;
To access a UMIACS directory:&lt;br /&gt;
* For the File protocol, select SFTP.&lt;br /&gt;
* For the host name, enter your [[Nexus#Access | Nexus submission node]].&lt;br /&gt;
* Enter your UMD username and password.&lt;br /&gt;
&lt;br /&gt;
[[Image:WinSCP_sftp_login.png|500px]]&lt;br /&gt;
&lt;br /&gt;
After logging in, WinSCP shows two directories. &lt;br /&gt;
&lt;br /&gt;
The left side shows a directory on the local computer and the right side shows a directory on the remote host.&lt;br /&gt;
&lt;br /&gt;
===Changing Local Directory===&lt;br /&gt;
To change the directory on your computer, select Open Directory from the Local menu.&lt;br /&gt;
&lt;br /&gt;
These are the files shown on the left-hand side.&lt;br /&gt;
&lt;br /&gt;
[[Image:WinSCP_change_local_dir.png|500px]]&lt;br /&gt;
&lt;br /&gt;
===Changing Remote Directory===&lt;br /&gt;
To change the UMIACS directory, select Open Directory from the Remote menu.&lt;br /&gt;
&lt;br /&gt;
These are the files shown on the right-hand side.&lt;br /&gt;
&lt;br /&gt;
[[Image:WinSCP_change_remote_dir.png|600px]]&lt;br /&gt;
&lt;br /&gt;
===Uploading Files===&lt;br /&gt;
To upload files from your computer to the remote directory, select one or more files on the left-hand side and then click the Upload button.&lt;br /&gt;
&lt;br /&gt;
[[Image:WinSCP_upload_local_file.png|300px]]&lt;br /&gt;
&lt;br /&gt;
===Downloading Files===&lt;br /&gt;
To download files from the remote directory to your computer, select one or more files on the right-hand side and then click the Download button.&lt;br /&gt;
&lt;br /&gt;
[[Image:WinSCP_download_remote_file.png|300px]]&lt;/div&gt;</summary>
		<author><name>Ekr597</name></author>
	</entry>
	<entry>
		<id>https://wiki.umiacs.umd.edu/umiacs/index.php?title=File:WinSCP_download_remote_file.png&amp;diff=12701</id>
		<title>File:WinSCP download remote file.png</title>
		<link rel="alternate" type="text/html" href="https://wiki.umiacs.umd.edu/umiacs/index.php?title=File:WinSCP_download_remote_file.png&amp;diff=12701"/>
		<updated>2025-06-24T23:42:28Z</updated>

		<summary type="html">&lt;p&gt;Ekr597: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;&lt;/div&gt;</summary>
		<author><name>Ekr597</name></author>
	</entry>
	<entry>
		<id>https://wiki.umiacs.umd.edu/umiacs/index.php?title=File:WinSCP_upload_local_file.png&amp;diff=12700</id>
		<title>File:WinSCP upload local file.png</title>
		<link rel="alternate" type="text/html" href="https://wiki.umiacs.umd.edu/umiacs/index.php?title=File:WinSCP_upload_local_file.png&amp;diff=12700"/>
		<updated>2025-06-24T23:39:30Z</updated>

		<summary type="html">&lt;p&gt;Ekr597: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;&lt;/div&gt;</summary>
		<author><name>Ekr597</name></author>
	</entry>
	<entry>
		<id>https://wiki.umiacs.umd.edu/umiacs/index.php?title=File:WinSCP_change_remote_dir.png&amp;diff=12697</id>
		<title>File:WinSCP change remote dir.png</title>
		<link rel="alternate" type="text/html" href="https://wiki.umiacs.umd.edu/umiacs/index.php?title=File:WinSCP_change_remote_dir.png&amp;diff=12697"/>
		<updated>2025-06-24T17:53:37Z</updated>

		<summary type="html">&lt;p&gt;Ekr597: Showing where to change the remote directory in WinSCP&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;== Summary ==&lt;br /&gt;
Showing where to change the remote directory in WinSCP&lt;/div&gt;</summary>
		<author><name>Ekr597</name></author>
	</entry>
	<entry>
		<id>https://wiki.umiacs.umd.edu/umiacs/index.php?title=File:WinSCP_change_local_dir.png&amp;diff=12696</id>
		<title>File:WinSCP change local dir.png</title>
		<link rel="alternate" type="text/html" href="https://wiki.umiacs.umd.edu/umiacs/index.php?title=File:WinSCP_change_local_dir.png&amp;diff=12696"/>
		<updated>2025-06-24T17:53:01Z</updated>

		<summary type="html">&lt;p&gt;Ekr597: Showing where to change the local directory in WinSCP&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;== Summary ==&lt;br /&gt;
Showing where to change the local directory in WinSCP&lt;/div&gt;</summary>
		<author><name>Ekr597</name></author>
	</entry>
	<entry>
		<id>https://wiki.umiacs.umd.edu/umiacs/index.php?title=File:WinSCP_sftp_login.png&amp;diff=12695</id>
		<title>File:WinSCP sftp login.png</title>
		<link rel="alternate" type="text/html" href="https://wiki.umiacs.umd.edu/umiacs/index.php?title=File:WinSCP_sftp_login.png&amp;diff=12695"/>
		<updated>2025-06-24T17:51:31Z</updated>

		<summary type="html">&lt;p&gt;Ekr597: WinSCP login prompt, logging in as SFTP&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;== Summary ==&lt;br /&gt;
WinSCP login prompt, logging in as SFTP&lt;/div&gt;</summary>
		<author><name>Ekr597</name></author>
	</entry>
	<entry>
		<id>https://wiki.umiacs.umd.edu/umiacs/index.php?title=S3Clients&amp;diff=12059</id>
		<title>S3Clients</title>
		<link rel="alternate" type="text/html" href="https://wiki.umiacs.umd.edu/umiacs/index.php?title=S3Clients&amp;diff=12059"/>
		<updated>2024-10-10T20:39:09Z</updated>

		<summary type="html">&lt;p&gt;Ekr597: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;Many popular S3 desktop clients can be used to access the [[OBJ | UMIACS Object Store]].  These tools complement the [[UMobj]] command line utilities and the built-in web interface by providing integration with the native file explorer on your desktop machine.&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Note&#039;&#039;&#039;: Many of these clients have features that are not supported by our Object Store in UMIACS.  One prominent example of this is permissions. We suggest you instead manage permissions from the [https://obj.umiacs.umd.edu/obj built-in web application] for the Object Store.&lt;br /&gt;
&lt;br /&gt;
=Graphical Clients=&lt;br /&gt;
&lt;br /&gt;
==Cyberduck==&lt;br /&gt;
https://cyberduck.io/&lt;br /&gt;
&lt;br /&gt;
This is a free Windows and Mac S3 browser (though it is nagware that asks for a donation). It supports our S3 Object Store using the &amp;quot;S3 (Amazon Simple Storage Service)&amp;quot; drop-down menu choice in the add bookmark dialog.&lt;br /&gt;
&lt;br /&gt;
[[Image:Cyberduck.png|400px]]&lt;br /&gt;
&lt;br /&gt;
The following fields are required:&lt;br /&gt;
* &#039;&#039;&#039;Server&#039;&#039;&#039; - This is your object store (&amp;lt;code&amp;gt;obj.umiacs.umd.edu&amp;lt;/code&amp;gt;)&lt;br /&gt;
* &#039;&#039;&#039;Access Key ID&#039;&#039;&#039; - This is your access key as provided to you in the object store&lt;br /&gt;
* &#039;&#039;&#039;Password&#039;&#039;&#039; - This is your secret key as provided to you in the object store&lt;br /&gt;
&lt;br /&gt;
You will be prompted for your secret key when you connect and may choose to save the password.&lt;br /&gt;
&lt;br /&gt;
==WinSCP==&lt;br /&gt;
* https://winscp.net/eng/index.php&lt;br /&gt;
&lt;br /&gt;
This is a free Windows file transfer application. It supports our S3 Object Store using the &amp;quot;Amazon S3&amp;quot; drop-down menu choice under File protocol when logging in.&lt;br /&gt;
&lt;br /&gt;
[[Image:WinSCPS3.png|400px]]&lt;br /&gt;
&lt;br /&gt;
The following fields are required:&lt;br /&gt;
* &#039;&#039;&#039;Host name&#039;&#039;&#039; - This is your object store (&amp;lt;code&amp;gt;obj.umiacs.umd.edu&amp;lt;/code&amp;gt;)&lt;br /&gt;
* &#039;&#039;&#039;Access key ID&#039;&#039;&#039; - This is your access key as provided to you in the object store&lt;br /&gt;
* &#039;&#039;&#039;Secret access key&#039;&#039;&#039; - This is your secret key as provided to you in the object store&lt;br /&gt;
&lt;br /&gt;
==Transmit==&lt;br /&gt;
* http://panic.com/transmit/&lt;br /&gt;
This is a paid file transfer application for Mac.  It supports our S3 Object Store using the &amp;quot;S3&amp;quot; menu choice after clicking the plus sign to add a favorite.&lt;br /&gt;
&lt;br /&gt;
[[Image:Transmit.png|400px]]&lt;br /&gt;
&lt;br /&gt;
The following fields are required:&lt;br /&gt;
* &#039;&#039;&#039;Server&#039;&#039;&#039; - This is your object store (&amp;lt;code&amp;gt;obj.umiacs.umd.edu&amp;lt;/code&amp;gt;)&lt;br /&gt;
* &#039;&#039;&#039;Access Key ID&#039;&#039;&#039; - This is your access key as provided to you in the object store&lt;br /&gt;
* &#039;&#039;&#039;Secret&#039;&#039;&#039; - This is your secret key as provided to you in the object store&lt;br /&gt;
&lt;br /&gt;
These settings can be saved as a favorite for easy access.  Transmit also allows you to mount your Obj buckets as local disks, which will support easy drag-and-drop of files.&lt;br /&gt;
&lt;br /&gt;
=Command Line Clients=&lt;br /&gt;
==s3cmd==&lt;br /&gt;
A command-line client for accessing S3-like services.&lt;br /&gt;
&lt;br /&gt;
* http://s3tools.org/s3cmd&lt;br /&gt;
&lt;br /&gt;
You need to create a configuration file at &amp;lt;code&amp;gt;~/.s3cfg&amp;lt;/code&amp;gt; like the following, with your ACCESS_KEY and SECRET_KEY substituted.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
[default]&lt;br /&gt;
access_key = &amp;lt;ACCESS_KEY&amp;gt;&lt;br /&gt;
host_base = obj.umiacs.umd.edu&lt;br /&gt;
host_bucket = %(bucket)s.obj.umiacs.umd.edu&lt;br /&gt;
secret_key = &amp;lt;SECRET_KEY&amp;gt;&lt;br /&gt;
use_https = True&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
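With that configuration in place, you can list your buckets or a bucket's contents (the bucket name &amp;quot;mybucket&amp;quot; below is only a placeholder for one of your own buckets):&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
$ s3cmd ls&lt;br /&gt;
$ s3cmd ls s3://mybucket&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;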
==mc==&lt;br /&gt;
The MinIO Client is a comprehensive single-binary (Go) command-line client for cloud-based storage services.&lt;br /&gt;
&lt;br /&gt;
* https://min.io/download&lt;br /&gt;
&lt;br /&gt;
You can run this client on supported UMIACS systems by loading our software [[Modules|module]].&lt;br /&gt;
&amp;lt;pre&amp;gt;module add mc&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
You will need to set up a host alias for Obj by running the following command (substituting your actual ACCESS_KEY and SECRET_KEY for your personal account or [[OBJ#LabGroups | LabGroup]] in the [https://obj.umiacs.umd.edu/obj/user/ Object Store]).&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
mc config host add obj http://obj.umiacs.umd.edu &amp;lt;ACCESS_KEY&amp;gt; &amp;lt;SECRET_KEY&amp;gt;&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
You can see what host(s) you have configured with the command &amp;lt;code&amp;gt;mc config host ls&amp;lt;/code&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
$ mc config host ls&lt;br /&gt;
...&lt;br /&gt;
obj&lt;br /&gt;
  URL       : http://obj.umiacs.umd.edu&lt;br /&gt;
  AccessKey : (redacted)&lt;br /&gt;
  SecretKey : (redacted)&lt;br /&gt;
  API       : s3v4&lt;br /&gt;
  Path      : auto&lt;br /&gt;
...&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
You can then use the normal &amp;lt;code&amp;gt;mc&amp;lt;/code&amp;gt; commands like the following to list the contents of a bucket.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
$ mc ls obj/iso&lt;br /&gt;
[2017-02-10 16:45:04 EST] 3.5GiB rhel-server-7.3-x86_64-dvd.iso&lt;br /&gt;
[2017-02-13 12:21:33 EST] 4.0GiB rhel-workstation-7.3-x86_64-dvd.iso&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
You can also search for specific files using file globs with the &amp;lt;code&amp;gt;find&amp;lt;/code&amp;gt; sub-command of &amp;lt;code&amp;gt;mc&amp;lt;/code&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
$ mc find derek/derek_support --name &amp;quot;*.log&amp;quot;&lt;br /&gt;
derek/derek_support/mds_20170918/ceph-mds.objmds01.log&lt;br /&gt;
derek/derek_support/satellite.log&lt;br /&gt;
derek/derek_support/umiacs-49168.log&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
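&lt;br /&gt;
You can also copy data to and from a bucket with the &amp;lt;code&amp;gt;cp&amp;lt;/code&amp;gt; sub-command.  The bucket name &amp;lt;code&amp;gt;mybucket&amp;lt;/code&amp;gt; below is only an example; substitute one of your own buckets.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
$ mc cp results.tar.gz obj/mybucket/&lt;br /&gt;
$ mc cp --recursive obj/mybucket/data/ ./data/&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;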
&lt;br /&gt;
The full MinIO Client documentation can be found here: https://min.io/docs/minio/linux/reference/minio-mc.html.&lt;/div&gt;</summary>
		<author><name>Ekr597</name></author>
	</entry>
	<entry>
		<id>https://wiki.umiacs.umd.edu/umiacs/index.php?title=Template:Nfshomes&amp;diff=11998</id>
		<title>Template:Nfshomes</title>
		<link rel="alternate" type="text/html" href="https://wiki.umiacs.umd.edu/umiacs/index.php?title=Template:Nfshomes&amp;diff=11998"/>
		<updated>2024-08-22T17:21:12Z</updated>

		<summary type="html">&lt;p&gt;Ekr597: Updated the command to not use quota&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;You have 30GB of home directory storage available at &amp;lt;code&amp;gt;/nfshomes/&amp;lt;username&amp;gt;&amp;lt;/code&amp;gt;.  It has both [[Snapshots]] and [[TSM | Backups]] enabled.&lt;br /&gt;
&lt;br /&gt;
Home directories are intended to store personal or configuration files only.  We encourage you not to share any data in your home directory.  Please use our [[GitLab]] infrastructure to host your code repositories.&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;NOTE&#039;&#039;&#039;: To check your quota on this directory, use the command &amp;lt;code&amp;gt;df -h ~&amp;lt;/code&amp;gt;.&lt;/div&gt;</summary>
		<author><name>Ekr597</name></author>
	</entry>
	<entry>
		<id>https://wiki.umiacs.umd.edu/umiacs/index.php?title=Nexus/GAMMA&amp;diff=11906</id>
		<title>Nexus/GAMMA</title>
		<link rel="alternate" type="text/html" href="https://wiki.umiacs.umd.edu/umiacs/index.php?title=Nexus/GAMMA&amp;diff=11906"/>
		<updated>2024-06-25T18:25:22Z</updated>

		<summary type="html">&lt;p&gt;Ekr597: Added project limits&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;The [https://gamma.umd.edu/ GAMMA] lab has a partition of GPU nodes available in the [[Nexus]]. Only GAMMA lab members are able to run non-interruptible jobs on these nodes.&lt;br /&gt;
&lt;br /&gt;
=Access=&lt;br /&gt;
You can always find out which submission hosts you have access to via the [[Nexus#Access]] page.  The GAMMA lab in particular has a special submission host that has additional local storage available.&lt;br /&gt;
* &amp;lt;code&amp;gt;nexusgamma00.umiacs.umd.edu&amp;lt;/code&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Please do not run anything on the login node. Always allocate yourself machines on the compute nodes (see instructions below) to run any job.&lt;br /&gt;
&lt;br /&gt;
=Quality of Service=&lt;br /&gt;
GAMMA users have access to all of the [[Nexus#Quality_of_Service_.28QoS.29 | standard job QoSes]] in the &amp;lt;code&amp;gt;gamma&amp;lt;/code&amp;gt; partition using the &amp;lt;code&amp;gt;gamma&amp;lt;/code&amp;gt; account.&lt;br /&gt;
&lt;br /&gt;
The additional job QoSes for the GAMMA partition specifically are:&lt;br /&gt;
* &amp;lt;code&amp;gt;huge-long&amp;lt;/code&amp;gt;: Allows for longer jobs using higher overall resources.&lt;br /&gt;
&lt;br /&gt;
Please note that the partition has a &amp;lt;code&amp;gt;GrpTRES&amp;lt;/code&amp;gt; limit of 100% of the available cores/RAM on the partition-specific nodes in aggregate plus 50% of the available cores/RAM on legacy## nodes in aggregate, so your job may need to wait if all available cores/RAM (or GPUs) are in use.&lt;br /&gt;
&lt;br /&gt;
=Hardware=&lt;br /&gt;
{| class=&amp;quot;wikitable sortable&amp;quot;&lt;br /&gt;
! Nodenames&lt;br /&gt;
! Type&lt;br /&gt;
! Quantity&lt;br /&gt;
! CPU cores per node&lt;br /&gt;
! Memory per node&lt;br /&gt;
! GPUs per node&lt;br /&gt;
|-&lt;br /&gt;
|gammagpu[00-04,06-09]&lt;br /&gt;
|A5000 GPU Node&lt;br /&gt;
|9&lt;br /&gt;
|32&lt;br /&gt;
|256GB&lt;br /&gt;
|8&lt;br /&gt;
|- &lt;br /&gt;
|gammagpu05&lt;br /&gt;
|A4000 GPU Node&lt;br /&gt;
|1&lt;br /&gt;
|32&lt;br /&gt;
|256GB&lt;br /&gt;
|8&lt;br /&gt;
|- class=&amp;quot;sortbottom&amp;quot;&lt;br /&gt;
|&lt;br /&gt;
!Total&lt;br /&gt;
|10&lt;br /&gt;
|320&lt;br /&gt;
|2560GB&lt;br /&gt;
|80&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
=Example=&lt;br /&gt;
From &amp;lt;code&amp;gt;nexusgamma00.umiacs.umd.edu&amp;lt;/code&amp;gt; you can run the following example to submit an interactive job.  Please note that you need to specify the &amp;lt;code&amp;gt;--partition&amp;lt;/code&amp;gt; and &amp;lt;code&amp;gt;--account&amp;lt;/code&amp;gt;.  Please refer to our [[SLURM]] documentation about how to further customize your submissions, including making a batch submission.  The following command will allocate 8 GPUs for 2 days in an interactive session.  Change parameters according to your needs.  We discourage use of srun and promote use of sbatch for fair use of GPUs.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
$ srun --pty --gres=gpu:8 --account=gamma --partition=gamma --qos=huge-long bash&lt;br /&gt;
$ hostname&lt;br /&gt;
gammagpu01.umiacs.umd.edu&lt;br /&gt;
$ nvidia-smi -L&lt;br /&gt;
GPU 0: NVIDIA RTX A5000 (UUID: GPU-cdfb2e0c-d69f-354b-02f4-15161dc7fa66)&lt;br /&gt;
GPU 1: NVIDIA RTX A5000 (UUID: GPU-be53e7a1-b8fd-7089-3cac-7a2fbf4ec7dd)&lt;br /&gt;
GPU 2: NVIDIA RTX A5000 (UUID: GPU-774efbb1-d7ec-a0bb-e992-da9d1fa6b193)&lt;br /&gt;
GPU 3: NVIDIA RTX A5000 (UUID: GPU-d1692181-c7de-e273-5f95-53ad381614c3)&lt;br /&gt;
GPU 4: NVIDIA RTX A5000 (UUID: GPU-ba51fd6c-37bf-1b95-5f68-987c18a6292a)&lt;br /&gt;
GPU 5: NVIDIA RTX A5000 (UUID: GPU-c1224a2a-4a3b-ff16-0308-4f36205b9859)&lt;br /&gt;
GPU 6: NVIDIA RTX A5000 (UUID: GPU-8d20d6cd-abf5-2630-ab88-6bba438c55fe)&lt;br /&gt;
GPU 7: NVIDIA RTX A5000 (UUID: GPU-93170910-5d94-6da5-8a24-f561d7da1e2d)&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
You can also use &amp;lt;code&amp;gt;sbatch&amp;lt;/code&amp;gt; to submit your job.  Here are two examples of how to do that.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
$ sbatch --gres=gpu:8 --account=gamma --partition=gamma --qos=huge-long --time=1-23:00:00 script.sh&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
OR&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
$ sbatch script.sh&lt;br /&gt;
&lt;br /&gt;
# script.sh&lt;br /&gt;
&lt;br /&gt;
#!/bin/bash&lt;br /&gt;
#SBATCH --gres=gpu:8&lt;br /&gt;
#SBATCH --account=gamma&lt;br /&gt;
#SBATCH --partition=gamma&lt;br /&gt;
#SBATCH --qos=huge-long&lt;br /&gt;
#SBATCH --time=1-23:00:00&lt;br /&gt;
&lt;br /&gt;
python your_file.py&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=Storage=&lt;br /&gt;
There are three types of user storage available in GAMMA:&lt;br /&gt;
* Home directories&lt;br /&gt;
* Project directories&lt;br /&gt;
* Scratch directories&lt;br /&gt;
&lt;br /&gt;
There is also read-only storage available for Dataset directories.&lt;br /&gt;
&lt;br /&gt;
GAMMA users can also request [[Nexus#Project_Allocations | Nexus project allocations]].&lt;br /&gt;
&lt;br /&gt;
===Home Directories===&lt;br /&gt;
{{Nfshomes}}&lt;br /&gt;
&lt;br /&gt;
===Project Directories===&lt;br /&gt;
You can request project based allocations for up to 8TB and up to 180 days with approval from a GAMMA faculty member.  &lt;br /&gt;
&lt;br /&gt;
To request an allocation, please [[HelpDesk | contact staff]] and include the faculty member(s) who approved the project in the conversation.  Please include the following details:&lt;br /&gt;
* Project Name (short)&lt;br /&gt;
* Description&lt;br /&gt;
* Size (1TB, 2TB, etc.)&lt;br /&gt;
* Length in days (30 days, 90 days, etc.)&lt;br /&gt;
* Other user(s) that need to access the allocation, if any&lt;br /&gt;
&lt;br /&gt;
These allocations will be available from &#039;&#039;&#039;/fs/gamma-projects&#039;&#039;&#039; under a name that you provide when you request the allocation.  Near the end of the allocation period, staff will contact you and ask if you would like to renew the allocation (requires re-approval from a GAMMA faculty member).&lt;br /&gt;
* If you are no longer in need of the storage allocation, you will need to relocate all desired data within two weeks of the end of the allocation period.  Staff will then remove the allocation.&lt;br /&gt;
* If you do not respond to staff&#039;s request by the end of the allocation period, staff will make the allocation temporarily inaccessible.&lt;br /&gt;
** If you do respond asking for renewal but the original faculty approver does not respond within two weeks of the end of the allocation period, staff will also make the allocation temporarily inaccessible.&lt;br /&gt;
** If one month from the end of the allocation period is reached without both you and the faculty approver responding, staff will remove the allocation.&lt;br /&gt;
&lt;br /&gt;
This data is backed up nightly.&lt;br /&gt;
&lt;br /&gt;
===Scratch Directories===&lt;br /&gt;
Scratch data has no data protection: there are no snapshots and the data is not backed up. &lt;br /&gt;
There are two types of scratch directories:&lt;br /&gt;
* Network scratch directory&lt;br /&gt;
* Local scratch directories&lt;br /&gt;
&lt;br /&gt;
====Network Scratch Directory====&lt;br /&gt;
You are allocated 100GB of scratch space via NFS from &amp;lt;code&amp;gt;/gammascratch/$username&amp;lt;/code&amp;gt;.  &#039;&#039;&#039;It is not backed up or protected in any way.&#039;&#039;&#039;  &lt;br /&gt;
&lt;br /&gt;
This directory is &#039;&#039;&#039;automounted&#039;&#039;&#039;, so you may not see your directory if you run &amp;lt;code&amp;gt;ls /gammascratch&amp;lt;/code&amp;gt;; it will be mounted when you &amp;lt;code&amp;gt;cd&amp;lt;/code&amp;gt; into it.&lt;br /&gt;
&lt;br /&gt;
You may request a permanent increase of up to 200GB total space without any faculty approval by [[HelpDesk | contacting staff]].  If you need space beyond 200GB, you will need faculty approval. &lt;br /&gt;
&lt;br /&gt;
This file system is available on all submission, data management, and computational nodes within the cluster.&lt;br /&gt;
&lt;br /&gt;
====Local Scratch Directories====&lt;br /&gt;
These file systems are not available over [[NFS]] and &#039;&#039;&#039;there are no backups or snapshots available&#039;&#039;&#039; for these file systems.&lt;br /&gt;
&lt;br /&gt;
* Each computational node that you can schedule compute jobs on has one or more local scratch directories.  These are always named &amp;lt;code&amp;gt;/scratch0&amp;lt;/code&amp;gt;, &amp;lt;code&amp;gt;/scratch1&amp;lt;/code&amp;gt;, etc.  These directories are local to each node, i.e., the &amp;lt;code&amp;gt;/scratch0&amp;lt;/code&amp;gt; directories on two different nodes are completely separate.&lt;br /&gt;
** These directories are almost always more performant than any other storage available to the job.  However, you must stage data to these directories within the confines of your jobs and stage the data out before the end of your jobs.&lt;br /&gt;
** These local scratch directories have a tmpwatch job which will &#039;&#039;&#039;delete unaccessed data after 90 days&#039;&#039;&#039;, scheduled via maintenance jobs to run once a month during our monthly maintenance windows.  Again, please make sure you secure any data you write to these directories at the end of your job.&lt;br /&gt;
* GAMMA has invested in a 20TB NVMe scratch file system on &amp;lt;code&amp;gt;nexusgamma00.umiacs.umd.edu&amp;lt;/code&amp;gt; that is available as &amp;lt;code&amp;gt;/scratch1&amp;lt;/code&amp;gt;.  To utilize this space, you will need to copy data to and from it over SSH from a compute node.  To make this easier, you may want to set up [[SSH]] keys that will allow you to copy data without being prompted for passwords. &lt;br /&gt;
** The &amp;lt;code&amp;gt;/scratch1&amp;lt;/code&amp;gt; directory on &amp;lt;code&amp;gt;nexusgamma00.umiacs.umd.edu&amp;lt;/code&amp;gt; doesn&#039;t have a tmpwatch. The files in this directory need to be manually removed once they are no longer needed.&lt;br /&gt;
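&lt;br /&gt;
For example, from a compute node you could stage data from the submission host&#039;s NVMe scratch into a local scratch directory and back with &amp;lt;code&amp;gt;scp&amp;lt;/code&amp;gt; (the paths shown are illustrative):&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
$ scp -r nexusgamma00.umiacs.umd.edu:/scratch1/$USER/dataset /scratch0/$USER/&lt;br /&gt;
$ scp -r /scratch0/$USER/results nexusgamma00.umiacs.umd.edu:/scratch1/$USER/&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;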
&lt;br /&gt;
===Datasets===&lt;br /&gt;
We have read-only dataset storage available at &amp;lt;code&amp;gt;/fs/gamma-datasets&amp;lt;/code&amp;gt;.  If there are datasets that you would like to see curated and available, please see [[Datasets | this page]].&lt;br /&gt;
&lt;br /&gt;
The list of GAMMA datasets we currently host can be viewed [https://info.umiacs.umd.edu/datasets/list/?q=GAMMA here].&lt;/div&gt;</summary>
		<author><name>Ekr597</name></author>
	</entry>
	<entry>
		<id>https://wiki.umiacs.umd.edu/umiacs/index.php?title=Nexus/Vulcan&amp;diff=11860</id>
		<title>Nexus/Vulcan</title>
		<link rel="alternate" type="text/html" href="https://wiki.umiacs.umd.edu/umiacs/index.php?title=Nexus/Vulcan&amp;diff=11860"/>
		<updated>2024-06-03T18:07:16Z</updated>

		<summary type="html">&lt;p&gt;Ekr597: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;The Vulcan standalone cluster&#039;s compute nodes have folded into [[Nexus]] as of the scheduled [[MonthlyMaintenanceWindow | maintenance window]] for August 2023 (Thursday 08/17/2023, 5-8pm).&lt;br /&gt;
&lt;br /&gt;
The Nexus cluster already has a large pool of compute resources made possible through college-level funding for UMIACS and CSD faculty. Details on common nodes already in the cluster (Tron partition) can be found [[Nexus/Tron | here]].&lt;br /&gt;
&lt;br /&gt;
Please [[HelpDesk | contact staff]] with any questions or concerns.&lt;br /&gt;
&lt;br /&gt;
==Usage==&lt;br /&gt;
You can [[SSH]] to &amp;lt;code&amp;gt;nexusvulcan.umiacs.umd.edu&amp;lt;/code&amp;gt; to log in to a submission host.&lt;br /&gt;
&lt;br /&gt;
If you store something in a local directory (/tmp, /scratch0) on one of the two submission hosts, you will need to connect to that same submission host to access it later. The actual submission hosts are:&lt;br /&gt;
* &amp;lt;code&amp;gt;nexusvulcan00.umiacs.umd.edu&amp;lt;/code&amp;gt;&lt;br /&gt;
* &amp;lt;code&amp;gt;nexusvulcan01.umiacs.umd.edu&amp;lt;/code&amp;gt;&lt;br /&gt;
&lt;br /&gt;
All partitions, QoSes, and account names from the standalone Vulcan cluster have been moved over to Nexus. However, please note that &amp;lt;code&amp;gt;vulcan-&amp;lt;/code&amp;gt; is prepended to all of the values that were present in the standalone Vulcan cluster to distinguish them from existing values in Nexus. The lone exception is the base account that was named &amp;lt;code&amp;gt;vulcan&amp;lt;/code&amp;gt; in the standalone cluster (it is also named just &amp;lt;code&amp;gt;vulcan&amp;lt;/code&amp;gt; in Nexus).&lt;br /&gt;
&lt;br /&gt;
Here are some before/after examples of job submission with various parameters:&lt;br /&gt;
&lt;br /&gt;
{| class=&amp;quot;wikitable&amp;quot;&lt;br /&gt;
! Standalone Vulcan cluster submission command&lt;br /&gt;
! Nexus cluster submission command&lt;br /&gt;
|-&lt;br /&gt;
|&amp;lt;code&amp;gt;srun --partition=dpart --qos=medium --account=abhinav --gres=gpu:gtx1080ti:2 --pty bash&amp;lt;/code&amp;gt;&lt;br /&gt;
|&amp;lt;code&amp;gt;srun --partition=vulcan-dpart --qos=vulcan-medium --account=vulcan-abhinav --gres=gpu:gtx1080ti:2 --pty bash&amp;lt;/code&amp;gt;&lt;br /&gt;
|-&lt;br /&gt;
|&amp;lt;code&amp;gt;srun --partition=cpu --qos=cpu --pty bash&amp;lt;/code&amp;gt;&lt;br /&gt;
|&amp;lt;code&amp;gt;srun --partition=vulcan-cpu --qos=vulcan-cpu --account=vulcan --pty bash&amp;lt;/code&amp;gt;&lt;br /&gt;
|-&lt;br /&gt;
|&amp;lt;code&amp;gt;srun --partition=scavenger --qos=scavenger --account=vulcan --gres=gpu:4 --pty bash&amp;lt;/code&amp;gt;&lt;br /&gt;
|&amp;lt;code&amp;gt;srun --partition=vulcan-scavenger --qos=vulcan-scavenger --account=vulcan --gres=gpu:4 --pty bash&amp;lt;/code&amp;gt;&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
Vulcan users (exclusively) can schedule non-interruptible jobs on Vulcan nodes with any non-scavenger job parameters. Please note that the &amp;lt;code&amp;gt;vulcan-dpart&amp;lt;/code&amp;gt; partition has a &amp;lt;code&amp;gt;GrpTRES&amp;lt;/code&amp;gt; limit of 100% of the available cores/RAM on all vulcan## nodes in aggregate plus 50% of the available cores/RAM on legacy## nodes in aggregate, so your job may need to wait if all available cores/RAM (or GPUs) are in use. It also has a max submission limit of 500 jobs per user simultaneously so as to not overload the cluster. This is codified by the partition QoS named &#039;&#039;&#039;vulcan&#039;&#039;&#039;.&lt;br /&gt;
&lt;br /&gt;
Please note that the Vulcan compute nodes are also in the institute-wide &amp;lt;code&amp;gt;scavenger&amp;lt;/code&amp;gt; partition in Nexus. Vulcan users still have scavenging priority over these nodes via the &amp;lt;code&amp;gt;vulcan-scavenger&amp;lt;/code&amp;gt; partition (i.e., all &amp;lt;code&amp;gt;vulcan-&amp;lt;/code&amp;gt; partition jobs (other than &amp;lt;code&amp;gt;vulcan-scavenger&amp;lt;/code&amp;gt;) can preempt both &amp;lt;code&amp;gt;vulcan-scavenger&amp;lt;/code&amp;gt; and &amp;lt;code&amp;gt;scavenger&amp;lt;/code&amp;gt; partition jobs, and &amp;lt;code&amp;gt;vulcan-scavenger&amp;lt;/code&amp;gt; partition jobs can preempt &amp;lt;code&amp;gt;scavenger&amp;lt;/code&amp;gt; partition jobs).&lt;br /&gt;
&lt;br /&gt;
==Nodes==&lt;br /&gt;
There are currently 45 [[Nexus/Vulcan/GPUs | GPU nodes]] available running a mixture of NVIDIA RTX A6000, NVIDIA RTX A5000, NVIDIA RTX A4000, NVIDIA Quadro P6000, NVIDIA GeForce GTX 1080 Ti, NVIDIA GeForce RTX 2080 Ti, and NVIDIA Tesla P100 cards. There are also 4 CPU-only nodes available.&lt;br /&gt;
&lt;br /&gt;
All nodes are scheduled with the [[SLURM]] resource manager.&lt;br /&gt;
&lt;br /&gt;
==Partitions==&lt;br /&gt;
There are three partitions available to general Vulcan [[SLURM]] users.  You must specify a partition when submitting your job.&lt;br /&gt;
&lt;br /&gt;
* &#039;&#039;&#039;vulcan-dpart&#039;&#039;&#039; - This is the default partition. Job allocations are guaranteed. Only nodes with GPUs from architectures before NVIDIA&#039;s [https://www.nvidia.com/en-us/data-center/ampere-architecture/ Ampere architecture] are included in this partition.&lt;br /&gt;
* &#039;&#039;&#039;vulcan-scavenger&#039;&#039;&#039; - This is the alternate partition that allows jobs longer run times and more resources but is preemptable when jobs in other &amp;lt;code&amp;gt;vulcan-&amp;lt;/code&amp;gt; partitions are ready to be scheduled.&lt;br /&gt;
* &#039;&#039;&#039;vulcan-cpu&#039;&#039;&#039; - This partition is for CPU focused jobs. Job allocations are guaranteed.&lt;br /&gt;
&lt;br /&gt;
There are a few additional partitions available to subsets of Vulcan users based on specific requirements.&lt;br /&gt;
&lt;br /&gt;
* &#039;&#039;&#039;vulcan-ampere&#039;&#039;&#039; - This partition contains nodes with GPUs from NVIDIA&#039;s [https://www.nvidia.com/en-us/data-center/ampere-architecture/ Ampere architecture]. Job allocations are guaranteed. &lt;br /&gt;
*: As of Thursday 02/29/2024 at 12pm, there is a 4 hour time limit on interactive jobs in this partition. If you need to run longer jobs, you will need to modify your workflow into a job that can be submitted as a batch script.&lt;br /&gt;
*: As of Thursday 03/21/2024 at 5pm, there is a limit of 4 CPUs and 48G memory maximum per GPU requested by a job. If you need to run jobs with more CPUs/memory, you will either need to request more GPUs in the job or use a different partition.&lt;br /&gt;
&lt;br /&gt;
: Submission is restricted to the Slurm [[#Accounts | accounts]] of the faculty who invested in these nodes:&lt;br /&gt;
:* Abhinav Shrivastava (vulcan-abhinav)&lt;br /&gt;
:* Jia-Bin Huang (vulcan-jbhuang)&lt;br /&gt;
:* Christopher Metzler (vulcan-metzler)&lt;br /&gt;
:* Matthias Zwicker (vulcan-zwicker)&lt;br /&gt;
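&lt;br /&gt;
For example, a job in this partition requesting 2 GPUs could request at most 8 CPUs and 96G of memory under the per-GPU limits above (the account and parameters shown are illustrative):&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
$ srun --partition=vulcan-ampere --account=vulcan-abhinav --gres=gpu:2 --cpus-per-task=8 --mem=96G --time=04:00:00 --pty bash&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;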
&lt;br /&gt;
==Accounts==&lt;br /&gt;
Vulcan has a base SLURM account &amp;lt;code&amp;gt;vulcan&amp;lt;/code&amp;gt; which has a modest number of guaranteed billing resources available to all cluster users at any given time.  Other faculty that have invested in Vulcan compute infrastructure have an additional account provided to their sponsored accounts on the cluster.&lt;br /&gt;
&lt;br /&gt;
If you do not specify an account when submitting your job, you will receive the &#039;&#039;&#039;vulcan&#039;&#039;&#039; account.  If your faculty sponsor has their own account, it is recommended to use that account for job submission.&lt;br /&gt;
&lt;br /&gt;
The current faculty accounts are:&lt;br /&gt;
* vulcan-abhinav&lt;br /&gt;
* vulcan-djacobs&lt;br /&gt;
* vulcan-jbhuang&lt;br /&gt;
* vulcan-lsd&lt;br /&gt;
* vulcan-metzler&lt;br /&gt;
* vulcan-rama&lt;br /&gt;
* vulcan-ramani&lt;br /&gt;
* vulcan-yaser&lt;br /&gt;
* vulcan-zwicker&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
$ sacctmgr show account format=account%20,description%30,organization%10&lt;br /&gt;
             Account                          Descr        Org&lt;br /&gt;
-------------------- ------------------------------ ----------&lt;br /&gt;
                 ...                            ...        ...&lt;br /&gt;
              vulcan                         vulcan     vulcan&lt;br /&gt;
      vulcan-abhinav   vulcan - abhinav shrivastava     vulcan&lt;br /&gt;
      vulcan-djacobs          vulcan - david jacobs     vulcan&lt;br /&gt;
      vulcan-jbhuang         vulcan - jia-bin huang     vulcan&lt;br /&gt;
          vulcan-lsd           vulcan - larry davis     vulcan&lt;br /&gt;
      vulcan-metzler         vulcan - chris metzler     vulcan&lt;br /&gt;
         vulcan-rama        vulcan - rama chellappa     vulcan&lt;br /&gt;
       vulcan-ramani     vulcan - ramani duraiswami     vulcan&lt;br /&gt;
        vulcan-yaser          vulcan - yaser yacoob     vulcan&lt;br /&gt;
      vulcan-zwicker      vulcan - matthias zwicker     vulcan&lt;br /&gt;
                 ...                            ...        ...&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Faculty can manage this list of users via our [https://intranet.umiacs.umd.edu/directory/secgroup/ Directory application] in the Security Groups section.  The security group that controls access has the prefix &amp;lt;code&amp;gt;vulcan_&amp;lt;/code&amp;gt; and then the faculty username.  It will also list &amp;lt;code&amp;gt;slurm://nexusctl.umiacs.umd.edu&amp;lt;/code&amp;gt; as the associated URI.&lt;br /&gt;
&lt;br /&gt;
You can check your account associations by running the &#039;&#039;&#039;show_assoc&#039;&#039;&#039; command.  Please [[HelpDesk | contact staff]] and include your faculty member in the conversation if you do not see the appropriate association. &lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
$ show_assoc&lt;br /&gt;
      User          Account MaxJobs       GrpTRES                                                                              QOS&lt;br /&gt;
---------- ---------------- ------- ------------- --------------------------------------------------------------------------------&lt;br /&gt;
       ...              ...     ...                                                                                            ...&lt;br /&gt;
   abhinav           vulcan      48                                       vulcan-cpu,vulcan-default,vulcan-medium,vulcan-scavenger&lt;br /&gt;
   abhinav   vulcan-abhinav      48                           vulcan-cpu,vulcan-default,vulcan-high,vulcan-medium,vulcan-scavenger&lt;br /&gt;
       ...              ...     ...                                                                                            ...&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
You can also see the total number of Trackable Resources (TRES) allowed for each account by running the following command. Please make sure you specify the appropriate account. As shown below, there is a concurrent limit of 64 total GPUs for all users not in a contributing faculty group.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
$ sacctmgr show assoc account=vulcan format=user,account,qos,grptres&lt;br /&gt;
      User    Account                  QOS       GrpTRES&lt;br /&gt;
---------- ---------- -------------------- -------------&lt;br /&gt;
               vulcan                        gres/gpu=64&lt;br /&gt;
                  ...                                ...&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==QoS==&lt;br /&gt;
You need to decide which QoS to submit with; the QoS places certain restrictions on your job.  If you do not specify a QoS when submitting your job using the &amp;lt;code&amp;gt;--qos&amp;lt;/code&amp;gt; parameter, you will receive the &#039;&#039;&#039;vulcan-default&#039;&#039;&#039; QoS, assuming you are using a Vulcan account.&lt;br /&gt;
&lt;br /&gt;
The following &amp;lt;code&amp;gt;sacctmgr&amp;lt;/code&amp;gt; command will list the current QoSes.  Either the &amp;lt;code&amp;gt;vulcan-default&amp;lt;/code&amp;gt;, &amp;lt;code&amp;gt;vulcan-medium&amp;lt;/code&amp;gt;, or &amp;lt;code&amp;gt;vulcan-high&amp;lt;/code&amp;gt; QoS is required for the vulcan-dpart partition.  Please note that only faculty accounts (see above) have access to the &amp;lt;code&amp;gt;vulcan-high&amp;lt;/code&amp;gt; QoS.&lt;br /&gt;
&lt;br /&gt;
The following example shows the current limits that the QoSes have. The output is truncated to show only the relevant Vulcan QoSes.&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
$ show_qos&lt;br /&gt;
                Name     MaxWall                        MaxTRES MaxJobsPU                      MaxTRESPU &lt;br /&gt;
-------------------- ----------- ------------------------------ --------- ------------------------------ &lt;br /&gt;
...&lt;br /&gt;
          vulcan-cpu  2-00:00:00                cpu=1024,mem=4T         4                                &lt;br /&gt;
      vulcan-default  7-00:00:00       cpu=4,gres/gpu=1,mem=32G         2                                &lt;br /&gt;
       vulcan-exempt  7-00:00:00     cpu=32,gres/gpu=8,mem=256G         2                                &lt;br /&gt;
         vulcan-high  1-12:00:00     cpu=16,gres/gpu=4,mem=128G         2                                &lt;br /&gt;
        vulcan-janus  3-00:00:00    cpu=32,gres/gpu=10,mem=256G                                          &lt;br /&gt;
       vulcan-medium  3-00:00:00       cpu=8,gres/gpu=2,mem=64G         2                                &lt;br /&gt;
       vulcan-sailon  3-00:00:00     cpu=32,gres/gpu=8,mem=256G                              gres/gpu=48 &lt;br /&gt;
    vulcan-scavenger  3-00:00:00     cpu=32,gres/gpu=8,mem=256G                                          &lt;br /&gt;
...&lt;br /&gt;
&lt;br /&gt;
$ show_partition_qos&lt;br /&gt;
                Name MaxSubmitPU                      MaxTRESPU              GrpTRES &lt;br /&gt;
-------------------- ----------- ------------------------------ -------------------- &lt;br /&gt;
...&lt;br /&gt;
              vulcan         500                                 cpu=1760,mem=15824G &lt;br /&gt;
    vulcan-scavenger         500                                                     &lt;br /&gt;
...&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==Storage==&lt;br /&gt;
Vulcan has the following storage available.  Please also review UMIACS [[LocalDataStorage | Local Data Storage]] policies including any volume that is labeled as scratch.&lt;br /&gt;
&lt;br /&gt;
Vulcan users can also request [[Nexus#Project_Allocations | Nexus project allocations]].&lt;br /&gt;
&lt;br /&gt;
===Home Directories===&lt;br /&gt;
{{Nfshomes}}&lt;br /&gt;
&lt;br /&gt;
===Scratch Directories===&lt;br /&gt;
Scratch data has no data protection: there are no snapshots and the data is not backed up. There are two types of scratch directories in the Vulcan compute infrastructure:&lt;br /&gt;
* Network scratch directory&lt;br /&gt;
* Local scratch directories&lt;br /&gt;
&lt;br /&gt;
====Network Scratch Directory====&lt;br /&gt;
You have 300GB of scratch storage available at &amp;lt;code&amp;gt;/vulcanscratch/&amp;lt;username&amp;gt;&amp;lt;/code&amp;gt;.  &#039;&#039;&#039;It is not backed up or protected in any way.&#039;&#039;&#039;  This directory is &#039;&#039;&#039;automounted&#039;&#039;&#039; so you will need to &amp;lt;code&amp;gt;cd&amp;lt;/code&amp;gt; into the directory or request/specify a fully qualified file path to access this.&lt;br /&gt;
&lt;br /&gt;
You may request a temporary increase of up to 500GB total space for a maximum of 120 days without any faculty approval by [[HelpDesk | contacting staff]].  Once the temporary increase period is over, you will be contacted and given a one-week window of opportunity to clean and secure your data before staff will forcibly remove data to get your space back under 300GB.  If you need space beyond 500GB or for longer than 120 days, you will need faculty approval and/or a project directory.&lt;br /&gt;
&lt;br /&gt;
This file system is available on all submission, data management, and computational nodes within the cluster.&lt;br /&gt;
&lt;br /&gt;
====Local Scratch Directories====&lt;br /&gt;
Each computational node that you can schedule compute jobs on has one or more local scratch directories.  These are always named &amp;lt;code&amp;gt;/scratch0&amp;lt;/code&amp;gt;, &amp;lt;code&amp;gt;/scratch1&amp;lt;/code&amp;gt;, etc.  These are almost always more performant than any other storage available to the job.  However, you must stage your data within the confines of your job and stage the data out before the end of your job.&lt;br /&gt;
&lt;br /&gt;
These local scratch directories have a tmpwatch job which will &#039;&#039;&#039;delete unaccessed data after 90 days&#039;&#039;&#039;, scheduled via maintenance jobs to run once a month at 1am.  Different nodes will run the maintenance jobs on different days of the month to ensure the cluster is still highly available at all times.  Please make sure you secure any data you write to these directories at the end of your job.&lt;br /&gt;
&lt;br /&gt;
===Datasets===&lt;br /&gt;
We have read-only dataset storage available at &amp;lt;code&amp;gt;/fs/vulcan-datasets&amp;lt;/code&amp;gt;.  If there are datasets that you would like to see curated and available, please see [[Datasets | this page]].&lt;br /&gt;
&lt;br /&gt;
The list of Vulcan datasets we currently host can be viewed [https://info.umiacs.umd.edu/datasets/list/?q=Vulcan here].&lt;br /&gt;
&lt;br /&gt;
===Project Storage===&lt;br /&gt;
Users within the Vulcan compute infrastructure can request project based allocations for up to 10TB for up to 180 days by [[HelpDesk | contacting staff]] with approval from the Vulcan faculty manager (Dr. Shrivastava).  These allocations will be available from &amp;lt;code&amp;gt;/fs/vulcan-projects&amp;lt;/code&amp;gt; and &amp;lt;code&amp;gt;/fs/cfar-projects&amp;lt;/code&amp;gt; under a name that you provide when you request the allocation.  Near the end of the allocation period, staff will contact you and ask if you would like to renew the allocation for up to another 180 days (requires re-approval from Dr. Shrivastava).&lt;br /&gt;
* If you are no longer in need of the storage allocation, you will need to relocate all desired data within two weeks of the end of the allocation period.  Staff will then remove the allocation.&lt;br /&gt;
* If you do not respond to staff&#039;s request by the end of the allocation period, staff will make the allocation temporarily inaccessible.&lt;br /&gt;
** If you do respond asking for renewal but the original faculty approver does not respond within two weeks of the end of the allocation period, staff will also make the allocation temporarily inaccessible.&lt;br /&gt;
** If one month from the end of the allocation period is reached without both you and the faculty approver responding, staff will remove the allocation.&lt;br /&gt;
&lt;br /&gt;
Project storage is fully protected.  It has [[Snapshots | snapshots]] enabled and is [[NightlyBackups | backed up nightly]].&lt;br /&gt;
&lt;br /&gt;
===Object Storage===&lt;br /&gt;
All Vulcan users can request project allocations in the [https://obj.umiacs.umd.edu/obj/help UMIACS Object Store]. Please [[HelpDesk | contact staff]] with a short project name and the amount of storage you will need to get started.&lt;br /&gt;
&lt;br /&gt;
To access this storage, you&#039;ll need to use an [[S3Clients | S3 client]] or our [[UMobj]] command line utilities.&lt;br /&gt;
&lt;br /&gt;
An example of how to use the umobj command line utilities can be found [[UMobj/Example | here]].  A full set of documentation for the utilities can be found on the [https://gitlab.umiacs.umd.edu/staff/umobj/blob/master/README.md#umobj umobj Gitlab page].&lt;br /&gt;
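&lt;br /&gt;
For illustration, a generic S3 client such as the AWS CLI can also target the Object Store by overriding its endpoint URL.  The bucket name below is a placeholder, and the endpoint URL is an assumption; please confirm the correct endpoint on the Object Store help page.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
# List the contents of a bucket named myproject (placeholder name)&lt;br /&gt;
$ aws s3 ls s3://myproject --endpoint-url https://obj.umiacs.umd.edu&lt;br /&gt;
&lt;br /&gt;
# Copy a local file into the bucket&lt;br /&gt;
$ aws s3 cp results.tar.gz s3://myproject/ --endpoint-url https://obj.umiacs.umd.edu&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;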
&lt;br /&gt;
==Migration==&lt;br /&gt;
===Home Directories===&lt;br /&gt;
The [[Nexus]] uses [[NFShomes]] home directories. If your UMIACS account was created before February 22nd, 2023, you were using &amp;lt;code&amp;gt;/cfarhomes/&amp;lt;username&amp;gt;&amp;lt;/code&amp;gt; as your home directory on the standalone Vulcan cluster. While &amp;lt;code&amp;gt;/cfarhomes&amp;lt;/code&amp;gt; is available on Nexus, your shell initialization scripts from it will not automatically load. Please copy over anything you need to your &amp;lt;code&amp;gt;/nfshomes/&amp;lt;username&amp;gt;&amp;lt;/code&amp;gt; directory at your earliest convenience, as &amp;lt;code&amp;gt;/cfarhomes&amp;lt;/code&amp;gt; will be retired in a two-phase process:&lt;br /&gt;
* Fri 11/17/2023, 5pm: cfarhomes directories are made read-only&lt;br /&gt;
* Thu 12/21/2023, 5-8pm ([[MonthlyMaintenanceWindow |monthly maintenance window]]): cfarhomes directories are taken offline&lt;/div&gt;</summary>
		<author><name>Ekr597</name></author>
	</entry>
	<entry>
		<id>https://wiki.umiacs.umd.edu/umiacs/index.php?title=Nexus/CML&amp;diff=11566</id>
		<title>Nexus/CML</title>
		<link rel="alternate" type="text/html" href="https://wiki.umiacs.umd.edu/umiacs/index.php?title=Nexus/CML&amp;diff=11566"/>
		<updated>2024-02-07T18:27:08Z</updated>

		<summary type="html">&lt;p&gt;Ekr597: Changed the starting /cmlscratch to 200GB&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;The [[CML]] standalone cluster&#039;s compute nodes have folded into [[Nexus]] as of the scheduled [[MonthlyMaintenanceWindow | maintenance window]] for August 2023 (Thursday 08/17/2023, 5-8pm).&lt;br /&gt;
&lt;br /&gt;
The Nexus cluster already has a large pool of compute resources made possible through college-level funding for UMIACS and CSD faculty. Details on common nodes already in the cluster (Tron partition) can be found [[Nexus/Tron | here]].&lt;br /&gt;
&lt;br /&gt;
Please [[HelpDesk | contact staff]] with any questions or concerns.&lt;br /&gt;
&lt;br /&gt;
==Usage==&lt;br /&gt;
The Nexus cluster submission nodes that are allocated to CML are &amp;lt;code&amp;gt;nexuscml00.umiacs.umd.edu&amp;lt;/code&amp;gt; and &amp;lt;code&amp;gt;nexuscml01.umiacs.umd.edu&amp;lt;/code&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
All partitions, QoSes, and account names from the standalone CML cluster have been moved over to Nexus. However, please note that &amp;lt;code&amp;gt;cml-&amp;lt;/code&amp;gt; is prepended to all of the values that were present in the standalone CML cluster to distinguish them from existing values in Nexus. The lone exception is the base account that was named &amp;lt;code&amp;gt;cml&amp;lt;/code&amp;gt; in the standalone cluster (it is also named just &amp;lt;code&amp;gt;cml&amp;lt;/code&amp;gt; in Nexus).&lt;br /&gt;
&lt;br /&gt;
Here are some before/after examples of job submission with various parameters:&lt;br /&gt;
&lt;br /&gt;
{| class=&amp;quot;wikitable&amp;quot;&lt;br /&gt;
! Standalone CML cluster submission command&lt;br /&gt;
! Nexus cluster submission command&lt;br /&gt;
|-&lt;br /&gt;
|&amp;lt;code&amp;gt;srun --partition=dpart --qos=medium --account=tomg --gres=gpu:rtx2080ti:2 --pty bash&amp;lt;/code&amp;gt;&lt;br /&gt;
|&amp;lt;code&amp;gt;srun --partition=cml-dpart --qos=cml-medium --account=cml-tomg --gres=gpu:rtx2080ti:2 --pty bash&amp;lt;/code&amp;gt;&lt;br /&gt;
|-&lt;br /&gt;
|&amp;lt;code&amp;gt;srun --partition=cpu --qos=cpu --pty bash&amp;lt;/code&amp;gt;&lt;br /&gt;
|&amp;lt;code&amp;gt;srun --partition=cml-cpu --qos=cml-cpu --account=cml --pty bash&amp;lt;/code&amp;gt;&lt;br /&gt;
|-&lt;br /&gt;
|&amp;lt;code&amp;gt;srun --partition=scavenger --qos=scavenger --account=scavenger --gres=gpu:4 --pty bash&amp;lt;/code&amp;gt;&lt;br /&gt;
|&amp;lt;code&amp;gt;srun --partition=cml-scavenger --qos=cml-scavenger --account=cml-scavenger --gres=gpu:4 --pty bash&amp;lt;/code&amp;gt;&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
CML users (exclusively) can schedule non-interruptible jobs on CML nodes with any non-scavenger job parameters. Please note that the &amp;lt;code&amp;gt;cml-dpart&amp;lt;/code&amp;gt; partition has a &amp;lt;code&amp;gt;GrpTRES&amp;lt;/code&amp;gt; limit of 100% of the available cores/RAM on all cml## nodes in aggregate plus 50% of the available cores/RAM on legacy## nodes in aggregate, so your job may need to wait if all available cores/RAM (or GPUs) are in use. It also has a max submission limit of 500 jobs per user simultaneously so as not to overload the cluster. This is codified by the partition QoS named &#039;&#039;&#039;cml&#039;&#039;&#039;.&lt;br /&gt;
&lt;br /&gt;
Please note that the CML compute nodes are also in the institute-wide &amp;lt;code&amp;gt;scavenger&amp;lt;/code&amp;gt; partition in Nexus. CML users still have scavenging priority over these nodes via the &amp;lt;code&amp;gt;cml-scavenger&amp;lt;/code&amp;gt; partition (i.e., all &amp;lt;code&amp;gt;cml-&amp;lt;/code&amp;gt; partition jobs (other than &amp;lt;code&amp;gt;cml-scavenger&amp;lt;/code&amp;gt;) can preempt both &amp;lt;code&amp;gt;cml-scavenger&amp;lt;/code&amp;gt; and &amp;lt;code&amp;gt;scavenger&amp;lt;/code&amp;gt; partition jobs, and &amp;lt;code&amp;gt;cml-scavenger&amp;lt;/code&amp;gt; partition jobs can preempt &amp;lt;code&amp;gt;scavenger&amp;lt;/code&amp;gt; partition jobs).&lt;br /&gt;
&lt;br /&gt;
==Partitions==&lt;br /&gt;
There are three partitions available to general CML [[SLURM]] users.  You must specify a partition when submitting your job.&lt;br /&gt;
&lt;br /&gt;
* &#039;&#039;&#039;cml-dpart&#039;&#039;&#039; - This is the default partition. Job allocations are guaranteed.&lt;br /&gt;
* &#039;&#039;&#039;cml-scavenger&#039;&#039;&#039; - This is the alternate partition that allows longer job run times and more resources, but jobs in it are preemptible when jobs in other &amp;lt;code&amp;gt;cml-&amp;lt;/code&amp;gt; partitions are ready to be scheduled.&lt;br /&gt;
* &#039;&#039;&#039;cml-cpu&#039;&#039;&#039; - This partition is for CPU focused jobs. Job allocations are guaranteed.&lt;br /&gt;
&lt;br /&gt;
There is one additional partition available solely to Dr. Furong Huang&#039;s sponsored accounts.&lt;br /&gt;
&lt;br /&gt;
* &#039;&#039;&#039;cml-furongh&#039;&#039;&#039; - This partition is for exclusive priority access to Dr. Huang&#039;s purchased A6000 node. Job allocations are guaranteed.&lt;br /&gt;
&lt;br /&gt;
==Accounts==&lt;br /&gt;
The Center has a base SLURM account &amp;lt;code&amp;gt;cml&amp;lt;/code&amp;gt; which has a modest number of guaranteed billing resources available to all cluster users at any given time.  Other faculty that have invested in the cluster have an additional account provided to their sponsored accounts on the cluster, which provides a number of guaranteed billing resources corresponding to the amount that they invested.&lt;br /&gt;
&lt;br /&gt;
If you do not specify an account when submitting your job, you will receive the &#039;&#039;&#039;cml&#039;&#039;&#039; account.  If your faculty sponsor has their own account, it is recommended that you use that account for job submission.&lt;br /&gt;
&lt;br /&gt;
The current faculty accounts are:&lt;br /&gt;
* cml-abhinav&lt;br /&gt;
* cml-cameron&lt;br /&gt;
* cml-furongh&lt;br /&gt;
* cml-hajiagha&lt;br /&gt;
* cml-john&lt;br /&gt;
* cml-ramani&lt;br /&gt;
* cml-sfeizi&lt;br /&gt;
* cml-tokekar&lt;br /&gt;
* cml-tomg&lt;br /&gt;
* cml-zhou&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
$ sacctmgr show account format=account%20,description%30,organization%10&lt;br /&gt;
             Account                          Descr        Org&lt;br /&gt;
-------------------- ------------------------------ ----------&lt;br /&gt;
                 ...                            ...        ...&lt;br /&gt;
                 cml                            cml        cml&lt;br /&gt;
         cml-abhinav      cml - abhinav shrivastava        cml&lt;br /&gt;
         cml-cameron            cml - maria cameron        cml&lt;br /&gt;
         cml-furongh             cml - furong huang        cml&lt;br /&gt;
        cml-hajiagha      cml - mohammad hajiaghayi        cml&lt;br /&gt;
            cml-john           cml - john dickerson        cml&lt;br /&gt;
          cml-ramani        cml - ramani duraiswami        cml&lt;br /&gt;
       cml-scavenger                cml - scavenger        cml&lt;br /&gt;
          cml-sfeizi             cml - soheil feizi        cml&lt;br /&gt;
         cml-tokekar           cml - pratap tokekar        cml&lt;br /&gt;
            cml-tomg            cml - tom goldstein        cml&lt;br /&gt;
            cml-zhou              cml - tianyi zhou        cml&lt;br /&gt;
                 ...                            ...        ...&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Faculty can manage this list of users via our [https://intranet.umiacs.umd.edu/directory/secgroup/ Directory application] in the Security Groups section.  The security group that controls access has the prefix &amp;lt;code&amp;gt;cml_&amp;lt;/code&amp;gt; and then the faculty username.  It will also list &amp;lt;code&amp;gt;slurm://nexusctl.umiacs.umd.edu&amp;lt;/code&amp;gt; as the associated URI.&lt;br /&gt;
&lt;br /&gt;
You can check your account associations by running the &#039;&#039;&#039;show_assoc&#039;&#039;&#039; command to see the accounts you are associated with.  Please [[HelpDesk | contact staff]] and include your faculty member in the conversation if you do not see the appropriate association. &lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
$ show_assoc&lt;br /&gt;
      User          Account MaxJobs       GrpTRES                                                QOS&lt;br /&gt;
---------- ---------------- ------- ------------- --------------------------------------------------&lt;br /&gt;
       ...              ...                                                                      ...&lt;br /&gt;
      tomg              cml                                           cml-cpu,cml-default,cml-medium&lt;br /&gt;
      tomg    cml-scavenger                                                            cml-scavenger&lt;br /&gt;
      tomg         cml-tomg                                          cml-default,cml-high,cml-medium&lt;br /&gt;
       ...              ...                                                                      ...&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
You can also see the total number of Trackable Resources (TRES) allowed for each account by running the following command. Make sure to specify the account you are looking for. The billing number displayed here is the sum of [[SLURM/Priority#Modern | resource weightings]] for all nodes appropriated to that account.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
$ sacctmgr show assoc account=cml format=user,account,qos,grptres&lt;br /&gt;
      User    Account                  QOS       GrpTRES&lt;br /&gt;
---------- ---------- -------------------- -------------&lt;br /&gt;
                  cml                       billing=7732&lt;br /&gt;
                  ...                                ...&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==QoS==&lt;br /&gt;
CML currently has 5 QoSes for the &#039;&#039;&#039;cml-dpart&#039;&#039;&#039; partition (though &amp;lt;code&amp;gt;high_long&amp;lt;/code&amp;gt; and &amp;lt;code&amp;gt;very_high&amp;lt;/code&amp;gt; may not be available to all faculty accounts), 1 QoS for the &#039;&#039;&#039;cml-scavenger&#039;&#039;&#039; partition, and 1 QoS for the &#039;&#039;&#039;cml-cpu&#039;&#039;&#039; partition.  If you do not specify a QoS when submitting your job using the &amp;lt;code&amp;gt;--qos&amp;lt;/code&amp;gt; parameter, you will receive the &#039;&#039;&#039;cml-default&#039;&#039;&#039; QoS assuming you are using a CML account.&lt;br /&gt;
&lt;br /&gt;
Each QoS differs in its maximum wall time, the total number of jobs you can run at once, and the maximum number of trackable resources (TRES) available to each job.  In the cml-scavenger QoS, you are additionally restricted by the total number of TRES per user (across multiple jobs).&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
$ show_qos&lt;br /&gt;
                Name     MaxWall                        MaxTRES MaxJobsPU                      MaxTRESPU                                                                             &lt;br /&gt;
-------------------- ----------- ------------------------------ --------- ------------------------------      &lt;br /&gt;
...                                                                       &lt;br /&gt;
             cml-cpu  7-00:00:00                                        8                                                                                                            &lt;br /&gt;
         cml-default  7-00:00:00       cpu=4,gres/gpu=1,mem=32G         2                                                                                                            &lt;br /&gt;
            cml-high  1-12:00:00     cpu=16,gres/gpu=4,mem=128G         2                                                                                                            &lt;br /&gt;
       cml-high_long 14-00:00:00              cpu=32,gres/gpu=8         8                     gres/gpu=8                                                                             &lt;br /&gt;
          cml-medium  3-00:00:00       cpu=8,gres/gpu=2,mem=64G         2                                                                                                            &lt;br /&gt;
       cml-scavenger  3-00:00:00                                                             gres/gpu=24                                                                             &lt;br /&gt;
       cml-very_high  1-12:00:00     cpu=32,gres/gpu=8,mem=256G         8                    gres/gpu=12            &lt;br /&gt;
...                                                                                  &lt;br /&gt;
&lt;br /&gt;
$ show_partition_qos&lt;br /&gt;
                Name MaxSubmitPU                      MaxTRESPU              GrpTRES &lt;br /&gt;
-------------------- ----------- ------------------------------ -------------------- &lt;br /&gt;
...&lt;br /&gt;
                 cml         500                                 cpu=1128,mem=11381G &lt;br /&gt;
       cml-scavenger         500                    gres/gpu=24                      &lt;br /&gt;
...&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==Storage==&lt;br /&gt;
There are 3 types of user storage available in the CML:&lt;br /&gt;
* Home directories&lt;br /&gt;
* Project directories&lt;br /&gt;
* Scratch directories&lt;br /&gt;
&lt;br /&gt;
There are also 2 types of read-only storage available for common use among users in the CML:&lt;br /&gt;
* Dataset directories&lt;br /&gt;
* Model directories&lt;br /&gt;
&lt;br /&gt;
CML users can also request [[Nexus#Project_Allocations | Nexus project allocations]].&lt;br /&gt;
&lt;br /&gt;
===Home Directories===&lt;br /&gt;
Home directories in the CML computational infrastructure are available from the Institute&#039;s [[NFShomes]] as &amp;lt;code&amp;gt;/nfshomes/USERNAME&amp;lt;/code&amp;gt; where USERNAME is your username.  These home directories have very limited storage (30GB, cannot be increased) and are intended for your personal files, configuration, and source code.  Your home directory is &#039;&#039;&#039;not&#039;&#039;&#039; intended for data sets or other large-scale data holdings.  Users are encouraged to utilize our [[GitLab]] infrastructure to host their code repositories.&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;NOTE&#039;&#039;&#039;: To check your quota on this directory you will need to use the &amp;lt;code&amp;gt;quota -s&amp;lt;/code&amp;gt; command.&lt;br /&gt;
&lt;br /&gt;
Your home directory data is fully protected: it has [[Snapshots | snapshots]] enabled and is [[NightlyBackups | backed up nightly]].&lt;br /&gt;
&lt;br /&gt;
===Project Directories===&lt;br /&gt;
You can request project-based allocations of up to 6TB for up to 120 days with approval from a CML faculty member and the director of CML.&lt;br /&gt;
&lt;br /&gt;
To request an allocation, please [[HelpDesk | contact staff]] with the sponsoring faculty member(s) included in the conversation.  Please include the following details:&lt;br /&gt;
* Project Name (short)&lt;br /&gt;
* Description&lt;br /&gt;
* Size (1TB, 2TB, etc.)&lt;br /&gt;
* Length in days (30 days, 90 days, etc.)&lt;br /&gt;
* Other user(s) that need to access the allocation, if any&lt;br /&gt;
&lt;br /&gt;
These allocations will be available from &#039;&#039;&#039;/fs/cml-projects&#039;&#039;&#039; under a name that you provide when you request the allocation.  Near the end of the allocation period, staff will contact you and ask if you would like to renew the allocation for up to another 120 days (requires re-approval from a CML faculty member and the director of CML).  If you are no longer in need of the storage allocation, you will need to relocate all desired data within two weeks of the end of the allocation period.  Staff will then remove the allocation.  If you do not respond to staff&#039;s request by the end of the allocation period, staff will make the allocation temporarily inaccessible. If you do respond asking for renewal but the original faculty approver does not respond within two weeks of the end of the allocation period, staff will also make the allocation temporarily inaccessible. If one month from the end of the allocation period is reached without both you and the faculty approver responding, staff will remove the allocation.&lt;br /&gt;
&lt;br /&gt;
This data is backed up nightly.&lt;br /&gt;
&lt;br /&gt;
===Scratch Directories===&lt;br /&gt;
Scratch data has no data protection: there are no snapshots and the data is not backed up. There are two types of scratch directories in the CML compute infrastructure:&lt;br /&gt;
* Network scratch directory&lt;br /&gt;
* Local scratch directories&lt;br /&gt;
&lt;br /&gt;
====Network Scratch Directory====&lt;br /&gt;
You are allocated 200GB of scratch space via NFS from &amp;lt;code&amp;gt;/cmlscratch/$username&amp;lt;/code&amp;gt;.  &#039;&#039;&#039;It is not backed up or protected in any way.&#039;&#039;&#039;  This directory is &#039;&#039;&#039;automounted&#039;&#039;&#039;, so you will need to &amp;lt;code&amp;gt;cd&amp;lt;/code&amp;gt; into the directory or specify the fully qualified path to access it.&lt;br /&gt;
&lt;br /&gt;
You may request a permanent increase of up to 800GB total space without any faculty approval by [[HelpDesk | contacting staff]].  If you need space beyond 800GB, you will need faculty approval and/or a project directory. Space increases beyond 800GB also have a maximum request period of 120 days (as with project directories), after which they will need to be renewed with re-approval from a CML faculty member and the director of CML.&lt;br /&gt;
&lt;br /&gt;
This file system is available on all submission, data management, and computational nodes within the cluster.&lt;br /&gt;
&lt;br /&gt;
====Local Scratch Directories====&lt;br /&gt;
Each computational node that you can schedule compute jobs on has one or more local scratch directories.  These are always named &amp;lt;code&amp;gt;/scratch0&amp;lt;/code&amp;gt;, &amp;lt;code&amp;gt;/scratch1&amp;lt;/code&amp;gt;, etc.  These are almost always more performant than any other storage available to the job.  However, you must stage data to these directories within the confines of your jobs and stage the data out before the end of your jobs.&lt;br /&gt;
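&lt;br /&gt;
This stage-in/stage-out pattern can be sketched in a batch script as follows.  The paths are illustrative placeholders; the account, partition, and QoS values are the CML defaults described above, so adjust everything to your own job.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
#!/bin/bash&lt;br /&gt;
# Illustrative stage-in/stage-out sketch; paths are placeholders&lt;br /&gt;
#SBATCH --partition=cml-dpart&lt;br /&gt;
#SBATCH --qos=cml-default&lt;br /&gt;
#SBATCH --account=cml&lt;br /&gt;
&lt;br /&gt;
# Stage input data onto the node-local scratch disk&lt;br /&gt;
mkdir -p /scratch0/$USER/job_$SLURM_JOB_ID&lt;br /&gt;
cp -r /cmlscratch/$USER/input /scratch0/$USER/job_$SLURM_JOB_ID/&lt;br /&gt;
&lt;br /&gt;
# ... run your computation against the local copy ...&lt;br /&gt;
&lt;br /&gt;
# Stage results back out before the job ends, then clean up&lt;br /&gt;
cp -r /scratch0/$USER/job_$SLURM_JOB_ID/output /cmlscratch/$USER/&lt;br /&gt;
rm -rf /scratch0/$USER/job_$SLURM_JOB_ID&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;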
&lt;br /&gt;
These local scratch directories have a tmpwatch job which will &#039;&#039;&#039;delete unaccessed data after 90 days&#039;&#039;&#039;, scheduled via maintenance jobs to run once a month during our monthly maintenance windows.  Again, please make sure you secure any data you write to these directories at the end of your job.&lt;br /&gt;
&lt;br /&gt;
===Datasets===&lt;br /&gt;
We have read-only dataset storage available at &amp;lt;code&amp;gt;/fs/cml-datasets&amp;lt;/code&amp;gt;.  If there are datasets that you would like to see curated and available, please see [[Datasets | this page]].&lt;br /&gt;
&lt;br /&gt;
The list of CML datasets we currently host can be viewed [https://info.umiacs.umd.edu/datasets/list/?q=CML here].&lt;br /&gt;
&lt;br /&gt;
===Models===&lt;br /&gt;
We have read-only model storage available at &amp;lt;code&amp;gt;/fs/cml-models&amp;lt;/code&amp;gt;.  If there are models that you would like to see downloaded and made available, please see [[Datasets | this page]].&lt;/div&gt;</summary>
		<author><name>Ekr597</name></author>
	</entry>
	<entry>
		<id>https://wiki.umiacs.umd.edu/umiacs/index.php?title=Nexus/GAMMA&amp;diff=11534</id>
		<title>Nexus/GAMMA</title>
		<link rel="alternate" type="text/html" href="https://wiki.umiacs.umd.edu/umiacs/index.php?title=Nexus/GAMMA&amp;diff=11534"/>
		<updated>2024-01-24T18:42:59Z</updated>

		<summary type="html">&lt;p&gt;Ekr597: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;The [https://gamma.umd.edu/ GAMMA] lab has a partition of GPU nodes available in the [[Nexus]]. Only GAMMA lab members are able to run non-interruptible jobs on these nodes.&lt;br /&gt;
&lt;br /&gt;
=Access=&lt;br /&gt;
You can always find out which hosts you have access to submit jobs from via the [[Nexus#Access]] page.  The GAMMA lab in particular has a special submission host with additional local storage available.&lt;br /&gt;
* &amp;lt;code&amp;gt;nexusgamma00.umiacs.umd.edu&amp;lt;/code&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Please do not run anything on the login node. Always allocate yourself machines on the compute nodes (see instructions below) to run any job.&lt;br /&gt;
&lt;br /&gt;
=Quality of Service=&lt;br /&gt;
GAMMA users have access to all of the [[Nexus#Quality_of_Service_.28QoS.29 | standard job QoSes]] in the &amp;lt;code&amp;gt;gamma&amp;lt;/code&amp;gt; partition using the &amp;lt;code&amp;gt;gamma&amp;lt;/code&amp;gt; account.&lt;br /&gt;
&lt;br /&gt;
The additional job QoSes for the GAMMA partition specifically are:&lt;br /&gt;
* &amp;lt;code&amp;gt;huge-long&amp;lt;/code&amp;gt;: Allows for longer jobs using higher overall resources.&lt;br /&gt;
&lt;br /&gt;
Please note that the partition has a &amp;lt;code&amp;gt;GrpTRES&amp;lt;/code&amp;gt; limit of 100% of the available cores/RAM on the partition-specific nodes in aggregate plus 50% of the available cores/RAM on legacy## nodes in aggregate, so your job may need to wait if all available cores/RAM (or GPUs) are in use.&lt;br /&gt;
&lt;br /&gt;
=Hardware=&lt;br /&gt;
{| class=&amp;quot;wikitable sortable&amp;quot;&lt;br /&gt;
! Nodenames&lt;br /&gt;
! Type&lt;br /&gt;
! Quantity&lt;br /&gt;
! CPUs&lt;br /&gt;
! Memory&lt;br /&gt;
! GPUs&lt;br /&gt;
|-&lt;br /&gt;
|gammagpu[00-04,06-09]&lt;br /&gt;
|A5000 GPU Node&lt;br /&gt;
|9&lt;br /&gt;
|32&lt;br /&gt;
|256GB&lt;br /&gt;
|8&lt;br /&gt;
|- &lt;br /&gt;
|gammagpu05&lt;br /&gt;
|A4000 GPU Node&lt;br /&gt;
|1&lt;br /&gt;
|32&lt;br /&gt;
|256GB&lt;br /&gt;
|8&lt;br /&gt;
|- class=&amp;quot;sortbottom&amp;quot;&lt;br /&gt;
|&lt;br /&gt;
!Total&lt;br /&gt;
|10&lt;br /&gt;
|320&lt;br /&gt;
|2560GB&lt;br /&gt;
|80&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
=Example=&lt;br /&gt;
From &amp;lt;code&amp;gt;nexusgamma00.umiacs.umd.edu&amp;lt;/code&amp;gt; you can run the following example to submit an interactive job.  Please note that you need to specify the &amp;lt;code&amp;gt;--account&amp;lt;/code&amp;gt;, &amp;lt;code&amp;gt;--partition&amp;lt;/code&amp;gt; and &amp;lt;code&amp;gt;--qos&amp;lt;/code&amp;gt; options.  Please refer to our [[SLURM]] documentation about how to further customize your submissions, including making a batch submission.  The following command will allocate 8 GPUs for 2 days in an interactive session.  Change the parameters according to your needs.  We discourage the use of srun and encourage the use of sbatch for fair sharing of GPUs.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
$ srun --pty --gres=gpu:8 --account=gamma --partition=gamma --qos=huge-long bash&lt;br /&gt;
$ hostname&lt;br /&gt;
gammagpu01.umiacs.umd.edu&lt;br /&gt;
$ nvidia-smi -L&lt;br /&gt;
GPU 0: NVIDIA RTX A5000 (UUID: GPU-cdfb2e0c-d69f-354b-02f4-15161dc7fa66)&lt;br /&gt;
GPU 1: NVIDIA RTX A5000 (UUID: GPU-be53e7a1-b8fd-7089-3cac-7a2fbf4ec7dd)&lt;br /&gt;
GPU 2: NVIDIA RTX A5000 (UUID: GPU-774efbb1-d7ec-a0bb-e992-da9d1fa6b193)&lt;br /&gt;
GPU 3: NVIDIA RTX A5000 (UUID: GPU-d1692181-c7de-e273-5f95-53ad381614c3)&lt;br /&gt;
GPU 4: NVIDIA RTX A5000 (UUID: GPU-ba51fd6c-37bf-1b95-5f68-987c18a6292a)&lt;br /&gt;
GPU 5: NVIDIA RTX A5000 (UUID: GPU-c1224a2a-4a3b-ff16-0308-4f36205b9859)&lt;br /&gt;
GPU 6: NVIDIA RTX A5000 (UUID: GPU-8d20d6cd-abf5-2630-ab88-6bba438c55fe)&lt;br /&gt;
GPU 7: NVIDIA RTX A5000 (UUID: GPU-93170910-5d94-6da5-8a24-f561d7da1e2d)&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
You can also use sbatch to submit your job.  Here are two examples of how to do that.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
$ sbatch --gres=gpu:8 --account=gamma --partition=gamma --qos=huge-long --time=1-23:00:00 script.sh&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
OR&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
$ sbatch script.sh&lt;br /&gt;
&lt;br /&gt;
# Contents of script.sh:&lt;br /&gt;
&lt;br /&gt;
#!/bin/bash&lt;br /&gt;
#SBATCH --gres=gpu:8&lt;br /&gt;
#SBATCH --account=gamma&lt;br /&gt;
#SBATCH --partition=gamma&lt;br /&gt;
#SBATCH --qos=huge-long&lt;br /&gt;
#SBATCH --time=1-23:00:00&lt;br /&gt;
&lt;br /&gt;
python your_file.py&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=Storage=&lt;br /&gt;
There are 3 types of user storage available in GAMMA:&lt;br /&gt;
* Home directories&lt;br /&gt;
* Project directories&lt;br /&gt;
* Scratch directories&lt;br /&gt;
&lt;br /&gt;
There is also read-only storage available for Dataset directories.&lt;br /&gt;
&lt;br /&gt;
GAMMA users can also request [[Nexus#Project_Allocations | Nexus project allocations]].&lt;br /&gt;
&lt;br /&gt;
===Home Directories===&lt;br /&gt;
Home directories are available from the Institute&#039;s [[NFShomes]] as &amp;lt;code&amp;gt;/nfshomes/USERNAME&amp;lt;/code&amp;gt; where USERNAME is your UMIACS username.  These home directories have very limited storage (30GB, cannot be increased) and are intended for your personal files, configuration, and source code.  Your home directory is &#039;&#039;&#039;not&#039;&#039;&#039; intended for data sets or other large-scale data holdings.  Users are encouraged to utilize our [[GitLab]] infrastructure to host their code repositories.&lt;br /&gt;
&lt;br /&gt;
Your home directory data is fully protected: it has [[Snapshots | snapshots]] enabled and is [[NightlyBackups | backed up nightly]].&lt;br /&gt;
&lt;br /&gt;
===Project Directories===&lt;br /&gt;
You can request project-based allocations with approval from a GAMMA faculty member.&lt;br /&gt;
&lt;br /&gt;
To request an allocation, please [[HelpDesk | contact staff]] with the faculty member(s) that approved the project included in the conversation.  Please include the following details:&lt;br /&gt;
* Project Name (short)&lt;br /&gt;
* Description&lt;br /&gt;
* Size (1TB, 2TB, etc.)&lt;br /&gt;
* Length in days (30 days, 90 days, etc.)&lt;br /&gt;
* Other user(s) that need to access the allocation, if any&lt;br /&gt;
&lt;br /&gt;
These allocations will be available from &#039;&#039;&#039;/fs/gamma-projects&#039;&#039;&#039; under a name that you provide when you request the allocation.  Near the end of the allocation period, staff will contact you and ask if you would like to renew the allocation (requires re-approval from a GAMMA faculty member).  If you are no longer in need of the storage allocation, you will need to relocate all desired data within two weeks of the end of the allocation period.  Staff will then remove the allocation.  If you do not respond to staff&#039;s request by the end of the allocation period, staff will make the allocation temporarily inaccessible. If you do respond asking for renewal but the original faculty approver does not respond within two weeks of the end of the allocation period, staff will also make the allocation temporarily inaccessible. If one month from the end of the allocation period is reached without both you and the faculty approver responding, staff will remove the allocation.&lt;br /&gt;
&lt;br /&gt;
This data is backed up nightly.&lt;br /&gt;
&lt;br /&gt;
===Scratch Directories===&lt;br /&gt;
Scratch data has no data protection: there are no snapshots and the data is not backed up.&lt;br /&gt;
There are two types of scratch directories:&lt;br /&gt;
* Network scratch directory&lt;br /&gt;
* Local scratch directories&lt;br /&gt;
&lt;br /&gt;
====Network Scratch Directory====&lt;br /&gt;
You are allocated 100GB of scratch space via NFS from &amp;lt;code&amp;gt;/gammascratch/$username&amp;lt;/code&amp;gt;.  &#039;&#039;&#039;It is not backed up or protected in any way.&#039;&#039;&#039;  &lt;br /&gt;
&lt;br /&gt;
This directory is &#039;&#039;&#039;automounted&#039;&#039;&#039; so you may not see your directory if you run &amp;lt;code&amp;gt;ls /gammascratch&amp;lt;/code&amp;gt; but it will be mounted when you &amp;lt;code&amp;gt;cd&amp;lt;/code&amp;gt; into your /gammascratch directory.&lt;br /&gt;
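&lt;br /&gt;
For example (the &amp;lt;code&amp;gt;$USER&amp;lt;/code&amp;gt; shell variable expands to your username):&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
$ ls /gammascratch        # your directory may not appear in the listing yet&lt;br /&gt;
$ cd /gammascratch/$USER  # accessing the path triggers the automount&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;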
&lt;br /&gt;
You may request a permanent increase of up to 200GB total space without any faculty approval by [[HelpDesk | contacting staff]].  If you need space beyond 200GB, you will need faculty approval. &lt;br /&gt;
&lt;br /&gt;
This file system is available on all submission, data management, and computational nodes within the cluster.&lt;br /&gt;
&lt;br /&gt;
====Local Scratch Directories====&lt;br /&gt;
These file systems are not available over [[NFS]] and &#039;&#039;&#039;there are no backups or snapshots available&#039;&#039;&#039; for these file systems.&lt;br /&gt;
&lt;br /&gt;
* Each computational node that you can schedule compute jobs on has one or more local scratch directories.  These are always named &amp;lt;code&amp;gt;/scratch0&amp;lt;/code&amp;gt;, &amp;lt;code&amp;gt;/scratch1&amp;lt;/code&amp;gt;, etc.  These directories are local to each node, i.e., the &amp;lt;code&amp;gt;/scratch0&amp;lt;/code&amp;gt; directories on two different nodes are completely separate.&lt;br /&gt;
** These directories are almost always more performant than any other storage available to the job.  However, you must stage data to these directories within the confines of your jobs and stage the data out before the end of your jobs.&lt;br /&gt;
** These local scratch directories have a tmpwatch job which will &#039;&#039;&#039;delete unaccessed data after 90 days&#039;&#039;&#039;, scheduled via maintenance jobs to run once a month during our monthly maintenance windows.  Again, please make sure you secure any data you write to these directories at the end of your job.&lt;br /&gt;
* GAMMA has invested in a 20TB NVMe scratch file system on &amp;lt;code&amp;gt;nexusgamma00.umiacs.umd.edu&amp;lt;/code&amp;gt; that is available as &amp;lt;code&amp;gt;/scratch1&amp;lt;/code&amp;gt;.  To utilize this space, you will need to copy data to/from it over SSH from a compute node.  To make this easier, you may want to set up [[SSH]] keys that will allow you to copy data without being prompted for passwords.&lt;br /&gt;
** The &amp;lt;code&amp;gt;/scratch1&amp;lt;/code&amp;gt; directory on &amp;lt;code&amp;gt;nexusgamma00.umiacs.umd.edu&amp;lt;/code&amp;gt; doesn&#039;t have a tmpwatch. The files in this directory need to be manually removed once they are no longer needed.&lt;br /&gt;
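For example, here is a minimal sketch of staging data over SSH from a compute node (the directory names are placeholders; adjust them to your own layout):&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
# One-time setup: generate a key and install it for password-less copies&lt;br /&gt;
$ ssh-keygen -t ed25519&lt;br /&gt;
$ ssh-copy-id nexusgamma00.umiacs.umd.edu&lt;br /&gt;
&lt;br /&gt;
# From a compute node: pull inputs in, push results back out&lt;br /&gt;
$ rsync -a nexusgamma00.umiacs.umd.edu:/scratch1/$USER/inputs/ /scratch0/$USER/inputs/&lt;br /&gt;
$ rsync -a /scratch0/$USER/results/ nexusgamma00.umiacs.umd.edu:/scratch1/$USER/results/&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;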
&lt;br /&gt;
===Datasets===&lt;br /&gt;
We have read-only dataset storage available at &amp;lt;code&amp;gt;/fs/gamma-datasets&amp;lt;/code&amp;gt;.  If there are datasets that you would like to see curated and available, please see [[Datasets | this page]].&lt;br /&gt;
&lt;br /&gt;
The list of GAMMA datasets we currently host can be viewed [https://info.umiacs.umd.edu/datasets/list/?q=GAMMA here].&lt;/div&gt;</summary>
		<author><name>Ekr597</name></author>
	</entry>
	<entry>
		<id>https://wiki.umiacs.umd.edu/umiacs/index.php?title=Nexus/GAMMA&amp;diff=11509</id>
		<title>Nexus/GAMMA</title>
		<link rel="alternate" type="text/html" href="https://wiki.umiacs.umd.edu/umiacs/index.php?title=Nexus/GAMMA&amp;diff=11509"/>
		<updated>2024-01-05T21:01:19Z</updated>

		<summary type="html">&lt;p&gt;Ekr597: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;The [https://gamma.umd.edu/ GAMMA] lab has a partition of GPU nodes available in the [[Nexus]]. Only GAMMA lab members are able to run non-interruptible jobs on these nodes.&lt;br /&gt;
&lt;br /&gt;
=Access=&lt;br /&gt;
You can always find out which submission hosts you have access to via the [[Nexus#Access]] page.  The GAMMA lab in particular has a special submission host with additional local storage available.&lt;br /&gt;
* &amp;lt;code&amp;gt;nexusgamma00.umiacs.umd.edu&amp;lt;/code&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Please do not run anything on the login node. Always allocate yourself machines on the compute nodes (see instructions below) to run any job.&lt;br /&gt;
&lt;br /&gt;
=Quality of Service=&lt;br /&gt;
GAMMA users have access to all of the [[Nexus#Quality_of_Service_.28QoS.29 | standard job QoSes]] in the &amp;lt;code&amp;gt;gamma&amp;lt;/code&amp;gt; partition using the &amp;lt;code&amp;gt;gamma&amp;lt;/code&amp;gt; account.&lt;br /&gt;
&lt;br /&gt;
The additional job QoSes for the GAMMA partition specifically are:&lt;br /&gt;
* &amp;lt;code&amp;gt;huge-long&amp;lt;/code&amp;gt;: Allows for longer jobs using higher overall resources.&lt;br /&gt;
&lt;br /&gt;
Please note that the partition has a &amp;lt;code&amp;gt;GrpTRES&amp;lt;/code&amp;gt; limit of 100% of the available cores/RAM on the partition-specific nodes in aggregate plus 50% of the available cores/RAM on legacy## nodes in aggregate, so your job may need to wait if all available cores/RAM (or GPUs) are in use.&lt;br /&gt;
&lt;br /&gt;
=Hardware=&lt;br /&gt;
{| class=&amp;quot;wikitable sortable&amp;quot;&lt;br /&gt;
! Nodenames&lt;br /&gt;
! Type&lt;br /&gt;
! Quantity&lt;br /&gt;
! CPUs&lt;br /&gt;
! Memory&lt;br /&gt;
! GPUs&lt;br /&gt;
|-&lt;br /&gt;
|gammagpu[00-04,06-09]&lt;br /&gt;
|A5000 GPU Node&lt;br /&gt;
|9&lt;br /&gt;
|32&lt;br /&gt;
|256GB&lt;br /&gt;
|8&lt;br /&gt;
|- &lt;br /&gt;
|gammagpu05&lt;br /&gt;
|A4000 GPU Node&lt;br /&gt;
|1&lt;br /&gt;
|32&lt;br /&gt;
|256GB&lt;br /&gt;
|8&lt;br /&gt;
|- class=&amp;quot;sortbottom&amp;quot;&lt;br /&gt;
|&lt;br /&gt;
!Total&lt;br /&gt;
|10&lt;br /&gt;
|320&lt;br /&gt;
|2560GB&lt;br /&gt;
|80&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
=Example=&lt;br /&gt;
From &amp;lt;code&amp;gt;nexusgamma00.umiacs.umd.edu&amp;lt;/code&amp;gt; you can run the following example to submit an interactive job.  Please note that you need to specify the &amp;lt;code&amp;gt;--account&amp;lt;/code&amp;gt;, &amp;lt;code&amp;gt;--partition&amp;lt;/code&amp;gt;, and &amp;lt;code&amp;gt;--qos&amp;lt;/code&amp;gt; options.  Please refer to our [[SLURM]] documentation for how to further customize your submissions, including making a batch submission.  The following command will allocate 8 GPUs for 2 days in an interactive session.  Change the parameters according to your needs.  For fair use of GPUs, we discourage use of &amp;lt;code&amp;gt;srun&amp;lt;/code&amp;gt; and encourage use of &amp;lt;code&amp;gt;sbatch&amp;lt;/code&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
$ srun --pty --gres=gpu:8 --account=gamma --partition=gamma --qos=huge-long bash&lt;br /&gt;
$ hostname&lt;br /&gt;
gammagpu01.umiacs.umd.edu&lt;br /&gt;
$ nvidia-smi -L&lt;br /&gt;
GPU 0: NVIDIA RTX A5000 (UUID: GPU-cdfb2e0c-d69f-354b-02f4-15161dc7fa66)&lt;br /&gt;
GPU 1: NVIDIA RTX A5000 (UUID: GPU-be53e7a1-b8fd-7089-3cac-7a2fbf4ec7dd)&lt;br /&gt;
GPU 2: NVIDIA RTX A5000 (UUID: GPU-774efbb1-d7ec-a0bb-e992-da9d1fa6b193)&lt;br /&gt;
GPU 3: NVIDIA RTX A5000 (UUID: GPU-d1692181-c7de-e273-5f95-53ad381614c3)&lt;br /&gt;
GPU 4: NVIDIA RTX A5000 (UUID: GPU-ba51fd6c-37bf-1b95-5f68-987c18a6292a)&lt;br /&gt;
GPU 5: NVIDIA RTX A5000 (UUID: GPU-c1224a2a-4a3b-ff16-0308-4f36205b9859)&lt;br /&gt;
GPU 6: NVIDIA RTX A5000 (UUID: GPU-8d20d6cd-abf5-2630-ab88-6bba438c55fe)&lt;br /&gt;
GPU 7: NVIDIA RTX A5000 (UUID: GPU-93170910-5d94-6da5-8a24-f561d7da1e2d)&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
You can also use &amp;lt;code&amp;gt;sbatch&amp;lt;/code&amp;gt; to submit your job.  Here are two examples of how to do that.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
$ sbatch --gres=gpu:8 --account=gamma --partition=gamma --qos=huge-long --time=1-23:00:00 script.sh&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
OR&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
$ sbatch script.sh&lt;br /&gt;
&lt;br /&gt;
# contents of script.sh&lt;br /&gt;
&lt;br /&gt;
#!/bin/bash&lt;br /&gt;
#SBATCH --gres=gpu:8&lt;br /&gt;
#SBATCH --account=gamma&lt;br /&gt;
#SBATCH --partition=gamma&lt;br /&gt;
#SBATCH --qos=huge-long&lt;br /&gt;
#SBATCH --time=1-23:00:00&lt;br /&gt;
&lt;br /&gt;
python your_file.py&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=Storage=&lt;br /&gt;
There are three types of user storage available in GAMMA:&lt;br /&gt;
* Home directories&lt;br /&gt;
* Project directories&lt;br /&gt;
* Scratch directories&lt;br /&gt;
&lt;br /&gt;
There is also read-only storage available for Dataset directories.&lt;br /&gt;
&lt;br /&gt;
GAMMA users can also request [[Nexus#Project_Allocations | Nexus project allocations]].&lt;br /&gt;
&lt;br /&gt;
===Home Directories===&lt;br /&gt;
Home directories are available from the Institute&#039;s [[NFShomes]] as &amp;lt;code&amp;gt;/nfshomes/USERNAME&amp;lt;/code&amp;gt;, where USERNAME is your UMIACS username.  These home directories have very limited storage (30GB, cannot be increased) and are intended for your personal files, configuration, and source code.  Your home directory is &#039;&#039;&#039;not&#039;&#039;&#039; intended for data sets or other large-scale data holdings.  Users are encouraged to utilize our [[GitLab]] infrastructure to host their code repositories.&lt;br /&gt;
&lt;br /&gt;
Your home directory data is fully protected: it has [[Snapshots | snapshots]] and is [[NightlyBackups | backed up nightly]].&lt;br /&gt;
&lt;br /&gt;
===Project Directories===&lt;br /&gt;
You can request project-based allocations with approval from a GAMMA faculty member.&lt;br /&gt;
&lt;br /&gt;
To request an allocation, please [[HelpDesk | contact staff]] and include the faculty member(s) that approved the project in the conversation.  Please include the following details:&lt;br /&gt;
* Project Name (short)&lt;br /&gt;
* Description&lt;br /&gt;
* Size (1TB, 2TB, etc.)&lt;br /&gt;
* Length in days (30 days, 90 days, etc.)&lt;br /&gt;
* Other user(s) that need to access the allocation, if any&lt;br /&gt;
&lt;br /&gt;
These allocations will be available from &#039;&#039;&#039;/fs/gamma-projects&#039;&#039;&#039; under a name that you provide when you request the allocation.  Near the end of the allocation period, staff will contact you and ask if you would like to renew the allocation (requires re-approval from a GAMMA faculty member).  If you are no longer in need of the storage allocation, you will need to relocate all desired data within two weeks of the end of the allocation period.  Staff will then remove the allocation.  If you do not respond to staff&#039;s request by the end of the allocation period, staff will make the allocation temporarily inaccessible. If you do respond asking for renewal but the original faculty approver does not respond within two weeks of the end of the allocation period, staff will also make the allocation temporarily inaccessible. If one month from the end of the allocation period is reached without both you and the faculty approver responding, staff will remove the allocation.&lt;br /&gt;
&lt;br /&gt;
This data is backed up nightly.&lt;br /&gt;
&lt;br /&gt;
===Scratch Directories===&lt;br /&gt;
Scratch data has no data protection: there are no snapshots, and the data is not backed up.&lt;br /&gt;
There are two types of scratch directories:&lt;br /&gt;
* Network scratch directory&lt;br /&gt;
* Local scratch directories&lt;br /&gt;
&lt;br /&gt;
====Network Scratch Directory====&lt;br /&gt;
You are allocated 50GB of scratch space via NFS from &amp;lt;code&amp;gt;/gammascratch/$username&amp;lt;/code&amp;gt;.  &#039;&#039;&#039;It is not backed up or protected in any way.&#039;&#039;&#039;  &lt;br /&gt;
&lt;br /&gt;
This directory is &#039;&#039;&#039;automounted&#039;&#039;&#039;, so you may not see your directory when you run &amp;lt;code&amp;gt;ls /gammascratch&amp;lt;/code&amp;gt;; it will be mounted when you &amp;lt;code&amp;gt;cd&amp;lt;/code&amp;gt; into &amp;lt;code&amp;gt;/gammascratch/$username&amp;lt;/code&amp;gt;.&lt;br /&gt;
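For example (assuming &amp;lt;code&amp;gt;$USER&amp;lt;/code&amp;gt; expands to your UMIACS username in your shell):&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
$ ls /gammascratch          # your directory may not be listed yet&lt;br /&gt;
$ cd /gammascratch/$USER    # accessing the path triggers the automount&lt;br /&gt;
$ df -h .                   # the NFS mount is now active&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;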
&lt;br /&gt;
You may request a permanent increase of up to 200GB total space without any faculty approval by [[HelpDesk | contacting staff]].  If you need space beyond 200GB, you will need faculty approval. &lt;br /&gt;
&lt;br /&gt;
This file system is available on all submission, data management, and computational nodes within the cluster.&lt;br /&gt;
&lt;br /&gt;
====Local Scratch Directories====&lt;br /&gt;
These file systems are not available over [[NFS]], and &#039;&#039;&#039;there are no backups or snapshots&#039;&#039;&#039; for them.&lt;br /&gt;
&lt;br /&gt;
* Each computational node that you can schedule compute jobs on has one or more local scratch directories.  These are always named &amp;lt;code&amp;gt;/scratch0&amp;lt;/code&amp;gt;, &amp;lt;code&amp;gt;/scratch1&amp;lt;/code&amp;gt;, etc.  These directories are local to each node; i.e., the &amp;lt;code&amp;gt;/scratch0&amp;lt;/code&amp;gt; directories on two different nodes are completely separate.&lt;br /&gt;
** These directories are almost always more performant than any other storage available to the job.  However, you must stage data into these directories at the start of your job and stage it back out before the job ends.&lt;br /&gt;
** These local scratch directories are cleaned by a tmpwatch job, scheduled to run once a month during our monthly maintenance windows, which &#039;&#039;&#039;deletes data that has not been accessed in 90 days&#039;&#039;&#039;.  Again, please make sure you secure any data you write to these directories before the end of your job.&lt;br /&gt;
* Gamma has invested in a 20TB NVMe scratch file system on &amp;lt;code&amp;gt;nexusgamma00.umiacs.umd.edu&amp;lt;/code&amp;gt; that is available as &amp;lt;code&amp;gt;/scratch1&amp;lt;/code&amp;gt;.  To utilize this space, you will need to copy data to and from it over SSH from a compute node.  To make this easier, you may want to set up [[SSH]] keys that will allow you to copy data without being prompted for passwords.&lt;br /&gt;
** The &amp;lt;code&amp;gt;/scratch1&amp;lt;/code&amp;gt; directory on &amp;lt;code&amp;gt;nexusgamma00.umiacs.umd.edu&amp;lt;/code&amp;gt; does not have a tmpwatch job, so files in this directory must be removed manually once they are no longer needed.&lt;br /&gt;
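For example, here is a minimal sketch of staging data over SSH from a compute node (the directory names are placeholders; adjust them to your own layout):&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
# One-time setup: generate a key and install it for password-less copies&lt;br /&gt;
$ ssh-keygen -t ed25519&lt;br /&gt;
$ ssh-copy-id nexusgamma00.umiacs.umd.edu&lt;br /&gt;
&lt;br /&gt;
# From a compute node: pull inputs in, push results back out&lt;br /&gt;
$ rsync -a nexusgamma00.umiacs.umd.edu:/scratch1/$USER/inputs/ /scratch0/$USER/inputs/&lt;br /&gt;
$ rsync -a /scratch0/$USER/results/ nexusgamma00.umiacs.umd.edu:/scratch1/$USER/results/&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;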
&lt;br /&gt;
===Datasets===&lt;br /&gt;
We have read-only dataset storage available at &amp;lt;code&amp;gt;/fs/gamma-datasets&amp;lt;/code&amp;gt;.  If there are datasets that you would like to see curated and available, please see [[Datasets | this page]].&lt;br /&gt;
&lt;br /&gt;
The list of GAMMA datasets we currently host can be viewed [https://info.umiacs.umd.edu/datasets/list/?q=GAMMA here].&lt;/div&gt;</summary>
		<author><name>Ekr597</name></author>
	</entry>
	<entry>
		<id>https://wiki.umiacs.umd.edu/umiacs/index.php?title=Datasets&amp;diff=11483</id>
		<title>Datasets</title>
		<link rel="alternate" type="text/html" href="https://wiki.umiacs.umd.edu/umiacs/index.php?title=Datasets&amp;diff=11483"/>
		<updated>2024-01-02T22:03:30Z</updated>

		<summary type="html">&lt;p&gt;Ekr597: Add GAMMA&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;UMIACS hosts a number of datasets in read-only mode on some shared filesystems used by our [[SLURM]] computing clusters. The motivation behind this is to provide publicly accessible datasets in a well-defined location in order to de-duplicate their use elsewhere, thus reducing overall storage usage.&lt;br /&gt;
&lt;br /&gt;
==Dataset Directories==&lt;br /&gt;
* [[Nexus/CML | CML]] (&amp;lt;code&amp;gt;/fs/cml-datasets&amp;lt;/code&amp;gt;)&lt;br /&gt;
** [https://info.umiacs.umd.edu/datasets/list/?q=CML List of datasets] -- faculty approver Tom Goldstein&lt;br /&gt;
* [[Nexus/GAMMA | GAMMA]] (&amp;lt;code&amp;gt;/fs/gamma-datasets&amp;lt;/code&amp;gt;)&lt;br /&gt;
** [https://info.umiacs.umd.edu/datasets/list/?q=GAMMA List of datasets] -- faculty approver Dinesh Manocha&lt;br /&gt;
* [[Nexus]] (&amp;lt;code&amp;gt;/fs/nexus-datasets&amp;lt;/code&amp;gt;)&lt;br /&gt;
** [https://info.umiacs.umd.edu/datasets/list/?q=Nexus List of datasets]&lt;br /&gt;
* [[Nexus/Vulcan | Vulcan]] (&amp;lt;code&amp;gt;/fs/vulcan-datasets&amp;lt;/code&amp;gt;)&lt;br /&gt;
** [https://info.umiacs.umd.edu/datasets/list/?q=Vulcan List of datasets] -- faculty approver Abhinav Shrivastava&lt;br /&gt;
&lt;br /&gt;
==Requesting a new dataset==&lt;br /&gt;
You can request a new dataset by [[HelpDesk | contacting staff]] with a link to the dataset&#039;s official download location. Torrents or other peer-to-peer re-hosting are not allowed unless sanctioned by the dataset owners. &lt;br /&gt;
* &#039;&#039;&#039;CML/GAMMA/Vulcan&#039;&#039;&#039;: If the uncompressed/final dataset size is over 100GB, staff will first contact the faculty approver for the cluster to ensure they approve of using the storage space. If the size is under 100GB, no faculty approval is required. Staff will then inspect the dataset and see if there are any terms and conditions that must be agreed to before downloading.&lt;br /&gt;
* &#039;&#039;&#039;Nexus&#039;&#039;&#039;: Please let staff know which faculty member&#039;s research you are working on that requires use of the dataset you are requesting. Then, if the uncompressed/final dataset size is over 50GB, the [https://www.umiacs.umd.edu/people/computing-staff Director of Computing Facilities] must first approve of using the storage space. If the size is under 50GB, no approval is required. Staff will then inspect the dataset and see if there are any terms and conditions that must be agreed to before downloading.&lt;br /&gt;
&lt;br /&gt;
If there are no terms and conditions, staff will download/extract the dataset, copy it to the appropriate location depending on what cluster you are requesting it for, and let you know when it is available for use on that cluster.&lt;br /&gt;
&lt;br /&gt;
If there are terms and conditions in excess of well-defined [https://creativecommons.org/about/cclicenses/ Creative Commons licenses], staff will first need to have the dataset&#039;s terms and conditions approved by [https://ora.umd.edu/ UMD&#039;s Office of Research Administration (ORA)]. Since the dataset will be hosted by UMIACS, as an institution, in a location accessible by all users of a cluster, not all of whom will have individually agreed to the terms and conditions, having a single person, staff member or not, agree to a set of terms and conditions is not sufficient to host it. After approval by ORA, staff will perform the same steps as mentioned above (download/extract, copy to appropriate location, and let you know when ready).&lt;br /&gt;
&lt;br /&gt;
==Dataset use==&lt;br /&gt;
All datasets are read-only to users. Any intermediate data generated from a dataset will need to be stored in a location other than the shared filesystem hosting the dataset and other datasets.&lt;br /&gt;
&lt;br /&gt;
Exceptions may be granted if there is a set of intermediate data generated from a dataset that you believe will be useful to a subset of a cluster&#039;s users. If you suspect this is the case for some of your generated data, please [[HelpDesk | contact staff]]. We will follow the above procedure and upon whatever approvals may be necessary, copy the intermediate data into the shared filesystem.&lt;/div&gt;</summary>
		<author><name>Ekr597</name></author>
	</entry>
	<entry>
		<id>https://wiki.umiacs.umd.edu/umiacs/index.php?title=Nexus/GAMMA&amp;diff=11482</id>
		<title>Nexus/GAMMA</title>
		<link rel="alternate" type="text/html" href="https://wiki.umiacs.umd.edu/umiacs/index.php?title=Nexus/GAMMA&amp;diff=11482"/>
		<updated>2024-01-02T21:56:15Z</updated>

		<summary type="html">&lt;p&gt;Ekr597: Added info about GAMMA&amp;#039;s network storage&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;The [https://gamma.umd.edu/ GAMMA] lab has a partition of GPU nodes available in the [[Nexus]]. Only GAMMA lab members are able to run non-interruptible jobs on these nodes.&lt;br /&gt;
&lt;br /&gt;
=Access=&lt;br /&gt;
You can always find out which submission hosts you have access to via the [[Nexus#Access]] page.  The GAMMA lab in particular has a special submission host with additional local storage available.&lt;br /&gt;
* &amp;lt;code&amp;gt;nexusgamma00.umiacs.umd.edu&amp;lt;/code&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Please do not run anything on the login node. Always allocate yourself machines on the compute nodes (see instructions below) to run any job.&lt;br /&gt;
&lt;br /&gt;
=Quality of Service=&lt;br /&gt;
GAMMA users have access to all of the [[Nexus#Quality_of_Service_.28QoS.29 | standard job QoSes]] in the &amp;lt;code&amp;gt;gamma&amp;lt;/code&amp;gt; partition using the &amp;lt;code&amp;gt;gamma&amp;lt;/code&amp;gt; account.&lt;br /&gt;
&lt;br /&gt;
The additional job QoSes for the GAMMA partition specifically are:&lt;br /&gt;
* &amp;lt;code&amp;gt;huge-long&amp;lt;/code&amp;gt;: Allows for longer jobs using higher overall resources.&lt;br /&gt;
&lt;br /&gt;
Please note that the partition has a &amp;lt;code&amp;gt;GrpTRES&amp;lt;/code&amp;gt; limit of 100% of the available cores/RAM on the partition-specific nodes in aggregate plus 50% of the available cores/RAM on legacy## nodes in aggregate, so your job may need to wait if all available cores/RAM (or GPUs) are in use.&lt;br /&gt;
&lt;br /&gt;
=Hardware=&lt;br /&gt;
{| class=&amp;quot;wikitable sortable&amp;quot;&lt;br /&gt;
! Nodenames&lt;br /&gt;
! Type&lt;br /&gt;
! Quantity&lt;br /&gt;
! CPUs&lt;br /&gt;
! Memory&lt;br /&gt;
! GPUs&lt;br /&gt;
|-&lt;br /&gt;
|gammagpu[00-04,06-09]&lt;br /&gt;
|A5000 GPU Node&lt;br /&gt;
|9&lt;br /&gt;
|32&lt;br /&gt;
|256GB&lt;br /&gt;
|8&lt;br /&gt;
|- &lt;br /&gt;
|gammagpu05&lt;br /&gt;
|A4000 GPU Node&lt;br /&gt;
|1&lt;br /&gt;
|32&lt;br /&gt;
|256GB&lt;br /&gt;
|8&lt;br /&gt;
|- class=&amp;quot;sortbottom&amp;quot;&lt;br /&gt;
|&lt;br /&gt;
!Total&lt;br /&gt;
|10&lt;br /&gt;
|320&lt;br /&gt;
|2560GB&lt;br /&gt;
|80&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
=Example=&lt;br /&gt;
From &amp;lt;code&amp;gt;nexusgamma00.umiacs.umd.edu&amp;lt;/code&amp;gt; you can run the following example to submit an interactive job.  Please note that you need to specify the &amp;lt;code&amp;gt;--account&amp;lt;/code&amp;gt;, &amp;lt;code&amp;gt;--partition&amp;lt;/code&amp;gt;, and &amp;lt;code&amp;gt;--qos&amp;lt;/code&amp;gt; options.  Please refer to our [[SLURM]] documentation for how to further customize your submissions, including making a batch submission.  The following command will allocate 8 GPUs for 2 days in an interactive session.  Change the parameters according to your needs.  For fair use of GPUs, we discourage use of &amp;lt;code&amp;gt;srun&amp;lt;/code&amp;gt; and encourage use of &amp;lt;code&amp;gt;sbatch&amp;lt;/code&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
$ srun --pty --gres=gpu:8 --account=gamma --partition=gamma --qos=huge-long bash&lt;br /&gt;
$ hostname&lt;br /&gt;
gammagpu01.umiacs.umd.edu&lt;br /&gt;
$ nvidia-smi -L&lt;br /&gt;
GPU 0: NVIDIA RTX A5000 (UUID: GPU-cdfb2e0c-d69f-354b-02f4-15161dc7fa66)&lt;br /&gt;
GPU 1: NVIDIA RTX A5000 (UUID: GPU-be53e7a1-b8fd-7089-3cac-7a2fbf4ec7dd)&lt;br /&gt;
GPU 2: NVIDIA RTX A5000 (UUID: GPU-774efbb1-d7ec-a0bb-e992-da9d1fa6b193)&lt;br /&gt;
GPU 3: NVIDIA RTX A5000 (UUID: GPU-d1692181-c7de-e273-5f95-53ad381614c3)&lt;br /&gt;
GPU 4: NVIDIA RTX A5000 (UUID: GPU-ba51fd6c-37bf-1b95-5f68-987c18a6292a)&lt;br /&gt;
GPU 5: NVIDIA RTX A5000 (UUID: GPU-c1224a2a-4a3b-ff16-0308-4f36205b9859)&lt;br /&gt;
GPU 6: NVIDIA RTX A5000 (UUID: GPU-8d20d6cd-abf5-2630-ab88-6bba438c55fe)&lt;br /&gt;
GPU 7: NVIDIA RTX A5000 (UUID: GPU-93170910-5d94-6da5-8a24-f561d7da1e2d)&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
You can also use &amp;lt;code&amp;gt;sbatch&amp;lt;/code&amp;gt; to submit your job.  Here are two examples of how to do that.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
$ sbatch --gres=gpu:8 --account=gamma --partition=gamma --qos=huge-long --time=1-23:00:00 script.sh&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
OR&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
$ sbatch script.sh&lt;br /&gt;
&lt;br /&gt;
# contents of script.sh&lt;br /&gt;
&lt;br /&gt;
#!/bin/bash&lt;br /&gt;
#SBATCH --gres=gpu:8&lt;br /&gt;
#SBATCH --account=gamma&lt;br /&gt;
#SBATCH --partition=gamma&lt;br /&gt;
#SBATCH --qos=huge-long&lt;br /&gt;
#SBATCH --time=1-23:00:00&lt;br /&gt;
&lt;br /&gt;
python your_file.py&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=Storage=&lt;br /&gt;
There are three types of user storage available in GAMMA:&lt;br /&gt;
* Home directories&lt;br /&gt;
* Project directories&lt;br /&gt;
* Scratch directories&lt;br /&gt;
&lt;br /&gt;
There is also read-only storage available for Dataset directories.&lt;br /&gt;
&lt;br /&gt;
GAMMA users can also request [[Nexus#Project_Allocations | Nexus project allocations]].&lt;br /&gt;
&lt;br /&gt;
===Home Directories===&lt;br /&gt;
Home directories are available from the Institute&#039;s [[NFShomes]] as &amp;lt;code&amp;gt;/nfshomes/USERNAME&amp;lt;/code&amp;gt;, where USERNAME is your UMIACS username.  These home directories have very limited storage (30GB, cannot be increased) and are intended for your personal files, configuration, and source code.  Your home directory is &#039;&#039;&#039;not&#039;&#039;&#039; intended for data sets or other large-scale data holdings.  Users are encouraged to utilize our [[GitLab]] infrastructure to host their code repositories.&lt;br /&gt;
&lt;br /&gt;
Your home directory data is fully protected: it has [[Snapshots | snapshots]] and is [[NightlyBackups | backed up nightly]].&lt;br /&gt;
&lt;br /&gt;
===Project Directories===&lt;br /&gt;
You can request project-based allocations with approval from a GAMMA faculty member.&lt;br /&gt;
&lt;br /&gt;
To request an allocation, please [[HelpDesk | contact staff]] and include the faculty member(s) that approved the project in the conversation.  Please include the following details:&lt;br /&gt;
* Project Name (short)&lt;br /&gt;
* Description&lt;br /&gt;
* Size (1TB, 2TB, etc.)&lt;br /&gt;
* Length in days (30 days, 90 days, etc.)&lt;br /&gt;
* Other user(s) that need to access the allocation, if any&lt;br /&gt;
&lt;br /&gt;
These allocations will be available from &#039;&#039;&#039;/fs/gamma-projects&#039;&#039;&#039; under a name that you provide when you request the allocation.  Near the end of the allocation period, staff will contact you and ask if you would like to renew the allocation (requires re-approval from a GAMMA faculty member).  If you are no longer in need of the storage allocation, you will need to relocate all desired data within two weeks of the end of the allocation period.  Staff will then remove the allocation.  If you do not respond to staff&#039;s request by the end of the allocation period, staff will make the allocation temporarily inaccessible. If you do respond asking for renewal but the original faculty approver does not respond within two weeks of the end of the allocation period, staff will also make the allocation temporarily inaccessible. If one month from the end of the allocation period is reached without both you and the faculty approver responding, staff will remove the allocation.&lt;br /&gt;
&lt;br /&gt;
This data is backed up nightly.&lt;br /&gt;
&lt;br /&gt;
===Scratch Directories===&lt;br /&gt;
Scratch data has no data protection: there are no snapshots, and the data is not backed up.&lt;br /&gt;
There are two types of scratch directories:&lt;br /&gt;
* Network scratch directory&lt;br /&gt;
* Local scratch directories&lt;br /&gt;
&lt;br /&gt;
====Network Scratch Directory====&lt;br /&gt;
You are allocated 100GB of scratch space via NFS from &amp;lt;code&amp;gt;/gammascratch/$username&amp;lt;/code&amp;gt;.  &#039;&#039;&#039;It is not backed up or protected in any way.&#039;&#039;&#039;  &lt;br /&gt;
&lt;br /&gt;
This directory is &#039;&#039;&#039;automounted&#039;&#039;&#039;, so you may not see your directory when you run &amp;lt;code&amp;gt;ls /gammascratch&amp;lt;/code&amp;gt;; it will be mounted when you &amp;lt;code&amp;gt;cd&amp;lt;/code&amp;gt; into &amp;lt;code&amp;gt;/gammascratch/$username&amp;lt;/code&amp;gt;.&lt;br /&gt;
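For example (assuming &amp;lt;code&amp;gt;$USER&amp;lt;/code&amp;gt; expands to your UMIACS username in your shell):&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
$ ls /gammascratch          # your directory may not be listed yet&lt;br /&gt;
$ cd /gammascratch/$USER    # accessing the path triggers the automount&lt;br /&gt;
$ df -h .                   # the NFS mount is now active&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;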
&lt;br /&gt;
You may request a permanent increase of up to 200GB total space without any faculty approval by [[HelpDesk | contacting staff]].  If you need space beyond 200GB, you will need faculty approval. &lt;br /&gt;
&lt;br /&gt;
This file system is available on all submission, data management, and computational nodes within the cluster.&lt;br /&gt;
&lt;br /&gt;
====Local Scratch Directories====&lt;br /&gt;
These file systems are not available over [[NFS]], and &#039;&#039;&#039;there are no backups or snapshots&#039;&#039;&#039; for them.&lt;br /&gt;
&lt;br /&gt;
* Each computational node that you can schedule compute jobs on has one or more local scratch directories.  These are always named &amp;lt;code&amp;gt;/scratch0&amp;lt;/code&amp;gt;, &amp;lt;code&amp;gt;/scratch1&amp;lt;/code&amp;gt;, etc.  These directories are local to each node; i.e., the &amp;lt;code&amp;gt;/scratch0&amp;lt;/code&amp;gt; directories on two different nodes are completely separate.&lt;br /&gt;
** These directories are almost always more performant than any other storage available to the job.  However, you must stage data into these directories at the start of your job and stage it back out before the job ends.&lt;br /&gt;
** These local scratch directories are cleaned by a tmpwatch job, scheduled to run once a month during our monthly maintenance windows, which &#039;&#039;&#039;deletes data that has not been accessed in 90 days&#039;&#039;&#039;.  Again, please make sure you secure any data you write to these directories before the end of your job.&lt;br /&gt;
* Gamma has invested in a 20TB NVMe scratch file system on &amp;lt;code&amp;gt;nexusgamma00.umiacs.umd.edu&amp;lt;/code&amp;gt; that is available as &amp;lt;code&amp;gt;/scratch1&amp;lt;/code&amp;gt;.  To utilize this space, you will need to copy data to and from it over SSH from a compute node.  To make this easier, you may want to set up [[SSH]] keys that will allow you to copy data without being prompted for passwords.&lt;br /&gt;
** The &amp;lt;code&amp;gt;/scratch1&amp;lt;/code&amp;gt; directory on &amp;lt;code&amp;gt;nexusgamma00.umiacs.umd.edu&amp;lt;/code&amp;gt; does not have a tmpwatch job, so files in this directory must be removed manually once they are no longer needed.&lt;br /&gt;
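For example, here is a minimal sketch of staging data over SSH from a compute node (the directory names are placeholders; adjust them to your own layout):&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
# One-time setup: generate a key and install it for password-less copies&lt;br /&gt;
$ ssh-keygen -t ed25519&lt;br /&gt;
$ ssh-copy-id nexusgamma00.umiacs.umd.edu&lt;br /&gt;
&lt;br /&gt;
# From a compute node: pull inputs in, push results back out&lt;br /&gt;
$ rsync -a nexusgamma00.umiacs.umd.edu:/scratch1/$USER/inputs/ /scratch0/$USER/inputs/&lt;br /&gt;
$ rsync -a /scratch0/$USER/results/ nexusgamma00.umiacs.umd.edu:/scratch1/$USER/results/&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;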
&lt;br /&gt;
===Datasets===&lt;br /&gt;
We have read-only dataset storage available at &amp;lt;code&amp;gt;/fs/gamma-datasets&amp;lt;/code&amp;gt;.  If there are datasets that you would like to see curated and available, please see [[Datasets | this page]].&lt;br /&gt;
&lt;br /&gt;
The list of GAMMA datasets we currently host can be viewed [https://info.umiacs.umd.edu/datasets/list/?q=GAMMA here].&lt;/div&gt;</summary>
		<author><name>Ekr597</name></author>
	</entry>
	<entry>
		<id>https://wiki.umiacs.umd.edu/umiacs/index.php?title=Datasets&amp;diff=11392</id>
		<title>Datasets</title>
		<link rel="alternate" type="text/html" href="https://wiki.umiacs.umd.edu/umiacs/index.php?title=Datasets&amp;diff=11392"/>
		<updated>2023-10-20T18:10:28Z</updated>

		<summary type="html">&lt;p&gt;Ekr597: Updated the links for CML and Vulcan&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;UMIACS hosts a number of datasets in read-only mode on some shared filesystems used by some of our [[SLURM]] computing clusters. The motivation behind this is to provide commonly-used datasets in a well-defined location in order to de-duplicate their use elsewhere, thus reducing overall storage usage.&lt;br /&gt;
&lt;br /&gt;
==Dataset Directories==&lt;br /&gt;
* [[Nexus/CML | CML]] (&amp;lt;code&amp;gt;/fs/cml-datasets&amp;lt;/code&amp;gt;)&lt;br /&gt;
** [[Nexus/CML#Datasets | List of datasets]] -- faculty approver Tom Goldstein&lt;br /&gt;
* [[Nexus]] (&amp;lt;code&amp;gt;/fs/nexus-datasets&amp;lt;/code&amp;gt;)&lt;br /&gt;
** [[Nexus#Datasets | List of datasets]] &lt;br /&gt;
* [[Nexus/Vulcan | Vulcan]] (&amp;lt;code&amp;gt;/fs/vulcan-datasets&amp;lt;/code&amp;gt;)&lt;br /&gt;
** [[Nexus/Vulcan#Datasets | List of datasets]] -- faculty approver Abhinav Shrivastava&lt;br /&gt;
&lt;br /&gt;
==Requesting a new dataset==&lt;br /&gt;
You can request a new dataset by [[HelpDesk | contacting staff]] with a link to the dataset&#039;s official download location. Torrents or other peer-to-peer re-hosting are not allowed unless sanctioned by the dataset owners. &lt;br /&gt;
* &#039;&#039;&#039;CML/Vulcan&#039;&#039;&#039;: If the uncompressed/final dataset size is over 100GB, staff will first contact the faculty approver for the cluster to ensure they approve of using the storage space. If the size is under 100GB, no faculty approval is required. Staff will then inspect the dataset and see if there are any terms and conditions that must be agreed to before downloading.&lt;br /&gt;
* &#039;&#039;&#039;Nexus&#039;&#039;&#039;: Please let staff know which faculty member&#039;s research you are working on that requires use of the dataset you are requesting. Then, if the uncompressed/final dataset size is over 50GB, the [https://www.umiacs.umd.edu/people/computing-staff Director of Computing Facilities] must first approve of using the storage space. If the size is under 50GB, no approval is required. Staff will then inspect the dataset and see if there are any terms and conditions that must be agreed to before downloading.&lt;br /&gt;
&lt;br /&gt;
If there are no terms and conditions, staff will download/extract the dataset, copy it to the appropriate location depending on what cluster you are requesting it for, and let you know when it is available for use on that cluster.&lt;br /&gt;
&lt;br /&gt;
If there are terms and conditions in excess of well-defined [https://creativecommons.org/about/cclicenses/ Creative Commons licenses], staff will first need to have the dataset&#039;s terms and conditions approved by [https://ora.umd.edu/ UMD&#039;s Office of Research Administration (ORA)]. Since the dataset will be hosted by UMIACS (as an institution) in a location accessible by all users of a cluster, not all of whom will have individually agreed to the terms and conditions, having a single person (staff member or not) agree to a set of terms and conditions is not sufficient to host it. After approval, staff will perform the same steps as mentioned above (download/extract, copy to appropriate location, and let you know when ready).&lt;br /&gt;
&lt;br /&gt;
==Dataset use==&lt;br /&gt;
All datasets are read-only to users. Any intermediate data generated from a dataset will need to be stored in a location other than the shared filesystem hosting the dataset and other datasets.&lt;br /&gt;
&lt;br /&gt;
Exceptions may be granted if there is a set of intermediate data generated from a dataset that you believe will be useful to a subset of a cluster&#039;s users. If you suspect this is the case for some of your generated data, please [[HelpDesk | contact staff]]. We will follow the above procedure and, once any necessary approvals are obtained, copy the intermediate data into the shared filesystem.&lt;/div&gt;</summary>
		<author><name>Ekr597</name></author>
	</entry>
	<entry>
		<id>https://wiki.umiacs.umd.edu/umiacs/index.php?title=ClassAccounts&amp;diff=11381</id>
		<title>ClassAccounts</title>
		<link rel="alternate" type="text/html" href="https://wiki.umiacs.umd.edu/umiacs/index.php?title=ClassAccounts&amp;diff=11381"/>
		<updated>2023-10-16T14:49:38Z</updated>

		<summary type="html">&lt;p&gt;Ekr597: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;==Overview==&lt;br /&gt;
UMIACS Class Accounts are currently intended to support classes for all of UMIACS/CSD via the [[Nexus]] cluster.  All new class accounts are serviced solely through this cluster.  Faculty may request that a class be supported by following the instructions [[ClassAccounts/Manage | here]].&lt;br /&gt;
&lt;br /&gt;
==Getting an account==&lt;br /&gt;
Your TA will request an account for you. Once this is done, you will be notified by email that you have an account to redeem.  If you have not received an email, please contact your TA. &#039;&#039;&#039;You must redeem the account within 7 days or else the redemption token will expire.&#039;&#039;&#039;  If your redemption token does expire, please contact your TA to have it renewed.&lt;br /&gt;
&lt;br /&gt;
Once you do redeem your account, you will need to wait until you get a confirmation email that your account has been installed.  This is typically done once a day on days that the University is open for business.&lt;br /&gt;
&lt;br /&gt;
===Registering for Duo===&lt;br /&gt;
UMIACS requires that all Class accounts be registered for MFA (multi-factor authentication) under our [[Duo]] instance (note that this is different from UMD&#039;s general Duo instance). &#039;&#039;&#039;You will not be able to log onto the class submission host until you register.&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
If you see the following error in your SSH client, you have not yet enrolled/registered in Duo.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
Access is not allowed because you are not enrolled in Duo. Please contact your organization&#039;s IT help desk.&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
In order to register, [https://intranet.umiacs.umd.edu/directory visit our directory app] and log in with your Class username and password. You will then receive a prompt to enroll in Duo. For assistance in enrollment, you can visit our [[Duo | Duo help page]].&lt;br /&gt;
&lt;br /&gt;
Once notified that your account has been installed and you have registered in our Duo instance, you can access the following class submission host(s) using [[SSH]] with your assigned username and your chosen password:&lt;br /&gt;
* &amp;lt;code&amp;gt;nexusclass00.umiacs.umd.edu&amp;lt;/code&amp;gt; or &amp;lt;code&amp;gt;nexusclass01.umiacs.umd.edu&amp;lt;/code&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==Cleaning up your account before the end of the semester==&lt;br /&gt;
Class accounts for a given semester may be archived and deleted after that semester ends, as early as the following dates:&lt;br /&gt;
* Winter semesters: February 1st of same year&lt;br /&gt;
* Spring semesters: June 1st of same year&lt;br /&gt;
* Summer semesters: September 1st of same year&lt;br /&gt;
* Fall semesters: January 1st of next year&lt;br /&gt;
&lt;br /&gt;
It is your responsibility to ensure you have backed up anything you want to keep from your class account&#039;s personal or group storage (see the sections below) prior to the relevant date.&lt;br /&gt;
&lt;br /&gt;
==Personal Storage==&lt;br /&gt;
Your home directory has a quota of 30GB and is located at:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
/fs/classhomes/&amp;lt;semester&amp;gt;&amp;lt;year&amp;gt;/&amp;lt;coursecode&amp;gt;/&amp;lt;username&amp;gt;&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
where:&lt;br /&gt;
* &amp;lt;code&amp;gt;&amp;lt;semester&amp;gt;&amp;lt;/code&amp;gt; is one of &amp;quot;spring&amp;quot;, &amp;quot;summer&amp;quot;, &amp;quot;fall&amp;quot;, or &amp;quot;winter&amp;quot;&lt;br /&gt;
* &amp;lt;code&amp;gt;&amp;lt;year&amp;gt;&amp;lt;/code&amp;gt; is the current year, e.g., &amp;quot;2021&amp;quot;&lt;br /&gt;
* &amp;lt;code&amp;gt;&amp;lt;coursecode&amp;gt;&amp;lt;/code&amp;gt; is the class&#039;s course code as listed in UMD&#039;s [https://app.testudo.umd.edu/soc/ Schedule of Classes], in all lowercase, e.g., &amp;quot;cmsc999z&amp;quot;&lt;br /&gt;
* &amp;lt;code&amp;gt;&amp;lt;username&amp;gt;&amp;lt;/code&amp;gt; is the username mentioned in the email you received to redeem the account, e.g., &amp;quot;c999z000&amp;quot;&lt;br /&gt;
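The path layout above can be illustrated by filling in the placeholders with example values (a sketch; the values shown are the examples from this page, not your actual account):&lt;br /&gt;

```shell
# Build the home directory path from example component values.
semester=fall
year=2021
coursecode=cmsc999z
username=c999z000
echo "/fs/classhomes/${semester}${year}/${coursecode}/${username}"
# /fs/classhomes/fall2021/cmsc999z/c999z000
```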
&lt;br /&gt;
You can request up to another 100GB of personal storage if you would like by having your TA [[HelpDesk | contact staff]]. This storage will be located at&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
/fs/class-projects/&amp;lt;semester&amp;gt;&amp;lt;year&amp;gt;/&amp;lt;coursecode&amp;gt;/&amp;lt;username&amp;gt;&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==Group Storage==&lt;br /&gt;
You can also request group storage if you would like by having your TA [[HelpDesk | contact staff]] to specify the usernames of the accounts that should be in the group. Only other class accounts in the same class can be added to the group. The quota will be 100GB multiplied by the number of accounts in the group and will be located at&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
/fs/class-projects/&amp;lt;semester&amp;gt;&amp;lt;year&amp;gt;/&amp;lt;coursecode&amp;gt;/&amp;lt;groupname&amp;gt;&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
where &amp;lt;code&amp;gt;&amp;lt;groupname&amp;gt;&amp;lt;/code&amp;gt; is composed of:&lt;br /&gt;
* the abbreviated course code as used in the username e.g., &amp;quot;c999z&amp;quot;&lt;br /&gt;
* the character &amp;quot;g&amp;quot;&lt;br /&gt;
* the number of the group (starting at 0 for the first group requested for the class), zero-padded so that the total group name is 8 characters long&lt;br /&gt;
&lt;br /&gt;
e.g., &amp;quot;c999zg00&amp;quot;.&lt;br /&gt;
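The naming scheme above can be sketched in shell (the course code and group number here are the example values from this page):&lt;br /&gt;

```shell
# Derive a group name: abbreviated course code + "g" + zero-padded group
# number, padded so the full name is 8 characters long.
code=c999z
group=0
pad=$((8 - ${#code} - 1))   # digits needed to reach 8 characters total
printf "%sg%0${pad}d\n" "$code" "$group"
# c999zg00
```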
&lt;br /&gt;
==Cluster Usage==&lt;br /&gt;
&#039;&#039;&#039;You may not run computational jobs on any submission host.&#039;&#039;&#039;  You must schedule your jobs with the [[SLURM]] workload manager.  You can also find out more with the public documentation for the [https://slurm.schedmd.com/quickstart.html SLURM Workload Manager].&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Any questions or issues with the cluster must be first made through your TA.&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
Class accounts only have access to the following submission parameters in SLURM.  You may be required to explicitly set each of these when submitting jobs.&lt;br /&gt;
&lt;br /&gt;
* Partition - &amp;lt;code&amp;gt;class&amp;lt;/code&amp;gt;&lt;br /&gt;
* Account - &amp;lt;code&amp;gt;class&amp;lt;/code&amp;gt;&lt;br /&gt;
* QoS - &amp;lt;code&amp;gt;default&amp;lt;/code&amp;gt;, &amp;lt;code&amp;gt;medium&amp;lt;/code&amp;gt;, and &amp;lt;code&amp;gt;high&amp;lt;/code&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Please note that you will be restricted to 32 total cores, 256GB total RAM, and 4 total GPUs across all jobs you have running at once.&lt;br /&gt;
&lt;br /&gt;
===Example===&lt;br /&gt;
Here is a basic example that schedules an interactive job running bash with a single GPU in the partition &amp;lt;code&amp;gt;class&amp;lt;/code&amp;gt;, with the account &amp;lt;code&amp;gt;class&amp;lt;/code&amp;gt;, and with the QoS &amp;lt;code&amp;gt;default&amp;lt;/code&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
$ srun --pty --partition=class --account=class --qos=default --gres=gpu:1 bash&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
bash-4.4$ hostname&lt;br /&gt;
tron14.umiacs.umd.edu&lt;br /&gt;
bash-4.4$ nvidia-smi -L&lt;br /&gt;
GPU 0: NVIDIA RTX A4000 (UUID: GPU-55f2d3b7-9162-8b02-50de-476a012c626c)&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
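The srun example above runs interactively; the same parameters can go in a batch script submitted with sbatch. A minimal sketch follows; the resource values are illustrative and simply need to stay within the per-user limits noted above.&lt;br /&gt;

```shell
#!/bin/bash
# Hypothetical batch script for the class partition (illustrative values).
#SBATCH --partition=class
#SBATCH --account=class
#SBATCH --qos=default
#SBATCH --cpus-per-task=4
#SBATCH --mem=16gb
#SBATCH --gres=gpu:1
#SBATCH --time=01:00:00

hostname
nvidia-smi -L
```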
===Available Nodes===&lt;br /&gt;
You can list the available nodes and their current state with the &amp;lt;code&amp;gt;show_nodes -p class&amp;lt;/code&amp;gt; command.  This list of nodes is not completely static as nodes may be pulled out of service to repair/replace GPUs or other components.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
$ show_nodes -p class&lt;br /&gt;
NODELIST             CPUS       MEMORY     AVAIL_FEATURES            GRES                             STATE      PARTITION&lt;br /&gt;
tron06               16         128520     rhel8,AMD,EPYC-7302P      gpu:rtxa4000:4                   idle       class&lt;br /&gt;
tron07               16         128520     rhel8,AMD,EPYC-7302P      gpu:rtxa4000:4                   idle       class&lt;br /&gt;
tron08               16         128520     rhel8,AMD,EPYC-7302P      gpu:rtxa4000:4                   idle       class&lt;br /&gt;
tron09               16         128520     rhel8,AMD,EPYC-7302P      gpu:rtxa4000:4                   idle       class&lt;br /&gt;
tron10               16         128524     rhel8,Zen,EPYC-7313P      gpu:rtxa4000:4                   idle       class&lt;br /&gt;
tron11               16         128524     rhel8,Zen,EPYC-7313P      gpu:rtxa4000:4                   idle       class&lt;br /&gt;
tron12               16         128525     rhel8,AMD,EPYC-7302P      gpu:rtxa4000:4                   idle       class&lt;br /&gt;
tron13               16         128520     rhel8,AMD,EPYC-7302P      gpu:rtxa4000:4                   idle       class&lt;br /&gt;
tron14               16         128520     rhel8,AMD,EPYC-7302P      gpu:rtxa4000:4                   idle       class&lt;br /&gt;
tron15               16         128520     rhel8,AMD,EPYC-7302P      gpu:rtxa4000:4                   idle       class&lt;br /&gt;
tron16               16         128524     rhel8,Zen,EPYC-7313P      gpu:rtxa4000:4                   idle       class&lt;br /&gt;
tron17               16         128524     rhel8,Zen,EPYC-7313P      gpu:rtxa4000:4                   idle       class&lt;br /&gt;
tron18               16         128524     rhel8,Zen,EPYC-7313P      gpu:rtxa4000:4                   idle       class&lt;br /&gt;
tron19               16         128524     rhel8,Zen,EPYC-7313P      gpu:rtxa4000:4                   idle       class&lt;br /&gt;
tron20               16         128524     rhel8,Zen,EPYC-7313P      gpu:rtxa4000:4                   idle       class&lt;br /&gt;
tron21               16         128525     rhel8,AMD,EPYC-7302P      gpu:rtxa4000:4                   idle       class&lt;br /&gt;
tron22               16         128525     rhel8,AMD,EPYC-7302       gpu:rtxa4000:4                   idle       class&lt;br /&gt;
tron23               16         128525     rhel8,AMD,EPYC-7302       gpu:rtxa4000:4                   idle       class&lt;br /&gt;
tron24               16         128525     rhel8,AMD,EPYC-7302       gpu:rtxa4000:4                   idle       class&lt;br /&gt;
tron25               16         128525     rhel8,AMD,EPYC-7302       gpu:rtxa4000:4                   idle       class&lt;br /&gt;
tron26               16         128525     rhel8,AMD,EPYC-7302       gpu:rtxa4000:4                   idle       class&lt;br /&gt;
tron27               16         128521     rhel8,AMD,EPYC-7302       gpu:rtxa4000:4                   idle       class&lt;br /&gt;
tron28               16         128525     rhel8,AMD,EPYC-7302       gpu:rtxa4000:4                   idle       class&lt;br /&gt;
tron29               16         128525     rhel8,AMD,EPYC-7302       gpu:rtxa4000:4                   idle       class&lt;br /&gt;
tron30               16         128521     rhel8,AMD,EPYC-7302       gpu:rtxa4000:4                   idle       class&lt;br /&gt;
tron31               16         128521     rhel8,AMD,EPYC-7302       gpu:rtxa4000:4                   idle       class&lt;br /&gt;
tron32               16         128525     rhel8,AMD,EPYC-7302       gpu:rtxa4000:4                   idle       class&lt;br /&gt;
tron33               16         128521     rhel8,AMD,EPYC-7302       gpu:rtxa4000:4                   idle       class&lt;br /&gt;
tron34               16         128524     rhel8,Zen,EPYC-7313P      gpu:rtxa4000:4                   idle       class&lt;br /&gt;
tron35               16         128521     rhel8,AMD,EPYC-7302       gpu:rtxa4000:4                   idle       class&lt;br /&gt;
tron36               16         128525     rhel8,AMD,EPYC-7302       gpu:rtxa4000:4                   idle       class&lt;br /&gt;
tron37               16         128521     rhel8,AMD,EPYC-7302       gpu:rtxa4000:4                   idle       class&lt;br /&gt;
tron38               16         128525     rhel8,AMD,EPYC-7302       gpu:rtxa4000:4                   idle       class&lt;br /&gt;
tron39               16         128525     rhel8,AMD,EPYC-7302       gpu:rtxa4000:4                   idle       class&lt;br /&gt;
tron40               16         128525     rhel8,AMD,EPYC-7302       gpu:rtxa4000:4                   idle       class&lt;br /&gt;
tron41               16         128525     rhel8,AMD,EPYC-7302       gpu:rtxa4000:4                   idle       class&lt;br /&gt;
tron42               16         128525     rhel8,AMD,EPYC-7302       gpu:rtxa4000:4                   idle       class&lt;br /&gt;
tron43               16         128525     rhel8,AMD,EPYC-7302       gpu:rtxa4000:4                   idle       class&lt;br /&gt;
tron44               16         128525     rhel8,AMD,EPYC-7302       gpu:rtxa4000:4                   idle       class&lt;br /&gt;
tron45               16         128525     rhel8,AMD,EPYC-7302       gpu:rtxa4000:4                   idle       class&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
You can also find more granular information about an individual node with the &amp;lt;code&amp;gt;scontrol show node&amp;lt;/code&amp;gt; command.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
$ scontrol show node tron27&lt;br /&gt;
NodeName=tron27 Arch=x86_64 CoresPerSocket=16&lt;br /&gt;
   CPUAlloc=0 CPUTot=16 CPULoad=0.00&lt;br /&gt;
   AvailableFeatures=rhel8,AMD,EPYC-7302&lt;br /&gt;
   ActiveFeatures=rhel8,AMD,EPYC-7302&lt;br /&gt;
   Gres=gpu:rtxa4000:4&lt;br /&gt;
   NodeAddr=tron27 NodeHostName=tron27 Version=21.08.8-2&lt;br /&gt;
   OS=Linux 4.18.0-372.19.1.el8_6.x86_64 #1 SMP Mon Jul 18 11:14:02 EDT 2022&lt;br /&gt;
   RealMemory=128521 AllocMem=0 FreeMem=125650 Sockets=1 Boards=1&lt;br /&gt;
   State=IDLE ThreadsPerCore=1 TmpDisk=0 Weight=10 Owner=N/A MCS_label=N/A&lt;br /&gt;
   Partitions=class,scavenger,tron&lt;br /&gt;
   BootTime=2022-08-18T17:34:44 SlurmdStartTime=2022-08-19T13:10:47&lt;br /&gt;
   LastBusyTime=2022-08-22T11:20:18&lt;br /&gt;
   CfgTRES=cpu=16,mem=128521M,billing=173,gres/gpu=4,gres/gpu:rtxa4000=4&lt;br /&gt;
   AllocTRES=&lt;br /&gt;
   CapWatts=n/a&lt;br /&gt;
   CurrentWatts=0 AveWatts=0&lt;br /&gt;
   ExtSensorsJoules=n/s ExtSensorsWatts=0 ExtSensorsTemp=n/s&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;/div&gt;</summary>
		<author><name>Ekr597</name></author>
	</entry>
	<entry>
		<id>https://wiki.umiacs.umd.edu/umiacs/index.php?title=Gurobi&amp;diff=11259</id>
		<title>Gurobi</title>
		<link rel="alternate" type="text/html" href="https://wiki.umiacs.umd.edu/umiacs/index.php?title=Gurobi&amp;diff=11259"/>
		<updated>2023-08-29T16:53:23Z</updated>

		<summary type="html">&lt;p&gt;Ekr597: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;Gurobi Optimizer is a suite of solvers for mathematical programming. It can be accessed through our module tree with the command &amp;lt;code&amp;gt;module add gurobi&amp;lt;/code&amp;gt;. More information is available on our [[Modules | Modules page]].&lt;br /&gt;
&lt;br /&gt;
Documentation can be found at https://www.gurobi.com/documentation/&lt;br /&gt;
&lt;br /&gt;
==Error 10009==&lt;br /&gt;
Gurobi will not work on the [[Nexus]] submission nodes since they have a public IP address and will give the error message &amp;quot;Error 10009: Server must be on the same subnet&amp;quot;. &lt;br /&gt;
&lt;br /&gt;
If you encounter this error, try running the command in a [[SLURM]] job. If you&#039;re still having issues, please contact the [[HelpDesk]].&lt;/div&gt;</summary>
		<author><name>Ekr597</name></author>
	</entry>
	<entry>
		<id>https://wiki.umiacs.umd.edu/umiacs/index.php?title=OBJ&amp;diff=11015</id>
		<title>OBJ</title>
		<link rel="alternate" type="text/html" href="https://wiki.umiacs.umd.edu/umiacs/index.php?title=OBJ&amp;diff=11015"/>
		<updated>2023-06-12T14:20:41Z</updated>

		<summary type="html">&lt;p&gt;Ekr597: Increased faculty allocation to 10TB&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;= UMIACS Object Store =&lt;br /&gt;
An object store is a web-based storage solution focused on reliability, scalability and security. It is best suited for public content storage/distribution, archiving data or secure data sharing between users. Our Object Storage can be used through the [https://obj.umiacs.umd.edu/obj web interface], the command line [[UMobj]] utilities, third-party graphical [[S3Clients | clients]], and even programmatically using many popular programming languages.  We support a subset of the Amazon Simple Storage Services [http://docs.aws.amazon.com/AmazonS3/latest/API/Welcome.html (S3) API], built around a technology called [http://ceph.com/ Ceph].&lt;br /&gt;
&lt;br /&gt;
= Terminology =&lt;br /&gt;
S3-like storage thinks in terms of buckets and keys. Keys are analogous to files. A bucket is simply a container for a set of keys. There is no actual hierarchy inside a bucket, but the standard UNIX path separator, a forward slash (/), in a key name is interpreted by many clients (including this web site and our umobj utilities) as a directory delimiter. This allows you to copy data from your local filesystems to your buckets through umobj or third-party clients. You may specify who has what types of access to your buckets via Access Control Lists (ACLs) at the bucket level or the individual key level.&lt;br /&gt;
&lt;br /&gt;
Your data is protected from individual machine failure via replication within the cluster. All data is checksummed in accordance with the Amazon S3 protocol to ensure that data in transit is valid before it is accepted by the cluster. However, there are no backups or snapshots of this data in the cluster, so &#039;&#039;&#039;if a user deletes a key or bucket in the object store, there is no way to restore that information&#039;&#039;&#039;.&lt;br /&gt;
&lt;br /&gt;
= Getting Started =&lt;br /&gt;
UMIACS users are allocated 50GB of storage.  Faculty are allocated 10TB. To get started, [https://obj.umiacs.umd.edu/obj log in] and you will be redirected to the initial help page.  You can also find the link from our https://intranet.umiacs.umd.edu site as &amp;quot;OBJbox Object Store&amp;quot;.&lt;br /&gt;
&lt;br /&gt;
= Buckets =&lt;br /&gt;
You can create and browse your buckets (containers that hold data) by visiting your [https://obj.umiacs.umd.edu/obj/buckets/ buckets] page. You can also set bucket-level Access Control Lists (ACLs) from this page. Bucket-level ACLs are implicitly inherited by all keys within the bucket. However, individual keys can have additional specific ACLs applied for more granular control.&lt;br /&gt;
&lt;br /&gt;
Bucket names must be unique. When you create a bucket, you will be notified if the name is already taken.&lt;br /&gt;
&lt;br /&gt;
= Keys (files) =&lt;br /&gt;
After selecting a bucket, you will be able to create folders and upload files within that bucket. Listed files can be downloaded, deleted, or assigned a specific ACL by the key owner/creator.&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Please note: Local file system ownership and permissions, and special files (such as symlinks), cannot be represented in the object store. If you are storing data in the object store and need these to be faithfully maintained, we highly suggest using a local archive tool (tar, zip, etc.) to collect the data and then uploading the resulting archive file(s).&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
= Hosting a Website in your Bucket =&lt;br /&gt;
Please visit [[OBJ/WebHosting]] for more information.&lt;br /&gt;
&lt;br /&gt;
= Deleting Keys (files) =&lt;br /&gt;
Within the web interface you can delete files one by one. If you want to remove many files at once, you will need to use a different client as described below.&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;This is dangerous as there are no backups of files in the object store. Be careful to only delete the data you intend to delete.&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
= Clients =&lt;br /&gt;
There are several clients that can be used (sometimes with a limited set of features) on your desktop to gain access to the Object Store. All supported UMIACS systems have a copy of our [https://gitlab.umiacs.umd.edu/staff/umobj/blob/master/README.md#umobj umobj] utilities which provide command line access to the object store. We also have an article in our wiki on [[S3Clients | 3rd party clients]] that lists and explains the details. These clients need to be configured with your Access and Secret Keys as described below.&lt;br /&gt;
&lt;br /&gt;
= Access Key and Secret Key =&lt;br /&gt;
Each user has one or more pairs of Access Keys and Secret Keys that serve as credentials so your password is not exposed when using the object store. These can be obtained by clicking on your [https://obj.umiacs.umd.edu/obj/user/ username] in the upper right-hand corner. You&#039;ll use these to identify and authenticate yourself to the Object Store.&lt;br /&gt;
&lt;br /&gt;
When using the [https://gitlab.umiacs.umd.edu/staff/umobj/blob/master/README.md#umobj umobj] utilities, you will need to make sure you have added these credentials to your local shell initialization files. There are links to files that have these automatically generated for the 3 most popular UNIX shell families (bash/sh, csh/tcsh, and zsh). Please make sure that whatever file(s) you copy these credentials into cannot be read by other users (e.g., &amp;lt;code&amp;gt;chmod 600 filename&amp;lt;/code&amp;gt;).&lt;br /&gt;
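As a sketch of the pattern above: write the credentials to a file, restrict it to your user, then source it. The variable names, file name, and key values here are placeholders; use the shell-specific file generated for you from the credentials page instead.&lt;br /&gt;

```shell
# Hypothetical credentials file; names and values are placeholders only.
echo 'export AWS_ACCESS_KEY_ID=EXAMPLEACCESSKEY'      > "$HOME/.objstore-creds.sh"
echo 'export AWS_SECRET_ACCESS_KEY=EXAMPLESECRETKEY' >> "$HOME/.objstore-creds.sh"
chmod 600 "$HOME/.objstore-creds.sh"   # readable by you alone
. "$HOME/.objstore-creds.sh"           # load the keys into this shell
```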
&lt;br /&gt;
&#039;&#039;&#039;Note:&#039;&#039;&#039; Each Access Key and Secret Key are specific to a particular object store, so if you are accessing multiple object stores you may want to write the credentials for each to separate files and then source each file when you want to use the associated object store. Please [[HelpDesk | contact staff]] if you have any questions.&lt;br /&gt;
&lt;br /&gt;
= Lab Groups =&lt;br /&gt;
Lab Groups allow a group of users to share data while avoiding the need for complex ACLs by maintaining group ownership. Designated Lab Group managers can grant granular access (read, write, full control, manager) to buckets owned by the Lab Group. All objects owned by a Lab Group count against the group quota. Lab Groups can be navigated using the menu with your username in the top right corner of the page. &#039;&#039;&#039;Note:&#039;&#039;&#039; this will only appear if you are a member of at least one Lab Group. At this point, you can browse the Object Store as the Lab Group and obtain your unique Access Key and Secret Key pair using the instructions above. To switch to another Lab Group or back to your own buckets, click the menu again and select another user or group.&lt;br /&gt;
&lt;br /&gt;
= Managing Lab Groups =&lt;br /&gt;
Lab Groups have many different levels of membership: &#039;&#039;&#039;Managers, FULL_CONTROL, READ/WRITE,&#039;&#039;&#039; and &#039;&#039;&#039;READ&#039;&#039;&#039;. Managers can add or remove Lab Group Members while every other access level cannot. If you hold the Manager role in a Lab Group, you can add and remove users using the [https://obj.umiacs.umd.edu/obj/labgroup/list/ Manage LabGroups] page, which is available under the Manage menu at the top of the page. After selecting a Lab Group, you can add users by typing their username into the search field and selecting a membership role.&lt;br /&gt;
&lt;br /&gt;
= Requesting a Lab Group =&lt;br /&gt;
To request a Lab Group for your project, please [[HelpDesk | contact staff]].&lt;/div&gt;</summary>
		<author><name>Ekr597</name></author>
	</entry>
	<entry>
		<id>https://wiki.umiacs.umd.edu/umiacs/index.php?title=SecureShell&amp;diff=9185</id>
		<title>SecureShell</title>
		<link rel="alternate" type="text/html" href="https://wiki.umiacs.umd.edu/umiacs/index.php?title=SecureShell&amp;diff=9185"/>
		<updated>2020-05-05T23:44:29Z</updated>

		<summary type="html">&lt;p&gt;Ekr597: Updated steps for verifying SSH fingerprints to reflect the updates to OpenSSH (ssh-keygen defaults to SHA256 instead of MD5)&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;Secure Shell (or [http://en.wikipedia.org/wiki/Secure_Shell SSH]) is a network protocol allowing two computers to exchange data securely over an insecure network.  By default, use of SSH brings the user to a terminal, but the protocol can be used for other types of data transfer such as [[SFTP]] and [[SCP]].&lt;br /&gt;
&lt;br /&gt;
==Connecting to an SSH Server==&lt;br /&gt;
Under Linux and macOS the following command from a terminal will connect a client computer to the UMIACS [[OpenLAB]].&lt;br /&gt;
 # ssh bkirz@openlab.umiacs.umd.edu&lt;br /&gt;
This will give you access to a terminal on any one of the [[OpenLAB]] servers.  Note that by default you will not have access to applications that require X11 to run.&lt;br /&gt;
&lt;br /&gt;
All UMIACS Windows hosts are installed with either the SSH Secure Shell Client or [http://www.chiark.greenend.org.uk/~sgtatham/putty/ PuTTY].&lt;br /&gt;
&lt;br /&gt;
==X11 Forwarding==&lt;br /&gt;
By default, SSH only gives the user shell access to a host.  Enabling X11 Forwarding allows users to run applications with Graphical User Interfaces.&lt;br /&gt;
&lt;br /&gt;
Under Linux and macOS, the following command from a terminal will connect a client computer to the UMIACS [[OpenLAB]] using X11 Forwarding. Please note that under macOS, [http://xquartz.macosforge.org/landing/ xQuartz] is required on the client machine to forward X sessions from the remote session.&lt;br /&gt;
 # ssh &#039;&#039;&#039;-Y&#039;&#039;&#039; bkirz@openlab.umiacs.umd.edu&lt;br /&gt;
&lt;br /&gt;
Under Windows, you will need to forward X through [http://sourceforge.net/projects/vcxsrv/ VcXsrv] or [http://www.straightrunning.com/XmingNotes/ Xming].&lt;br /&gt;
&lt;br /&gt;
First, enable X forwarding on PuTTY. The option is under Connection &amp;gt; SSH &amp;gt; X11, shown below.&lt;br /&gt;
&lt;br /&gt;
[[Image:Putty-x-forwarding.png]]&lt;br /&gt;
&lt;br /&gt;
Next, configure your SSH session and click Open to start an SSH session.&lt;br /&gt;
&lt;br /&gt;
After this has been done, every time you want to use X forwarding you need to make sure VcXsrv or Xming has been started from the Start menu programs (it will appear in your task tray).&lt;br /&gt;
You will then be able to use X Window programs from your SSH client.&lt;br /&gt;
&lt;br /&gt;
==SSH Tunneling==&lt;br /&gt;
&lt;br /&gt;
You can tunnel one or more ports through an SSH connection such that your packets will look like they are coming from the host you are tunneling to. This is helpful for services that would normally be blocked by a firewall.&lt;br /&gt;
&lt;br /&gt;
Please see the [[SecureShellTunneling]] page for more information.&lt;br /&gt;
&lt;br /&gt;
==SSH Keys (and Passwordless SSH)==&lt;br /&gt;
&lt;br /&gt;
There are some situations where it is important to be able to ssh without entering a password. This is mostly required when working in clusters, and is done using ssh keys. Instead of authenticating with a password, ssh can use a pre-defined set of encryption keys to establish an authorized connection.&lt;br /&gt;
To set up passwordless ssh, do the following.&lt;br /&gt;
&lt;br /&gt;
First, you will need to create an ssh [http://en.wikipedia.org/wiki/Key_pair key pair]. You can protect the key with a passphrase that you will need to enter at the beginning of your work session; this is preferable as it is more secure, but may cause problems for some clustered work. If you simply hit &#039;&#039;&#039;[enter]&#039;&#039;&#039;, you will never be prompted for a passphrase when ssh&#039;ing, which can lead to security problems.&lt;br /&gt;
&lt;br /&gt;
* To create a &#039;&#039;&#039;&#039;&#039;passwordless&#039;&#039;&#039;&#039;&#039; key, type the following, and then hit enter to place the keys in the default directory.&lt;br /&gt;
&amp;lt;pre&amp;gt;  # ssh-keygen -N &amp;quot;&amp;quot;&amp;lt;/pre&amp;gt;&lt;br /&gt;
* Alternatively, to create a &#039;&#039;&#039;&#039;&#039;passphrase-protected&#039;&#039;&#039;&#039;&#039; (more secure) key, type the following.&lt;br /&gt;
&amp;lt;pre&amp;gt;  # ssh-keygen&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
This will produce two files, &#039;&#039;&#039;id_rsa&#039;&#039;&#039; and &#039;&#039;&#039;id_rsa.pub&#039;&#039;&#039;, the private and public keys, respectively.  The default location will be ~/.ssh/. For the purposes of this tutorial we&#039;ll assume this default. Once you&#039;ve created the keys, you will need to put them into place as follows: &lt;br /&gt;
  # chmod 700 ~/.ssh &lt;br /&gt;
  # chmod 600 ~/.ssh/id_rsa &lt;br /&gt;
  # touch ~/.ssh/authorized_keys&lt;br /&gt;
  # chmod 600 ~/.ssh/authorized_keys&lt;br /&gt;
  # cat ~/.ssh/id_rsa.pub &amp;gt;&amp;gt; ~/.ssh/authorized_keys&lt;br /&gt;
  # rm ~/.ssh/id_rsa.pub &lt;br /&gt;
&lt;br /&gt;
*It is &#039;&#039;&#039;very&#039;&#039;&#039; important that you keep your private key secure!  Ensure that it is chmod&#039;d to 600 and that you do not put it anywhere visible to other users!&lt;br /&gt;
*You must also make sure that no other users may write to your .ssh directory. This includes making sure that your home directory is not writable by group. Your home directory should be chmod&#039;d to 750 or 700 to make sure of this. If the group write bit is set, your ssh keys &#039;&#039;&#039;WILL NOT WORK&#039;&#039;&#039;.&lt;br /&gt;
&lt;br /&gt;
If you did not select a passphrase when you generated your keys, you can now ssh without a password.  If you did select a passphrase, you will need to activate the keys as follows:&lt;br /&gt;
&lt;br /&gt;
  # ssh-agent [SHELL]&lt;br /&gt;
  # ssh-add -t [TIME]&lt;br /&gt;
&lt;br /&gt;
In this case, &amp;quot;[SHELL]&amp;quot; is your preferred shell and &amp;quot;[TIME]&amp;quot; is the amount of time you&#039;d like the key to be active in seconds.  So, the following would start a bash shell with passwordless ssh active for 30 minutes:&lt;br /&gt;
&lt;br /&gt;
  # ssh-agent bash&lt;br /&gt;
  # ssh-add -t 1800&lt;br /&gt;
&lt;br /&gt;
You will be prompted for your passphrase and, when entered correctly, you will be able to ssh without entering a password.&lt;br /&gt;
&lt;br /&gt;
To disable this functionality, simply delete your private key file (&#039;&#039;&#039;~/.ssh/id_rsa&#039;&#039;&#039;) and remove the public key from your &#039;&#039;&#039;~/.ssh/authorized_keys&#039;&#039;&#039; file.&lt;br /&gt;
&lt;br /&gt;
==Verify remote host SSH fingerprint==&lt;br /&gt;
The SSH protocol relies on host keys to verify the identity of a given host. Each host has a unique key for each of the protocols it supports.&lt;br /&gt;
&lt;br /&gt;
When connecting to a remote host for the first time, or when the remote host&#039;s local host key configuration has changed, you may see the following prompt:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
$ ssh sabobbin@openlab&lt;br /&gt;
The authenticity of host &#039;openlab (128.8.132.247)&#039; can&#039;t be established.&lt;br /&gt;
RSA key fingerprint is 25:83:aa:df:f5:ad:5f:08:c9:8a:a3:5d:97:8b:48:1f.&lt;br /&gt;
Are you sure you want to continue connecting (yes/no)?&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
It is considered best practice to verify the reported fingerprint against the actual key of the host.  UMIACS maintains a reference list of SSH key fingerprints at the following link: &lt;br /&gt;
https://gitlab.umiacs.umd.edu/staff/ssh-fingerprints/blob/master/fingerprints&lt;br /&gt;
&lt;br /&gt;
It is important to note that each key type has a different fingerprint.  Depending on your local configuration, your client may prefer a specific type of key.  The following commands can be used to determine the fingerprint of a given key type on a remote host:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
$ ssh-keyscan -t rsa openlab.umiacs.umd.edu &amp;gt; key&lt;br /&gt;
# openlab.umiacs.umd.edu:22 SSH-2.0-OpenSSH_8.0&lt;br /&gt;
$ ssh-keygen -l -E md5 -f key&lt;br /&gt;
2048 MD5:25:83:aa:df:f5:ad:5f:08:c9:8a:a3:5d:97:8b:48:1f openlab.umiacs.umd.edu (RSA)&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
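Note that newer OpenSSH clients display SHA256 fingerprints by default; the -E md5 flag above requests the older MD5 format. A sketch showing both formats, using a locally generated throwaway key so nothing here depends on a particular remote host:

```shell
# Generate a throwaway RSA key pair so the example is self-contained,
# then print its fingerprint in both formats.
tmp="$(mktemp -d)"
ssh-keygen -t rsa -b 2048 -N '' -f "$tmp/demo_key" -q

ssh-keygen -l -f "$tmp/demo_key.pub"           # SHA256:... (modern default)
ssh-keygen -l -E md5 -f "$tmp/demo_key.pub"    # MD5:xx:xx:... (older style)
```

The same two invocations work on the `key` file saved by ssh-keyscan above, so you can compare either format against the published fingerprint list.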
&lt;br /&gt;
If you have any questions, or notice a discrepancy, please submit a request to staff@umiacs.umd.edu.&lt;br /&gt;
&lt;br /&gt;
===Windows / PuTTY Verification===&lt;br /&gt;
If you use PuTTY to connect to remote hosts, the prompt will be similar to the following:&lt;br /&gt;
&lt;br /&gt;
[[File:Putty ssh host key prompt.png]]&lt;br /&gt;
&lt;br /&gt;
If the host key reported by PuTTY matches the [https://gitlab.umiacs.umd.edu/staff/ssh-fingerprints/blob/master/fingerprints Documented entry for that host], it is safe to click &#039;yes&#039;.  If they do not match, please report the issue to [mailto:staff@umiacs.umd.edu staff@umiacs.umd.edu].&lt;br /&gt;
&lt;br /&gt;
===Other Platforms===&lt;br /&gt;
* [https://winscp.net/eng/docs/faq_hostkey WinSCP]&lt;br /&gt;
&lt;br /&gt;
==Long Running Processes==&lt;br /&gt;
If you are dealing with a long-running process that ties up your session, you may want to run it inside a screen session on the host you&#039;re connecting to. If the connection drops for any reason, the screen session will automatically detach on the host and continue running, so you can reattach to it later once you&#039;ve reconnected. Please see our documentation on [[Screen | GNU Screen]] for more information.&lt;br /&gt;
&lt;br /&gt;
==Further Information==&lt;br /&gt;
[http://www.openssh.org/ OpenSSH]&lt;br /&gt;
&lt;br /&gt;
[http://www.openssh.com/windows.html Windows Clients]&lt;/div&gt;</summary>
		<author><name>Ekr597</name></author>
	</entry>
	<entry>
		<id>https://wiki.umiacs.umd.edu/umiacs/index.php?title=Mathematica&amp;diff=8456</id>
		<title>Mathematica</title>
		<link rel="alternate" type="text/html" href="https://wiki.umiacs.umd.edu/umiacs/index.php?title=Mathematica&amp;diff=8456"/>
		<updated>2019-07-26T19:48:50Z</updated>

		<summary type="html">&lt;p&gt;Ekr597: Changed instructions to use modules instead of directing users to /opt/common&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;Mathematica is freely available for all University-owned machines. &lt;br /&gt;
&lt;br /&gt;
On our UMIACS-supported Linux hosts, Mathematica can be accessed through our [[Modules]].&lt;br /&gt;
*The command &amp;lt;code&amp;gt;module add mathematica&amp;lt;/code&amp;gt; will add the default version of Mathematica to your environment.&lt;br /&gt;
*To see the versions of Mathematica that are available, use the command &amp;lt;code&amp;gt;module avail mathematica&amp;lt;/code&amp;gt;.&lt;br /&gt;
*To add a specific version of Mathematica to your environment (e.g., Mathematica 12.0), use the command &amp;lt;code&amp;gt;module add mathematica/12.0&amp;lt;/code&amp;gt;.&lt;br /&gt;
*Further information can be found on our [[Modules | Modules page]]. &lt;br /&gt;
&lt;br /&gt;
For UMIACS-supported Windows machines, or other self-supported University-owned equipment, please contact [[HelpDesk | staff]].&lt;br /&gt;
&lt;br /&gt;
==Activation==&lt;br /&gt;
There is no automated way to activate Mathematica across our domain. As a result, each computer will have to be registered once against our hosted license server. Any user can go through this process, and it should persist until the host is reinstalled.&lt;br /&gt;
&lt;br /&gt;
* Upon being prompted, click &amp;quot;Other ways to activate&amp;quot; in the bottom row: &amp;lt;br&amp;gt;[[Image:math1.jpg| 500px| Mathematica 10 Activation Screen 1]]&amp;lt;br&amp;gt;&amp;lt;br&amp;gt;&lt;br /&gt;
* Click &amp;quot;Connect to a network license server&amp;quot;: &amp;lt;br&amp;gt;[[Image:math2.jpg| 500px| Mathematica 10 Activation Screen 2]]&amp;lt;br&amp;gt;&amp;lt;br&amp;gt;&lt;br /&gt;
* Enter &amp;quot;licserv.umiacs.umd.edu&amp;quot; as the license server. Click &amp;quot;Activate&amp;quot;:&amp;lt;br&amp;gt; [[Image:math3.jpg| 500px| Mathematica 10 Activation Screen 3]]&amp;lt;br&amp;gt;&amp;lt;br&amp;gt;&lt;br /&gt;
* Accept the terms and click OK: &amp;lt;br&amp;gt;[[Image:math4.jpg| 500px| Mathematica 10 Activation Screen 4]]&lt;br /&gt;
* &amp;lt;b&amp;gt;Mathematica should now be activated for that machine.&amp;lt;/b&amp;gt;&lt;/div&gt;</summary>
		<author><name>Ekr597</name></author>
	</entry>
</feed>