<pre style="color: red">
The Institute has implemented a DUO multi-factor login requirement for SSH connections that do not pass through UMIACS-managed networks or through our Virtual Private Network (VPN).  As of May 5, 2021, all new connections must meet this requirement.
</pre>
See [[SecureShell/MFA]]

<hr>

The [[OpenLAB]] computing facility is a collection of general-use nodes that all our users can use for their basic computing needs.  They are backed by our [[NFShomes]] home directories.

===Remote Login Nodes===

Please connect to '''openlab.umiacs.umd.edu''', which will connect you to one of the two following remote login nodes.  These are available via [[SSH]].
* opensub02.umiacs.umd.edu
* opensub03.umiacs.umd.edu

The RSA [[SSH]] fingerprint for "openlab.umiacs.umd.edu" and all these specific hosts can be verified through the [https://intranet.umiacs.umd.edu/hostkeys/ SSH Host Key Fingerprints] page on the UMIACS Intranet.

We also have a RHEL8 login node, '''opensub04.umiacs.umd.edu''', that you can connect to.  Please note that it cannot currently submit to the OpenLAB cluster.  This node is intended to provide a place where you can validate or recompile your software before we eventually begin upgrading our compute resources to RHEL8.
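A typical connection from a terminal looks like the following sketch (<code>username</code> is a placeholder for your UMIACS account name; connections from outside UMIACS-managed networks will also trigger the DUO prompt described above):

```shell
# Connect to the round-robin login host; you will land on one of the
# opensub nodes listed above.  Replace "username" with your own
# UMIACS username.
ssh username@openlab.umiacs.umd.edu
```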
==Operating Systems==
As we are in the process of upgrading the hosts within the OpenLAB cluster to RHEL8, the cluster's operating systems will be heterogeneous, featuring hosts with both RHEL7 and RHEL8 installed. You can specify which operating system you would like your job to run on by using the <code>constraint</code> option. For instance, to open an interactive shell session on a RHEL8 host, you would submit:

<pre>
srun --pty --qos=$QOS --partition=$PARTITION --constraint=rhel8
</pre>

Please note that if you run into any issues with the availability of modules, you should ensure your .bashrc file contains the commands found in the "Modules in Non-interactive Shell Sessions" section of the [[Modules]] page, and that your .bashrc is being sourced in your .bash_login file.
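A minimal way to do that sourcing (a common Bash pattern, not UMIACS-specific) is to add the following to your .bash_login:

```shell
# In ~/.bash_login: source ~/.bashrc so that module initialization
# defined there also applies to interactive login shells.
if [ -f "$HOME/.bashrc" ]; then
    . "$HOME/.bashrc"
fi
```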
  
 
===OpenLAB Cluster===
These nodes are not available to log in to directly.  They can be scheduled from the submit/remote login nodes via [[SLURM]].
  
<pre>
NODELIST            CPUS       MEMORY     AVAIL_FEATURES        GRES
openlab30           64         257757     Opteron,6274,rhel7    (null)
openlab38           16         23937      Xeon,E5530,rhel7      (null)
openlab20           16         23937      Xeon,x5560,rhel7      (null)
openlab31           64         257757     Opteron,6274,rhel7    (null)
openlab39           16         23937      Xeon,E5530,rhel7      (null)
openlab21           8          23937      Xeon,x5560,rhel7      (null)
openlab22           8          23937      Xeon,x5560,rhel7      (null)
openlab23           8          23937      Xeon,x5560,rhel7      (null)
openlab25           8          23937      Xeon,x5560,rhel7      (null)
openlab27           8          23937      Xeon,x5560,rhel7      (null)
openlab28           8          23937      Xeon,x5560,rhel7      (null)
openlab32           64         257757     Opteron,6274,rhel7    (null)
openlab33           64         257757     Opteron,6274,rhel7    (null)
openlab40           16         23937      Xeon,E5530,rhel7      (null)
openlab41           16         23937      Xeon,E5530,rhel7      (null)
openlab42           16         23937      Xeon,E5530,rhel7      (null)
openlab43           16         23937      Xeon,E5530,rhel7      (null)
openlab44           16         23937      Xeon,E5530,rhel7      (null)
openlab45           16         23937      Xeon,E5530,rhel7      (null)
openlab46           16         23937      Xeon,E5530,rhel7      (null)
openlab47           16         23937      Xeon,E5530,rhel7      (null)
openlab48           16         23937      Xeon,E5530,rhel7      (null)
openlab50           16         23937      Xeon,E5530,rhel7      (null)
openlab52           16         23937      Xeon,E5530,rhel7      (null)
openlab53           16         23937      Xeon,E5530,rhel7      (null)
openlab54           16         23937      Xeon,E5530,rhel7      (null)
openlab55           16         23937      Xeon,E5530,rhel7      (null)
openlab56           16         23937      Xeon,E5530,rhel7      (null)
openlab57           16         23937      Xeon,E5530,rhel7      (null)
openlab58           16         23936      Xeon,E5530,rhel7      (null)
openlab59           16         23936      Xeon,E5530,rhel7      (null)
openlab60           16         23936      Xeon,E5530,rhel7      (null)
openlab61           16         23936      Xeon,E5530,rhel7      (null)
rinzler00           48         128253     AMD,EPYC-7402,rhel8   (null)
thalesgpu09         88         515588     rhel8                 gpu:gtx1080ti:4
openlab08           32         128718     Xeon,E5-2690,rhel7    gpu:m40:1,gpu:k20:2
thalesgpu00         32         257588     rhel8                 gpu:teslak80:2
thalesgpu01         32         257588     rhel8                 gpu:teslak40m:2
thalesgpu02         40         257557     rhel8                 gpu:titanX:4
thalesgpu03         40         257557     rhel8                 gpu:titanX:4
thalesgpu04         40         257557     rhel8                 gpu:titanXp:4
thalesgpu05         40         257557     rhel8                 gpu:titanX:4
thalesgpu06         40         322068     rhel8                 gpu:titanX:4
thalesgpu07         32         257588     rhel8                 gpu:teslak80:2
thalesgpu08         32         257588     rhel8                 gpu:teslak80:2
thalesgpu10         40         515635     rhel8                 gpu:m40:2
</pre>

'''Notes'''
* openlab08 and thalesgpu[00-10] are nodes that contain GPUs. To learn how to request GPUs in [[SLURM]], please read the [[SLURM/JobSubmission#Requesting_GPUs | requesting GPUs]] section of the [[SLURM]] documentation.
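As a sketch of what such a request can look like (the <code>$QOS</code> and <code>$PARTITION</code> placeholders follow the earlier srun example, and the GRES type name is taken from the table above; see the linked documentation for the authoritative syntax):

```shell
# Request an interactive shell on a node with one Tesla K80 GPU.
# --gres=gpu[:type]:count selects GPUs by the GRES type shown in
# the node table; $QOS and $PARTITION stand for your actual values.
srun --pty --qos=$QOS --partition=$PARTITION --gres=gpu:teslak80:1 bash
```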
  
 
__NOTOC__

''Latest revision as of 19:32, 18 November 2021''