The [[OpenLAB]] computing facility is a collection of general-access nodes that all of our users can use for their basic computing needs.  They are backed by our [[NFShomes]] home directories.  We currently run [[RHEL7|Red Hat Enterprise Linux 7]].

===Remote Login Nodes===
Please connect to '''openlab.umiacs.umd.edu''', which will connect you to one of the two following remote login nodes.  These are available via [[SSH]].
* opensub02.umiacs.umd.edu
* opensub03.umiacs.umd.edu
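
For example, to log in from a terminal (replace <tt>username</tt> with your UMIACS username):

<pre>
ssh username@openlab.umiacs.umd.edu
</pre>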
  
The RSA [[SSH]] fingerprint for "openlab.umiacs.umd.edu" and all these specific hosts can be verified through the [https://intranet.umiacs.umd.edu/hostkeys/ SSH Host Key Fingerprints] page on the UMIACS Intranet.
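
On your first connection, [[SSH]] prints the host key fingerprint and asks you to confirm it; compare it against the Intranet page before answering yes.  The prompt looks roughly like this (illustrative only; the actual fingerprint is elided):

<pre>
$ ssh username@openlab.umiacs.umd.edu
The authenticity of host 'openlab.umiacs.umd.edu' can't be established.
RSA key fingerprint is SHA256:...
Are you sure you want to continue connecting (yes/no)?
</pre>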
We also have a RHEL8 login node, '''openrhel8.umiacs.umd.edu''', that you can connect to.  Please note that it cannot currently submit to the OpenLAB cluster.  This node is intended to provide a place where you can validate and recompile your software before we eventually begin upgrading our compute resources to RHEL8.
  
 
===OpenLAB Cluster===
These nodes are not directly available to log in to.  They can be scheduled from the submit/remote login nodes via [[SLURM]].  The current cluster nodes are listed below, followed by example submission commands.
<pre>
NODELIST             CPUS       MEMORY     AVAIL_FEATURES            GRES
openlab08            32         128718     Xeon,E5-2690              gpu:m40:1,gpu:k20:2
openlab20            16         23937      Xeon,x5560                (null)
openlab21            16         23937      Xeon,x5560                (null)
openlab22            16         23937      Xeon,x5560                (null)
openlab23            16         23937      Xeon,x5560                (null)
openlab25            16         23937      Xeon,x5560                (null)
openlab27            16         23937      Xeon,x5560                (null)
openlab28            16         23937      Xeon,x5560                (null)
openlab29            16         23937      Xeon,x5560                (null)
openlab30            64         257757     Opteron,6274              (null)
openlab31            64         257757     Opteron,6274              (null)
openlab32            64         257757     Opteron,6274              (null)
openlab33            64         257757     Opteron,6274              (null)
openlab38            16         23937      Xeon,E5530                (null)
openlab39            16         23937      Xeon,E5530                (null)
openlab40            16         23937      Xeon,E5530                (null)
openlab41            16         23937      Xeon,E5530                (null)
openlab42            16         23937      Xeon,E5530                (null)
openlab43            16         23937      Xeon,E5530                (null)
openlab44            16         23937      Xeon,E5530                (null)
openlab45            16         23937      Xeon,E5530                (null)
openlab46            16         23937      Xeon,E5530                (null)
openlab47            16         23937      Xeon,E5530                (null)
openlab48            16         23937      Xeon,E5530                (null)
openlab50            16         23937      Xeon,E5530                (null)
openlab51            16         23937      Xeon,E5530                (null)
openlab52            16         23937      Xeon,E5530                (null)
openlab53            16         23937      Xeon,E5530                (null)
openlab54            16         23937      Xeon,E5530                (null)
openlab55            16         23937      Xeon,E5530                (null)
openlab56            16         23937      Xeon,E5530                (null)
openlab57            16         23937      Xeon,E5530                (null)
openlab58            16         23936      Xeon,E5530                (null)
openlab59            16         23936      Xeon,E5530                (null)
openlab60            16         23936      Xeon,E5530                (null)
openlab61            16         23936      Xeon,E5530                (null)
</pre>
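
A listing like the one above can be regenerated with [[SLURM]]'s <tt>sinfo</tt> command (a sketch; the exact format string is an assumption):

<pre>
# Show nodelist, CPUs, memory, available features, and GRES per node
sinfo -N -o "%N %c %m %f %G"
</pre>

To run work on these nodes, submit a batch script from a login node with <tt>sbatch</tt>.  The following is a minimal sketch; the resource values are placeholders, and no particular partition or account is assumed:

<pre>
#!/bin/bash
#SBATCH --job-name=example       # placeholder job name
#SBATCH --ntasks=1               # a single task
#SBATCH --mem=2G                 # memory per node
#SBATCH --time=00:10:00          # walltime limit

hostname                         # replace with your actual workload
</pre>

Submit it with <tt>sbatch myjob.sh</tt>, or request an interactive shell instead with <tt>srun --pty bash</tt>.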
 
  
'''Notes'''
* One node (openlab08) has dual E5-2690 processors @ 2.90GHz (8 logical processors each), 128GB of RAM, L1 cache: 512kB, L2 cache: 1024kB, L3 cache: 2042kB, and two 400GB local scratch disks.  Additionally, it has two Tesla K20c GPUs and one Tesla M40 GPU available.  It must be scheduled with the [[SLURM]] partition <tt>gpu</tt>.  To learn how to request GPUs in [[SLURM]], please read the [[SLURM/JobSubmission#Requesting_GPUs | requesting GPUs]] section of the [[SLURM]] documentation; a brief example follows below.
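
For instance, to request one of openlab08's GPUs for an interactive session (a sketch; the GRES names <tt>k20</tt> and <tt>m40</tt> come from the node listing above):

<pre>
# One K20 GPU in the gpu partition, interactive shell
srun --partition=gpu --gres=gpu:k20:1 --pty bash

# Or, in a batch script, request the M40 instead:
#SBATCH --partition=gpu
#SBATCH --gres=gpu:m40:1
</pre>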
__NOTOC__
