OpenLAB

The OpenLAB computing facility is a collection of nodes available to all of our users for their basic computing needs. They are backed by our NFShomes home directories. We currently run Red Hat Enterprise Linux 7.

Remote Login Nodes

Please connect to openlab.umiacs.umd.edu, as it will connect you to one of the two following remote login nodes. These are available via SSH; an example connection is shown below the list.

  • opensub02.umiacs.umd.edu
  • opensub03.umiacs.umd.edu
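
For example, a connection from a terminal might look like the following (a sketch; "username" is a placeholder for your UMIACS username):

  # Connect to the round-robin alias; you will land on one of the login nodes.
  ssh username@openlab.umiacs.umd.edu

  # Or target a specific login node directly.
  ssh username@opensub02.umiacs.umd.edu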

The RSA SSH fingerprints for "openlab.umiacs.umd.edu" and each of these specific hosts can be verified through the SSH Host Key Fingerprints page on the UMIACS Intranet.

We also have a RHEL8 login node, openrhel8.umiacs.umd.edu, that you can connect to. Please note that it cannot currently submit to the OpenLAB cluster. This node is intended to provide a place where you can validate and recompile your software before we eventually begin upgrading our compute resources to RHEL8.

OpenLAB Cluster

These nodes are not available to log in to directly. Instead, they are scheduled from the submit/remote login nodes via SLURM.
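
A node listing like the one below can be generated with sinfo (a sketch; the exact format string is an assumption, and MEMORY is reported in MB):

  # One line per node: name, CPUs, memory (MB), features, and generic resources (GRES).
  sinfo --Node --format="%-20N %-10c %-10m %-25f %G"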

NODELIST             CPUS       MEMORY     AVAIL_FEATURES            GRES
openlab08            32         128718     Xeon,E5-2690              gpu:m40:1,gpu:k20:2
openlab20            16         23937      Xeon,x5560                (null)
openlab21            16         23937      Xeon,x5560                (null)
openlab22            16         23937      Xeon,x5560                (null)
openlab23            16         23937      Xeon,x5560                (null)
openlab25            16         23937      Xeon,x5560                (null)
openlab27            16         23937      Xeon,x5560                (null)
openlab28            16         23937      Xeon,x5560                (null)
openlab29            16         23937      Xeon,x5560                (null)
openlab30            64         257757     Opteron,6274              (null)
openlab31            64         257757     Opteron,6274              (null)
openlab32            64         257757     Opteron,6274              (null)
openlab33            64         257757     Opteron,6274              (null)
openlab38            16         23937      Xeon,E5530                (null)
openlab39            16         23937      Xeon,E5530                (null)
openlab40            16         23937      Xeon,E5530                (null)
openlab41            16         23937      Xeon,E5530                (null)
openlab42            16         23937      Xeon,E5530                (null)
openlab43            16         23937      Xeon,E5530                (null)
openlab44            16         23937      Xeon,E5530                (null)
openlab45            16         23937      Xeon,E5530                (null)
openlab46            16         23937      Xeon,E5530                (null)
openlab47            16         23937      Xeon,E5530                (null)
openlab48            16         23937      Xeon,E5530                (null)
openlab50            16         23937      Xeon,E5530                (null)
openlab51            16         23937      Xeon,E5530                (null)
openlab52            16         23937      Xeon,E5530                (null)
openlab53            16         23937      Xeon,E5530                (null)
openlab54            16         23937      Xeon,E5530                (null)
openlab55            16         23937      Xeon,E5530                (null)
openlab56            16         23937      Xeon,E5530                (null)
openlab57            16         23937      Xeon,E5530                (null)
openlab58            16         23936      Xeon,E5530                (null)
openlab59            16         23936      Xeon,E5530                (null)
openlab60            16         23936      Xeon,E5530                (null)
openlab61            16         23936      Xeon,E5530                (null)
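
Jobs can be submitted to these nodes with sbatch from a login node. A minimal sketch (the resource values are illustrative only):

  #!/bin/bash
  #SBATCH --job-name=example        # job name shown in the queue
  #SBATCH --ntasks=1                # run a single task
  #SBATCH --mem=4gb                 # request 4GB of memory
  #SBATCH --time=01:00:00           # one-hour wall-clock limit
  hostname                          # the actual work goes here

Save this as example.sh and submit it with "sbatch example.sh"; you can monitor it with "squeue -u $USER".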

Notes

  • One node (openlab08) has dual E5-2690 CPUs @ 2.90GHz (8 logical processors each), 128GB of RAM, L1 cache: 512kB, L2 cache: 1024kB, L3 cache: 2042kB, and two 400GB local scratch disks. Additionally, it has two Tesla K20c GPUs and one Tesla M40 GPU available. It must be scheduled with the SLURM partition gpu (see the example below). To learn how to request GPUs in SLURM, please read the section in the SLURM documentation on requesting GPUs.
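
An interactive GPU session on openlab08 might be requested as follows (a sketch; the GRES name "m40" matches the listing above, but check the SLURM documentation for the exact options your job needs):

  # Ask the gpu partition for one M40 GPU and start an interactive shell.
  srun --partition=gpu --gres=gpu:m40:1 --pty bash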