OpenLAB

The OpenLAB computing facility is a collection of general-access nodes that all our users can use for their basic computing needs. They are backed by our NFShomes home directories. We currently run Red Hat Enterprise Linux 7.

Remote Login Nodes

Please connect to openlab.umiacs.umd.edu, which will direct you to one of the two remote login nodes listed below. These are available via SSH.

  • opensub02.umiacs.umd.edu
  • opensub03.umiacs.umd.edu
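
For example, assuming your UMIACS username is username (a placeholder), you can log in with:

  $ ssh username@openlab.umiacs.umd.edu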

The RSA SSH host key fingerprints for openlab.umiacs.umd.edu and the individual login nodes can be verified through the SSH Host Key Fingerprints page on the UMIACS Intranet.

OpenLAB Cluster

These nodes are not available to log into directly. Instead, they are scheduled from the submit/remote login nodes via SLURM. The current list of cluster nodes can be displayed with show_nodes:

# show_nodes
NODELIST             CPUS       MEMORY     AVAIL_FEATURES            GRES                      STATE
openlab00            8          7821       Opteron,2354              (null)                    idle
openlab01            8          7821       Opteron,2354              (null)                    idle
openlab02            8          7821       Opteron,2354              (null)                    idle
openlab03            8          7821       Opteron,2354              (null)                    idle
openlab04            8          7821       Opteron,2354              (null)                    idle
openlab05            8          7821       Opteron,2354              (null)                    idle
openlab06            8          7821       Opteron,2354              (null)                    idle
openlab07            8          7821       Opteron,2354              (null)                    idle
openlab08            32         128719     Xeon,E5-2690              gpu:m40:1,gpu:k20:2       idle
openlab09            32         128721     Xeon,E5-2690              gpu:m40:1,gpu:k20:2       mix
openlab10            16         23938      Xeon,x5560                (null)                    mix
openlab11            16         23938      Xeon,x5560                (null)                    mix
openlab13            16         23938      Xeon,x5560                (null)                    mix
openlab14            16         23938      Xeon,x5560                (null)                    mix
openlab15            16         23938      Xeon,x5560                (null)                    mix
openlab16            16         23938      Xeon,x5560                (null)                    idle
openlab17            16         23938      Xeon,x5560                (null)                    idle
openlab18            16         23938      Xeon,x5560                (null)                    idle
openlab20            16         23938      Xeon,x5560                (null)                    idle
openlab21            16         23938      Xeon,x5560                (null)                    idle
openlab22            16         23938      Xeon,x5560                (null)                    idle
openlab23            16         23938      Xeon,x5560                (null)                    idle
openlab25            16         23938      Xeon,x5560                (null)                    idle
openlab27            16         23938      Xeon,x5560                (null)                    idle
openlab28            16         23938      Xeon,x5560                (null)                    idle
openlab29            16         23938      Xeon,x5560                (null)                    idle
openlab30            64         257758     Opteron,6274              (null)                    alloc
openlab31            64         257758     Opteron,6274              (null)                    alloc
openlab32            64         257758     Opteron,6274              (null)                    alloc
openlab33            64         257758     Opteron,6274              (null)                    alloc
openlab35            16         23937      Xeon,e5530                (null)                    idle
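
As a minimal sketch (the resource limits shown are placeholder values, not site policy), an interactive shell on one of these nodes can be requested from a submit node with srun:

  $ srun --pty --mem=4gb --time=01:00:00 bash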

Notes

  • 2 nodes (openlab08 and openlab09) with dual Xeon E5-2690 @ 2.90GHz (8 cores / 16 logical processors each), 128GB of RAM, L1 cache: 512kB, L2 cache: 1024kB, L3 cache: 2042kB, and two 400GB local scratch disks. Each of these nodes also has two Tesla K20c GPUs, and one node additionally has a Tesla M40. These nodes must be scheduled with the SLURM partition gpu. To learn how to request GPUs in SLURM, please read the section of the SLURM documentation on requesting GPUs; a brief example follows below.
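
As a hedged example (the interactive-shell form here is just an illustration), a single K20 GPU on the gpu partition could be requested with:

  $ srun --partition=gpu --gres=gpu:k20:1 --pty bash

The GRES names (gpu:k20, gpu:m40) match those shown in the show_nodes output above.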