OpenLAB

The Institute has implemented a DUO multi-factor login requirement for SSH connections that do not pass through UMIACS managed networks or through our Virtual Private Network (VPN). As of May 5, 2021, all new connections must meet this requirement.

See SecureShell/MFA


The OpenLAB computing facility is a collection of nodes that all of our users can use for their basic computing needs. They are backed by our NFShomes home directories.

Remote Login Nodes

Please connect to openlab.umiacs.umd.edu, as it will connect you to one of the two following remote login nodes. These are available via SSH.

  • opensub02.umiacs.umd.edu
  • opensub03.umiacs.umd.edu
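
For example, assuming your UMIACS username is username (a placeholder), you can connect with:

ssh username@openlab.umiacs.umd.edu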

The RSA SSH fingerprint for "openlab.umiacs.umd.edu" and all these specific hosts can be verified through the SSH Host Key Fingerprints page on the UMIACS Intranet.
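
One way to view the host key fingerprints presented by the server, for comparison against the Intranet page, is the following sketch using standard OpenSSH tools:

ssh-keyscan openlab.umiacs.umd.edu 2>/dev/null | ssh-keygen -lf -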

We also have a RHEL8 login node, openrhel8.umiacs.umd.edu, that you can connect to. Please note that it cannot currently submit jobs to the OpenLAB cluster. This node is intended to provide a place where you can validate/recompile your software before we begin upgrading our compute resources to RHEL8.

Operating Systems

As we are in the process of upgrading the hosts within the OpenLAB cluster to RHEL8, the cluster's operating systems will be heterogeneous, with hosts running both RHEL7 and RHEL8. You can specify which operating system you would like your job to run on by requesting the corresponding feature as a constraint. For instance, to open an interactive shell session on a RHEL8 host, you would submit:

srun --pty --qos=$QOS --partition=$PARTITION --constraint=rhel8 bash
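
A non-interactive batch job can request a RHEL8 host with the same constraint. Below is a minimal sketch; the job name, QOS, partition, and resource values are placeholders that you should substitute with your own:

#!/bin/bash
#SBATCH --job-name=rhel8-check     # hypothetical job name
#SBATCH --qos=default              # placeholder: use your QOS
#SBATCH --partition=dpart          # placeholder: use your partition
#SBATCH --constraint=rhel8         # only run on RHEL8 hosts
#SBATCH --time=00:10:00
#SBATCH --mem=1gb

hostname
cat /etc/redhat-release            # confirm the host's operating system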

Please note that if you run into any issues with the availability of modules, you should ensure your .bashrc file contains the commands found in the "Modules in Non-interactive Shell Sessions" section of the Modules page, and that your .bashrc is being sourced in your .bash_login file.
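
A common way to ensure this (a minimal sketch, assuming a standard bash setup) is to have your ~/.bash_login source your ~/.bashrc if it exists:

if [ -f ~/.bashrc ]; then
    . ~/.bashrc
fi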

OpenLAB Cluster

These nodes are not explicitly available to log in to. They can be scheduled from the submit/remote login nodes via SLURM.

NODELIST            CPUS       MEMORY(MB) AVAIL_FEATURES            GRES
openlab30           64         257757     Opteron,6274,rhel7  (null)
openlab38           16         23937      Xeon,E5530,rhel7    (null)
openlab20           16         23937      Xeon,x5560,rhel7    (null)
openlab31           64         257757     Opteron,6274,rhel7  (null)
openlab39           16         23937      Xeon,E5530,rhel7    (null)
openlab21           8          23937      Xeon,x5560,rhel7    (null)
openlab22           8          23937      Xeon,x5560,rhel7    (null)
openlab23           8          23937      Xeon,x5560,rhel7    (null)
openlab25           8          23937      Xeon,x5560,rhel7    (null)
openlab27           8          23937      Xeon,x5560,rhel7    (null)
openlab28           8          23937      Xeon,x5560,rhel7    (null)
openlab32           64         257757     Opteron,6274,rhel7  (null)
openlab33           64         257757     Opteron,6274,rhel7  (null)
openlab40           16         23937      Xeon,E5530,rhel7    (null)
openlab41           16         23937      Xeon,E5530,rhel7    (null)
openlab42           16         23937      Xeon,E5530,rhel7    (null)
openlab43           16         23937      Xeon,E5530,rhel7    (null)
openlab44           16         23937      Xeon,E5530,rhel7    (null)
openlab45           16         23937      Xeon,E5530,rhel7    (null)
openlab46           16         23937      Xeon,E5530,rhel7    (null)
openlab47           16         23937      Xeon,E5530,rhel7    (null)
openlab48           16         23937      Xeon,E5530,rhel7    (null)
openlab50           16         23937      Xeon,E5530,rhel7    (null)
openlab52           16         23937      Xeon,E5530,rhel7    (null)
openlab53           16         23937      Xeon,E5530,rhel7    (null)
openlab54           16         23937      Xeon,E5530,rhel7    (null)
openlab55           16         23937      Xeon,E5530,rhel7    (null)
openlab56           16         23937      Xeon,E5530,rhel7    (null)
openlab57           16         23937      Xeon,E5530,rhel7    (null)
openlab58           16         23936      Xeon,E5530,rhel7    (null)
openlab59           16         23936      Xeon,E5530,rhel7    (null)
openlab60           16         23936      Xeon,E5530,rhel7    (null)
openlab61           16         23936      Xeon,E5530,rhel7    (null)
rinzler00           48         128253     AMD,EPYC-7402,rhel8 (null)
thalesgpu09         88         515588     rhel8               gpu:gtx1080ti:4
openlab08           32         128718     Xeon,E5-2690,rhel7  gpu:m40:1,gpu:k20:2
thalesgpu00         32         257588     rhel8               gpu:teslak80:2
thalesgpu01         32         257588     rhel8               gpu:teslak40m:2
thalesgpu02         40         257557     rhel8               gpu:titanX:4
thalesgpu03         40         257557     rhel8               gpu:titanX:4
thalesgpu04         40         257557     rhel8               gpu:titanXp:4
thalesgpu05         40         257557     rhel8               gpu:titanX:4
thalesgpu06         40         322068     rhel8               gpu:titanX:4
thalesgpu07         32         257588     rhel8               gpu:teslak80:2
thalesgpu08         32         257588     rhel8               gpu:teslak80:2
thalesgpu10         40         515635     rhel8               gpu:m40:2
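
The listing above reflects node information reported by SLURM itself. Assuming standard sinfo format options, a similar per-node listing can be generated from one of the submit nodes with:

sinfo -N -o "%N %c %m %f %G"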

Notes

  • Openlab08 and thalesgpu[00-10] are nodes that contain GPUs. To learn how to request GPUs in SLURM, please read the section of the SLURM documentation on requesting GPUs; an example request is sketched below.
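
As a minimal sketch (the QOS and partition values are placeholders), an interactive session with a single GPU on one of these nodes could be requested with something like:

srun --pty --qos=$QOS --partition=$PARTITION --gres=gpu:1 bash

A specific GPU type from the GRES column above can also be requested, e.g. --gres=gpu:gtx1080ti:1.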