Nexus/Network

From UMIACS

Revision as of 20:56, 3 December 2024

Overview

The Nexus cluster runs on a hierarchical Ethernet-based network with node-level speeds ranging from 1GbE to 100GbE. Generally, though not always, more recently purchased compute nodes come with hardware capable of faster speeds and are connected at those speeds. Faster speeds require more expensive network switches and cables, so some labs/centers have opted to stay with slower speeds.
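To see what speed a given node is connected at, you can read the link speed Linux reports through sysfs. This is a generic sketch, not Nexus-specific; interface names vary by node, and virtual interfaces (such as the loopback) may not report a speed:

```shell
# Print each network interface and its reported link speed in Mb/s.
# Interfaces without a meaningful speed (e.g. loopback) show "N/A".
for iface in /sys/class/net/*; do
  name=$(basename "$iface")
  speed=$(cat "$iface/speed" 2>/dev/null || echo "N/A")
  echo "$name: ${speed} Mb/s"
done
```

A 100GbE-connected node would report 100000 here, while a 1GbE node would report 1000.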

If you are running multi-node jobs in SLURM, or simply want the best performance for a single-node job (which can depend on what filesystem path(s) your job uses), knowing the basics of the cluster's network architecture can help you optimize performance.
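As a sketch, a multi-node job submitted to a single partition keeps its node-to-node traffic within that partition's access-layer switches. The partition name below is a placeholder, not an actual Nexus partition:

```bash
#!/bin/bash
# Hypothetical batch script; partition name and resource values are placeholders.
#SBATCH --partition=examplepartition   # all allocated nodes come from one partition
#SBATCH --nodes=2
#SBATCH --ntasks-per-node=4
#SBATCH --time=01:00:00

# srun launches the tasks across the allocated nodes
srun hostname
```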

SLURM's topology-aware resource allocation support may be implemented on the cluster in the future, but it is not currently enabled.
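For illustration only: SLURM's topology-aware scheduling uses the topology/tree plugin, which describes the switch hierarchy in a topology.conf file. The switch and node names below are invented, and again, this is not currently configured on Nexus:

```
# slurm.conf (sketch): select the tree topology plugin
TopologyPlugin=topology/tree

# topology.conf (sketch): invented switch and node names
SwitchName=leaf1 Nodes=node[01-16]
SwitchName=leaf2 Nodes=node[17-32]
SwitchName=core  Switches=leaf[1-2]
```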

Network Core

The network core layer for Nexus is the same network core layer used by all UMIACS-supported systems. It consists of a pair of network switches that are connected to each other via dual 100GbE links for redundancy. Node-to-node communications for nodes in the same partition rarely need to traverse the network core.

Network Access

The network access layer for Nexus consists of different hardware depending on the lab/center that purchased the compute nodes, as different labs/centers have chosen to invest differently in the network infrastructure supporting their purchases. Generally, though not always, this consists of one or more pairs of network switches, with the two switches in each pair connected to each other via one or more links for redundancy. Purchased compute nodes are then connected to one of these switch pairs via dual links (one link per switch), again for redundancy.
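Dual uplinks like these are commonly aggregated with Linux bonding, though whether a given Nexus node does so is an assumption here. If bonding is in use, its status is visible under /proc/net/bonding:

```shell
# Show link-aggregation (bonding) status, if any bonds are configured.
# Assumes Linux bonding is used for the dual uplinks; this is not guaranteed.
if [ -d /proc/net/bonding ]; then
  cat /proc/net/bonding/*
else
  echo "no bonding interfaces on this host"
fi
```

The per-bond files report the bonding mode and the state of each member link, which is useful for confirming that both links of a pair are up.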

For lab/center-specific documentation, please look at the lab's/center's specific partition page. (documentation still under active development)