Compute/DataLocality
This page covers some best practices related to data processing on UMIACS Compute resources, i.e., [[SLURM]].


==Data Locality==
It is recommended to store data that is actively being worked on as close to the processing source as possible.  In the context of a cluster job, the data being processed, as well as any generated results, should be stored on a disk physically installed in the compute node itself.  We'll cover how to identify local filesystem disk space later on this page.


===General Workflow===
The following is a suggested workflow for a computational job (an example batch script illustrating these steps follows the list):
# Copy the data to be processed to the local filesystem disk space of the compute node(s) assigned to your job.
# Process the data, storing results on local filesystem disk space.
# Once processing is finished, transfer results to a permanent storage location (i.e., a network file share).
# Clean up data and results from the local filesystem disk space of the compute node(s) assigned to your job.
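As a concrete illustration, here is a minimal sketch of this workflow as a [[SLURM]] batch script.  The network share path (<code>/fs/myshare</code>), the archive names, and the <code>./process_data</code> command are placeholders for illustration only; substitute the paths and processing steps appropriate for your own job and cluster.

<pre>
#!/bin/bash
#SBATCH --job-name=locality-example
#SBATCH --time=04:00:00

# NOTE: /fs/myshare and ./process_data are placeholders, not real UMIACS paths or programs.

# 1. Copy the input data (stored as an archive) from the network file share
#    to local filesystem disk space on the assigned compute node.
WORKDIR="/scratch0/$USER/$SLURM_JOB_ID"
mkdir -p "$WORKDIR"
cp /fs/myshare/data/input.tar.gz "$WORKDIR/"

# 2. Unpack and process the data, storing results on local disk space.
cd "$WORKDIR"
tar -xzf input.tar.gz
./process_data --input input/ --output results/

# 3. Once processing is finished, archive the results and transfer them
#    back to the permanent storage location.
tar -czf results.tar.gz results/
cp results.tar.gz /fs/myshare/results/

# 4. Clean up data and results from local disk space.
cd /
rm -rf "$WORKDIR"
</pre>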


===Why this matters===
Similar to how running too many processes on a single machine can slow it down, too many users accessing shares on a network file server can impact the performance of that file server.  This issue is further compounded in the context of cluster jobs, as a single user can generate hundreds if not thousands of jobs all trying to access the same network file server.  By utilizing the local filesystem disks on the compute nodes, you effectively distribute the data access load and reduce the load on the file server.


Following these best practices isn't just about being a good neighbor, however; it will also improve the performance of your jobs.
<br/>


To further illustrate this issue, consider a service like Netflix.  While Netflix invests heavily in its data storage and supporting network, if it allowed customers to access that storage directly, it would quickly reach capacity, resulting in performance degradation for all users.  In order to accommodate this, Netflix distributes its data into various caching tiers, which are much closer to the end user.  This distribution evens the load across multiple devices, increasing performance and availability for all users.


While UMIACS obviously does not operate at the same scale as Netflix, the same concepts are still present within the compute infrastructure.  Processing data that resides on local filesystem disk space reduces the load on the central file server and improves the performance of the process.


==Data Storage==
When possible, it is recommended that data be stored in an archive file when not actively being processed (i.e., before initiating inbound/outbound transfers to/from local filesystem disk space).


Utilizing archive files provides the following benefits:
* Faster data transfers
* Reduced data size
* Easier data management


Practically every filesystem in existence has limitations in its ability to handle large numbers of small files.  By grouping large collections of small files into a single archive file, you can reduce the impact of this limitation, as well as improve the efficiency of data storage when combined with techniques such as compression.  Another advantage manifests when transferring data over the network.  When files are transferred individually, a connection to the remote location has to be established and closed for each file, which can add significant overhead when dealing with large numbers of files.  When the files are collected into a single archive file, you reduce the number of connections that are created and destroyed, and spend more of the transfer time streaming data.


Common utilities for creating archive files are <code>tar</code> and <code>zip</code>.
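For example, a directory containing many small files can be bundled and compressed with <code>tar</code> before being transferred, then unpacked again at the destination.  The directory, user, and host names below are placeholders:

<pre>
# Bundle and compress a directory of many small files into one archive.
tar -czf dataset.tar.gz dataset/

# Transfer the single archive instead of thousands of individual files.
scp dataset.tar.gz username@remotehost:/scratch0/username/

# Unpack the archive at the destination.
tar -xzf dataset.tar.gz
</pre>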


==Identifying Local Disk Space==
Local disk space at UMIACS typically conforms to the following guidelines:
* Directory name starts with <code>/scratch</code>
* Almost every UMIACS-supported machine has a <code>/scratch0</code>
* Machines with multiple local disks may have multiple <code>/scratchX</code> directories, where X is a number that increases with the number of disks
 
Example, with output shortened for brevity:
<pre>
$ lsblk
NAME               MAJ:MIN RM   SIZE RO TYPE MOUNTPOINT
sda                  8:0    0 931.5G  0 disk
└─sda2               8:2    0 930.5G  0 part
  ├─vol00-scratch0 253:3    0   838G  0 lvm  /scratch0
sdb                  8:16   0   477G  0 disk
└─sdb1               8:17   0   477G  0 part /scratch1
sdc                  8:32   0 953.9G  0 disk
└─sdc1-scratch2    253:2    0 953.9G  0 lvm  /scratch2
</pre>


As shown above, common utilities such as <code>lsblk</code> can be used to identify the specific configuration on a given node.
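Similarly, <code>df</code> can report how much space is currently free on a particular scratch partition before you copy data to it.  The device name and sizes below are illustrative:

<pre>
$ df -h /scratch0
Filesystem                  Size  Used Avail Use% Mounted on
/dev/mapper/vol00-scratch0  824G   73G  710G  10% /scratch0
</pre>
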
{{Note|Local disk space is considered transitory and as such is not backed up.  It is not intended for long-term storage of critical/sensitive data.}}
 


If you have any questions about the available local disk space on a given cluster, please refer to the documentation specific for that cluster, or contact [[HelpDesk | the UMIACS Help Desk]].
