<?xml version="1.0"?>
<feed xmlns="http://www.w3.org/2005/Atom" xml:lang="en">
	<id>https://wiki.umiacs.umd.edu/umiacs/api.php?action=feedcontributions&amp;feedformat=atom&amp;user=Chrissor</id>
	<title>UMIACS - User contributions [en]</title>
	<link rel="self" type="application/atom+xml" href="https://wiki.umiacs.umd.edu/umiacs/api.php?action=feedcontributions&amp;feedformat=atom&amp;user=Chrissor"/>
	<link rel="alternate" type="text/html" href="https://wiki.umiacs.umd.edu/umiacs/index.php/Special:Contributions/Chrissor"/>
	<updated>2026-05-12T21:12:40Z</updated>
	<subtitle>User contributions</subtitle>
	<generator>MediaWiki 1.43.7</generator>
	<entry>
		<id>https://wiki.umiacs.umd.edu/umiacs/index.php?title=Apptainer&amp;diff=11917</id>
		<title>Apptainer</title>
		<link rel="alternate" type="text/html" href="https://wiki.umiacs.umd.edu/umiacs/index.php?title=Apptainer&amp;diff=11917"/>
		<updated>2024-06-26T14:28:00Z</updated>

		<summary type="html">&lt;p&gt;Chrissor: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;[https://apptainer.org Apptainer] is a container platform that doesn&#039;t elevate the privileges of a user running the container.  This is important as UMIACS runs many multi-tenant hosts (such as [[Nexus]]) and doesn&#039;t provide administrative control to users on them.&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Apptainer was previously branded as Singularity.  You should still be able to run commands on the system with &amp;lt;code&amp;gt;singularity&amp;lt;/code&amp;gt;, however you should start migrating to using the &amp;lt;code&amp;gt;apptainer&amp;lt;/code&amp;gt; command.&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
==Overview==&lt;br /&gt;
You can find the current version we provide by running the &amp;lt;code&amp;gt;apptainer --version&amp;lt;/code&amp;gt; command.  If this instead says &amp;lt;code&amp;gt;apptainer: command not found&amp;lt;/code&amp;gt; and you are using a UMIACS-supported host, please [[HelpDesk | contact staff]] and we will ensure the software is made available on that host.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
# apptainer --version&lt;br /&gt;
apptainer version 1.2.5-1.el8&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Apptainer can run a variety of images including its own format and [https://apptainer.org/docs/user/main/docker_and_oci.html Docker images].  To create images from definition files, you need administrative rights.  On UMIACS-supported hosts you can use [[Podman]] to accomplish this; alternatively, build the image on a host that you have full administrative access to (laptop or personal desktop).&lt;br /&gt;
&lt;br /&gt;
If you are going to pull large images, you may run out of space in your home directory.  We suggest you run the following commands to set up alternate cache and tmp directories.  We are using &amp;lt;code&amp;gt;/scratch0&amp;lt;/code&amp;gt;, but you can substitute any sufficiently large local scratch directory, network scratch directory, or project directory.&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
export WORKDIR=/scratch0/$USER&lt;br /&gt;
export APPTAINER_CACHEDIR=${WORKDIR}/.cache&lt;br /&gt;
export APPTAINER_TMPDIR=${WORKDIR}/.tmp&lt;br /&gt;
mkdir -p $APPTAINER_CACHEDIR&lt;br /&gt;
mkdir -p $APPTAINER_TMPDIR&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
We suggest you pull images down into an intermediate file (a &#039;&#039;&#039;SIF&#039;&#039;&#039; file) so you do not have to worry about re-caching the image.&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
$ apptainer pull cuda12.2.2.sif docker://nvidia/cuda:12.2.2-base-ubi8&lt;br /&gt;
INFO:    Converting OCI blobs to SIF format&lt;br /&gt;
INFO:    Starting build...&lt;br /&gt;
Getting image source signatures&lt;br /&gt;
Copying blob d5d706ce7b29 done&lt;br /&gt;
Copying blob b4dc78aeafca done&lt;br /&gt;
Copying blob 24a22c1b7260 done&lt;br /&gt;
Copying blob 8dea37be3176 done&lt;br /&gt;
Copying blob 25fa05cd42bd done&lt;br /&gt;
Copying blob a57130ec8de1 done&lt;br /&gt;
Copying blob 880a66924cf5 done&lt;br /&gt;
Copying config db554d658b done&lt;br /&gt;
Writing manifest to image destination&lt;br /&gt;
Storing signatures&lt;br /&gt;
2022/10/14 10:31:17  info unpack layer: sha256:25fa05cd42bd8fabb25d2a6f3f8c9f7ab34637903d00fd2ed1c1d0fa980427dd&lt;br /&gt;
2022/10/14 10:31:19  info unpack layer: sha256:24a22c1b72605a4dbcec13b743ef60a6cbb43185fe46fd8a35941f9af7c11153&lt;br /&gt;
2022/10/14 10:31:19  info unpack layer: sha256:8dea37be3176a88fae41c265562d5fb438d9281c356dcb4edeaa51451dbdfdb2&lt;br /&gt;
2022/10/14 10:31:20  info unpack layer: sha256:b4dc78aeafca6321025300e9d3050c5ba3fb2ac743ae547c6e1efa3f9284ce0b&lt;br /&gt;
2022/10/14 10:31:20  info unpack layer: sha256:a57130ec8de1e44163e965620d5aed2abe6cddf48b48272964bfd8bca101df38&lt;br /&gt;
2022/10/14 10:31:20  info unpack layer: sha256:d5d706ce7b293ffb369d3bf0e3f58f959977903b82eb26433fe58645f79b778b&lt;br /&gt;
2022/10/14 10:31:49  info unpack layer: sha256:880a66924cf5e11df601a4f531f3741c6867a3e05238bc9b7cebb2a68d479204&lt;br /&gt;
INFO:    Creating SIF file...&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
$ apptainer inspect cuda12.2.2.sif&lt;br /&gt;
...&lt;br /&gt;
maintainer: NVIDIA CORPORATION &amp;lt;sw-cuda-installer@nvidia.com&amp;gt;&lt;br /&gt;
name: ubi8&lt;br /&gt;
org.label-schema.build-arch: amd64&lt;br /&gt;
org.label-schema.build-date: Wednesday_24_January_2024_13:53:0_EST&lt;br /&gt;
org.label-schema.schema-version: 1.0&lt;br /&gt;
org.label-schema.usage.apptainer.version: 1.2.5-1.el8&lt;br /&gt;
org.label-schema.usage.singularity.deffile.bootstrap: docker&lt;br /&gt;
org.label-schema.usage.singularity.deffile.from: nvidia/cuda:12.2.2-base-ubi8&lt;br /&gt;
...&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Now you can run the local image with the &#039;&#039;&#039;run&#039;&#039;&#039; command or start a shell with the &#039;&#039;&#039;shell&#039;&#039;&#039; command.  &lt;br /&gt;
* Please note that if you are in an environment with GPUs and you want to access them inside the container, you need to specify the &#039;&#039;&#039;--nv&#039;&#039;&#039; flag.  NVIDIA requires a specific driver and set of libraries to run CUDA programs, and this flag ensures that the appropriate devices are created inside the container and that these libraries are made available within it.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
$ apptainer run --nv cuda12.2.2.sif nvidia-smi -L&lt;br /&gt;
GPU 0: NVIDIA GeForce GTX 1080 Ti (UUID: GPU-8e040d17-402e-cc86-4e83-eb2b1d501f1e)&lt;br /&gt;
GPU 1: NVIDIA GeForce GTX 1080 Ti (UUID: GPU-d681a21a-8cdd-e624-6bf8-5b0234584ba2)&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==Nexus Containers==&lt;br /&gt;
In our [[Nexus]] environment we have some example containers based on our [https://gitlab.umiacs.umd.edu/derek/pytorch_docker pytorch_docker] project.  These can be found in &amp;lt;code&amp;gt;/fs/nexus-containers/pytorch&amp;lt;/code&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
You can run one of the example images by doing the following (you should have already allocated an interactive job with a GPU in [[Nexus]]).  It will use the default [https://gitlab.umiacs.umd.edu/derek/pytorch_docker/-/blob/master/tensor.py script] found at &amp;lt;code&amp;gt;/srv/tensor.py&amp;lt;/code&amp;gt; within the image.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
$ hostname &amp;amp;&amp;amp; nvidia-smi -L&lt;br /&gt;
tron38.umiacs.umd.edu&lt;br /&gt;
GPU 0: NVIDIA RTX A4000 (UUID: GPU-4a0a5644-9fc8-84b4-5d22-65d45ca36506)&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
$ apptainer run --nv /fs/nexus-containers/pytorch/pytorch_1.13.0+cu117.sif&lt;br /&gt;
99 984.5538940429688&lt;br /&gt;
199 654.1710815429688&lt;br /&gt;
299 435.662353515625&lt;br /&gt;
399 291.1429138183594&lt;br /&gt;
499 195.5575714111328&lt;br /&gt;
599 132.3363037109375&lt;br /&gt;
699 90.5206069946289&lt;br /&gt;
799 62.86213684082031&lt;br /&gt;
899 44.56754684448242&lt;br /&gt;
999 32.466392517089844&lt;br /&gt;
1099 24.461835861206055&lt;br /&gt;
1199 19.166893005371094&lt;br /&gt;
1299 15.6642427444458&lt;br /&gt;
1399 13.347112655639648&lt;br /&gt;
1499 11.814264297485352&lt;br /&gt;
1599 10.800163269042969&lt;br /&gt;
1699 10.129261016845703&lt;br /&gt;
1799 9.685370445251465&lt;br /&gt;
1899 9.391674041748047&lt;br /&gt;
1999 9.19735336303711&lt;br /&gt;
Result: y = 0.0022362577728927135 + 0.837898313999176 x + -0.0003857926349155605 x^2 + -0.09065020829439163 x^3&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
===Bind Mounts===&lt;br /&gt;
To get data into the container you need to pass some [https://apptainer.org/docs/user/main/bind_paths_and_mounts.html bind mounts].  Apptainer containers will not automatically mount data from the outside operating system other than your home directory.  Users need to manually create bind mounts for other file paths.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;code&amp;gt;--bind /fs/nexus-scratch/&amp;lt;USERNAME&amp;gt;/&amp;lt;PROJECTNAME&amp;gt;:/mnt&amp;lt;/code&amp;gt;&lt;br /&gt;
&lt;br /&gt;
In this example we will exec an interactive session, binding our [[Nexus]] scratch directory; the &#039;&#039;&#039;exec&#039;&#039;&#039; command allows us to specify the command we want to run inside the container.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
apptainer exec --nv --bind /fs/nexus-scratch/username:/fs/nexus-scratch/username /fs/nexus-containers/pytorch/pytorch_1.13.0+cu117.sif bash&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
You can now write and run your own PyTorch Python code interactively within the container, or create a Python script that you can call directly from the apptainer exec command for batch processing.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;span id=&amp;quot;Sif_anchor&amp;quot;&amp;gt;&amp;lt;/span&amp;gt;&lt;br /&gt;
===Shared Containers===&lt;br /&gt;
Portable images in the &#039;&#039;&#039;Singularity Image Format&#039;&#039;&#039; (.sif files) can be copied and shared.  Nexus maintains some shared containers in &amp;lt;code&amp;gt;/fs/nexus-containers&amp;lt;/code&amp;gt;.  These are arranged by the application(s) installed in them.&lt;br /&gt;
&lt;br /&gt;
==Docker Workflow Example==&lt;br /&gt;
We have a [https://gitlab.umiacs.umd.edu/derek/pytorch_docker pytorch_docker] example workflow using our [[GitLab]] as a Docker registry.  You can clone the repository and further customize this to your needs. The workflow is:&lt;br /&gt;
&lt;br /&gt;
# Run Docker on a laptop or personal desktop to create the image.&lt;br /&gt;
# Tag the image and push it to your repository (this can be any Docker registry).&lt;br /&gt;
# Pull the image down onto one of our workstations/clusters and run it with your data.&lt;br /&gt;
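&lt;br /&gt;
As a sketch of steps 1 and 2 above, the Docker side might look like the following (the &amp;lt;code&amp;gt;my_image&amp;lt;/code&amp;gt; name and the &amp;lt;code&amp;gt;myuser&amp;lt;/code&amp;gt; registry path are placeholders for your own project):&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
$ docker build -t my_image .&lt;br /&gt;
$ docker tag my_image registry.umiacs.umd.edu/myuser/my_image&lt;br /&gt;
$ docker push registry.umiacs.umd.edu/myuser/my_image&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;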
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
$ apptainer pull pytorch_docker.sif docker://registry.umiacs.umd.edu/derek/pytorch_docker&lt;br /&gt;
INFO:    Converting OCI blobs to SIF format&lt;br /&gt;
INFO:    Starting build...&lt;br /&gt;
Getting image source signatures&lt;br /&gt;
Copying blob 85386706b020 done&lt;br /&gt;
...&lt;br /&gt;
2022/10/14 10:58:36  info unpack layer: sha256:b6f46848806c8750a68edc4463bf146ed6c3c4af18f5d3f23281dcdfb1c65055&lt;br /&gt;
2022/10/14 10:58:43  info unpack layer: sha256:44845dc671f759820baac0376198141ca683f554bb16a177a3cfe262c9e368ff&lt;br /&gt;
INFO:    Creating SIF file...&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
$ apptainer exec --nv pytorch_docker.sif python3 -c &#039;from __future__ import print_function; import torch; print(torch.cuda.current_device()); x = torch.rand(5, 3); print(x)&#039;&lt;br /&gt;
0&lt;br /&gt;
tensor([[0.3273, 0.7174, 0.3587],&lt;br /&gt;
        [0.2250, 0.3896, 0.4136],&lt;br /&gt;
        [0.3626, 0.0383, 0.6274],&lt;br /&gt;
        [0.6241, 0.8079, 0.2950],&lt;br /&gt;
        [0.0804, 0.9705, 0.0030]])&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;/div&gt;</summary>
		<author><name>Chrissor</name></author>
	</entry>
	<entry>
		<id>https://wiki.umiacs.umd.edu/umiacs/index.php?title=Apptainer&amp;diff=11916</id>
		<title>Apptainer</title>
		<link rel="alternate" type="text/html" href="https://wiki.umiacs.umd.edu/umiacs/index.php?title=Apptainer&amp;diff=11916"/>
		<updated>2024-06-26T14:26:48Z</updated>

		<summary type="html">&lt;p&gt;Chrissor: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;[https://apptainer.org Apptainer] is a container platform that doesn&#039;t elevate the privileges of a user running the container.  This is important as UMIACS runs many multi-tenant hosts (such as [[Nexus]]) and doesn&#039;t provide administrative control to users on them.&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Apptainer was previously branded as Singularity.  You should still be able to run commands on the system with &amp;lt;code&amp;gt;singularity&amp;lt;/code&amp;gt;, however you should start migrating to using the &amp;lt;code&amp;gt;apptainer&amp;lt;/code&amp;gt; command.&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
==Overview==&lt;br /&gt;
You can find the current version we provide by running the &amp;lt;code&amp;gt;apptainer --version&amp;lt;/code&amp;gt; command.  If this instead says &amp;lt;code&amp;gt;apptainer: command not found&amp;lt;/code&amp;gt; and you are using a UMIACS-supported host, please [[HelpDesk | contact staff]] and we will ensure the software is made available on that host.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
# apptainer --version&lt;br /&gt;
apptainer version 1.2.5-1.el8&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Apptainer can run a variety of images including its own format and [https://apptainer.org/docs/user/main/docker_and_oci.html Docker images].  To create images from definition files, you need administrative rights.  On UMIACS-supported hosts you can use [[Podman]] to accomplish this; alternatively, build the image on a host that you have full administrative access to (laptop or personal desktop).&lt;br /&gt;
&lt;br /&gt;
If you are going to pull large images, you may run out of space in your home directory.  We suggest you run the following commands to set up alternate cache and tmp directories.  We are using &amp;lt;code&amp;gt;/scratch0&amp;lt;/code&amp;gt;, but you can substitute any sufficiently large local scratch directory, network scratch directory, or project directory.&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
export WORKDIR=/scratch0/$USER&lt;br /&gt;
export APPTAINER_CACHEDIR=${WORKDIR}/.cache&lt;br /&gt;
export APPTAINER_TMPDIR=${WORKDIR}/.tmp&lt;br /&gt;
mkdir -p $APPTAINER_CACHEDIR&lt;br /&gt;
mkdir -p $APPTAINER_TMPDIR&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
We suggest you pull images down into an intermediate file (a &#039;&#039;&#039;[[Apptainer#Sif_anchor | SIF]]&#039;&#039;&#039; file) so you do not have to worry about re-caching the image.&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
$ apptainer pull cuda12.2.2.sif docker://nvidia/cuda:12.2.2-base-ubi8&lt;br /&gt;
INFO:    Converting OCI blobs to SIF format&lt;br /&gt;
INFO:    Starting build...&lt;br /&gt;
Getting image source signatures&lt;br /&gt;
Copying blob d5d706ce7b29 done&lt;br /&gt;
Copying blob b4dc78aeafca done&lt;br /&gt;
Copying blob 24a22c1b7260 done&lt;br /&gt;
Copying blob 8dea37be3176 done&lt;br /&gt;
Copying blob 25fa05cd42bd done&lt;br /&gt;
Copying blob a57130ec8de1 done&lt;br /&gt;
Copying blob 880a66924cf5 done&lt;br /&gt;
Copying config db554d658b done&lt;br /&gt;
Writing manifest to image destination&lt;br /&gt;
Storing signatures&lt;br /&gt;
2022/10/14 10:31:17  info unpack layer: sha256:25fa05cd42bd8fabb25d2a6f3f8c9f7ab34637903d00fd2ed1c1d0fa980427dd&lt;br /&gt;
2022/10/14 10:31:19  info unpack layer: sha256:24a22c1b72605a4dbcec13b743ef60a6cbb43185fe46fd8a35941f9af7c11153&lt;br /&gt;
2022/10/14 10:31:19  info unpack layer: sha256:8dea37be3176a88fae41c265562d5fb438d9281c356dcb4edeaa51451dbdfdb2&lt;br /&gt;
2022/10/14 10:31:20  info unpack layer: sha256:b4dc78aeafca6321025300e9d3050c5ba3fb2ac743ae547c6e1efa3f9284ce0b&lt;br /&gt;
2022/10/14 10:31:20  info unpack layer: sha256:a57130ec8de1e44163e965620d5aed2abe6cddf48b48272964bfd8bca101df38&lt;br /&gt;
2022/10/14 10:31:20  info unpack layer: sha256:d5d706ce7b293ffb369d3bf0e3f58f959977903b82eb26433fe58645f79b778b&lt;br /&gt;
2022/10/14 10:31:49  info unpack layer: sha256:880a66924cf5e11df601a4f531f3741c6867a3e05238bc9b7cebb2a68d479204&lt;br /&gt;
INFO:    Creating SIF file...&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
$ apptainer inspect cuda12.2.2.sif&lt;br /&gt;
...&lt;br /&gt;
maintainer: NVIDIA CORPORATION &amp;lt;sw-cuda-installer@nvidia.com&amp;gt;&lt;br /&gt;
name: ubi8&lt;br /&gt;
org.label-schema.build-arch: amd64&lt;br /&gt;
org.label-schema.build-date: Wednesday_24_January_2024_13:53:0_EST&lt;br /&gt;
org.label-schema.schema-version: 1.0&lt;br /&gt;
org.label-schema.usage.apptainer.version: 1.2.5-1.el8&lt;br /&gt;
org.label-schema.usage.singularity.deffile.bootstrap: docker&lt;br /&gt;
org.label-schema.usage.singularity.deffile.from: nvidia/cuda:12.2.2-base-ubi8&lt;br /&gt;
...&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Now you can run the local image with the &#039;&#039;&#039;run&#039;&#039;&#039; command or start a shell with the &#039;&#039;&#039;shell&#039;&#039;&#039; command.  &lt;br /&gt;
* Please note that if you are in an environment with GPUs and you want to access them inside the container, you need to specify the &#039;&#039;&#039;--nv&#039;&#039;&#039; flag.  NVIDIA requires a specific driver and set of libraries to run CUDA programs, and this flag ensures that the appropriate devices are created inside the container and that these libraries are made available within it.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
$ apptainer run --nv cuda12.2.2.sif nvidia-smi -L&lt;br /&gt;
GPU 0: NVIDIA GeForce GTX 1080 Ti (UUID: GPU-8e040d17-402e-cc86-4e83-eb2b1d501f1e)&lt;br /&gt;
GPU 1: NVIDIA GeForce GTX 1080 Ti (UUID: GPU-d681a21a-8cdd-e624-6bf8-5b0234584ba2)&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==Nexus Containers==&lt;br /&gt;
In our [[Nexus]] environment we have some example containers based on our [https://gitlab.umiacs.umd.edu/derek/pytorch_docker pytorch_docker] project.  These can be found in &amp;lt;code&amp;gt;/fs/nexus-containers/pytorch&amp;lt;/code&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
You can run one of the example images by doing the following (you should have already allocated an interactive job with a GPU in [[Nexus]]).  It will use the default [https://gitlab.umiacs.umd.edu/derek/pytorch_docker/-/blob/master/tensor.py script] found at &amp;lt;code&amp;gt;/srv/tensor.py&amp;lt;/code&amp;gt; within the image.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
$ hostname &amp;amp;&amp;amp; nvidia-smi -L&lt;br /&gt;
tron38.umiacs.umd.edu&lt;br /&gt;
GPU 0: NVIDIA RTX A4000 (UUID: GPU-4a0a5644-9fc8-84b4-5d22-65d45ca36506)&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
$ apptainer run --nv /fs/nexus-containers/pytorch/pytorch_1.13.0+cu117.sif&lt;br /&gt;
99 984.5538940429688&lt;br /&gt;
199 654.1710815429688&lt;br /&gt;
299 435.662353515625&lt;br /&gt;
399 291.1429138183594&lt;br /&gt;
499 195.5575714111328&lt;br /&gt;
599 132.3363037109375&lt;br /&gt;
699 90.5206069946289&lt;br /&gt;
799 62.86213684082031&lt;br /&gt;
899 44.56754684448242&lt;br /&gt;
999 32.466392517089844&lt;br /&gt;
1099 24.461835861206055&lt;br /&gt;
1199 19.166893005371094&lt;br /&gt;
1299 15.6642427444458&lt;br /&gt;
1399 13.347112655639648&lt;br /&gt;
1499 11.814264297485352&lt;br /&gt;
1599 10.800163269042969&lt;br /&gt;
1699 10.129261016845703&lt;br /&gt;
1799 9.685370445251465&lt;br /&gt;
1899 9.391674041748047&lt;br /&gt;
1999 9.19735336303711&lt;br /&gt;
Result: y = 0.0022362577728927135 + 0.837898313999176 x + -0.0003857926349155605 x^2 + -0.09065020829439163 x^3&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
===Bind Mounts===&lt;br /&gt;
To get data into the container you need to pass some [https://apptainer.org/docs/user/main/bind_paths_and_mounts.html bind mounts].  Apptainer containers will not automatically mount data from the outside operating system other than your home directory.  Users need to manually create bind mounts for other file paths.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;code&amp;gt;--bind /fs/nexus-scratch/&amp;lt;USERNAME&amp;gt;/&amp;lt;PROJECTNAME&amp;gt;:/mnt&amp;lt;/code&amp;gt;&lt;br /&gt;
&lt;br /&gt;
In this example we will exec an interactive session, binding our [[Nexus]] scratch directory; the &#039;&#039;&#039;exec&#039;&#039;&#039; command allows us to specify the command we want to run inside the container.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
apptainer exec --nv --bind /fs/nexus-scratch/username:/fs/nexus-scratch/username /fs/nexus-containers/pytorch/pytorch_1.13.0+cu117.sif bash&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
You can now write and run your own PyTorch Python code interactively within the container, or create a Python script that you can call directly from the apptainer exec command for batch processing.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;span id=&amp;quot;Sif_anchor&amp;quot;&amp;gt;&amp;lt;/span&amp;gt;&lt;br /&gt;
===Shared Containers===&lt;br /&gt;
Portable images in the &#039;&#039;&#039;Singularity Image Format&#039;&#039;&#039; (.sif files) can be copied and shared.  Nexus maintains some shared containers in &amp;lt;code&amp;gt;/fs/nexus-containers&amp;lt;/code&amp;gt;.  These are arranged by the application(s) installed in them.&lt;br /&gt;
&lt;br /&gt;
==Docker Workflow Example==&lt;br /&gt;
We have a [https://gitlab.umiacs.umd.edu/derek/pytorch_docker pytorch_docker] example workflow using our [[GitLab]] as a Docker registry.  You can clone the repository and further customize this to your needs. The workflow is:&lt;br /&gt;
&lt;br /&gt;
# Run Docker on a laptop or personal desktop to create the image.&lt;br /&gt;
# Tag the image and push it to your repository (this can be any Docker registry).&lt;br /&gt;
# Pull the image down onto one of our workstations/clusters and run it with your data.&lt;br /&gt;
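&lt;br /&gt;
As a sketch of steps 1 and 2 above, the Docker side might look like the following (the &amp;lt;code&amp;gt;my_image&amp;lt;/code&amp;gt; name and the &amp;lt;code&amp;gt;myuser&amp;lt;/code&amp;gt; registry path are placeholders for your own project):&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
$ docker build -t my_image .&lt;br /&gt;
$ docker tag my_image registry.umiacs.umd.edu/myuser/my_image&lt;br /&gt;
$ docker push registry.umiacs.umd.edu/myuser/my_image&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;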
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
$ apptainer pull pytorch_docker.sif docker://registry.umiacs.umd.edu/derek/pytorch_docker&lt;br /&gt;
INFO:    Converting OCI blobs to SIF format&lt;br /&gt;
INFO:    Starting build...&lt;br /&gt;
Getting image source signatures&lt;br /&gt;
Copying blob 85386706b020 done&lt;br /&gt;
...&lt;br /&gt;
2022/10/14 10:58:36  info unpack layer: sha256:b6f46848806c8750a68edc4463bf146ed6c3c4af18f5d3f23281dcdfb1c65055&lt;br /&gt;
2022/10/14 10:58:43  info unpack layer: sha256:44845dc671f759820baac0376198141ca683f554bb16a177a3cfe262c9e368ff&lt;br /&gt;
INFO:    Creating SIF file...&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
$ apptainer exec --nv pytorch_docker.sif python3 -c &#039;from __future__ import print_function; import torch; print(torch.cuda.current_device()); x = torch.rand(5, 3); print(x)&#039;&lt;br /&gt;
0&lt;br /&gt;
tensor([[0.3273, 0.7174, 0.3587],&lt;br /&gt;
        [0.2250, 0.3896, 0.4136],&lt;br /&gt;
        [0.3626, 0.0383, 0.6274],&lt;br /&gt;
        [0.6241, 0.8079, 0.2950],&lt;br /&gt;
        [0.0804, 0.9705, 0.0030]])&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;/div&gt;</summary>
		<author><name>Chrissor</name></author>
	</entry>
	<entry>
		<id>https://wiki.umiacs.umd.edu/umiacs/index.php?title=Apptainer&amp;diff=11915</id>
		<title>Apptainer</title>
		<link rel="alternate" type="text/html" href="https://wiki.umiacs.umd.edu/umiacs/index.php?title=Apptainer&amp;diff=11915"/>
		<updated>2024-06-26T14:26:20Z</updated>

		<summary type="html">&lt;p&gt;Chrissor: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;[https://apptainer.org Apptainer] is a container platform that doesn&#039;t elevate the privileges of a user running the container.  This is important as UMIACS runs many multi-tenant hosts (such as [[Nexus]]) and doesn&#039;t provide administrative control to users on them.&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Apptainer was previously branded as Singularity.  You should still be able to run commands on the system with &amp;lt;code&amp;gt;singularity&amp;lt;/code&amp;gt;, however you should start migrating to using the &amp;lt;code&amp;gt;apptainer&amp;lt;/code&amp;gt; command.&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
==Overview==&lt;br /&gt;
You can find the current version we provide by running the &amp;lt;code&amp;gt;apptainer --version&amp;lt;/code&amp;gt; command.  If this instead says &amp;lt;code&amp;gt;apptainer: command not found&amp;lt;/code&amp;gt; and you are using a UMIACS-supported host, please [[HelpDesk | contact staff]] and we will ensure the software is made available on that host.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
# apptainer --version&lt;br /&gt;
apptainer version 1.2.5-1.el8&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Apptainer can run a variety of images including its own format and [https://apptainer.org/docs/user/main/docker_and_oci.html Docker images].  To create images from definition files, you need administrative rights.  On UMIACS-supported hosts you can use [[Podman]] to accomplish this; alternatively, build the image on a host that you have full administrative access to (laptop or personal desktop).&lt;br /&gt;
&lt;br /&gt;
If you are going to pull large images, you may run out of space in your home directory.  We suggest you run the following commands to set up alternate cache and tmp directories.  We are using &amp;lt;code&amp;gt;/scratch0&amp;lt;/code&amp;gt;, but you can substitute any sufficiently large local scratch directory, network scratch directory, or project directory.&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
export WORKDIR=/scratch0/$USER&lt;br /&gt;
export APPTAINER_CACHEDIR=${WORKDIR}/.cache&lt;br /&gt;
export APPTAINER_TMPDIR=${WORKDIR}/.tmp&lt;br /&gt;
mkdir -p $APPTAINER_CACHEDIR&lt;br /&gt;
mkdir -p $APPTAINER_TMPDIR&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
We suggest you pull images down into an intermediate file (a &#039;&#039;&#039;[[Apptainer#Sif_anchor | SIF]]&#039;&#039;&#039; file) so you do not have to worry about re-caching the image.&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
$ apptainer pull cuda12.2.2.sif docker://nvidia/cuda:12.2.2-base-ubi8&lt;br /&gt;
INFO:    Converting OCI blobs to SIF format&lt;br /&gt;
INFO:    Starting build...&lt;br /&gt;
Getting image source signatures&lt;br /&gt;
Copying blob d5d706ce7b29 done&lt;br /&gt;
Copying blob b4dc78aeafca done&lt;br /&gt;
Copying blob 24a22c1b7260 done&lt;br /&gt;
Copying blob 8dea37be3176 done&lt;br /&gt;
Copying blob 25fa05cd42bd done&lt;br /&gt;
Copying blob a57130ec8de1 done&lt;br /&gt;
Copying blob 880a66924cf5 done&lt;br /&gt;
Copying config db554d658b done&lt;br /&gt;
Writing manifest to image destination&lt;br /&gt;
Storing signatures&lt;br /&gt;
2022/10/14 10:31:17  info unpack layer: sha256:25fa05cd42bd8fabb25d2a6f3f8c9f7ab34637903d00fd2ed1c1d0fa980427dd&lt;br /&gt;
2022/10/14 10:31:19  info unpack layer: sha256:24a22c1b72605a4dbcec13b743ef60a6cbb43185fe46fd8a35941f9af7c11153&lt;br /&gt;
2022/10/14 10:31:19  info unpack layer: sha256:8dea37be3176a88fae41c265562d5fb438d9281c356dcb4edeaa51451dbdfdb2&lt;br /&gt;
2022/10/14 10:31:20  info unpack layer: sha256:b4dc78aeafca6321025300e9d3050c5ba3fb2ac743ae547c6e1efa3f9284ce0b&lt;br /&gt;
2022/10/14 10:31:20  info unpack layer: sha256:a57130ec8de1e44163e965620d5aed2abe6cddf48b48272964bfd8bca101df38&lt;br /&gt;
2022/10/14 10:31:20  info unpack layer: sha256:d5d706ce7b293ffb369d3bf0e3f58f959977903b82eb26433fe58645f79b778b&lt;br /&gt;
2022/10/14 10:31:49  info unpack layer: sha256:880a66924cf5e11df601a4f531f3741c6867a3e05238bc9b7cebb2a68d479204&lt;br /&gt;
INFO:    Creating SIF file...&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
$ apptainer inspect cuda12.2.2.sif&lt;br /&gt;
...&lt;br /&gt;
maintainer: NVIDIA CORPORATION &amp;lt;sw-cuda-installer@nvidia.com&amp;gt;&lt;br /&gt;
name: ubi8&lt;br /&gt;
org.label-schema.build-arch: amd64&lt;br /&gt;
org.label-schema.build-date: Wednesday_24_January_2024_13:53:0_EST&lt;br /&gt;
org.label-schema.schema-version: 1.0&lt;br /&gt;
org.label-schema.usage.apptainer.version: 1.2.5-1.el8&lt;br /&gt;
org.label-schema.usage.singularity.deffile.bootstrap: docker&lt;br /&gt;
org.label-schema.usage.singularity.deffile.from: nvidia/cuda:12.2.2-base-ubi8&lt;br /&gt;
...&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Now you can run the local image with the &#039;&#039;&#039;run&#039;&#039;&#039; command or start a shell with the &#039;&#039;&#039;shell&#039;&#039;&#039; command.  &lt;br /&gt;
* Please note that if you are in an environment with GPUs and you want to access them inside the container, you need to specify the &#039;&#039;&#039;--nv&#039;&#039;&#039; flag.  NVIDIA requires a specific driver and set of libraries to run CUDA programs, and this flag ensures that the appropriate devices are created inside the container and that these libraries are made available within it.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
$ apptainer run --nv cuda12.2.2.sif nvidia-smi -L&lt;br /&gt;
GPU 0: NVIDIA GeForce GTX 1080 Ti (UUID: GPU-8e040d17-402e-cc86-4e83-eb2b1d501f1e)&lt;br /&gt;
GPU 1: NVIDIA GeForce GTX 1080 Ti (UUID: GPU-d681a21a-8cdd-e624-6bf8-5b0234584ba2)&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==Nexus Containers==&lt;br /&gt;
In our [[Nexus]] environment we have some example containers based on our [https://gitlab.umiacs.umd.edu/derek/pytorch_docker pytorch_docker] project.  These can be found in &amp;lt;code&amp;gt;/fs/nexus-containers/pytorch&amp;lt;/code&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
You can run one of the example images by doing the following (you should have already allocated an interactive job with a GPU in [[Nexus]]).  It will use the default [https://gitlab.umiacs.umd.edu/derek/pytorch_docker/-/blob/master/tensor.py script] found at &amp;lt;code&amp;gt;/srv/tensor.py&amp;lt;/code&amp;gt; within the image.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
$ hostname &amp;amp;&amp;amp; nvidia-smi -L&lt;br /&gt;
tron38.umiacs.umd.edu&lt;br /&gt;
GPU 0: NVIDIA RTX A4000 (UUID: GPU-4a0a5644-9fc8-84b4-5d22-65d45ca36506)&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
$ apptainer run --nv /fs/nexus-containers/pytorch/pytorch_1.13.0+cu117.sif&lt;br /&gt;
99 984.5538940429688&lt;br /&gt;
199 654.1710815429688&lt;br /&gt;
299 435.662353515625&lt;br /&gt;
399 291.1429138183594&lt;br /&gt;
499 195.5575714111328&lt;br /&gt;
599 132.3363037109375&lt;br /&gt;
699 90.5206069946289&lt;br /&gt;
799 62.86213684082031&lt;br /&gt;
899 44.56754684448242&lt;br /&gt;
999 32.466392517089844&lt;br /&gt;
1099 24.461835861206055&lt;br /&gt;
1199 19.166893005371094&lt;br /&gt;
1299 15.6642427444458&lt;br /&gt;
1399 13.347112655639648&lt;br /&gt;
1499 11.814264297485352&lt;br /&gt;
1599 10.800163269042969&lt;br /&gt;
1699 10.129261016845703&lt;br /&gt;
1799 9.685370445251465&lt;br /&gt;
1899 9.391674041748047&lt;br /&gt;
1999 9.19735336303711&lt;br /&gt;
Result: y = 0.0022362577728927135 + 0.837898313999176 x + -0.0003857926349155605 x^2 + -0.09065020829439163 x^3&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
===Bind Mounts===&lt;br /&gt;
To get data into the container you need to pass some [https://apptainer.org/docs/user/main/bind_paths_and_mounts.html bind mounts].  Apptainer containers will not automatically mount data from the outside operating system other than your home directory.  Users need to manually create bind mounts for other file paths.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;code&amp;gt;--bind /fs/nexus-scratch/&amp;lt;USERNAME&amp;gt;/&amp;lt;PROJECTNAME&amp;gt;:/mnt&amp;lt;/code&amp;gt;&lt;br /&gt;
&lt;br /&gt;
In this example we use &amp;lt;code&amp;gt;exec&amp;lt;/code&amp;gt;, which lets us specify the command to run inside the container, to start an interactive shell with our [[Nexus]] scratch directory bound in.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
apptainer exec --nv --bind /fs/nexus-scratch/username:/fs/nexus-scratch/username /fs/nexus-containers/pytorch/pytorch_1.13.0+cu117.sif bash&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
You can now write and run your own PyTorch code interactively within the container, or put it in a Python script that you call directly from the &amp;lt;code&amp;gt;apptainer exec&amp;lt;/code&amp;gt; command for batch processing.&lt;br /&gt;
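As a sketch of the batch-processing case, a small wrapper can assemble the same &amp;lt;code&amp;gt;apptainer exec&amp;lt;/code&amp;gt; invocation with a script path swapped in; the scratch path and &amp;lt;code&amp;gt;train.py&amp;lt;/code&amp;gt; name below are hypothetical placeholders, not paths from this page:

```shell
#!/bin/sh
# Sketch of a batch wrapper around apptainer exec.
# SCRATCH and SCRIPT are hypothetical placeholders -- substitute your own paths.
SCRATCH="/fs/nexus-scratch/username"
IMAGE="/fs/nexus-containers/pytorch/pytorch_1.13.0+cu117.sif"
SCRIPT="$SCRATCH/train.py"

# Assemble the command, print it for review, then run it from your job script.
CMD="apptainer exec --nv --bind $SCRATCH:$SCRATCH $IMAGE python3 $SCRIPT"
echo "$CMD"
```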
&lt;br /&gt;
===Shared Containers===&lt;br /&gt;
&amp;lt;span id=&amp;quot;Sif_anchor&amp;quot;&amp;gt;&amp;lt;/span&amp;gt;&lt;br /&gt;
Portable images in the &#039;&#039;&#039;Singularity Image Format&#039;&#039;&#039; (.sif files) can be copied and shared.  Nexus maintains some shared containers in &amp;lt;code&amp;gt;/fs/nexus-containers&amp;lt;/code&amp;gt;, arranged by the application(s) installed in each.&lt;br /&gt;
&lt;br /&gt;
==Docker Workflow Example==&lt;br /&gt;
We have a [https://gitlab.umiacs.umd.edu/derek/pytorch_docker pytorch_docker] example workflow that uses our [[GitLab]] as a Docker registry.  You can clone the repository and customize it to your needs.  The workflow is:&lt;br /&gt;
&lt;br /&gt;
# Run Docker on a laptop or personal desktop to create the image.&lt;br /&gt;
# Tag the image and push it to your repository (this can be any Docker registry).&lt;br /&gt;
# Pull the image down onto one of our workstations/clusters and run it with your data.&lt;br /&gt;
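The build and push steps above can be sketched as the following dry-run commands, run wherever you have Docker installed; the project path under the registry is a hypothetical placeholder for your own GitLab project:

```shell
#!/bin/sh
# Sketch of the build/tag/push workflow.
# PROJECT is a hypothetical placeholder -- use your own GitLab project path.
REGISTRY="registry.umiacs.umd.edu"
PROJECT="username/pytorch_docker"
TARGET="$REGISTRY/$PROJECT"

# The echo prefixes make this a dry run; drop them to actually build and push.
echo "docker build -t pytorch_docker ."
echo "docker tag pytorch_docker $TARGET"
echo "docker push $TARGET"
```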
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
$ apptainer pull pytorch_docker.sif docker://registry.umiacs.umd.edu/derek/pytorch_docker&lt;br /&gt;
INFO:    Converting OCI blobs to SIF format&lt;br /&gt;
INFO:    Starting build...&lt;br /&gt;
Getting image source signatures&lt;br /&gt;
Copying blob 85386706b020 done&lt;br /&gt;
...&lt;br /&gt;
2022/10/14 10:58:36  info unpack layer: sha256:b6f46848806c8750a68edc4463bf146ed6c3c4af18f5d3f23281dcdfb1c65055&lt;br /&gt;
2022/10/14 10:58:43  info unpack layer: sha256:44845dc671f759820baac0376198141ca683f554bb16a177a3cfe262c9e368ff&lt;br /&gt;
INFO:    Creating SIF file...&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
$ apptainer exec --nv pytorch_docker.sif python3 -c &#039;from __future__ import print_function; import torch; print(torch.cuda.current_device()); x = torch.rand(5, 3); print(x)&#039;&lt;br /&gt;
0&lt;br /&gt;
tensor([[0.3273, 0.7174, 0.3587],&lt;br /&gt;
        [0.2250, 0.3896, 0.4136],&lt;br /&gt;
        [0.3626, 0.0383, 0.6274],&lt;br /&gt;
        [0.6241, 0.8079, 0.2950],&lt;br /&gt;
        [0.0804, 0.9705, 0.0030]])&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;/div&gt;</summary>
		<author><name>Chrissor</name></author>
	</entry>
	<entry>
		<id>https://wiki.umiacs.umd.edu/umiacs/index.php?title=Apptainer&amp;diff=11914</id>
		<title>Apptainer</title>
		<link rel="alternate" type="text/html" href="https://wiki.umiacs.umd.edu/umiacs/index.php?title=Apptainer&amp;diff=11914"/>
		<updated>2024-06-26T14:25:44Z</updated>

		<summary type="html">&lt;p&gt;Chrissor: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;[https://apptainer.org Apptainer] is a container platform that doesn&#039;t elevate the privileges of a user running the container.  This is important as UMIACS runs many multi-tenant hosts (such as [[Nexus]]) and doesn&#039;t provide administrative control to users on them.&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Apptainer was previously branded as Singularity.  You should still be able to run commands on the system with &amp;lt;code&amp;gt;singularity&amp;lt;/code&amp;gt;; however, you should start migrating to the &amp;lt;code&amp;gt;apptainer&amp;lt;/code&amp;gt; command.&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
==Overview==&lt;br /&gt;
You can find out the current version we provide by running the &amp;lt;code&amp;gt;apptainer --version&amp;lt;/code&amp;gt; command.  If this instead says &amp;lt;code&amp;gt;apptainer: command not found&amp;lt;/code&amp;gt; and you are using a UMIACS-supported host, please [[HelpDesk | contact staff]] and we will ensure that the software is made available on that host.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
# apptainer --version&lt;br /&gt;
apptainer version 1.2.5-1.el8&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Apptainer can run a variety of images, including its own format and [https://apptainer.org/docs/user/main/docker_and_oci.html Docker images].  Creating images from definition files requires administrative rights, so you will need to either use [[Podman]] on UMIACS-supported hosts or build on a host you have full administrative access to (such as a laptop or personal desktop).&lt;br /&gt;
&lt;br /&gt;
If you are going to pull large images, you may run out of space in your home directory.  We suggest you run the following commands to set up alternate cache and tmp directories.  We are using &amp;lt;code&amp;gt;/scratch0&amp;lt;/code&amp;gt;, but you can substitute any sufficiently large local scratch, network scratch, or project directory.&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
export WORKDIR=/scratch0/$USER&lt;br /&gt;
export APPTAINER_CACHEDIR=${WORKDIR}/.cache&lt;br /&gt;
export APPTAINER_TMPDIR=${WORKDIR}/.tmp&lt;br /&gt;
mkdir -p $APPTAINER_CACHEDIR&lt;br /&gt;
mkdir -p $APPTAINER_TMPDIR&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
We suggest you pull images down into an intermediate file (a &#039;&#039;&#039;[[Apptainer#Sif_anchor | SIF]]&#039;&#039;&#039; file) so that you do not have to worry about re-caching the image.&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
$ apptainer pull cuda12.2.2.sif docker://nvidia/cuda:12.2.2-base-ubi8&lt;br /&gt;
INFO:    Converting OCI blobs to SIF format&lt;br /&gt;
INFO:    Starting build...&lt;br /&gt;
Getting image source signatures&lt;br /&gt;
Copying blob d5d706ce7b29 done&lt;br /&gt;
Copying blob b4dc78aeafca done&lt;br /&gt;
Copying blob 24a22c1b7260 done&lt;br /&gt;
Copying blob 8dea37be3176 done&lt;br /&gt;
Copying blob 25fa05cd42bd done&lt;br /&gt;
Copying blob a57130ec8de1 done&lt;br /&gt;
Copying blob 880a66924cf5 done&lt;br /&gt;
Copying config db554d658b done&lt;br /&gt;
Writing manifest to image destination&lt;br /&gt;
Storing signatures&lt;br /&gt;
2022/10/14 10:31:17  info unpack layer: sha256:25fa05cd42bd8fabb25d2a6f3f8c9f7ab34637903d00fd2ed1c1d0fa980427dd&lt;br /&gt;
2022/10/14 10:31:19  info unpack layer: sha256:24a22c1b72605a4dbcec13b743ef60a6cbb43185fe46fd8a35941f9af7c11153&lt;br /&gt;
2022/10/14 10:31:19  info unpack layer: sha256:8dea37be3176a88fae41c265562d5fb438d9281c356dcb4edeaa51451dbdfdb2&lt;br /&gt;
2022/10/14 10:31:20  info unpack layer: sha256:b4dc78aeafca6321025300e9d3050c5ba3fb2ac743ae547c6e1efa3f9284ce0b&lt;br /&gt;
2022/10/14 10:31:20  info unpack layer: sha256:a57130ec8de1e44163e965620d5aed2abe6cddf48b48272964bfd8bca101df38&lt;br /&gt;
2022/10/14 10:31:20  info unpack layer: sha256:d5d706ce7b293ffb369d3bf0e3f58f959977903b82eb26433fe58645f79b778b&lt;br /&gt;
2022/10/14 10:31:49  info unpack layer: sha256:880a66924cf5e11df601a4f531f3741c6867a3e05238bc9b7cebb2a68d479204&lt;br /&gt;
INFO:    Creating SIF file...&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
$ apptainer inspect cuda12.2.2.sif&lt;br /&gt;
...&lt;br /&gt;
maintainer: NVIDIA CORPORATION &amp;lt;sw-cuda-installer@nvidia.com&amp;gt;&lt;br /&gt;
name: ubi8&lt;br /&gt;
org.label-schema.build-arch: amd64&lt;br /&gt;
org.label-schema.build-date: Wednesday_24_January_2024_13:53:0_EST&lt;br /&gt;
org.label-schema.schema-version: 1.0&lt;br /&gt;
org.label-schema.usage.apptainer.version: 1.2.5-1.el8&lt;br /&gt;
org.label-schema.usage.singularity.deffile.bootstrap: docker&lt;br /&gt;
org.label-schema.usage.singularity.deffile.from: nvidia/cuda:12.2.2-base-ubi8&lt;br /&gt;
...&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Now you can run the local image with the &#039;&#039;&#039;run&#039;&#039;&#039; command or start a shell with the &#039;&#039;&#039;shell&#039;&#039;&#039; command.  &lt;br /&gt;
* Please note that if you are in an environment with GPUs and want to access them inside the container, you need to specify the &#039;&#039;&#039;--nv&#039;&#039;&#039; flag.  NVIDIA requires a specific driver and libraries to run CUDA programs, and this flag ensures that the appropriate devices are created inside the container and that these libraries are made available there.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
$ apptainer run --nv cuda12.2.2.sif nvidia-smi -L&lt;br /&gt;
GPU 0: NVIDIA GeForce GTX 1080 Ti (UUID: GPU-8e040d17-402e-cc86-4e83-eb2b1d501f1e)&lt;br /&gt;
GPU 1: NVIDIA GeForce GTX 1080 Ti (UUID: GPU-d681a21a-8cdd-e624-6bf8-5b0234584ba2)&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==Nexus Containers==&lt;br /&gt;
In our [[Nexus]] environment we have some example containers based on our [https://gitlab.umiacs.umd.edu/derek/pytorch_docker pytorch_docker] project.  These can be found in &amp;lt;code&amp;gt;/fs/nexus-containers/pytorch&amp;lt;/code&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
You can run one of the example images as follows (you should have already allocated an interactive job with a GPU in [[Nexus]]).  It will use the default [https://gitlab.umiacs.umd.edu/derek/pytorch_docker/-/blob/master/tensor.py script] found at &amp;lt;code&amp;gt;/srv/tensor.py&amp;lt;/code&amp;gt; within the image.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
$ hostname &amp;amp;&amp;amp; nvidia-smi -L&lt;br /&gt;
tron38.umiacs.umd.edu&lt;br /&gt;
GPU 0: NVIDIA RTX A4000 (UUID: GPU-4a0a5644-9fc8-84b4-5d22-65d45ca36506)&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
$ apptainer run --nv /fs/nexus-containers/pytorch/pytorch_1.13.0+cu117.sif&lt;br /&gt;
99 984.5538940429688&lt;br /&gt;
199 654.1710815429688&lt;br /&gt;
299 435.662353515625&lt;br /&gt;
399 291.1429138183594&lt;br /&gt;
499 195.5575714111328&lt;br /&gt;
599 132.3363037109375&lt;br /&gt;
699 90.5206069946289&lt;br /&gt;
799 62.86213684082031&lt;br /&gt;
899 44.56754684448242&lt;br /&gt;
999 32.466392517089844&lt;br /&gt;
1099 24.461835861206055&lt;br /&gt;
1199 19.166893005371094&lt;br /&gt;
1299 15.6642427444458&lt;br /&gt;
1399 13.347112655639648&lt;br /&gt;
1499 11.814264297485352&lt;br /&gt;
1599 10.800163269042969&lt;br /&gt;
1699 10.129261016845703&lt;br /&gt;
1799 9.685370445251465&lt;br /&gt;
1899 9.391674041748047&lt;br /&gt;
1999 9.19735336303711&lt;br /&gt;
Result: y = 0.0022362577728927135 + 0.837898313999176 x + -0.0003857926349155605 x^2 + -0.09065020829439163 x^3&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
===Bind Mounts===&lt;br /&gt;
To get data into the container, you need to pass some [https://apptainer.org/docs/user/main/bind_paths_and_mounts.html bind mounts].  Apptainer will not automatically mount anything from the host operating system other than your home directory, so you must manually bind any other file paths you want available inside the container.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;code&amp;gt;--bind /fs/nexus-scratch/&amp;lt;USERNAME&amp;gt;/&amp;lt;PROJECTNAME&amp;gt;:/mnt&amp;lt;/code&amp;gt;&lt;br /&gt;
&lt;br /&gt;
In this example we use &amp;lt;code&amp;gt;exec&amp;lt;/code&amp;gt;, which lets us specify the command to run inside the container, to start an interactive shell with our [[Nexus]] scratch directory bound in.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
apptainer exec --nv --bind /fs/nexus-scratch/username:/fs/nexus-scratch/username /fs/nexus-containers/pytorch/pytorch_1.13.0+cu117.sif bash&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
You can now write and run your own PyTorch code interactively within the container, or put it in a Python script that you call directly from the &amp;lt;code&amp;gt;apptainer exec&amp;lt;/code&amp;gt; command for batch processing.&lt;br /&gt;
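As a sketch of the batch-processing case, a small wrapper can assemble the same &amp;lt;code&amp;gt;apptainer exec&amp;lt;/code&amp;gt; invocation with a script path swapped in; the scratch path and &amp;lt;code&amp;gt;train.py&amp;lt;/code&amp;gt; name below are hypothetical placeholders, not paths from this page:

```shell
#!/bin/sh
# Sketch of a batch wrapper around apptainer exec.
# SCRATCH and SCRIPT are hypothetical placeholders -- substitute your own paths.
SCRATCH="/fs/nexus-scratch/username"
IMAGE="/fs/nexus-containers/pytorch/pytorch_1.13.0+cu117.sif"
SCRIPT="$SCRATCH/train.py"

# Assemble the command, print it for review, then run it from your job script.
CMD="apptainer exec --nv --bind $SCRATCH:$SCRATCH $IMAGE python3 $SCRIPT"
echo "$CMD"
```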
&lt;br /&gt;
===Shared Containers===&lt;br /&gt;
{{anchor|Sif_anchor}}&lt;br /&gt;
Portable images in the &#039;&#039;&#039;Singularity Image Format&#039;&#039;&#039; (.sif files) can be copied and shared.  Nexus maintains some shared containers in &amp;lt;code&amp;gt;/fs/nexus-containers&amp;lt;/code&amp;gt;, arranged by the application(s) installed in each.&lt;br /&gt;
&lt;br /&gt;
==Docker Workflow Example==&lt;br /&gt;
We have a [https://gitlab.umiacs.umd.edu/derek/pytorch_docker pytorch_docker] example workflow that uses our [[GitLab]] as a Docker registry.  You can clone the repository and customize it to your needs.  The workflow is:&lt;br /&gt;
&lt;br /&gt;
# Run Docker on a laptop or personal desktop to create the image.&lt;br /&gt;
# Tag the image and push it to your repository (this can be any Docker registry).&lt;br /&gt;
# Pull the image down onto one of our workstations/clusters and run it with your data.&lt;br /&gt;
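The build and push steps above can be sketched as the following dry-run commands, run wherever you have Docker installed; the project path under the registry is a hypothetical placeholder for your own GitLab project:

```shell
#!/bin/sh
# Sketch of the build/tag/push workflow.
# PROJECT is a hypothetical placeholder -- use your own GitLab project path.
REGISTRY="registry.umiacs.umd.edu"
PROJECT="username/pytorch_docker"
TARGET="$REGISTRY/$PROJECT"

# The echo prefixes make this a dry run; drop them to actually build and push.
echo "docker build -t pytorch_docker ."
echo "docker tag pytorch_docker $TARGET"
echo "docker push $TARGET"
```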
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
$ apptainer pull pytorch_docker.sif docker://registry.umiacs.umd.edu/derek/pytorch_docker&lt;br /&gt;
INFO:    Converting OCI blobs to SIF format&lt;br /&gt;
INFO:    Starting build...&lt;br /&gt;
Getting image source signatures&lt;br /&gt;
Copying blob 85386706b020 done&lt;br /&gt;
...&lt;br /&gt;
2022/10/14 10:58:36  info unpack layer: sha256:b6f46848806c8750a68edc4463bf146ed6c3c4af18f5d3f23281dcdfb1c65055&lt;br /&gt;
2022/10/14 10:58:43  info unpack layer: sha256:44845dc671f759820baac0376198141ca683f554bb16a177a3cfe262c9e368ff&lt;br /&gt;
INFO:    Creating SIF file...&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
$ apptainer exec --nv pytorch_docker.sif python3 -c &#039;from __future__ import print_function; import torch; print(torch.cuda.current_device()); x = torch.rand(5, 3); print(x)&#039;&lt;br /&gt;
0&lt;br /&gt;
tensor([[0.3273, 0.7174, 0.3587],&lt;br /&gt;
        [0.2250, 0.3896, 0.4136],&lt;br /&gt;
        [0.3626, 0.0383, 0.6274],&lt;br /&gt;
        [0.6241, 0.8079, 0.2950],&lt;br /&gt;
        [0.0804, 0.9705, 0.0030]])&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;/div&gt;</summary>
		<author><name>Chrissor</name></author>
	</entry>
	<entry>
		<id>https://wiki.umiacs.umd.edu/umiacs/index.php?title=Apptainer&amp;diff=11913</id>
		<title>Apptainer</title>
		<link rel="alternate" type="text/html" href="https://wiki.umiacs.umd.edu/umiacs/index.php?title=Apptainer&amp;diff=11913"/>
		<updated>2024-06-26T14:22:49Z</updated>

		<summary type="html">&lt;p&gt;Chrissor: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;[https://apptainer.org Apptainer] is a container platform that doesn&#039;t elevate the privileges of a user running the container.  This is important as UMIACS runs many multi-tenant hosts (such as [[Nexus]]) and doesn&#039;t provide administrative control to users on them.&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Apptainer was previously branded as Singularity.  You should still be able to run commands on the system with &amp;lt;code&amp;gt;singularity&amp;lt;/code&amp;gt;; however, you should start migrating to the &amp;lt;code&amp;gt;apptainer&amp;lt;/code&amp;gt; command.&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
==Overview==&lt;br /&gt;
You can find out the current version we provide by running the &amp;lt;code&amp;gt;apptainer --version&amp;lt;/code&amp;gt; command.  If this instead says &amp;lt;code&amp;gt;apptainer: command not found&amp;lt;/code&amp;gt; and you are using a UMIACS-supported host, please [[HelpDesk | contact staff]] and we will ensure that the software is made available on that host.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
# apptainer --version&lt;br /&gt;
apptainer version 1.2.5-1.el8&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Apptainer can run a variety of images, including its own format and [https://apptainer.org/docs/user/main/docker_and_oci.html Docker images].  Creating images from definition files requires administrative rights, so you will need to either use [[Podman]] on UMIACS-supported hosts or build on a host you have full administrative access to (such as a laptop or personal desktop).&lt;br /&gt;
&lt;br /&gt;
If you are going to pull large images, you may run out of space in your home directory.  We suggest you run the following commands to set up alternate cache and tmp directories.  We are using &amp;lt;code&amp;gt;/scratch0&amp;lt;/code&amp;gt;, but you can substitute any sufficiently large local scratch, network scratch, or project directory.&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
export WORKDIR=/scratch0/$USER&lt;br /&gt;
export APPTAINER_CACHEDIR=${WORKDIR}/.cache&lt;br /&gt;
export APPTAINER_TMPDIR=${WORKDIR}/.tmp&lt;br /&gt;
mkdir -p $APPTAINER_CACHEDIR&lt;br /&gt;
mkdir -p $APPTAINER_TMPDIR&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
We suggest you pull images down into an intermediate file (a &#039;&#039;&#039;[[Apptainer#Sif_anchor | SIF]]&#039;&#039;&#039; file) so that you do not have to worry about re-caching the image.&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
$ apptainer pull cuda12.2.2.sif docker://nvidia/cuda:12.2.2-base-ubi8&lt;br /&gt;
INFO:    Converting OCI blobs to SIF format&lt;br /&gt;
INFO:    Starting build...&lt;br /&gt;
Getting image source signatures&lt;br /&gt;
Copying blob d5d706ce7b29 done&lt;br /&gt;
Copying blob b4dc78aeafca done&lt;br /&gt;
Copying blob 24a22c1b7260 done&lt;br /&gt;
Copying blob 8dea37be3176 done&lt;br /&gt;
Copying blob 25fa05cd42bd done&lt;br /&gt;
Copying blob a57130ec8de1 done&lt;br /&gt;
Copying blob 880a66924cf5 done&lt;br /&gt;
Copying config db554d658b done&lt;br /&gt;
Writing manifest to image destination&lt;br /&gt;
Storing signatures&lt;br /&gt;
2022/10/14 10:31:17  info unpack layer: sha256:25fa05cd42bd8fabb25d2a6f3f8c9f7ab34637903d00fd2ed1c1d0fa980427dd&lt;br /&gt;
2022/10/14 10:31:19  info unpack layer: sha256:24a22c1b72605a4dbcec13b743ef60a6cbb43185fe46fd8a35941f9af7c11153&lt;br /&gt;
2022/10/14 10:31:19  info unpack layer: sha256:8dea37be3176a88fae41c265562d5fb438d9281c356dcb4edeaa51451dbdfdb2&lt;br /&gt;
2022/10/14 10:31:20  info unpack layer: sha256:b4dc78aeafca6321025300e9d3050c5ba3fb2ac743ae547c6e1efa3f9284ce0b&lt;br /&gt;
2022/10/14 10:31:20  info unpack layer: sha256:a57130ec8de1e44163e965620d5aed2abe6cddf48b48272964bfd8bca101df38&lt;br /&gt;
2022/10/14 10:31:20  info unpack layer: sha256:d5d706ce7b293ffb369d3bf0e3f58f959977903b82eb26433fe58645f79b778b&lt;br /&gt;
2022/10/14 10:31:49  info unpack layer: sha256:880a66924cf5e11df601a4f531f3741c6867a3e05238bc9b7cebb2a68d479204&lt;br /&gt;
INFO:    Creating SIF file...&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
$ apptainer inspect cuda12.2.2.sif&lt;br /&gt;
...&lt;br /&gt;
maintainer: NVIDIA CORPORATION &amp;lt;sw-cuda-installer@nvidia.com&amp;gt;&lt;br /&gt;
name: ubi8&lt;br /&gt;
org.label-schema.build-arch: amd64&lt;br /&gt;
org.label-schema.build-date: Wednesday_24_January_2024_13:53:0_EST&lt;br /&gt;
org.label-schema.schema-version: 1.0&lt;br /&gt;
org.label-schema.usage.apptainer.version: 1.2.5-1.el8&lt;br /&gt;
org.label-schema.usage.singularity.deffile.bootstrap: docker&lt;br /&gt;
org.label-schema.usage.singularity.deffile.from: nvidia/cuda:12.2.2-base-ubi8&lt;br /&gt;
...&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Now you can run the local image with the &#039;&#039;&#039;run&#039;&#039;&#039; command or start a shell with the &#039;&#039;&#039;shell&#039;&#039;&#039; command.  &lt;br /&gt;
* Please note that if you are in an environment with GPUs and want to access them inside the container, you need to specify the &#039;&#039;&#039;--nv&#039;&#039;&#039; flag.  NVIDIA requires a specific driver and libraries to run CUDA programs, and this flag ensures that the appropriate devices are created inside the container and that these libraries are made available there.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
$ apptainer run --nv cuda12.2.2.sif nvidia-smi -L&lt;br /&gt;
GPU 0: NVIDIA GeForce GTX 1080 Ti (UUID: GPU-8e040d17-402e-cc86-4e83-eb2b1d501f1e)&lt;br /&gt;
GPU 1: NVIDIA GeForce GTX 1080 Ti (UUID: GPU-d681a21a-8cdd-e624-6bf8-5b0234584ba2)&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==Nexus Containers==&lt;br /&gt;
In our [[Nexus]] environment we have some example containers based on our [https://gitlab.umiacs.umd.edu/derek/pytorch_docker pytorch_docker] project.  These can be found in &amp;lt;code&amp;gt;/fs/nexus-containers/pytorch&amp;lt;/code&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
You can run one of the example images as follows (you should have already allocated an interactive job with a GPU in [[Nexus]]).  It will use the default [https://gitlab.umiacs.umd.edu/derek/pytorch_docker/-/blob/master/tensor.py script] found at &amp;lt;code&amp;gt;/srv/tensor.py&amp;lt;/code&amp;gt; within the image.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
$ hostname &amp;amp;&amp;amp; nvidia-smi -L&lt;br /&gt;
tron38.umiacs.umd.edu&lt;br /&gt;
GPU 0: NVIDIA RTX A4000 (UUID: GPU-4a0a5644-9fc8-84b4-5d22-65d45ca36506)&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
$ apptainer run --nv /fs/nexus-containers/pytorch/pytorch_1.13.0+cu117.sif&lt;br /&gt;
99 984.5538940429688&lt;br /&gt;
199 654.1710815429688&lt;br /&gt;
299 435.662353515625&lt;br /&gt;
399 291.1429138183594&lt;br /&gt;
499 195.5575714111328&lt;br /&gt;
599 132.3363037109375&lt;br /&gt;
699 90.5206069946289&lt;br /&gt;
799 62.86213684082031&lt;br /&gt;
899 44.56754684448242&lt;br /&gt;
999 32.466392517089844&lt;br /&gt;
1099 24.461835861206055&lt;br /&gt;
1199 19.166893005371094&lt;br /&gt;
1299 15.6642427444458&lt;br /&gt;
1399 13.347112655639648&lt;br /&gt;
1499 11.814264297485352&lt;br /&gt;
1599 10.800163269042969&lt;br /&gt;
1699 10.129261016845703&lt;br /&gt;
1799 9.685370445251465&lt;br /&gt;
1899 9.391674041748047&lt;br /&gt;
1999 9.19735336303711&lt;br /&gt;
Result: y = 0.0022362577728927135 + 0.837898313999176 x + -0.0003857926349155605 x^2 + -0.09065020829439163 x^3&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
===Bind Mounts===&lt;br /&gt;
To get data into the container, you need to pass some [https://apptainer.org/docs/user/main/bind_paths_and_mounts.html bind mounts].  Apptainer will not automatically mount anything from the host operating system other than your home directory, so you must manually bind any other file paths you want available inside the container.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;code&amp;gt;--bind /fs/nexus-scratch/&amp;lt;USERNAME&amp;gt;/&amp;lt;PROJECTNAME&amp;gt;:/mnt&amp;lt;/code&amp;gt;&lt;br /&gt;
&lt;br /&gt;
In this example we use &amp;lt;code&amp;gt;exec&amp;lt;/code&amp;gt;, which lets us specify the command to run inside the container, to start an interactive shell with our [[Nexus]] scratch directory bound in.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
apptainer exec --nv --bind /fs/nexus-scratch/username:/fs/nexus-scratch/username /fs/nexus-containers/pytorch/pytorch_1.13.0+cu117.sif bash&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
You can now write and run your own PyTorch code interactively within the container, or put it in a Python script that you call directly from the &amp;lt;code&amp;gt;apptainer exec&amp;lt;/code&amp;gt; command for batch processing.&lt;br /&gt;
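As a sketch of the batch-processing case, a small wrapper can assemble the same &amp;lt;code&amp;gt;apptainer exec&amp;lt;/code&amp;gt; invocation with a script path swapped in; the scratch path and &amp;lt;code&amp;gt;train.py&amp;lt;/code&amp;gt; name below are hypothetical placeholders, not paths from this page:

```shell
#!/bin/sh
# Sketch of a batch wrapper around apptainer exec.
# SCRATCH and SCRIPT are hypothetical placeholders -- substitute your own paths.
SCRATCH="/fs/nexus-scratch/username"
IMAGE="/fs/nexus-containers/pytorch/pytorch_1.13.0+cu117.sif"
SCRIPT="$SCRATCH/train.py"

# Assemble the command, print it for review, then run it from your job script.
CMD="apptainer exec --nv --bind $SCRATCH:$SCRATCH $IMAGE python3 $SCRIPT"
echo "$CMD"
```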
&lt;br /&gt;
===Shared Containers===&lt;br /&gt;
&amp;lt;span id=&amp;quot;SIF_anchor&amp;quot;&amp;gt;&amp;lt;/span&amp;gt;&lt;br /&gt;
Portable images in the &#039;&#039;&#039;Singularity Image Format&#039;&#039;&#039; (.sif files) can be copied and shared.  Nexus maintains some shared containers in &amp;lt;code&amp;gt;/fs/nexus-containers&amp;lt;/code&amp;gt;, arranged by the application(s) installed in each.&lt;br /&gt;
&lt;br /&gt;
==Docker Workflow Example==&lt;br /&gt;
We have a [https://gitlab.umiacs.umd.edu/derek/pytorch_docker pytorch_docker] example workflow that uses our [[GitLab]] as a Docker registry.  You can clone the repository and customize it to your needs.  The workflow is:&lt;br /&gt;
&lt;br /&gt;
# Run Docker on a laptop or personal desktop to create the image.&lt;br /&gt;
# Tag the image and push it to your repository (this can be any Docker registry).&lt;br /&gt;
# Pull the image down onto one of our workstations/clusters and run it with your data.&lt;br /&gt;
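The build and push steps above can be sketched as the following dry-run commands, run wherever you have Docker installed; the project path under the registry is a hypothetical placeholder for your own GitLab project:

```shell
#!/bin/sh
# Sketch of the build/tag/push workflow.
# PROJECT is a hypothetical placeholder -- use your own GitLab project path.
REGISTRY="registry.umiacs.umd.edu"
PROJECT="username/pytorch_docker"
TARGET="$REGISTRY/$PROJECT"

# The echo prefixes make this a dry run; drop them to actually build and push.
echo "docker build -t pytorch_docker ."
echo "docker tag pytorch_docker $TARGET"
echo "docker push $TARGET"
```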
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
$ apptainer pull pytorch_docker.sif docker://registry.umiacs.umd.edu/derek/pytorch_docker&lt;br /&gt;
INFO:    Converting OCI blobs to SIF format&lt;br /&gt;
INFO:    Starting build...&lt;br /&gt;
Getting image source signatures&lt;br /&gt;
Copying blob 85386706b020 done&lt;br /&gt;
...&lt;br /&gt;
2022/10/14 10:58:36  info unpack layer: sha256:b6f46848806c8750a68edc4463bf146ed6c3c4af18f5d3f23281dcdfb1c65055&lt;br /&gt;
2022/10/14 10:58:43  info unpack layer: sha256:44845dc671f759820baac0376198141ca683f554bb16a177a3cfe262c9e368ff&lt;br /&gt;
INFO:    Creating SIF file...&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
$ apptainer exec --nv pytorch_docker.sif python3 -c &#039;from __future__ import print_function; import torch; print(torch.cuda.current_device()); x = torch.rand(5, 3); print(x)&#039;&lt;br /&gt;
0&lt;br /&gt;
tensor([[0.3273, 0.7174, 0.3587],&lt;br /&gt;
        [0.2250, 0.3896, 0.4136],&lt;br /&gt;
        [0.3626, 0.0383, 0.6274],&lt;br /&gt;
        [0.6241, 0.8079, 0.2950],&lt;br /&gt;
        [0.0804, 0.9705, 0.0030]])&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;/div&gt;</summary>
		<author><name>Chrissor</name></author>
	</entry>
	<entry>
		<id>https://wiki.umiacs.umd.edu/umiacs/index.php?title=Apptainer&amp;diff=11912</id>
		<title>Apptainer</title>
		<link rel="alternate" type="text/html" href="https://wiki.umiacs.umd.edu/umiacs/index.php?title=Apptainer&amp;diff=11912"/>
		<updated>2024-06-26T14:19:56Z</updated>

		<summary type="html">&lt;p&gt;Chrissor: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;[https://apptainer.org Apptainer] is a container platform that doesn&#039;t elevate the privileges of a user running the container.  This is important as UMIACS runs many multi-tenant hosts (such as [[Nexus]]) and doesn&#039;t provide administrative control to users on them.&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Apptainer was previously branded as Singularity.  You should still be able to run commands on the system with &amp;lt;code&amp;gt;singularity&amp;lt;/code&amp;gt;; however, you should start migrating to the &amp;lt;code&amp;gt;apptainer&amp;lt;/code&amp;gt; command.&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
==Overview==&lt;br /&gt;
You can find out the current version we provide by running the &amp;lt;code&amp;gt;apptainer --version&amp;lt;/code&amp;gt; command.  If this instead says &amp;lt;code&amp;gt;apptainer: command not found&amp;lt;/code&amp;gt; and you are using a UMIACS-supported host, please [[HelpDesk | contact staff]] and we will ensure that the software is made available on that host.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
# apptainer --version&lt;br /&gt;
apptainer version 1.2.5-1.el8&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Apptainer can run a variety of images, including its own format and [https://apptainer.org/docs/user/main/docker_and_oci.html Docker images].  Creating images from definition files requires administrative rights, so you will need to either use [[Podman]] on UMIACS-supported hosts or build on a host you have full administrative access to (such as a laptop or personal desktop).&lt;br /&gt;
&lt;br /&gt;
If you are going to pull large images, you may run out of space in your home directory.  We suggest you run the following commands to set up alternate cache and tmp directories.  We are using &amp;lt;code&amp;gt;/scratch0&amp;lt;/code&amp;gt;, but you can substitute any sufficiently large local scratch, network scratch, or project directory.&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
export WORKDIR=/scratch0/$USER&lt;br /&gt;
export APPTAINER_CACHEDIR=${WORKDIR}/.cache&lt;br /&gt;
export APPTAINER_TMPDIR=${WORKDIR}/.tmp&lt;br /&gt;
mkdir -p $APPTAINER_CACHEDIR&lt;br /&gt;
mkdir -p $APPTAINER_TMPDIR&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
We suggest you pull images down into an intermediate file (a &#039;&#039;&#039;[[Apptainer#Nexus_Containers#Shared_Containers | SIF]]&#039;&#039;&#039; file) so that you do not have to worry about re-caching the image.&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
$ apptainer pull cuda12.2.2.sif docker://nvidia/cuda:12.2.2-base-ubi8&lt;br /&gt;
INFO:    Converting OCI blobs to SIF format&lt;br /&gt;
INFO:    Starting build...&lt;br /&gt;
Getting image source signatures&lt;br /&gt;
Copying blob d5d706ce7b29 done&lt;br /&gt;
Copying blob b4dc78aeafca done&lt;br /&gt;
Copying blob 24a22c1b7260 done&lt;br /&gt;
Copying blob 8dea37be3176 done&lt;br /&gt;
Copying blob 25fa05cd42bd done&lt;br /&gt;
Copying blob a57130ec8de1 done&lt;br /&gt;
Copying blob 880a66924cf5 done&lt;br /&gt;
Copying config db554d658b done&lt;br /&gt;
Writing manifest to image destination&lt;br /&gt;
Storing signatures&lt;br /&gt;
2022/10/14 10:31:17  info unpack layer: sha256:25fa05cd42bd8fabb25d2a6f3f8c9f7ab34637903d00fd2ed1c1d0fa980427dd&lt;br /&gt;
2022/10/14 10:31:19  info unpack layer: sha256:24a22c1b72605a4dbcec13b743ef60a6cbb43185fe46fd8a35941f9af7c11153&lt;br /&gt;
2022/10/14 10:31:19  info unpack layer: sha256:8dea37be3176a88fae41c265562d5fb438d9281c356dcb4edeaa51451dbdfdb2&lt;br /&gt;
2022/10/14 10:31:20  info unpack layer: sha256:b4dc78aeafca6321025300e9d3050c5ba3fb2ac743ae547c6e1efa3f9284ce0b&lt;br /&gt;
2022/10/14 10:31:20  info unpack layer: sha256:a57130ec8de1e44163e965620d5aed2abe6cddf48b48272964bfd8bca101df38&lt;br /&gt;
2022/10/14 10:31:20  info unpack layer: sha256:d5d706ce7b293ffb369d3bf0e3f58f959977903b82eb26433fe58645f79b778b&lt;br /&gt;
2022/10/14 10:31:49  info unpack layer: sha256:880a66924cf5e11df601a4f531f3741c6867a3e05238bc9b7cebb2a68d479204&lt;br /&gt;
INFO:    Creating SIF file...&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
$ apptainer inspect cuda12.2.2.sif&lt;br /&gt;
...&lt;br /&gt;
maintainer: NVIDIA CORPORATION &amp;lt;sw-cuda-installer@nvidia.com&amp;gt;&lt;br /&gt;
name: ubi8&lt;br /&gt;
org.label-schema.build-arch: amd64&lt;br /&gt;
org.label-schema.build-date: Wednesday_24_January_2024_13:53:0_EST&lt;br /&gt;
org.label-schema.schema-version: 1.0&lt;br /&gt;
org.label-schema.usage.apptainer.version: 1.2.5-1.el8&lt;br /&gt;
org.label-schema.usage.singularity.deffile.bootstrap: docker&lt;br /&gt;
org.label-schema.usage.singularity.deffile.from: nvidia/cuda:12.2.2-base-ubi8&lt;br /&gt;
...&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Now you can run the local image with the &#039;&#039;&#039;run&#039;&#039;&#039; command or start a shell with the &#039;&#039;&#039;shell&#039;&#039;&#039; command.  &lt;br /&gt;
* Please note that if you are in an environment with GPUs and want to access them inside the container, you need to specify the &#039;&#039;&#039;--nv&#039;&#039;&#039; flag.  NVIDIA requires a specific driver and libraries to run CUDA programs, and this flag ensures that the appropriate devices are created inside the container and that these libraries are made available there.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
$ apptainer run --nv cuda12.2.2.sif nvidia-smi -L&lt;br /&gt;
GPU 0: NVIDIA GeForce GTX 1080 Ti (UUID: GPU-8e040d17-402e-cc86-4e83-eb2b1d501f1e)&lt;br /&gt;
GPU 1: NVIDIA GeForce GTX 1080 Ti (UUID: GPU-d681a21a-8cdd-e624-6bf8-5b0234584ba2)&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==Nexus Containers==&lt;br /&gt;
In our [[Nexus]] environment we have some example containers based on our [https://gitlab.umiacs.umd.edu/derek/pytorch_docker pytorch_docker] project.  These can be found in &amp;lt;code&amp;gt;/fs/nexus-containers/pytorch&amp;lt;/code&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
You can run one of the example images by doing the following (you should have already allocated an interactive job with a GPU in [[Nexus]]).  It will use the default [https://gitlab.umiacs.umd.edu/derek/pytorch_docker/-/blob/master/tensor.py script] found at &amp;lt;code&amp;gt;/srv/tensor.py&amp;lt;/code&amp;gt; within the image.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
$ hostname &amp;amp;&amp;amp; nvidia-smi -L&lt;br /&gt;
tron38.umiacs.umd.edu&lt;br /&gt;
GPU 0: NVIDIA RTX A4000 (UUID: GPU-4a0a5644-9fc8-84b4-5d22-65d45ca36506)&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
$ apptainer run --nv /fs/nexus-containers/pytorch/pytorch_1.13.0+cu117.sif&lt;br /&gt;
99 984.5538940429688&lt;br /&gt;
199 654.1710815429688&lt;br /&gt;
299 435.662353515625&lt;br /&gt;
399 291.1429138183594&lt;br /&gt;
499 195.5575714111328&lt;br /&gt;
599 132.3363037109375&lt;br /&gt;
699 90.5206069946289&lt;br /&gt;
799 62.86213684082031&lt;br /&gt;
899 44.56754684448242&lt;br /&gt;
999 32.466392517089844&lt;br /&gt;
1099 24.461835861206055&lt;br /&gt;
1199 19.166893005371094&lt;br /&gt;
1299 15.6642427444458&lt;br /&gt;
1399 13.347112655639648&lt;br /&gt;
1499 11.814264297485352&lt;br /&gt;
1599 10.800163269042969&lt;br /&gt;
1699 10.129261016845703&lt;br /&gt;
1799 9.685370445251465&lt;br /&gt;
1899 9.391674041748047&lt;br /&gt;
1999 9.19735336303711&lt;br /&gt;
Result: y = 0.0022362577728927135 + 0.837898313999176 x + -0.0003857926349155605 x^2 + -0.09065020829439163 x^3&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
===Bind Mounts===&lt;br /&gt;
&lt;br /&gt;
To get data into the container you need to pass some [https://apptainer.org/docs/user/main/bind_paths_and_mounts.html bind mounts].  Apptainer containers will not automatically mount data from the host operating system other than your home directory.  Users need to manually specify bind mounts for other file paths, for example:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;code&amp;gt;--bind /fs/nexus-scratch/&amp;lt;USERNAME&amp;gt;/&amp;lt;PROJECTNAME&amp;gt;:/mnt&amp;lt;/code&amp;gt;&lt;br /&gt;
&lt;br /&gt;
In this example we will use &#039;&#039;&#039;exec&#039;&#039;&#039; to start an interactive session, binding our [[Nexus]] scratch directory; &#039;&#039;&#039;exec&#039;&#039;&#039; allows us to specify the command we want to run inside the container.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
apptainer exec --nv --bind /fs/nexus-scratch/username:/fs/nexus-scratch/username /fs/nexus-containers/pytorch/pytorch_1.13.0+cu117.sif bash&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
You can now write and run your own PyTorch Python code interactively within the container, or create a Python script that you can call directly from the &amp;lt;code&amp;gt;apptainer exec&amp;lt;/code&amp;gt; command for batch processing.&lt;br /&gt;
&lt;br /&gt;
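For batch processing, the same image can be run non-interactively; a minimal sketch (the script path and name &amp;lt;code&amp;gt;train.py&amp;lt;/code&amp;gt; are hypothetical placeholders):&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
apptainer exec --nv --bind /fs/nexus-scratch/username:/fs/nexus-scratch/username /fs/nexus-containers/pytorch/pytorch_1.13.0+cu117.sif python3 /fs/nexus-scratch/username/train.py&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;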
===Shared Containers===&lt;br /&gt;
Portable images in the &#039;&#039;&#039;Singularity Image Format&#039;&#039;&#039; (.sif files) can be copied and shared.  Nexus maintains some shared containers in &amp;lt;code&amp;gt;/fs/nexus-containers&amp;lt;/code&amp;gt;.  These are arranged by the application(s) that are installed.&lt;br /&gt;
&lt;br /&gt;
==Docker Workflow Example==&lt;br /&gt;
We have a [https://gitlab.umiacs.umd.edu/derek/pytorch_docker pytorch_docker] example workflow using our [[GitLab]] as a Docker registry.  You can clone the repository and further customize this to your needs. The workflow is:&lt;br /&gt;
&lt;br /&gt;
# Run Docker on a laptop or personal desktop to create the image.&lt;br /&gt;
# Tag the image and push it to your repository (this can be any Docker registry).&lt;br /&gt;
# Pull the image down onto one of our workstations/clusters and run it with your data. &lt;br /&gt;
&lt;br /&gt;
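Steps 1 and 2 on your own machine might look like the following sketch (the image name and registry project path are hypothetical; substitute your own GitLab project):&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
$ docker build -t pytorch_docker .&lt;br /&gt;
$ docker tag pytorch_docker registry.umiacs.umd.edu/username/pytorch_docker&lt;br /&gt;
$ docker push registry.umiacs.umd.edu/username/pytorch_docker&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;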
&amp;lt;pre&amp;gt;&lt;br /&gt;
$ apptainer pull pytorch_docker.sif docker://registry.umiacs.umd.edu/derek/pytorch_docker&lt;br /&gt;
INFO:    Converting OCI blobs to SIF format&lt;br /&gt;
INFO:    Starting build...&lt;br /&gt;
Getting image source signatures&lt;br /&gt;
Copying blob 85386706b020 done&lt;br /&gt;
...&lt;br /&gt;
2022/10/14 10:58:36  info unpack layer: sha256:b6f46848806c8750a68edc4463bf146ed6c3c4af18f5d3f23281dcdfb1c65055&lt;br /&gt;
2022/10/14 10:58:43  info unpack layer: sha256:44845dc671f759820baac0376198141ca683f554bb16a177a3cfe262c9e368ff&lt;br /&gt;
INFO:    Creating SIF file...&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
$ apptainer exec --nv pytorch_docker.sif python3 -c &#039;from __future__ import print_function; import torch; print(torch.cuda.current_device()); x = torch.rand(5, 3); print(x)&#039;&lt;br /&gt;
0&lt;br /&gt;
tensor([[0.3273, 0.7174, 0.3587],&lt;br /&gt;
        [0.2250, 0.3896, 0.4136],&lt;br /&gt;
        [0.3626, 0.0383, 0.6274],&lt;br /&gt;
        [0.6241, 0.8079, 0.2950],&lt;br /&gt;
        [0.0804, 0.9705, 0.0030]])&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;/div&gt;</summary>
		<author><name>Chrissor</name></author>
	</entry>
	<entry>
		<id>https://wiki.umiacs.umd.edu/umiacs/index.php?title=Apptainer&amp;diff=11911</id>
		<title>Apptainer</title>
		<link rel="alternate" type="text/html" href="https://wiki.umiacs.umd.edu/umiacs/index.php?title=Apptainer&amp;diff=11911"/>
		<updated>2024-06-26T14:18:46Z</updated>

		<summary type="html">&lt;p&gt;Chrissor: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;[https://apptainer.org Apptainer] is a container platform that doesn&#039;t elevate the privileges of a user running the container.  This is important as UMIACS runs many multi-tenant hosts (such as [[Nexus]]) and doesn&#039;t provide administrative control to users on them.&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Apptainer was previously branded as Singularity.  You should still be able to run commands on the system with &amp;lt;code&amp;gt;singularity&amp;lt;/code&amp;gt;, however you should start migrating to using the &amp;lt;code&amp;gt;apptainer&amp;lt;/code&amp;gt; command.&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
==Overview==&lt;br /&gt;
You can find out which version we currently provide by running the &amp;lt;code&amp;gt;apptainer --version&amp;lt;/code&amp;gt; command.  If this instead says &amp;lt;code&amp;gt;apptainer: command not found&amp;lt;/code&amp;gt; and you are using a UMIACS-supported host, please [[HelpDesk | contact staff]] and we will ensure that the software is available on that host.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
# apptainer --version&lt;br /&gt;
apptainer version 1.2.5-1.el8&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Apptainer can run a variety of images including its own format and [https://apptainer.org/docs/user/main/docker_and_oci.html Docker images].  To create images from definition files, you need administrative rights.  You can either use [[Podman]] to accomplish this on UMIACS-supported hosts, or do this on a host that you have full administrative access to (such as a laptop or personal desktop).&lt;br /&gt;
&lt;br /&gt;
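As a sketch of that workflow, you could build the image with Podman and then convert the saved archive with Apptainer (the image name &amp;lt;code&amp;gt;myimage&amp;lt;/code&amp;gt; is a hypothetical placeholder):&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
$ podman build -t myimage .&lt;br /&gt;
$ podman save -o myimage.tar localhost/myimage&lt;br /&gt;
$ apptainer build myimage.sif docker-archive://myimage.tar&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;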
If you are going to pull large images, you may run out of space in your home directory.  We suggest you run the following commands to set up alternate cache and tmp directories.  We are using &amp;lt;code&amp;gt;/scratch0&amp;lt;/code&amp;gt;, but you can substitute any sufficiently large local scratch directory, network scratch directory, or project directory you would like.&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
export WORKDIR=/scratch0/$USER&lt;br /&gt;
export APPTAINER_CACHEDIR=${WORKDIR}/.cache&lt;br /&gt;
export APPTAINER_TMPDIR=${WORKDIR}/.tmp&lt;br /&gt;
mkdir -p $APPTAINER_CACHEDIR&lt;br /&gt;
mkdir -p $APPTAINER_TMPDIR&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
We suggest you pull images down into an intermediate file (a &#039;&#039;&#039;[[Apptainer#Nexus_Containers | SIF]]&#039;&#039;&#039; file) so you do not have to worry about re-caching the image.&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
$ apptainer pull cuda12.2.2.sif docker://nvidia/cuda:12.2.2-base-ubi8&lt;br /&gt;
INFO:    Converting OCI blobs to SIF format&lt;br /&gt;
INFO:    Starting build...&lt;br /&gt;
Getting image source signatures&lt;br /&gt;
Copying blob d5d706ce7b29 done&lt;br /&gt;
Copying blob b4dc78aeafca done&lt;br /&gt;
Copying blob 24a22c1b7260 done&lt;br /&gt;
Copying blob 8dea37be3176 done&lt;br /&gt;
Copying blob 25fa05cd42bd done&lt;br /&gt;
Copying blob a57130ec8de1 done&lt;br /&gt;
Copying blob 880a66924cf5 done&lt;br /&gt;
Copying config db554d658b done&lt;br /&gt;
Writing manifest to image destination&lt;br /&gt;
Storing signatures&lt;br /&gt;
2022/10/14 10:31:17  info unpack layer: sha256:25fa05cd42bd8fabb25d2a6f3f8c9f7ab34637903d00fd2ed1c1d0fa980427dd&lt;br /&gt;
2022/10/14 10:31:19  info unpack layer: sha256:24a22c1b72605a4dbcec13b743ef60a6cbb43185fe46fd8a35941f9af7c11153&lt;br /&gt;
2022/10/14 10:31:19  info unpack layer: sha256:8dea37be3176a88fae41c265562d5fb438d9281c356dcb4edeaa51451dbdfdb2&lt;br /&gt;
2022/10/14 10:31:20  info unpack layer: sha256:b4dc78aeafca6321025300e9d3050c5ba3fb2ac743ae547c6e1efa3f9284ce0b&lt;br /&gt;
2022/10/14 10:31:20  info unpack layer: sha256:a57130ec8de1e44163e965620d5aed2abe6cddf48b48272964bfd8bca101df38&lt;br /&gt;
2022/10/14 10:31:20  info unpack layer: sha256:d5d706ce7b293ffb369d3bf0e3f58f959977903b82eb26433fe58645f79b778b&lt;br /&gt;
2022/10/14 10:31:49  info unpack layer: sha256:880a66924cf5e11df601a4f531f3741c6867a3e05238bc9b7cebb2a68d479204&lt;br /&gt;
INFO:    Creating SIF file...&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
$ apptainer inspect cuda12.2.2.sif&lt;br /&gt;
...&lt;br /&gt;
maintainer: NVIDIA CORPORATION &amp;lt;sw-cuda-installer@nvidia.com&amp;gt;&lt;br /&gt;
name: ubi8&lt;br /&gt;
org.label-schema.build-arch: amd64&lt;br /&gt;
org.label-schema.build-date: Wednesday_24_January_2024_13:53:0_EST&lt;br /&gt;
org.label-schema.schema-version: 1.0&lt;br /&gt;
org.label-schema.usage.apptainer.version: 1.2.5-1.el8&lt;br /&gt;
org.label-schema.usage.singularity.deffile.bootstrap: docker&lt;br /&gt;
org.label-schema.usage.singularity.deffile.from: nvidia/cuda:12.2.2-base-ubi8&lt;br /&gt;
...&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Now you can run the local image with the &#039;&#039;&#039;run&#039;&#039;&#039; command or start a shell with the &#039;&#039;&#039;shell&#039;&#039;&#039; command.  &lt;br /&gt;
* Please note that if you are in an environment with GPUs and you want to access them inside the container, you need to specify the &#039;&#039;&#039;--nv&#039;&#039;&#039; flag. NVIDIA requires a specific driver and libraries to run CUDA programs, so this flag ensures that all appropriate devices are created inside the container and that these libraries are made available within it.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
$ apptainer run --nv cuda12.2.2.sif nvidia-smi -L&lt;br /&gt;
GPU 0: NVIDIA GeForce GTX 1080 Ti (UUID: GPU-8e040d17-402e-cc86-4e83-eb2b1d501f1e)&lt;br /&gt;
GPU 1: NVIDIA GeForce GTX 1080 Ti (UUID: GPU-d681a21a-8cdd-e624-6bf8-5b0234584ba2)&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==Nexus Containers==&lt;br /&gt;
In our [[Nexus]] environment we have some example containers based on our [https://gitlab.umiacs.umd.edu/derek/pytorch_docker pytorch_docker] project.  These can be found in &amp;lt;code&amp;gt;/fs/nexus-containers/pytorch&amp;lt;/code&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
You can run one of the example images by doing the following (you should have already allocated an interactive job with a GPU in [[Nexus]]).  It will use the default [https://gitlab.umiacs.umd.edu/derek/pytorch_docker/-/blob/master/tensor.py script] found at &amp;lt;code&amp;gt;/srv/tensor.py&amp;lt;/code&amp;gt; within the image.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
$ hostname &amp;amp;&amp;amp; nvidia-smi -L&lt;br /&gt;
tron38.umiacs.umd.edu&lt;br /&gt;
GPU 0: NVIDIA RTX A4000 (UUID: GPU-4a0a5644-9fc8-84b4-5d22-65d45ca36506)&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
$ apptainer run --nv /fs/nexus-containers/pytorch/pytorch_1.13.0+cu117.sif&lt;br /&gt;
99 984.5538940429688&lt;br /&gt;
199 654.1710815429688&lt;br /&gt;
299 435.662353515625&lt;br /&gt;
399 291.1429138183594&lt;br /&gt;
499 195.5575714111328&lt;br /&gt;
599 132.3363037109375&lt;br /&gt;
699 90.5206069946289&lt;br /&gt;
799 62.86213684082031&lt;br /&gt;
899 44.56754684448242&lt;br /&gt;
999 32.466392517089844&lt;br /&gt;
1099 24.461835861206055&lt;br /&gt;
1199 19.166893005371094&lt;br /&gt;
1299 15.6642427444458&lt;br /&gt;
1399 13.347112655639648&lt;br /&gt;
1499 11.814264297485352&lt;br /&gt;
1599 10.800163269042969&lt;br /&gt;
1699 10.129261016845703&lt;br /&gt;
1799 9.685370445251465&lt;br /&gt;
1899 9.391674041748047&lt;br /&gt;
1999 9.19735336303711&lt;br /&gt;
Result: y = 0.0022362577728927135 + 0.837898313999176 x + -0.0003857926349155605 x^2 + -0.09065020829439163 x^3&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
===Bind Mounts===&lt;br /&gt;
&lt;br /&gt;
To get data into the container you need to pass some [https://apptainer.org/docs/user/main/bind_paths_and_mounts.html bind mounts].  Apptainer containers will not automatically mount data from the host operating system other than your home directory.  Users need to manually specify bind mounts for other file paths, for example:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;code&amp;gt;--bind /fs/nexus-scratch/&amp;lt;USERNAME&amp;gt;/&amp;lt;PROJECTNAME&amp;gt;:/mnt&amp;lt;/code&amp;gt;&lt;br /&gt;
&lt;br /&gt;
In this example we will use &#039;&#039;&#039;exec&#039;&#039;&#039; to start an interactive session, binding our [[Nexus]] scratch directory; &#039;&#039;&#039;exec&#039;&#039;&#039; allows us to specify the command we want to run inside the container.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
apptainer exec --nv --bind /fs/nexus-scratch/username:/fs/nexus-scratch/username /fs/nexus-containers/pytorch/pytorch_1.13.0+cu117.sif bash&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
You can now write and run your own PyTorch Python code interactively within the container, or create a Python script that you can call directly from the &amp;lt;code&amp;gt;apptainer exec&amp;lt;/code&amp;gt; command for batch processing.&lt;br /&gt;
&lt;br /&gt;
===Shared Containers===&lt;br /&gt;
Portable images in the &#039;&#039;&#039;Singularity Image Format&#039;&#039;&#039; (.sif files) can be copied and shared.  Nexus maintains some shared containers in &amp;lt;code&amp;gt;/fs/nexus-containers&amp;lt;/code&amp;gt;.  These are arranged by the application(s) that are installed.&lt;br /&gt;
&lt;br /&gt;
==Docker Workflow Example==&lt;br /&gt;
We have a [https://gitlab.umiacs.umd.edu/derek/pytorch_docker pytorch_docker] example workflow using our [[GitLab]] as a Docker registry.  You can clone the repository and further customize this to your needs. The workflow is:&lt;br /&gt;
&lt;br /&gt;
# Run Docker on a laptop or personal desktop to create the image.&lt;br /&gt;
# Tag the image and push it to your repository (this can be any Docker registry).&lt;br /&gt;
# Pull the image down onto one of our workstations/clusters and run it with your data. &lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
$ apptainer pull pytorch_docker.sif docker://registry.umiacs.umd.edu/derek/pytorch_docker&lt;br /&gt;
INFO:    Converting OCI blobs to SIF format&lt;br /&gt;
INFO:    Starting build...&lt;br /&gt;
Getting image source signatures&lt;br /&gt;
Copying blob 85386706b020 done&lt;br /&gt;
...&lt;br /&gt;
2022/10/14 10:58:36  info unpack layer: sha256:b6f46848806c8750a68edc4463bf146ed6c3c4af18f5d3f23281dcdfb1c65055&lt;br /&gt;
2022/10/14 10:58:43  info unpack layer: sha256:44845dc671f759820baac0376198141ca683f554bb16a177a3cfe262c9e368ff&lt;br /&gt;
INFO:    Creating SIF file...&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
$ apptainer exec --nv pytorch_docker.sif python3 -c &#039;from __future__ import print_function; import torch; print(torch.cuda.current_device()); x = torch.rand(5, 3); print(x)&#039;&lt;br /&gt;
0&lt;br /&gt;
tensor([[0.3273, 0.7174, 0.3587],&lt;br /&gt;
        [0.2250, 0.3896, 0.4136],&lt;br /&gt;
        [0.3626, 0.0383, 0.6274],&lt;br /&gt;
        [0.6241, 0.8079, 0.2950],&lt;br /&gt;
        [0.0804, 0.9705, 0.0030]])&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;/div&gt;</summary>
		<author><name>Chrissor</name></author>
	</entry>
	<entry>
		<id>https://wiki.umiacs.umd.edu/umiacs/index.php?title=Apptainer&amp;diff=11910</id>
		<title>Apptainer</title>
		<link rel="alternate" type="text/html" href="https://wiki.umiacs.umd.edu/umiacs/index.php?title=Apptainer&amp;diff=11910"/>
		<updated>2024-06-26T14:18:03Z</updated>

		<summary type="html">&lt;p&gt;Chrissor: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;[https://apptainer.org Apptainer] is a container platform that doesn&#039;t elevate the privileges of a user running the container.  This is important as UMIACS runs many multi-tenant hosts (such as [[Nexus]]) and doesn&#039;t provide administrative control to users on them.&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Apptainer was previously branded as Singularity.  You should still be able to run commands on the system with &amp;lt;code&amp;gt;singularity&amp;lt;/code&amp;gt;, however you should start migrating to using the &amp;lt;code&amp;gt;apptainer&amp;lt;/code&amp;gt; command.&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
==Overview==&lt;br /&gt;
You can find out which version we currently provide by running the &amp;lt;code&amp;gt;apptainer --version&amp;lt;/code&amp;gt; command.  If this instead says &amp;lt;code&amp;gt;apptainer: command not found&amp;lt;/code&amp;gt; and you are using a UMIACS-supported host, please [[HelpDesk | contact staff]] and we will ensure that the software is available on that host.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
# apptainer --version&lt;br /&gt;
apptainer version 1.2.5-1.el8&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Apptainer can run a variety of images including its own format and [https://apptainer.org/docs/user/main/docker_and_oci.html Docker images].  To create images from definition files, you need administrative rights.  You can either use [[Podman]] to accomplish this on UMIACS-supported hosts, or do this on a host that you have full administrative access to (such as a laptop or personal desktop).&lt;br /&gt;
&lt;br /&gt;
If you are going to pull large images, you may run out of space in your home directory.  We suggest you run the following commands to set up alternate cache and tmp directories.  We are using &amp;lt;code&amp;gt;/scratch0&amp;lt;/code&amp;gt;, but you can substitute any sufficiently large local scratch directory, network scratch directory, or project directory you would like.&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
export WORKDIR=/scratch0/$USER&lt;br /&gt;
export APPTAINER_CACHEDIR=${WORKDIR}/.cache&lt;br /&gt;
export APPTAINER_TMPDIR=${WORKDIR}/.tmp&lt;br /&gt;
mkdir -p $APPTAINER_CACHEDIR&lt;br /&gt;
mkdir -p $APPTAINER_TMPDIR&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
We suggest you pull images down into an intermediate file (a &#039;&#039;&#039;[[Apptainer#Shared_Containers | SIF]]&#039;&#039;&#039; file) so you do not have to worry about re-caching the image.&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
$ apptainer pull cuda12.2.2.sif docker://nvidia/cuda:12.2.2-base-ubi8&lt;br /&gt;
INFO:    Converting OCI blobs to SIF format&lt;br /&gt;
INFO:    Starting build...&lt;br /&gt;
Getting image source signatures&lt;br /&gt;
Copying blob d5d706ce7b29 done&lt;br /&gt;
Copying blob b4dc78aeafca done&lt;br /&gt;
Copying blob 24a22c1b7260 done&lt;br /&gt;
Copying blob 8dea37be3176 done&lt;br /&gt;
Copying blob 25fa05cd42bd done&lt;br /&gt;
Copying blob a57130ec8de1 done&lt;br /&gt;
Copying blob 880a66924cf5 done&lt;br /&gt;
Copying config db554d658b done&lt;br /&gt;
Writing manifest to image destination&lt;br /&gt;
Storing signatures&lt;br /&gt;
2022/10/14 10:31:17  info unpack layer: sha256:25fa05cd42bd8fabb25d2a6f3f8c9f7ab34637903d00fd2ed1c1d0fa980427dd&lt;br /&gt;
2022/10/14 10:31:19  info unpack layer: sha256:24a22c1b72605a4dbcec13b743ef60a6cbb43185fe46fd8a35941f9af7c11153&lt;br /&gt;
2022/10/14 10:31:19  info unpack layer: sha256:8dea37be3176a88fae41c265562d5fb438d9281c356dcb4edeaa51451dbdfdb2&lt;br /&gt;
2022/10/14 10:31:20  info unpack layer: sha256:b4dc78aeafca6321025300e9d3050c5ba3fb2ac743ae547c6e1efa3f9284ce0b&lt;br /&gt;
2022/10/14 10:31:20  info unpack layer: sha256:a57130ec8de1e44163e965620d5aed2abe6cddf48b48272964bfd8bca101df38&lt;br /&gt;
2022/10/14 10:31:20  info unpack layer: sha256:d5d706ce7b293ffb369d3bf0e3f58f959977903b82eb26433fe58645f79b778b&lt;br /&gt;
2022/10/14 10:31:49  info unpack layer: sha256:880a66924cf5e11df601a4f531f3741c6867a3e05238bc9b7cebb2a68d479204&lt;br /&gt;
INFO:    Creating SIF file...&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
$ apptainer inspect cuda12.2.2.sif&lt;br /&gt;
...&lt;br /&gt;
maintainer: NVIDIA CORPORATION &amp;lt;sw-cuda-installer@nvidia.com&amp;gt;&lt;br /&gt;
name: ubi8&lt;br /&gt;
org.label-schema.build-arch: amd64&lt;br /&gt;
org.label-schema.build-date: Wednesday_24_January_2024_13:53:0_EST&lt;br /&gt;
org.label-schema.schema-version: 1.0&lt;br /&gt;
org.label-schema.usage.apptainer.version: 1.2.5-1.el8&lt;br /&gt;
org.label-schema.usage.singularity.deffile.bootstrap: docker&lt;br /&gt;
org.label-schema.usage.singularity.deffile.from: nvidia/cuda:12.2.2-base-ubi8&lt;br /&gt;
...&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Now you can run the local image with the &#039;&#039;&#039;run&#039;&#039;&#039; command or start a shell with the &#039;&#039;&#039;shell&#039;&#039;&#039; command.  &lt;br /&gt;
* Please note that if you are in an environment with GPUs and you want to access them inside the container, you need to specify the &#039;&#039;&#039;--nv&#039;&#039;&#039; flag. NVIDIA requires a specific driver and libraries to run CUDA programs, so this flag ensures that all appropriate devices are created inside the container and that these libraries are made available within it.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
$ apptainer run --nv cuda12.2.2.sif nvidia-smi -L&lt;br /&gt;
GPU 0: NVIDIA GeForce GTX 1080 Ti (UUID: GPU-8e040d17-402e-cc86-4e83-eb2b1d501f1e)&lt;br /&gt;
GPU 1: NVIDIA GeForce GTX 1080 Ti (UUID: GPU-d681a21a-8cdd-e624-6bf8-5b0234584ba2)&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==Nexus Containers==&lt;br /&gt;
In our [[Nexus]] environment we have some example containers based on our [https://gitlab.umiacs.umd.edu/derek/pytorch_docker pytorch_docker] project.  These can be found in &amp;lt;code&amp;gt;/fs/nexus-containers/pytorch&amp;lt;/code&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
You can run one of the example images by doing the following (you should have already allocated an interactive job with a GPU in [[Nexus]]).  It will use the default [https://gitlab.umiacs.umd.edu/derek/pytorch_docker/-/blob/master/tensor.py script] found at &amp;lt;code&amp;gt;/srv/tensor.py&amp;lt;/code&amp;gt; within the image.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
$ hostname &amp;amp;&amp;amp; nvidia-smi -L&lt;br /&gt;
tron38.umiacs.umd.edu&lt;br /&gt;
GPU 0: NVIDIA RTX A4000 (UUID: GPU-4a0a5644-9fc8-84b4-5d22-65d45ca36506)&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
$ apptainer run --nv /fs/nexus-containers/pytorch/pytorch_1.13.0+cu117.sif&lt;br /&gt;
99 984.5538940429688&lt;br /&gt;
199 654.1710815429688&lt;br /&gt;
299 435.662353515625&lt;br /&gt;
399 291.1429138183594&lt;br /&gt;
499 195.5575714111328&lt;br /&gt;
599 132.3363037109375&lt;br /&gt;
699 90.5206069946289&lt;br /&gt;
799 62.86213684082031&lt;br /&gt;
899 44.56754684448242&lt;br /&gt;
999 32.466392517089844&lt;br /&gt;
1099 24.461835861206055&lt;br /&gt;
1199 19.166893005371094&lt;br /&gt;
1299 15.6642427444458&lt;br /&gt;
1399 13.347112655639648&lt;br /&gt;
1499 11.814264297485352&lt;br /&gt;
1599 10.800163269042969&lt;br /&gt;
1699 10.129261016845703&lt;br /&gt;
1799 9.685370445251465&lt;br /&gt;
1899 9.391674041748047&lt;br /&gt;
1999 9.19735336303711&lt;br /&gt;
Result: y = 0.0022362577728927135 + 0.837898313999176 x + -0.0003857926349155605 x^2 + -0.09065020829439163 x^3&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
===Bind Mounts===&lt;br /&gt;
&lt;br /&gt;
To get data into the container you need to pass some [https://apptainer.org/docs/user/main/bind_paths_and_mounts.html bind mounts].  Apptainer containers will not automatically mount data from the host operating system other than your home directory.  Users need to manually specify bind mounts for other file paths, for example:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;code&amp;gt;--bind /fs/nexus-scratch/&amp;lt;USERNAME&amp;gt;/&amp;lt;PROJECTNAME&amp;gt;:/mnt&amp;lt;/code&amp;gt;&lt;br /&gt;
&lt;br /&gt;
In this example we will use &#039;&#039;&#039;exec&#039;&#039;&#039; to start an interactive session, binding our [[Nexus]] scratch directory; &#039;&#039;&#039;exec&#039;&#039;&#039; allows us to specify the command we want to run inside the container.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
apptainer exec --nv --bind /fs/nexus-scratch/username:/fs/nexus-scratch/username /fs/nexus-containers/pytorch/pytorch_1.13.0+cu117.sif bash&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
You can now write and run your own PyTorch Python code interactively within the container, or create a Python script that you can call directly from the &amp;lt;code&amp;gt;apptainer exec&amp;lt;/code&amp;gt; command for batch processing.&lt;br /&gt;
&lt;br /&gt;
===Shared Containers===&lt;br /&gt;
Portable images in the &#039;&#039;&#039;Singularity Image Format&#039;&#039;&#039; (.sif files) can be copied and shared.  Nexus maintains some shared containers in &amp;lt;code&amp;gt;/fs/nexus-containers&amp;lt;/code&amp;gt;.  These are arranged by the application(s) that are installed.&lt;br /&gt;
&lt;br /&gt;
==Docker Workflow Example==&lt;br /&gt;
We have a [https://gitlab.umiacs.umd.edu/derek/pytorch_docker pytorch_docker] example workflow using our [[GitLab]] as a Docker registry.  You can clone the repository and further customize this to your needs. The workflow is:&lt;br /&gt;
&lt;br /&gt;
# Run Docker on a laptop or personal desktop to create the image.&lt;br /&gt;
# Tag the image and push it to your repository (this can be any Docker registry).&lt;br /&gt;
# Pull the image down onto one of our workstations/clusters and run it with your data. &lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
$ apptainer pull pytorch_docker.sif docker://registry.umiacs.umd.edu/derek/pytorch_docker&lt;br /&gt;
INFO:    Converting OCI blobs to SIF format&lt;br /&gt;
INFO:    Starting build...&lt;br /&gt;
Getting image source signatures&lt;br /&gt;
Copying blob 85386706b020 done&lt;br /&gt;
...&lt;br /&gt;
2022/10/14 10:58:36  info unpack layer: sha256:b6f46848806c8750a68edc4463bf146ed6c3c4af18f5d3f23281dcdfb1c65055&lt;br /&gt;
2022/10/14 10:58:43  info unpack layer: sha256:44845dc671f759820baac0376198141ca683f554bb16a177a3cfe262c9e368ff&lt;br /&gt;
INFO:    Creating SIF file...&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
$ apptainer exec --nv pytorch_docker.sif python3 -c &#039;from __future__ import print_function; import torch; print(torch.cuda.current_device()); x = torch.rand(5, 3); print(x)&#039;&lt;br /&gt;
0&lt;br /&gt;
tensor([[0.3273, 0.7174, 0.3587],&lt;br /&gt;
        [0.2250, 0.3896, 0.4136],&lt;br /&gt;
        [0.3626, 0.0383, 0.6274],&lt;br /&gt;
        [0.6241, 0.8079, 0.2950],&lt;br /&gt;
        [0.0804, 0.9705, 0.0030]])&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;/div&gt;</summary>
		<author><name>Chrissor</name></author>
	</entry>
	<entry>
		<id>https://wiki.umiacs.umd.edu/umiacs/index.php?title=Apptainer&amp;diff=11909</id>
		<title>Apptainer</title>
		<link rel="alternate" type="text/html" href="https://wiki.umiacs.umd.edu/umiacs/index.php?title=Apptainer&amp;diff=11909"/>
		<updated>2024-06-26T14:16:18Z</updated>

		<summary type="html">&lt;p&gt;Chrissor: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;[https://apptainer.org Apptainer] is a container platform that doesn&#039;t elevate the privileges of a user running the container.  This is important as UMIACS runs many multi-tenant hosts (such as [[Nexus]]) and doesn&#039;t provide administrative control to users on them.&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Apptainer was previously branded as Singularity.  You should still be able to run commands on the system with &amp;lt;code&amp;gt;singularity&amp;lt;/code&amp;gt;, however you should start migrating to using the &amp;lt;code&amp;gt;apptainer&amp;lt;/code&amp;gt; command.&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
==Overview==&lt;br /&gt;
You can find out which version we currently provide by running the &amp;lt;code&amp;gt;apptainer --version&amp;lt;/code&amp;gt; command.  If this instead says &amp;lt;code&amp;gt;apptainer: command not found&amp;lt;/code&amp;gt; and you are using a UMIACS-supported host, please [[HelpDesk | contact staff]] and we will ensure that the software is available on that host.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
# apptainer --version&lt;br /&gt;
apptainer version 1.2.5-1.el8&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Apptainer can run a variety of images including its own format and [https://apptainer.org/docs/user/main/docker_and_oci.html Docker images].  To create images from definition files, you need administrative rights.  You can either use [[Podman]] to accomplish this on UMIACS-supported hosts, or do this on a host that you have full administrative access to (such as a laptop or personal desktop).&lt;br /&gt;
&lt;br /&gt;
If you are going to pull large images, you may run out of space in your home directory.  We suggest you run the following commands to set up alternate cache and tmp directories.  We are using &amp;lt;code&amp;gt;/scratch0&amp;lt;/code&amp;gt;, but you can substitute any sufficiently large local scratch directory, network scratch directory, or project directory you would like.&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
export WORKDIR=/scratch0/$USER&lt;br /&gt;
export APPTAINER_CACHEDIR=${WORKDIR}/.cache&lt;br /&gt;
export APPTAINER_TMPDIR=${WORKDIR}/.tmp&lt;br /&gt;
mkdir -p $APPTAINER_CACHEDIR&lt;br /&gt;
mkdir -p $APPTAINER_TMPDIR&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
We suggest you pull images down into an intermediate file (a &#039;&#039;&#039;[[Apptainer#Shared_Containers | SIF]]&#039;&#039;&#039; file) so you do not have to worry about re-caching the image.&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
$ apptainer pull cuda12.2.2.sif docker://nvidia/cuda:12.2.2-base-ubi8&lt;br /&gt;
INFO:    Converting OCI blobs to SIF format&lt;br /&gt;
INFO:    Starting build...&lt;br /&gt;
Getting image source signatures&lt;br /&gt;
Copying blob d5d706ce7b29 done&lt;br /&gt;
Copying blob b4dc78aeafca done&lt;br /&gt;
Copying blob 24a22c1b7260 done&lt;br /&gt;
Copying blob 8dea37be3176 done&lt;br /&gt;
Copying blob 25fa05cd42bd done&lt;br /&gt;
Copying blob a57130ec8de1 done&lt;br /&gt;
Copying blob 880a66924cf5 done&lt;br /&gt;
Copying config db554d658b done&lt;br /&gt;
Writing manifest to image destination&lt;br /&gt;
Storing signatures&lt;br /&gt;
2022/10/14 10:31:17  info unpack layer: sha256:25fa05cd42bd8fabb25d2a6f3f8c9f7ab34637903d00fd2ed1c1d0fa980427dd&lt;br /&gt;
2022/10/14 10:31:19  info unpack layer: sha256:24a22c1b72605a4dbcec13b743ef60a6cbb43185fe46fd8a35941f9af7c11153&lt;br /&gt;
2022/10/14 10:31:19  info unpack layer: sha256:8dea37be3176a88fae41c265562d5fb438d9281c356dcb4edeaa51451dbdfdb2&lt;br /&gt;
2022/10/14 10:31:20  info unpack layer: sha256:b4dc78aeafca6321025300e9d3050c5ba3fb2ac743ae547c6e1efa3f9284ce0b&lt;br /&gt;
2022/10/14 10:31:20  info unpack layer: sha256:a57130ec8de1e44163e965620d5aed2abe6cddf48b48272964bfd8bca101df38&lt;br /&gt;
2022/10/14 10:31:20  info unpack layer: sha256:d5d706ce7b293ffb369d3bf0e3f58f959977903b82eb26433fe58645f79b778b&lt;br /&gt;
2022/10/14 10:31:49  info unpack layer: sha256:880a66924cf5e11df601a4f531f3741c6867a3e05238bc9b7cebb2a68d479204&lt;br /&gt;
INFO:    Creating SIF file...&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
$ apptainer inspect cuda12.2.2.sif&lt;br /&gt;
...&lt;br /&gt;
maintainer: NVIDIA CORPORATION &amp;lt;sw-cuda-installer@nvidia.com&amp;gt;&lt;br /&gt;
name: ubi8&lt;br /&gt;
org.label-schema.build-arch: amd64&lt;br /&gt;
org.label-schema.build-date: Wednesday_24_January_2024_13:53:0_EST&lt;br /&gt;
org.label-schema.schema-version: 1.0&lt;br /&gt;
org.label-schema.usage.apptainer.version: 1.2.5-1.el8&lt;br /&gt;
org.label-schema.usage.singularity.deffile.bootstrap: docker&lt;br /&gt;
org.label-schema.usage.singularity.deffile.from: nvidia/cuda:12.2.2-base-ubi8&lt;br /&gt;
...&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Now you can run the local image with the &#039;&#039;&#039;run&#039;&#039;&#039; command or start a shell with the &#039;&#039;&#039;shell&#039;&#039;&#039; command.  &lt;br /&gt;
* Please note that if you are in an environment with GPUs and you want to access them inside the container, you need to specify the &#039;&#039;&#039;--nv&#039;&#039;&#039; flag.  NVIDIA requires a specific driver and libraries to run CUDA programs; the flag ensures that the appropriate devices are created inside the container and that these libraries are made available within it.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
$ apptainer run --nv cuda12.2.2.sif nvidia-smi -L&lt;br /&gt;
GPU 0: NVIDIA GeForce GTX 1080 Ti (UUID: GPU-8e040d17-402e-cc86-4e83-eb2b1d501f1e)&lt;br /&gt;
GPU 1: NVIDIA GeForce GTX 1080 Ti (UUID: GPU-d681a21a-8cdd-e624-6bf8-5b0234584ba2)&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==Nexus Containers==&lt;br /&gt;
In our [[Nexus]] environment we have some example containers based on our [https://gitlab.umiacs.umd.edu/derek/pytorch_docker pytorch_docker] project.  These can be found in &amp;lt;code&amp;gt;/fs/nexus-containers/pytorch&amp;lt;/code&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
You can run one of the example images by doing the following (you should have already allocated an interactive job with a GPU in [[Nexus]]).  It will use the default [https://gitlab.umiacs.umd.edu/derek/pytorch_docker/-/blob/master/tensor.py script] found at &amp;lt;code&amp;gt;/srv/tensor.py&amp;lt;/code&amp;gt; within the image.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
$ hostname &amp;amp;&amp;amp; nvidia-smi -L&lt;br /&gt;
tron38.umiacs.umd.edu&lt;br /&gt;
GPU 0: NVIDIA RTX A4000 (UUID: GPU-4a0a5644-9fc8-84b4-5d22-65d45ca36506)&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
$ apptainer run --nv /fs/nexus-containers/pytorch/pytorch_1.13.0+cu117.sif&lt;br /&gt;
99 984.5538940429688&lt;br /&gt;
199 654.1710815429688&lt;br /&gt;
299 435.662353515625&lt;br /&gt;
399 291.1429138183594&lt;br /&gt;
499 195.5575714111328&lt;br /&gt;
599 132.3363037109375&lt;br /&gt;
699 90.5206069946289&lt;br /&gt;
799 62.86213684082031&lt;br /&gt;
899 44.56754684448242&lt;br /&gt;
999 32.466392517089844&lt;br /&gt;
1099 24.461835861206055&lt;br /&gt;
1199 19.166893005371094&lt;br /&gt;
1299 15.6642427444458&lt;br /&gt;
1399 13.347112655639648&lt;br /&gt;
1499 11.814264297485352&lt;br /&gt;
1599 10.800163269042969&lt;br /&gt;
1699 10.129261016845703&lt;br /&gt;
1799 9.685370445251465&lt;br /&gt;
1899 9.391674041748047&lt;br /&gt;
1999 9.19735336303711&lt;br /&gt;
Result: y = 0.0022362577728927135 + 0.837898313999176 x + -0.0003857926349155605 x^2 + -0.09065020829439163 x^3&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
===Bind Mounts===&lt;br /&gt;
&lt;br /&gt;
To get data into the container, you need to pass [https://apptainer.org/docs/user/main/bind_paths_and_mounts.html bind mounts].  Apptainer does not automatically mount any data from the host operating system other than your home directory; users need to manually specify bind mounts for other file paths, for example:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;code&amp;gt;--bind /fs/nexus-scratch/&amp;lt;USERNAME&amp;gt;/&amp;lt;PROJECTNAME&amp;gt;:/mnt&amp;lt;/code&amp;gt;&lt;br /&gt;
&lt;br /&gt;
In this example, we use &#039;&#039;&#039;exec&#039;&#039;&#039; to start an interactive session, binding our [[Nexus]] scratch directory.  The exec command lets us specify the command we want to run inside the container.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
apptainer exec --nv --bind /fs/nexus-scratch/username:/fs/nexus-scratch/username /fs/nexus-containers/pytorch/pytorch_1.13.0+cu117.sif bash&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
You can now write and run your own PyTorch Python code interactively within the container, or write a Python script that you call directly from the &amp;lt;code&amp;gt;apptainer exec&amp;lt;/code&amp;gt; command for batch processing.&lt;br /&gt;
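&lt;br /&gt;
For example, a batch job submission script might look like the following (the script name &amp;lt;code&amp;gt;train.py&amp;lt;/code&amp;gt; and the SBATCH options are illustrative, not files or settings that ship with the image):&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
#!/bin/bash&lt;br /&gt;
#SBATCH --gres=gpu:1&lt;br /&gt;
apptainer exec --nv --bind /fs/nexus-scratch/username:/fs/nexus-scratch/username \&lt;br /&gt;
    /fs/nexus-containers/pytorch/pytorch_1.13.0+cu117.sif \&lt;br /&gt;
    python3 /fs/nexus-scratch/username/train.py&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;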
&lt;br /&gt;
===Shared Containers===&lt;br /&gt;
Portable images in the &#039;&#039;&#039;Singularity Image Format&#039;&#039;&#039; (.sif files) can be copied and shared.  Nexus maintains some shared containers in &amp;lt;code&amp;gt;/fs/nexus-containers&amp;lt;/code&amp;gt;.  These are arranged by the application(s) installed in them.&lt;br /&gt;
&lt;br /&gt;
==Docker Workflow Example==&lt;br /&gt;
We have a [https://gitlab.umiacs.umd.edu/derek/pytorch_docker pytorch_docker] example workflow using our [[GitLab]] as a Docker registry.  You can clone the repository and further customize this to your needs. The workflow is:&lt;br /&gt;
&lt;br /&gt;
# Run Docker on a laptop or personal desktop to create the image.&lt;br /&gt;
# Tag the image and push it to your repository (this can be any Docker registry).&lt;br /&gt;
# Pull the image down onto one of our workstations/clusters and run it with your data.&lt;br /&gt;
&lt;br /&gt;
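On your personal machine, steps 1 and 2 might look like the following (the &amp;lt;code&amp;gt;username&amp;lt;/code&amp;gt; path component is illustrative; substitute your own GitLab namespace):&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
$ docker build -t registry.umiacs.umd.edu/username/pytorch_docker .&lt;br /&gt;
$ docker push registry.umiacs.umd.edu/username/pytorch_docker&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Step 3 is then handled by &amp;lt;code&amp;gt;apptainer pull&amp;lt;/code&amp;gt;:&lt;br /&gt;
&lt;br /&gt;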
&amp;lt;pre&amp;gt;&lt;br /&gt;
$ apptainer pull pytorch_docker.sif docker://registry.umiacs.umd.edu/derek/pytorch_docker&lt;br /&gt;
INFO:    Converting OCI blobs to SIF format&lt;br /&gt;
INFO:    Starting build...&lt;br /&gt;
Getting image source signatures&lt;br /&gt;
Copying blob 85386706b020 done&lt;br /&gt;
...&lt;br /&gt;
2022/10/14 10:58:36  info unpack layer: sha256:b6f46848806c8750a68edc4463bf146ed6c3c4af18f5d3f23281dcdfb1c65055&lt;br /&gt;
2022/10/14 10:58:43  info unpack layer: sha256:44845dc671f759820baac0376198141ca683f554bb16a177a3cfe262c9e368ff&lt;br /&gt;
INFO:    Creating SIF file...&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
$ apptainer exec --nv pytorch_docker.sif python3 -c &#039;from __future__ import print_function; import torch; print(torch.cuda.current_device()); x = torch.rand(5, 3); print(x)&#039;&lt;br /&gt;
0&lt;br /&gt;
tensor([[0.3273, 0.7174, 0.3587],&lt;br /&gt;
        [0.2250, 0.3896, 0.4136],&lt;br /&gt;
        [0.3626, 0.0383, 0.6274],&lt;br /&gt;
        [0.6241, 0.8079, 0.2950],&lt;br /&gt;
        [0.0804, 0.9705, 0.0030]])&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;/div&gt;</summary>
		<author><name>Chrissor</name></author>
	</entry>
	<entry>
		<id>https://wiki.umiacs.umd.edu/umiacs/index.php?title=Apptainer&amp;diff=11908</id>
		<title>Apptainer</title>
		<link rel="alternate" type="text/html" href="https://wiki.umiacs.umd.edu/umiacs/index.php?title=Apptainer&amp;diff=11908"/>
		<updated>2024-06-26T14:15:53Z</updated>

		<summary type="html">&lt;p&gt;Chrissor: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;[https://apptainer.org Apptainer] is a container platform that doesn&#039;t elevate the privileges of a user running the container.  This is important as UMIACS runs many multi-tenant hosts (such as [[Nexus]]) and doesn&#039;t provide administrative control to users on them.&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Apptainer was previously branded as Singularity.  You should still be able to run commands on the system with &amp;lt;code&amp;gt;singularity&amp;lt;/code&amp;gt;, however you should start migrating to using the &amp;lt;code&amp;gt;apptainer&amp;lt;/code&amp;gt; command.&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
==Overview==&lt;br /&gt;
You can find out what the current version is that we provide by running the &amp;lt;code&amp;gt;apptainer --version&amp;lt;/code&amp;gt; command.  If this instead says &amp;lt;code&amp;gt;apptainer: command not found&amp;lt;/code&amp;gt; and you are using a UMIACS-supported host, please [[HelpDesk | contact staff]] and we will ensure that the software is available on the host you are looking for it on.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
# apptainer --version&lt;br /&gt;
apptainer version 1.2.5-1.el8&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Apptainer can run a variety of images, including its own format and [https://apptainer.org/docs/user/main/docker_and_oci.html Docker images].  To create images from definition files, you need administrative rights.  On UMIACS-supported hosts, use [[Podman]] to accomplish this; alternatively, build images on a host that you have full administrative access to, such as a laptop or personal desktop.&lt;br /&gt;
&lt;br /&gt;
If you are going to pull large images, you may run out of space in your home directory.  We suggest you run the following commands to set up alternate cache and tmp directories.  We use &amp;lt;code&amp;gt;/scratch0&amp;lt;/code&amp;gt; here, but you can substitute any sufficiently large local scratch directory, network scratch directory, or project directory.&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
export WORKDIR=/scratch0/$USER&lt;br /&gt;
export APPTAINER_CACHEDIR=${WORKDIR}/.cache&lt;br /&gt;
export APPTAINER_TMPDIR=${WORKDIR}/.tmp&lt;br /&gt;
mkdir -p $APPTAINER_CACHEDIR&lt;br /&gt;
mkdir -p $APPTAINER_TMPDIR&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
We suggest you pull images down into an intermediate file (a &#039;&#039;&#039;[[Apptainer#Shared_Containers | SIF]]&#039;&#039;&#039; file), as you then do not have to worry about re-caching the image.&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
$ apptainer pull cuda12.2.2.sif docker://nvidia/cuda:12.2.2-base-ubi8&lt;br /&gt;
INFO:    Converting OCI blobs to SIF format&lt;br /&gt;
INFO:    Starting build...&lt;br /&gt;
Getting image source signatures&lt;br /&gt;
Copying blob d5d706ce7b29 done&lt;br /&gt;
Copying blob b4dc78aeafca done&lt;br /&gt;
Copying blob 24a22c1b7260 done&lt;br /&gt;
Copying blob 8dea37be3176 done&lt;br /&gt;
Copying blob 25fa05cd42bd done&lt;br /&gt;
Copying blob a57130ec8de1 done&lt;br /&gt;
Copying blob 880a66924cf5 done&lt;br /&gt;
Copying config db554d658b done&lt;br /&gt;
Writing manifest to image destination&lt;br /&gt;
Storing signatures&lt;br /&gt;
2022/10/14 10:31:17  info unpack layer: sha256:25fa05cd42bd8fabb25d2a6f3f8c9f7ab34637903d00fd2ed1c1d0fa980427dd&lt;br /&gt;
2022/10/14 10:31:19  info unpack layer: sha256:24a22c1b72605a4dbcec13b743ef60a6cbb43185fe46fd8a35941f9af7c11153&lt;br /&gt;
2022/10/14 10:31:19  info unpack layer: sha256:8dea37be3176a88fae41c265562d5fb438d9281c356dcb4edeaa51451dbdfdb2&lt;br /&gt;
2022/10/14 10:31:20  info unpack layer: sha256:b4dc78aeafca6321025300e9d3050c5ba3fb2ac743ae547c6e1efa3f9284ce0b&lt;br /&gt;
2022/10/14 10:31:20  info unpack layer: sha256:a57130ec8de1e44163e965620d5aed2abe6cddf48b48272964bfd8bca101df38&lt;br /&gt;
2022/10/14 10:31:20  info unpack layer: sha256:d5d706ce7b293ffb369d3bf0e3f58f959977903b82eb26433fe58645f79b778b&lt;br /&gt;
2022/10/14 10:31:49  info unpack layer: sha256:880a66924cf5e11df601a4f531f3741c6867a3e05238bc9b7cebb2a68d479204&lt;br /&gt;
INFO:    Creating SIF file...&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
$ apptainer inspect cuda12.2.2.sif&lt;br /&gt;
...&lt;br /&gt;
maintainer: NVIDIA CORPORATION &amp;lt;sw-cuda-installer@nvidia.com&amp;gt;&lt;br /&gt;
name: ubi8&lt;br /&gt;
org.label-schema.build-arch: amd64&lt;br /&gt;
org.label-schema.build-date: Wednesday_24_January_2024_13:53:0_EST&lt;br /&gt;
org.label-schema.schema-version: 1.0&lt;br /&gt;
org.label-schema.usage.apptainer.version: 1.2.5-1.el8&lt;br /&gt;
org.label-schema.usage.singularity.deffile.bootstrap: docker&lt;br /&gt;
org.label-schema.usage.singularity.deffile.from: nvidia/cuda:12.2.2-base-ubi8&lt;br /&gt;
...&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Now you can run the local image with the &#039;&#039;&#039;run&#039;&#039;&#039; command or start a shell with the &#039;&#039;&#039;shell&#039;&#039;&#039; command.  &lt;br /&gt;
* Please note that if you are in an environment with GPUs and you want to access them inside the container, you need to specify the &#039;&#039;&#039;--nv&#039;&#039;&#039; flag.  NVIDIA requires a specific driver and libraries to run CUDA programs; the flag ensures that the appropriate devices are created inside the container and that these libraries are made available within it.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
$ apptainer run --nv cuda12.2.2.sif nvidia-smi -L&lt;br /&gt;
GPU 0: NVIDIA GeForce GTX 1080 Ti (UUID: GPU-8e040d17-402e-cc86-4e83-eb2b1d501f1e)&lt;br /&gt;
GPU 1: NVIDIA GeForce GTX 1080 Ti (UUID: GPU-d681a21a-8cdd-e624-6bf8-5b0234584ba2)&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==Nexus Containers==&lt;br /&gt;
In our [[Nexus]] environment we have some example containers based on our [https://gitlab.umiacs.umd.edu/derek/pytorch_docker pytorch_docker] project.  These can be found in &amp;lt;code&amp;gt;/fs/nexus-containers/pytorch&amp;lt;/code&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
You can run one of the example images by doing the following (you should have already allocated an interactive job with a GPU in [[Nexus]]).  It will use the default [https://gitlab.umiacs.umd.edu/derek/pytorch_docker/-/blob/master/tensor.py script] found at &amp;lt;code&amp;gt;/srv/tensor.py&amp;lt;/code&amp;gt; within the image.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
$ hostname &amp;amp;&amp;amp; nvidia-smi -L&lt;br /&gt;
tron38.umiacs.umd.edu&lt;br /&gt;
GPU 0: NVIDIA RTX A4000 (UUID: GPU-4a0a5644-9fc8-84b4-5d22-65d45ca36506)&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
$ apptainer run --nv /fs/nexus-containers/pytorch/pytorch_1.13.0+cu117.sif&lt;br /&gt;
99 984.5538940429688&lt;br /&gt;
199 654.1710815429688&lt;br /&gt;
299 435.662353515625&lt;br /&gt;
399 291.1429138183594&lt;br /&gt;
499 195.5575714111328&lt;br /&gt;
599 132.3363037109375&lt;br /&gt;
699 90.5206069946289&lt;br /&gt;
799 62.86213684082031&lt;br /&gt;
899 44.56754684448242&lt;br /&gt;
999 32.466392517089844&lt;br /&gt;
1099 24.461835861206055&lt;br /&gt;
1199 19.166893005371094&lt;br /&gt;
1299 15.6642427444458&lt;br /&gt;
1399 13.347112655639648&lt;br /&gt;
1499 11.814264297485352&lt;br /&gt;
1599 10.800163269042969&lt;br /&gt;
1699 10.129261016845703&lt;br /&gt;
1799 9.685370445251465&lt;br /&gt;
1899 9.391674041748047&lt;br /&gt;
1999 9.19735336303711&lt;br /&gt;
Result: y = 0.0022362577728927135 + 0.837898313999176 x + -0.0003857926349155605 x^2 + -0.09065020829439163 x^3&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
===Bind Mounts===&lt;br /&gt;
&lt;br /&gt;
To get data into the container, you need to pass [https://apptainer.org/docs/user/main/bind_paths_and_mounts.html bind mounts].  Apptainer does not automatically mount any data from the host operating system other than your home directory; users need to manually specify bind mounts for other file paths, for example:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;code&amp;gt;--bind /fs/nexus-scratch/&amp;lt;USERNAME&amp;gt;/&amp;lt;PROJECTNAME&amp;gt;:/mnt&amp;lt;/code&amp;gt;&lt;br /&gt;
&lt;br /&gt;
In this example, we use &#039;&#039;&#039;exec&#039;&#039;&#039; to start an interactive session, binding our [[Nexus]] scratch directory.  The exec command lets us specify the command we want to run inside the container.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
apptainer exec --nv --bind /fs/nexus-scratch/username:/fs/nexus-scratch/username /fs/nexus-containers/pytorch/pytorch_1.13.0+cu117.sif bash&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
You can now write and run your own PyTorch Python code interactively within the container, or write a Python script that you call directly from the &amp;lt;code&amp;gt;apptainer exec&amp;lt;/code&amp;gt; command for batch processing.&lt;br /&gt;
&lt;br /&gt;
===Shared Containers===&lt;br /&gt;
Portable images in the &#039;&#039;&#039;Singularity Image Format&#039;&#039;&#039; (.sif files) can be copied and shared.  Nexus maintains some shared containers in &amp;lt;code&amp;gt;/fs/nexus-containers&amp;lt;/code&amp;gt;.  These are arranged by the application(s) installed in them.&lt;br /&gt;
&lt;br /&gt;
==Docker Workflow Example==&lt;br /&gt;
We have a [https://gitlab.umiacs.umd.edu/derek/pytorch_docker pytorch_docker] example workflow using our [[GitLab]] as a Docker registry.  You can clone the repository and further customize this to your needs. The workflow is:&lt;br /&gt;
&lt;br /&gt;
# Run Docker on a laptop or personal desktop to create the image.&lt;br /&gt;
# Tag the image and push it to your repository (this can be any Docker registry).&lt;br /&gt;
# Pull the image down onto one of our workstations/clusters and run it with your data.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
$ apptainer pull pytorch_docker.sif docker://registry.umiacs.umd.edu/derek/pytorch_docker&lt;br /&gt;
INFO:    Converting OCI blobs to SIF format&lt;br /&gt;
INFO:    Starting build...&lt;br /&gt;
Getting image source signatures&lt;br /&gt;
Copying blob 85386706b020 done&lt;br /&gt;
...&lt;br /&gt;
2022/10/14 10:58:36  info unpack layer: sha256:b6f46848806c8750a68edc4463bf146ed6c3c4af18f5d3f23281dcdfb1c65055&lt;br /&gt;
2022/10/14 10:58:43  info unpack layer: sha256:44845dc671f759820baac0376198141ca683f554bb16a177a3cfe262c9e368ff&lt;br /&gt;
INFO:    Creating SIF file...&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
$ apptainer exec --nv pytorch_docker.sif python3 -c &#039;from __future__ import print_function; import torch; print(torch.cuda.current_device()); x = torch.rand(5, 3); print(x)&#039;&lt;br /&gt;
0&lt;br /&gt;
tensor([[0.3273, 0.7174, 0.3587],&lt;br /&gt;
        [0.2250, 0.3896, 0.4136],&lt;br /&gt;
        [0.3626, 0.0383, 0.6274],&lt;br /&gt;
        [0.6241, 0.8079, 0.2950],&lt;br /&gt;
        [0.0804, 0.9705, 0.0030]])&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;/div&gt;</summary>
		<author><name>Chrissor</name></author>
	</entry>
	<entry>
		<id>https://wiki.umiacs.umd.edu/umiacs/index.php?title=Apptainer&amp;diff=11907</id>
		<title>Apptainer</title>
		<link rel="alternate" type="text/html" href="https://wiki.umiacs.umd.edu/umiacs/index.php?title=Apptainer&amp;diff=11907"/>
		<updated>2024-06-26T14:11:31Z</updated>

		<summary type="html">&lt;p&gt;Chrissor: Consolidated information from /Nexus/Apptainer, and reformatted content slightly&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;[https://apptainer.org Apptainer] is a container platform that doesn&#039;t elevate the privileges of a user running the container.  This is important as UMIACS runs many multi-tenant hosts (such as [[Nexus]]) and doesn&#039;t provide administrative control to users on them.&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Apptainer was previously branded as Singularity.  You should still be able to run commands on the system with &amp;lt;code&amp;gt;singularity&amp;lt;/code&amp;gt;, however you should start migrating to using the &amp;lt;code&amp;gt;apptainer&amp;lt;/code&amp;gt; command.&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
==Overview==&lt;br /&gt;
You can find out what the current version is that we provide by running the &amp;lt;code&amp;gt;apptainer --version&amp;lt;/code&amp;gt; command.  If this instead says &amp;lt;code&amp;gt;apptainer: command not found&amp;lt;/code&amp;gt; and you are using a UMIACS-supported host, please [[HelpDesk | contact staff]] and we will ensure that the software is available on the host you are looking for it on.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
# apptainer --version&lt;br /&gt;
apptainer version 1.2.5-1.el8&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Apptainer can run a variety of images, including its own format and [https://apptainer.org/docs/user/main/docker_and_oci.html Docker images].  To create images from definition files, you need administrative rights.  On UMIACS-supported hosts, use [[Podman]] to accomplish this; alternatively, build images on a host that you have full administrative access to, such as a laptop or personal desktop.&lt;br /&gt;
&lt;br /&gt;
If you are going to pull large images, you may run out of space in your home directory.  We suggest you run the following commands to set up alternate cache and tmp directories.  We use &amp;lt;code&amp;gt;/scratch0&amp;lt;/code&amp;gt; here, but you can substitute any sufficiently large local scratch directory, network scratch directory, or project directory.&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
export WORKDIR=/scratch0/$USER&lt;br /&gt;
export APPTAINER_CACHEDIR=${WORKDIR}/.cache&lt;br /&gt;
export APPTAINER_TMPDIR=${WORKDIR}/.tmp&lt;br /&gt;
mkdir -p $APPTAINER_CACHEDIR&lt;br /&gt;
mkdir -p $APPTAINER_TMPDIR&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
We suggest you pull images down into an intermediate file (a &#039;&#039;&#039;SIF&#039;&#039;&#039; file), as you then do not have to worry about re-caching the image.&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
$ apptainer pull cuda12.2.2.sif docker://nvidia/cuda:12.2.2-base-ubi8&lt;br /&gt;
INFO:    Converting OCI blobs to SIF format&lt;br /&gt;
INFO:    Starting build...&lt;br /&gt;
Getting image source signatures&lt;br /&gt;
Copying blob d5d706ce7b29 done&lt;br /&gt;
Copying blob b4dc78aeafca done&lt;br /&gt;
Copying blob 24a22c1b7260 done&lt;br /&gt;
Copying blob 8dea37be3176 done&lt;br /&gt;
Copying blob 25fa05cd42bd done&lt;br /&gt;
Copying blob a57130ec8de1 done&lt;br /&gt;
Copying blob 880a66924cf5 done&lt;br /&gt;
Copying config db554d658b done&lt;br /&gt;
Writing manifest to image destination&lt;br /&gt;
Storing signatures&lt;br /&gt;
2022/10/14 10:31:17  info unpack layer: sha256:25fa05cd42bd8fabb25d2a6f3f8c9f7ab34637903d00fd2ed1c1d0fa980427dd&lt;br /&gt;
2022/10/14 10:31:19  info unpack layer: sha256:24a22c1b72605a4dbcec13b743ef60a6cbb43185fe46fd8a35941f9af7c11153&lt;br /&gt;
2022/10/14 10:31:19  info unpack layer: sha256:8dea37be3176a88fae41c265562d5fb438d9281c356dcb4edeaa51451dbdfdb2&lt;br /&gt;
2022/10/14 10:31:20  info unpack layer: sha256:b4dc78aeafca6321025300e9d3050c5ba3fb2ac743ae547c6e1efa3f9284ce0b&lt;br /&gt;
2022/10/14 10:31:20  info unpack layer: sha256:a57130ec8de1e44163e965620d5aed2abe6cddf48b48272964bfd8bca101df38&lt;br /&gt;
2022/10/14 10:31:20  info unpack layer: sha256:d5d706ce7b293ffb369d3bf0e3f58f959977903b82eb26433fe58645f79b778b&lt;br /&gt;
2022/10/14 10:31:49  info unpack layer: sha256:880a66924cf5e11df601a4f531f3741c6867a3e05238bc9b7cebb2a68d479204&lt;br /&gt;
INFO:    Creating SIF file...&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
$ apptainer inspect cuda12.2.2.sif&lt;br /&gt;
...&lt;br /&gt;
maintainer: NVIDIA CORPORATION &amp;lt;sw-cuda-installer@nvidia.com&amp;gt;&lt;br /&gt;
name: ubi8&lt;br /&gt;
org.label-schema.build-arch: amd64&lt;br /&gt;
org.label-schema.build-date: Wednesday_24_January_2024_13:53:0_EST&lt;br /&gt;
org.label-schema.schema-version: 1.0&lt;br /&gt;
org.label-schema.usage.apptainer.version: 1.2.5-1.el8&lt;br /&gt;
org.label-schema.usage.singularity.deffile.bootstrap: docker&lt;br /&gt;
org.label-schema.usage.singularity.deffile.from: nvidia/cuda:12.2.2-base-ubi8&lt;br /&gt;
...&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Now you can run the local image with the &#039;&#039;&#039;run&#039;&#039;&#039; command or start a shell with the &#039;&#039;&#039;shell&#039;&#039;&#039; command.  &lt;br /&gt;
* Please note that if you are in an environment with GPUs and you want to access them inside the container, you need to specify the &#039;&#039;&#039;--nv&#039;&#039;&#039; flag.  NVIDIA requires a specific driver and libraries to run CUDA programs; the flag ensures that the appropriate devices are created inside the container and that these libraries are made available within it.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
$ apptainer run --nv cuda12.2.2.sif nvidia-smi -L&lt;br /&gt;
GPU 0: NVIDIA GeForce GTX 1080 Ti (UUID: GPU-8e040d17-402e-cc86-4e83-eb2b1d501f1e)&lt;br /&gt;
GPU 1: NVIDIA GeForce GTX 1080 Ti (UUID: GPU-d681a21a-8cdd-e624-6bf8-5b0234584ba2)&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==Nexus Containers==&lt;br /&gt;
In our [[Nexus]] environment we have some example containers based on our [https://gitlab.umiacs.umd.edu/derek/pytorch_docker pytorch_docker] project.  These can be found in &amp;lt;code&amp;gt;/fs/nexus-containers/pytorch&amp;lt;/code&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
You can run one of the example images by doing the following (you should have already allocated an interactive job with a GPU in [[Nexus]]).  It will use the default [https://gitlab.umiacs.umd.edu/derek/pytorch_docker/-/blob/master/tensor.py script] found at &amp;lt;code&amp;gt;/srv/tensor.py&amp;lt;/code&amp;gt; within the image.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
$ hostname &amp;amp;&amp;amp; nvidia-smi -L&lt;br /&gt;
tron38.umiacs.umd.edu&lt;br /&gt;
GPU 0: NVIDIA RTX A4000 (UUID: GPU-4a0a5644-9fc8-84b4-5d22-65d45ca36506)&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
$ apptainer run --nv /fs/nexus-containers/pytorch/pytorch_1.13.0+cu117.sif&lt;br /&gt;
99 984.5538940429688&lt;br /&gt;
199 654.1710815429688&lt;br /&gt;
299 435.662353515625&lt;br /&gt;
399 291.1429138183594&lt;br /&gt;
499 195.5575714111328&lt;br /&gt;
599 132.3363037109375&lt;br /&gt;
699 90.5206069946289&lt;br /&gt;
799 62.86213684082031&lt;br /&gt;
899 44.56754684448242&lt;br /&gt;
999 32.466392517089844&lt;br /&gt;
1099 24.461835861206055&lt;br /&gt;
1199 19.166893005371094&lt;br /&gt;
1299 15.6642427444458&lt;br /&gt;
1399 13.347112655639648&lt;br /&gt;
1499 11.814264297485352&lt;br /&gt;
1599 10.800163269042969&lt;br /&gt;
1699 10.129261016845703&lt;br /&gt;
1799 9.685370445251465&lt;br /&gt;
1899 9.391674041748047&lt;br /&gt;
1999 9.19735336303711&lt;br /&gt;
Result: y = 0.0022362577728927135 + 0.837898313999176 x + -0.0003857926349155605 x^2 + -0.09065020829439163 x^3&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
===Bind Mounts===&lt;br /&gt;
&lt;br /&gt;
To get data into the container, you need to pass [https://apptainer.org/docs/user/main/bind_paths_and_mounts.html bind mounts].  Apptainer does not automatically mount any data from the host operating system other than your home directory; users need to manually specify bind mounts for other file paths, for example:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;code&amp;gt;--bind /fs/nexus-scratch/&amp;lt;USERNAME&amp;gt;/&amp;lt;PROJECTNAME&amp;gt;:/mnt&amp;lt;/code&amp;gt;&lt;br /&gt;
&lt;br /&gt;
In this example, we use &#039;&#039;&#039;exec&#039;&#039;&#039; to start an interactive session, binding our [[Nexus]] scratch directory.  The exec command lets us specify the command we want to run inside the container.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
apptainer exec --nv --bind /fs/nexus-scratch/username:/fs/nexus-scratch/username /fs/nexus-containers/pytorch/pytorch_1.13.0+cu117.sif bash&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
You can now write and run your own PyTorch Python code interactively within the container, or write a Python script that you call directly from the &amp;lt;code&amp;gt;apptainer exec&amp;lt;/code&amp;gt; command for batch processing.&lt;br /&gt;
&lt;br /&gt;
===Shared Containers===&lt;br /&gt;
Portable images in the &#039;&#039;&#039;Singularity Image Format&#039;&#039;&#039; (.sif files) can be copied and shared.  Nexus maintains some shared containers in &amp;lt;code&amp;gt;/fs/nexus-containers&amp;lt;/code&amp;gt;.  These are arranged by the application(s) installed in them.&lt;br /&gt;
&lt;br /&gt;
==Docker Workflow Example==&lt;br /&gt;
We have a [https://gitlab.umiacs.umd.edu/derek/pytorch_docker pytorch_docker] example workflow using our [[GitLab]] as a Docker registry.  You can clone the repository and further customize this to your needs. The workflow is:&lt;br /&gt;
&lt;br /&gt;
# Run Docker on a laptop or personal desktop to create the image.&lt;br /&gt;
# Tag the image and push it to your repository (this can be any Docker registry).&lt;br /&gt;
# Pull the image down onto one of our workstations/clusters and run it with your data.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
$ apptainer pull pytorch_docker.sif docker://registry.umiacs.umd.edu/derek/pytorch_docker&lt;br /&gt;
INFO:    Converting OCI blobs to SIF format&lt;br /&gt;
INFO:    Starting build...&lt;br /&gt;
Getting image source signatures&lt;br /&gt;
Copying blob 85386706b020 done&lt;br /&gt;
...&lt;br /&gt;
2022/10/14 10:58:36  info unpack layer: sha256:b6f46848806c8750a68edc4463bf146ed6c3c4af18f5d3f23281dcdfb1c65055&lt;br /&gt;
2022/10/14 10:58:43  info unpack layer: sha256:44845dc671f759820baac0376198141ca683f554bb16a177a3cfe262c9e368ff&lt;br /&gt;
INFO:    Creating SIF file...&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
$ apptainer exec --nv pytorch_docker.sif python3 -c &#039;from __future__ import print_function; import torch; print(torch.cuda.current_device()); x = torch.rand(5, 3); print(x)&#039;&lt;br /&gt;
0&lt;br /&gt;
tensor([[0.3273, 0.7174, 0.3587],&lt;br /&gt;
        [0.2250, 0.3896, 0.4136],&lt;br /&gt;
        [0.3626, 0.0383, 0.6274],&lt;br /&gt;
        [0.6241, 0.8079, 0.2950],&lt;br /&gt;
        [0.0804, 0.9705, 0.0030]])&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;/div&gt;</summary>
		<author><name>Chrissor</name></author>
	</entry>
</feed>