[https://podman.io/ Podman] is a daemonless container engine alternative to [https://www.docker.com/ Docker].  We don't support Docker in many of our environments because it grants trivial administrative control over the host the Docker daemon runs on.  Podman, on the other hand, can run containers in user namespaces.  This means that for every user namespace you create in the kernel, the processes within it map to a new uid/gid range.  For example, if you are root in your container, you will not be uid 0 outside the container, but instead you will be uid 4294000000.
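You can inspect the mapping that will be used for your containers with <code>podman unshare</code>, which runs a command inside your user namespace.  A minimal illustration, assuming your host uid is 1000 and your subuid range starts at 4294000000 (the actual values come from the host's <code>/etc/subuid</code>):

<pre>
$ podman unshare cat /proc/self/uid_map
         0       1000          1
         1 4294000000      65535
</pre>

Each line maps a uid inside the namespace (first column) to a host uid (second column) for a range of the given length (third column).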


We still believe that [[Apptainer]] is the best option for running containerized workloads on our cluster-based resources.  Podman is a good option for developing the containers to be run via Apptainer, or for building a deliverable for a funding agency.  Please [[HelpDesk | contact staff]] if you would like Podman installed on a workstation or standalone server.  More information on running Podman rootless can be found [https://github.com/containers/podman/blob/main/docs/tutorials/rootless_tutorial.md here].


== Getting Started ==
To get started there are a few things that you need to configure.

First, run the <code>podman</code> command.  If it says command not found, or you get an ERRO like the one below about no subuid ranges, and you are on a workstation or standalone (non-cluster) server, please [[HelpDesk | contact staff]] with the error and the host that you are using.  We will need to take some steps to get that host set up.


<pre>
$ podman
ERRO[0000] cannot find mappings for user username: No subuid ranges found for user "username" in /etc/subuid
Error: missing command 'podman COMMAND'
Try 'podman --help' for more information.
</pre>
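Once the host is set up, your allocation appears as a line in <code>/etc/subuid</code> (and <code>/etc/subgid</code>) in the format <code>user:start:count</code>.  A minimal illustration with placeholder values:

<pre>
$ grep username /etc/subuid
username:4294000000:65536
</pre>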


=== Storage ===
Container images are made up of layers, and these are stored under the <code>graphroot</code> setting of <code>~/.config/containers/storage.conf</code>, which by default points into your home directory.  With our home directories being served over NFS, there is an [https://www.redhat.com/sysadmin/rootless-podman-nfs issue]: due to the user namespace mapping described above, you will not be able to access your home directory while building the layers.


You need to update the <code>graphroot</code> setting to a local directory on the host.  The file <code>~/.config/containers/storage.conf</code> may not exist until you run <code>podman</code> for the first time; however, you can create it manually.


<pre>
[storage]
  driver = "vfs"
  graphroot = "/scratch0/username/.local/share/containers/storage"
...
</pre>
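After editing the file, you can confirm that Podman has picked up the new location, since <code>podman info</code> reports the active storage settings:

<pre>
$ podman info --format '{{ .Store.GraphRoot }}'
/scratch0/username/.local/share/containers/storage
</pre>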
When building larger images, Podman may fill up the default directory for <code>imageCopyTmpDir</code> (<code>/var/tmp</code>).  If this happens, you will need to specify a different directory using the environment variable <code>TMPDIR</code>.  For example:
<pre>export TMPDIR="/scratch0/example_tmp_directory"</pre>
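Image layers and build temporary files can be large, so it is worth checking the free space on the local directory you choose first, e.g.:

<pre>
$ df -h /scratch0
</pre>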


== GPUs ==
Running Podman with local Nvidia GPUs requires some additional configuration steps that staff must apply to any individual workstation or standalone (non-cluster) server that runs Podman.  This includes ensuring the <tt>nvidia-container-runtime</tt> package is installed.


You will also need to add the argument <code>--hooks-dir=/usr/share/containers/oci/hooks.d</code> to your <code>podman</code> commands so that Podman finds the nvidia-container-runtime hook.  For example, you can run <code>nvidia-smi</code> from within the official Nvidia CUDA containers with a command like this, optionally replacing the tag for different CUDA versions/OS images:


<pre>
$ podman run --rm --hooks-dir=/usr/share/containers/oci/hooks.d docker.io/nvidia/cuda:12.2.2-base-ubi8 nvidia-smi
Thu Apr 16 18:47:04 2020
+---------------------------------------------------------------------------------------+
| NVIDIA-SMI 535.129.03             Driver Version: 535.129.03   CUDA Version: 12.2     |
+---------------------------------------------------------------------------------------+
| GPU  Name                 Persistence-M | Bus-Id        Disp.A | Volatile Uncorr. ECC |
| Fan  Temp   Perf          Pwr:Usage/Cap |         Memory-Usage | GPU-Util  Compute M. |
|                                         |                      |               MIG M. |
|=========================================+======================+======================|
|   0  NVIDIA RTX A6000               Off | 00000000:01:00.0 Off |                  Off |
| 30%   28C    P8               6W / 300W |      2MiB / 49140MiB |      0%      Default |
|                                         |                      |                  N/A |
+-----------------------------------------+----------------------+----------------------+

+---------------------------------------------------------------------------------------+
| Processes:                                                                            |
|  GPU   GI   CI        PID   Type   Process name                            GPU Memory |
|        ID   ID                                                             Usage      |
|=======================================================================================|
|  No running processes found                                                           |
+---------------------------------------------------------------------------------------+
</pre>


The full list of tags can be found at https://hub.docker.com/r/nvidia/cuda/tags.
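If GPU access does not work, one quick check is that the Nvidia OCI hook is actually present in the hooks directory.  The exact file name can vary with the nvidia-container-runtime version; something like:

<pre>
$ ls /usr/share/containers/oci/hooks.d
oci-nvidia-hook.json
</pre>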
== Example ==
To build your own image, you can start from the example we have at https://gitlab.umiacs.umd.edu/derek/gpudocker.


First, clone the repository, change into it, and build the image with <code>podman</code>.
<pre>
git clone https://gitlab.umiacs.umd.edu/derek/gpudocker.git
cd gpudocker
podman build -t gpudocker .
</pre>


Then you can run the test script to verify.  Notice that we pass the local directory <code>test</code> as a path into the container so we can run a script from it.  This is also useful for your output data, as anything you write anywhere else in the container will not be available outside the container.
<pre>
$ podman run --volume `pwd`/test:/mnt --hooks-dir=/usr/share/containers/oci/hooks.d gpudocker python3 /mnt/test_torch.py
GPU found 0: GeForce GTX 1080 Ti
tensor([[0.3479, 0.6594, 0.5791],
        [0.6065, 0.3415, 0.9328],
        [0.9117, 0.3541, 0.9050],
        [0.6611, 0.5361, 0.3212],
        [0.8574, 0.5116, 0.7021]])
</pre>
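As a quick check that writes through the bind mount land on the host, you can write a file into <code>/mnt</code> from inside the container.  A minimal sketch, assuming the image provides a shell; <code>results.txt</code> is just an illustrative name:

<pre>
$ podman run --rm --volume `pwd`/test:/mnt gpudocker sh -c 'echo done > /mnt/results.txt'
$ cat test/results.txt
done
</pre>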
If you instead want to push modifications to this example to your own container registry so that you can pull the image down later, please see the README.md in the example repository itself.
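The general shape of that workflow is the usual login/tag/push sequence.  A hedged sketch, where <code>registry.example.com</code> and the image path are placeholders and the README.md has the authoritative steps:

<pre>
$ podman login registry.example.com
$ podman tag gpudocker registry.example.com/username/gpudocker:latest
$ podman push registry.example.com/username/gpudocker:latest
</pre>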
