Podman
Podman is a daemonless container engine and an alternative to Docker. We do not support Docker in many of our environments because it grants trivial administrative control over the host that the Docker daemon runs on. Podman, by contrast, has the ability to run containers in user namespaces. This means that for every user namespace you create in the kernel, the processes within it map to a new uid/gid range. For example, if you are root in your container, you will not be uid 0 outside the container; you will be something like uid 4294000000.
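These per-user uid/gid ranges live in /etc/subuid and /etc/subgid. A minimal sketch of how one entry is read (the username and numbers below are made-up example values, not taken from a real host):

```shell
# Parse a sample /etc/subuid entry of the form user:start:count
# (example values only; your real range is assigned per host)
line='derek:100000:65536'
user=$(echo "$line" | cut -d: -f1)
start=$(echo "$line" | cut -d: -f2)
count=$(echo "$line" | cut -d: -f3)
# Host uids start..start+count-1 back the uids inside the user namespace
echo "$user owns host uids $start through $((start + count - 1))"
```

With this entry, uid 1 inside the namespace would be backed by host uid 100000, uid 2 by 100001, and so on.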
We still believe that Singularity is the best option for running containerized workloads on our cluster-based resources. Podman is a good option for developing the containers to be run via Singularity or for building a deliverable for a funding agency. Therefore we will only be providing Podman on workstations and standalone servers for individuals who ask for it.
Getting Started
To get started there are a few things that users need to configure.
First, run the podman command. If it reports command not found, or you get an ERRO message like the one below about missing subuid ranges, please contact staff@umiacs.umd.edu with the error and the host that you are using. We will need to take some steps to get the host you want ready.
[derek@zerus:~ ] $ podman
ERRO[0000] cannot find mappings for user derek: No subuid ranges found for user "derek" in /etc/subuid
Error: missing command 'podman COMMAND'
Try 'podman --help' for more information.
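Before emailing, you can check whether your account already has a range assigned. A small sketch (the helper name has_subuid and the sample file are our own, for illustration; on a real host you would check /etc/subuid itself):

```shell
# Return success if the given user has a subuid entry in the given file
has_subuid() {
    grep -q "^$1:" "$2" 2>/dev/null
}

# Demonstrate against a sample file; on a real host, check /etc/subuid
printf 'derek:100000:65536\n' > /tmp/subuid.sample
if has_subuid derek /tmp/subuid.sample; then
    echo "subuid range present"
else
    echo "no subuid range assigned"
fi
```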
Storage
Container images are made up of layers, and these are stored in the location given by the graphroot setting in ~/.config/containers/storage.conf, which by default points into your home directory. Because our home directories are served over NFS, there is an issue[1]: due to the user namespace mapping described above, you will not be able to access your home directory when you are building the layers. You need to update the graphroot setting to a local directory on the host. The file ~/.config/containers/storage.conf may not exist until you run podman for the first time.
[storage]
driver = "vfs"
runroot = "/tmp/run-2174"
graphroot = "/scratch1/derek/.local/share/containers/storage"
...
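You can edit the file by hand, or script the change; a sketch of writing a minimal config (the /tmp/demo path is a stand-in for local scratch space such as /scratch1/$USER, so adjust it for your host):

```shell
# Write a minimal storage.conf pointing graphroot at local disk.
# /tmp/demo stands in for a real local directory like /scratch1/$USER.
conf=/tmp/demo/storage.conf
mkdir -p /tmp/demo/containers/storage
cat > "$conf" <<EOF
[storage]
driver = "vfs"
runroot = "/tmp/run-$(id -u)"
graphroot = "/tmp/demo/containers/storage"
EOF
grep graphroot "$conf"
```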
GPUs
Running Podman with local Nvidia GPUs requires some additional configuration steps that staff must apply to any individual host that runs Podman. This includes installing the nvidia-container-runtime package; you must also add a specific argument to all of your podman commands.
You will need to add the argument --hooks-dir=/usr/share/containers/oci/hooks.d to ensure that podman finds the nvidia-container-runtime. For example, you can run nvidia-smi from within the official Nvidia CUDA containers with a command like this:
$ podman run --rm --hooks-dir=/usr/share/containers/oci/hooks.d docker.io/nvidia/cuda nvidia-smi
Thu Apr 16 18:47:04 2020
+-----------------------------------------------------------------------------+
| NVIDIA-SMI 440.44       Driver Version: 440.44       CUDA Version: 10.2     |
|-------------------------------+----------------------+----------------------+
| GPU  Name        Persistence-M| Bus-Id        Disp.A | Volatile Uncorr. ECC |
| Fan  Temp  Perf  Pwr:Usage/Cap|         Memory-Usage | GPU-Util  Compute M. |
|===============================+======================+======================|
|   0  GeForce GTX TIT...  Off  | 00000000:03:00.0 Off |                  N/A |
| 22%   40C    P8    14W / 250W |    142MiB / 12212MiB |      1%      Default |
+-------------------------------+----------------------+----------------------+
|   1  GeForce GTX TIT...  Off  | 00000000:04:00.0 Off |                  N/A |
| 22%   34C    P8    15W / 250W |      1MiB / 12212MiB |      1%      Default |
+-------------------------------+----------------------+----------------------+

+-----------------------------------------------------------------------------+
| Processes:                                                       GPU Memory |
|  GPU       PID   Type   Process name                             Usage      |
|=============================================================================|
+-----------------------------------------------------------------------------+
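Since every GPU run needs the same --hooks-dir argument, a small shell wrapper can save typing; a sketch (the function name pgpu is our own, not part of Podman):

```shell
# Wrapper that always passes the NVIDIA OCI hooks directory to podman run
pgpu() {
    podman run --hooks-dir=/usr/share/containers/oci/hooks.d "$@"
}

# Usage: pgpu --rm docker.io/nvidia/cuda nvidia-smi
```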
To build your own image, you can start from an example we have at https://gitlab.umiacs.umd.edu/derek/gpudocker.
git clone https://gitlab.umiacs.umd.edu/derek/gpudocker.git
cd gpudocker
podman build -t gpudocker .
$ podman run --volume `pwd`/test:/mnt --hooks-dir=/usr/share/containers/oci/hooks.d gpudocker python3 /mnt/test_torch.py
0
tensor([[0.6652, 0.7605, 0.1398],
        [0.5508, 0.9241, 0.4943],
        [0.8676, 0.0278, 0.4935],
        [0.0394, 0.1132, 0.1114],
        [0.1626, 0.0966, 0.3240]])