CUDA
CUDA is a parallel computing architecture developed by NVIDIA that enables General-Purpose computing on GPUs ("GPGPU"). It requires a supported card and driver to work correctly; UMIACS has a number of facilities and labs with CUDA-capable hardware available.
CUDA Software on Linux
RHEL5
The RHEL5 CUDA infrastructure comes in two parts.
First is the driver, which installs its libraries in /usr/lib.
Second is the CUDA toolkit. The currently supported toolkit is installed under /usr/local. Change the "common.mk" settings in your CUDA SDK so that the CUDA root directory points to "/usr/local".
You will also need to put the CUDA libraries in your LD_LIBRARY_PATH.
- If you are using a 64-bit machine, this will be /usr/local/lib64.
- If you are using a 32-bit machine, this will be /usr/local/lib.
Older versions of the CUDA toolkit are stored in /usr/local/stow/cudatoolkit_X.Y, where X.Y is the version number. CUDA 2.2, for example, is stored in /usr/local/stow/cudatoolkit2.2.
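Once the toolkit libraries are in your LD_LIBRARY_PATH, a quick way to confirm that the driver and toolkit can see your GPU is to compile and run a small device-query program. The sketch below is illustrative only; it assumes the toolkit's bin directory (on RHEL5, /usr/local/bin) is on your PATH or that you invoke nvcc by its full path, and the file name check_cuda.cu is arbitrary.
<pre>
// check_cuda.cu -- minimal sketch: list the CUDA devices the runtime can see.
// Compile with:  nvcc check_cuda.cu -o check_cuda
#include <cstdio>
#include <cuda_runtime.h>

int main() {
    int count = 0;
    cudaError_t err = cudaGetDeviceCount(&count);
    if (err != cudaSuccess) {
        printf("cudaGetDeviceCount failed: %s\n", cudaGetErrorString(err));
        return 1;
    }
    printf("Found %d CUDA device(s)\n", count);
    for (int i = 0; i < count; ++i) {
        cudaDeviceProp prop;
        cudaGetDeviceProperties(&prop, i);
        printf("  Device %d: %s (compute capability %d.%d)\n",
               i, prop.name, prop.major, prop.minor);
    }
    return 0;
}
</pre>
If this prints at least one device, the driver and runtime are working together; if it reports an error, check that your LD_LIBRARY_PATH includes the library directory described above.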
RHEL6
The first thing to check, if you are not using a resource that you already know runs CUDA, is whether the NVIDIA driver is loaded. You can run cat /proc/driver/nvidia/version.
If you do not see output similar to the example below, contact staff; we will check whether your hardware is capable and, if so, add the driver to the machine.
<pre>
$ cat /proc/driver/nvidia/version
NVRM version: NVIDIA UNIX x86_64 Kernel Module  295.33  Sat Mar 17 14:55:45 PDT 2012
GCC version:  gcc version 4.4.6 20110731 (Red Hat 4.4.6-3) (GCC)
</pre>
The other component is the toolkit. In RHEL6 we have relocated non-locally compiled software, including the CUDA toolkit, into /opt/common. You can find all the available versions in /opt/common/cuda.
<pre>
$ ls /opt/common/cuda
cudatoolkit-3.2.16  cudatoolkit-4.0.17  cudatoolkit-4.1.28
</pre>
We will use version 4.1.28 for this example; substitute another version as needed. You will need to set up a number of environment variables to get started.
- bash/sh
  - export PATH=/opt/common/cuda/cudatoolkit-4.1.28/bin:${PATH}
  - export LD_LIBRARY_PATH=/opt/common/cuda/cudatoolkit-4.1.28/lib64:/opt/common/cuda/cudatoolkit-4.1.28/lib:${LD_LIBRARY_PATH}
- tcsh/csh
  - setenv PATH /opt/common/cuda/cudatoolkit-4.1.28/bin:${PATH}
  - setenv LD_LIBRARY_PATH /opt/common/cuda/cudatoolkit-4.1.28/lib64:/opt/common/cuda/cudatoolkit-4.1.28/lib:${LD_LIBRARY_PATH}
To get started, you might want to build and test with the GPU Computing SDK. You can do this by running /opt/common/cuda/cudatoolkit-4.1.28/gpucomputingsdk_4.1.28_linux.run, which will prompt you for where you want to install the SDK.
Once it is installed, apply the following patch from the directory where you installed the SDK:
<pre>
patch -p1 < /opt/common/cuda/cudatoolkit-4.1.28/UMIACS-CUDA-SDK-4.1.28.diff
</pre>
You should now be able to run make and compile all the examples.
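If you want a quicker smoke test than building the full SDK, you can compile and run a tiny kernel directly with the toolkit's nvcc after setting the environment variables above. This is only an illustrative sketch, not part of the SDK; the file name vec_add.cu and the array size are arbitrary, and error checking of the CUDA calls is omitted for brevity.
<pre>
// vec_add.cu -- minimal sketch: add two small vectors on the GPU and verify the result.
// Compile with:  /opt/common/cuda/cudatoolkit-4.1.28/bin/nvcc vec_add.cu -o vec_add
#include <cstdio>
#include <cstdlib>
#include <cuda_runtime.h>

__global__ void add(const float *a, const float *b, float *c, int n) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) c[i] = a[i] + b[i];
}

int main() {
    const int n = 1024;
    const size_t bytes = n * sizeof(float);

    // Host buffers with known values: a[i] = i, b[i] = 2i, so a + b should be 3i.
    float *ha = (float *)malloc(bytes), *hb = (float *)malloc(bytes), *hc = (float *)malloc(bytes);
    for (int i = 0; i < n; ++i) { ha[i] = (float)i; hb[i] = 2.0f * i; }

    // Device buffers, copy inputs over, launch the kernel, copy the result back.
    float *da, *db, *dc;
    cudaMalloc((void **)&da, bytes);
    cudaMalloc((void **)&db, bytes);
    cudaMalloc((void **)&dc, bytes);
    cudaMemcpy(da, ha, bytes, cudaMemcpyHostToDevice);
    cudaMemcpy(db, hb, bytes, cudaMemcpyHostToDevice);
    add<<<(n + 255) / 256, 256>>>(da, db, dc, n);
    cudaMemcpy(hc, dc, bytes, cudaMemcpyDeviceToHost);

    // Verify on the host.
    int bad = 0;
    for (int i = 0; i < n; ++i)
        if (hc[i] != 3.0f * i) ++bad;
    if (bad) printf("FAILED: %d mismatches\n", bad);
    else     printf("PASSED\n");

    cudaFree(da); cudaFree(db); cudaFree(dc);
    free(ha); free(hb); free(hc);
    return bad != 0;
}
</pre>
If the program prints PASSED, the driver, toolkit, and your PATH and LD_LIBRARY_PATH settings are all working together.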