CUDA

CUDA is a programming architecture developed by NVIDIA to allow General Purpose Computing on GPUs, or "GPGPU". It requires a specific card and driver to work correctly. UMIACS has a number of facilities and labs that have CUDA hardware available.

==CUDA Software on Linux==

===RHEL5===

The RHEL5 CUDA infrastructure comes in two parts.

First is the driver, which installs libraries in /usr/lib.

Second is the CUDA toolkit. The currently supported CUDA toolkit is stored under /usr/local. Please change the "common.mk" settings in your CUDA SDK to set the CUDA root directory to "/usr/local".
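
For example, assuming your copy of the SDK uses the usual <tt>CUDA_INSTALL_PATH</tt> variable in <tt>common/common.mk</tt> (the variable name can vary between SDK versions, so check your file first), a sketch of the change looks like:

<pre>
# From the top level of your GPU Computing SDK directory; keep a backup of common.mk.
# CUDA_INSTALL_PATH is an assumption -- confirm the variable name in your common/common.mk.
cp common/common.mk common/common.mk.orig
sed -i 's|^CUDA_INSTALL_PATH.*|CUDA_INSTALL_PATH ?= /usr/local|' common/common.mk
</pre>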

You will also need to put the CUDA libraries in your LD_LIBRARY_PATH.

* If you are using a 64-bit machine this will be /usr/local/lib64.
* If you are using a 32-bit machine this will be /usr/local/lib.
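
For example, in a bash shell on a 64-bit RHEL5 machine (a minimal sketch; use /usr/local/lib instead on a 32-bit machine, and the equivalent <tt>setenv</tt> form for tcsh/csh):

<pre>
export LD_LIBRARY_PATH=/usr/local/lib64:${LD_LIBRARY_PATH}
</pre>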

Older versions of the CUDA toolkit are stored in /usr/local/stow/cudatoolkit_X.Y, where X.Y is the version number. So, CUDA 2.2 is stored in /usr/local/stow/cudatoolkit2.2.

===RHEL6===

'''Updated for CUDA 5'''

If you are not using a resource that you already know is running CUDA, the first thing to check is whether the NVIDIA driver is loaded. You can run '''cat /proc/driver/nvidia/version'''.

If you do not see output like the following example, contact staff to find out whether your hardware is capable; if it is, we will add the driver to the machine. You must have driver version '''304.64''' or greater for CUDA 5.0.35.

<pre>
$ cat /proc/driver/nvidia/version
NVRM version: NVIDIA UNIX x86_64 Kernel Module  304.64  Tue Oct 30 10:58:20 PDT 2012
GCC version:  gcc version 4.4.6 20120305 (Red Hat 4.4.6-4) (GCC)
</pre>
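
If you want to check this from a script, a minimal sketch (assuming the driver file exists and that GNU <tt>sort -V</tt> is available for the version comparison) might look like:

<pre>
#!/bin/bash
# Compare the running NVIDIA driver version against the minimum needed for CUDA 5.0.35.
required="304.64"
current=$(awk '/NVRM version/ {for (i=1; i<=NF; i++) if ($i ~ /^[0-9]+\./) {print $i; exit}}' \
          /proc/driver/nvidia/version 2>/dev/null)
if [ -z "$current" ]; then
    echo "No NVIDIA driver found; contact staff." >&2
elif [ "$(printf '%s\n%s\n' "$required" "$current" | sort -V | head -n1)" = "$required" ]; then
    echo "Driver $current is new enough for CUDA 5.0.35."
else
    echo "Driver $current is older than $required; contact staff." >&2
fi
</pre>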

The other component is the toolkit. In RHEL6 we have relocated non-locally compiled software into /opt/common, and this includes the CUDA toolkit. You can find all the available versions in /opt/common/cuda.

<pre>
$ ls /opt/common/cuda
cuda-5.0.35         cudatoolkit-4.0.17  cudatoolkit-4.2.9
cudatoolkit-3.2.16  cudatoolkit-4.1.28  UMIACS-CUDA-SDK.diff
</pre>

You will need to set up a number of environment variables to get started.

* bash/sh
** export PATH=/opt/common/cuda/cuda-5.0.35/bin:${PATH}
** export LD_RUN_PATH=/opt/common/cuda/cuda-5.0.35/lib64:/opt/common/cuda/cuda-5.0.35/lib:${LD_RUN_PATH}
** export LIBRARY_PATH=/usr/lib64/nvidia:/usr/lib/nvidia
* tcsh/csh
** setenv PATH /opt/common/cuda/cuda-5.0.35/bin:${PATH}
** setenv LD_RUN_PATH /opt/common/cuda/cuda-5.0.35/lib64:/opt/common/cuda/cuda-5.0.35/lib:${LD_RUN_PATH}
** setenv LIBRARY_PATH /usr/lib64/nvidia:/usr/lib/nvidia
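
If you use bash and want these settings in every session, one option (a sketch using the same values listed above) is to add them to your <tt>~/.bashrc</tt>:

<pre>
# CUDA 5.0.35 from /opt/common on RHEL6 -- same settings as the list above
export PATH=/opt/common/cuda/cuda-5.0.35/bin:${PATH}
export LD_RUN_PATH=/opt/common/cuda/cuda-5.0.35/lib64:/opt/common/cuda/cuda-5.0.35/lib:${LD_RUN_PATH}
export LIBRARY_PATH=/usr/lib64/nvidia:/usr/lib/nvidia
</pre>

After editing, open a new shell (or run <tt>source ~/.bashrc</tt>) and confirm that <tt>which nvcc</tt> points at /opt/common/cuda/cuda-5.0.35/bin/nvcc.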

To get started you might want to build and test with the CUDA GPU Computing SDK. In CUDA 5 the SDK is no longer separate; instead, you make your own copy of the samples directory that ships inside the CUDA toolkit directory. You can do this by running the following command.

  rsync -a /opt/common/cuda/cuda-5.0.35/samples ~/cuda_samples

You should now be able to run the <tt>make</tt> command in the <tt>~/cuda_samples</tt> directory and compile all the examples. Please note that you will need to load an MPI module to compile all of the examples successfully; for example, you can run <tt>module load openmpi-x86_64</tt>. Alternatively, you can compile specific samples by going into their respective directories and typing <tt>make</tt>.
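
For example, to build and run just the deviceQuery sample (a sketch; 1_Utilities/deviceQuery is where this sample usually lives in the CUDA 5 samples tree, but the exact layout of your copied directory may differ):

<pre>
# Load an MPI module so that MPI-based samples can build too
module load openmpi-x86_64
# Build a single sample
cd ~/cuda_samples/1_Utilities/deviceQuery
make
# Running it lists the CUDA-capable devices visible on this machine
./deviceQuery
</pre>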

'''Note''': Compiling requires the <tt>LIBRARY_PATH</tt> environment variable above to tell GCC where to find the NVIDIA shared libraries. If you are using an alternate compiler, consult its documentation on how to modify the compile-time linker path.
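
When compiling your own code outside of the samples tree, you can also pass the NVIDIA library directory to the linker explicitly instead of relying on <tt>LIBRARY_PATH</tt>. A minimal sketch (<tt>hello.cu</tt> is just an illustrative file name, and <tt>-lcuda</tt> is only needed if your code uses the CUDA driver API):

<pre>
# Compile a standalone CUDA source file with the toolkit's nvcc,
# pointing the linker at the driver libraries in /usr/lib64/nvidia
/opt/common/cuda/cuda-5.0.35/bin/nvcc -o hello hello.cu -L/usr/lib64/nvidia -lcuda
</pre>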

===Ubuntu 12.04===

First you will need to ensure that the updated x-updates repository has been added to your workstation. Please contact staff@umiacs.umd.edu and we will help get this in place.

The other component is the toolkit. In Ubuntu, non-locally compiled software is available in /opt/common, and this includes the CUDA toolkit. You can find all the available versions in /opt/common/cuda.

On Ubuntu, CUDA is tested with toolkit version 4.2.9 and higher.

<pre>
$ ls /opt/common/cuda
cudatoolkit-3.2.16  cudatoolkit-4.0.17  cudatoolkit-4.1.28  cudatoolkit-4.2.9
</pre>

You will need to set up a number of environment variables to get started.

* bash/sh
** export PATH=/opt/common/cuda/cudatoolkit-4.2.9/bin:${PATH}
** export LD_LIBRARY_PATH=/opt/common/cuda/cudatoolkit-4.2.9/lib64:/opt/common/cuda/cudatoolkit-4.2.9/lib:${LD_LIBRARY_PATH}
* tcsh/csh
** setenv PATH /opt/common/cuda/cudatoolkit-4.2.9/bin:${PATH}
** setenv LD_LIBRARY_PATH /opt/common/cuda/cudatoolkit-4.2.9/lib64:/opt/common/cuda/cudatoolkit-4.2.9/lib:${LD_LIBRARY_PATH}

To get started you might want to build and test with the GPU Computing SDK. You can do this by running '''/opt/common/cuda/cudatoolkit-4.2.9/gpucomputingsdk_4.2.9_linux.run'''. It will prompt you for where you want to install the SDK.

Once it is installed, apply this patch from within the top level of the directory you installed the SDK into.

<pre>
patch -p1 < /opt/common/cuda/cudatoolkit-4.2.9/UMIACS-CUDA-SDK-Ubuntu-12.04.diff
</pre>

You should now be able to run '''make''' and compile all the examples.
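
Putting the Ubuntu steps together, a minimal sketch of the whole sequence (the install location ~/NVIDIA_GPU_Computing_SDK is the installer's usual default; substitute whatever directory you chose at the prompt):

<pre>
# Run the SDK installer; it will prompt for an install location
sh /opt/common/cuda/cudatoolkit-4.2.9/gpucomputingsdk_4.2.9_linux.run

# Apply the UMIACS patch from the top level of the install directory, then build everything
cd ~/NVIDIA_GPU_Computing_SDK
patch -p1 < /opt/common/cuda/cudatoolkit-4.2.9/UMIACS-CUDA-SDK-Ubuntu-12.04.diff
make
</pre>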