CUDA

[http://en.wikipedia.org/wiki/CUDA CUDA] is a parallel computing platform and programming model invented by NVIDIA. It enables dramatic increases in computing performance by harnessing the power of the graphics processing unit (GPU). CUDA requires a supported NVIDIA GPU and driver to work correctly; UMIACS has a number of facilities and labs that have CUDA hardware available.

==Prerequisites==
* NVIDIA GPU device
* NVIDIA Driver


{{Note | If you are unsure if your device is CUDA capable, please [[HelpDesk |contact staff]].}}

==Getting Started==


# Load the CUDA environment variables via [[Modules | GNU Modules]]
#*Multiple versions are available. See the modules documentation and <code>module list cuda</code> for more information.
#: <pre>module load cuda</pre>
# Obtain a copy of the CUDA samples:
#: <pre>rsync -a /opt/common/cuda/<CUDA Version>/samples/ ~/cuda_samples</pre>
# Build and run the deviceQuery sample:
#: <pre>cd ~/cuda_samples/1_Utilities/deviceQuery/ && make && ./deviceQuery</pre>


Assuming the deviceQuery compilation completed without error, you should now see output listing the details of the GPUs in your system. If desired, you can compile additional samples by switching to their respective directories and running 'make'.
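
Once the samples build, you can compile your own CUDA programs with <code>nvcc</code>, which should be on your PATH after <code>module load cuda</code>. The following is only a minimal sketch of a standalone vector-addition program; the file name vector_add.cu is arbitrary, and error checking is omitted for brevity.

<pre>
// vector_add.cu -- minimal standalone CUDA example (hypothetical file name).
// Build after loading the cuda module:   nvcc -o vector_add vector_add.cu
#include <cstdio>
#include <cstdlib>
#include <cuda_runtime.h>

// Kernel: each thread adds one pair of elements.
__global__ void vectorAdd(const float *a, const float *b, float *c, int n) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) c[i] = a[i] + b[i];
}

int main() {
    const int n = 1 << 20;
    const size_t bytes = n * sizeof(float);

    // Allocate and initialize host buffers.
    float *h_a = (float *)malloc(bytes);
    float *h_b = (float *)malloc(bytes);
    float *h_c = (float *)malloc(bytes);
    for (int i = 0; i < n; ++i) { h_a[i] = 1.0f; h_b[i] = 2.0f; }

    // Allocate device buffers and copy the inputs over.
    float *d_a, *d_b, *d_c;
    cudaMalloc((void **)&d_a, bytes);
    cudaMalloc((void **)&d_b, bytes);
    cudaMalloc((void **)&d_c, bytes);
    cudaMemcpy(d_a, h_a, bytes, cudaMemcpyHostToDevice);
    cudaMemcpy(d_b, h_b, bytes, cudaMemcpyHostToDevice);

    // Launch enough 256-thread blocks to cover all n elements.
    const int threads = 256;
    const int blocks = (n + threads - 1) / threads;
    vectorAdd<<<blocks, threads>>>(d_a, d_b, d_c, n);
    cudaMemcpy(h_c, d_c, bytes, cudaMemcpyDeviceToHost);

    printf("c[0] = %f (expected 3.0)\n", h_c[0]);

    cudaFree(d_a); cudaFree(d_b); cudaFree(d_c);
    free(h_a); free(h_b); free(h_c);
    return 0;
}
</pre>

If the build or run fails with a missing library error, double-check that the cuda module is still loaded in that shell.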

==CUDA Software on Linux==

===RHEL5===
The RHEL5 CUDA infrastructure comes in two parts.

First is the driver, which installs libraries in /usr/lib.

Second is the CUDA toolkit. The currently supported CUDA toolkit is stored under /usr/local. Please change the "common.mk" settings in your CUDA SDK to set the CUDA root directory to "/usr/local".

You will also need to put the CUDA libraries in your LD_LIBRARY_PATH.
*If you are using a 64-bit machine this will be /usr/local/lib64.
*If you are using a 32-bit machine this will be /usr/local/lib.

Older versions of the CUDA toolkit are stored in /usr/local/stow/cudatoolkit_X.Y, where X.Y is the version number. For example, CUDA 2.2 is stored in /usr/local/stow/cudatoolkit2.2.

===RHEL6===
The first component is the NVIDIA driver. If you are not using a resource that you already know is running CUDA, check whether the driver is loaded by running '''cat /proc/driver/nvidia/version'''

If you do not see output similar to the following example, contact staff to check whether your hardware is CUDA capable; if it is, we will add the driver to the machine.
 
<pre>
$ cat /proc/driver/nvidia/version
NVRM version: NVIDIA UNIX x86_64 Kernel Module 304.43  Sun Aug 19 20:14:03 PDT 2012
GCC version:  gcc version 4.4.6 20120305 (Red Hat 4.4.6-4) (GCC)
</pre>
 
The other component is the CUDA toolkit. In RHEL6 we have relocated non-locally compiled software, including the CUDA toolkit, into /opt/common. You can find all of the available versions in '''/opt/common/cuda'''.
 
<pre>
$ ls /opt/common/cuda
cudatoolkit-3.2.16  cudatoolkit-4.1.28  UMIACS-CUDA-SDK.diff
cudatoolkit-4.0.17  cudatoolkit-4.2.9
</pre>
 
We will use version 4.2.9 for this example; substitute as needed. You will need to set up a number of environment variables to get started.
 
* bash/sh
** export PATH=/opt/common/cuda/cudatoolkit-4.2.9/bin:${PATH}
** export LD_LIBRARY_PATH=/opt/common/cuda/cudatoolkit-4.2.9/lib64:/opt/common/cuda/cudatoolkit-4.2.9/lib:${LD_LIBRARY_PATH}
* tcsh/csh
** setenv PATH /opt/common/cuda/cudatoolkit-4.2.9/bin:${PATH}
** setenv LD_LIBRARY_PATH /opt/common/cuda/cudatoolkit-4.2.9/lib64:/opt/common/cuda/cudatoolkit-4.2.9/lib:${LD_LIBRARY_PATH}
 
To get started you might want to build and test with the GPU Computing SDK. You can do this by running '''/opt/common/cuda/cudatoolkit-4.2.9/gpucomputingsdk_4.2.9_linux.run'''. It will prompt you for where you want to install the SDK.
 
Once it is installed, apply the patch below from the directory you installed the SDK into, if there is one for the version you are trying to use.
 
<pre>
patch -p1 < /opt/common/cuda/cudatoolkit-4.2.9/UMIACS-CUDA-SDK-RHEL6-4.2.9.diff
</pre>
 
You should now be able to run '''make''' and compile all the examples.
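
As a quick end-to-end check that the driver, the toolkit on your PATH, and your LD_LIBRARY_PATH all agree, you can also compile a few lines against the CUDA runtime API. This is only a sketch (the file name check_cuda.cu is arbitrary); build it with nvcc and run the resulting binary on the GPU machine.

<pre>
// check_cuda.cu -- report driver/runtime versions and visible GPUs (hypothetical file name).
// Build:   nvcc -o check_cuda check_cuda.cu
#include <cstdio>
#include <cuda_runtime.h>

int main() {
    int driverVersion = 0, runtimeVersion = 0, deviceCount = 0;

    // Versions are encoded as 1000*major + 10*minor (e.g. 4020 means 4.2).
    cudaDriverGetVersion(&driverVersion);
    cudaRuntimeGetVersion(&runtimeVersion);
    printf("Driver API version:  %d.%d\n", driverVersion / 1000, (driverVersion % 100) / 10);
    printf("Runtime API version: %d.%d\n", runtimeVersion / 1000, (runtimeVersion % 100) / 10);

    cudaError_t err = cudaGetDeviceCount(&deviceCount);
    if (err != cudaSuccess) {
        printf("cudaGetDeviceCount failed: %s\n", cudaGetErrorString(err));
        return 1;
    }

    // Print the name and compute capability of each visible GPU.
    for (int i = 0; i < deviceCount; ++i) {
        cudaDeviceProp prop;
        cudaGetDeviceProperties(&prop, i);
        printf("Device %d: %s (compute capability %d.%d)\n",
               i, prop.name, prop.major, prop.minor);
    }
    return 0;
}
</pre>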
 
===Ubuntu 12.04===
First you will need to ensure that the updated x-updates repository is added to your workstation. Please contact staff@umiacs.umd.edu and we will help get this in place for your workstation.
 
The other component is the CUDA toolkit. In Ubuntu, non-locally compiled software, including the CUDA toolkit, is available in /opt/common. You can find all of the available versions in '''/opt/common/cuda'''.
 
On Ubuntu, CUDA is tested with toolkit version 4.2.9 and higher.
 
<pre>
$ ls /opt/common/cuda
cudatoolkit-3.2.16  cudatoolkit-4.0.17  cudatoolkit-4.1.28 cudatoolkit-4.2.9
</pre>
 
* bash/sh
** export PATH=/opt/common/cuda/cudatoolkit-4.2.9/bin:${PATH}
** export LD_LIBRARY_PATH=/opt/common/cuda/cudatoolkit-4.2.9/lib64:/opt/common/cuda/cudatoolkit-4.2.9/lib:${LD_LIBRARY_PATH}
* tcsh/csh
** setenv PATH /opt/common/cuda/cudatoolkit-4.2.9/bin:${PATH}
** setenv LD_LIBRARY_PATH /opt/common/cuda/cudatoolkit-4.2.9/lib64:/opt/common/cuda/cudatoolkit-4.2.9/lib:${LD_LIBRARY_PATH}
 
To get started you might want to build and test with the GPU Computing SDK. You can do this by running '''/opt/common/cuda/cudatoolkit-4.2.9/gpucomputingsdk_4.2.9_linux.run'''. It will prompt you for where you want to install the SDK.
 
Once it is installed, apply this patch from the directory you installed the SDK into.
 
<pre>
patch -p1 < /opt/common/cuda/cudatoolkit-4.2.9/UMIACS-CUDA-SDK-Ubuntu-12.04.diff
</pre>
 
You should now be able to run '''make''' and compile all the examples.
