CUDA

From UMIACS
[http://en.wikipedia.org/wiki/CUDA CUDA] is a parallel computing platform and programming model invented by NVIDIA. It enables dramatic increases in computing performance by harnessing the power of the graphics processing unit (GPU).


==Prerequisites==
* NVIDIA GPU device
* NVIDIA Driver


{{Note | If you are unsure if your device is CUDA capable, please [[HelpDesk |contact staff]].}}
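As a quick first check before contacting staff, the <code>nvidia-smi</code> utility (installed alongside the NVIDIA driver; this check is a convenience suggestion, not part of the official setup) reports any NVIDIA devices the driver can see:

```shell
# Check for an NVIDIA device and driver from the command line.
# nvidia-smi ships with the NVIDIA driver; if it is missing, either the
# driver is not installed or this machine has no NVIDIA GPU.
if command -v nvidia-smi >/dev/null 2>&1; then
    nvidia-smi --query-gpu=name --format=csv,noheader
else
    echo "nvidia-smi not found; NVIDIA driver not installed"
fi
```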
==Getting Started==


# Load the CUDA environment variables via [[Modules | GNU Modules]]
#* Multiple versions are available. See the modules documentation and <code>module avail cuda</code> for more information.
#: <pre> module load cuda</pre>
# Obtain a copy of the CUDA samples:
#: <pre>rsync -a /opt/common/cuda/<CUDA Version>/samples/ ~/cuda_samples</pre>
# Build and run the device query sample:
#: <pre> cd ~/cuda_samples/1_Utilities/deviceQuery/ && make && ./deviceQuery</pre>
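The numbered steps above can be combined into a single script. This is a sketch, not part of the official instructions: it assumes the UMIACS GNU Modules setup and the <code>/opt/common/cuda</code> sample layout described on this page, and you must substitute a real version for the placeholder.

```shell
#!/bin/sh
# Sketch of the Getting Started steps as one script (assumes the UMIACS
# GNU Modules setup and /opt/common/cuda layout described on this page).
set -e
CUDA_VERSION="<CUDA Version>"   # substitute a version listed under /opt/common/cuda

if command -v module >/dev/null 2>&1; then
    module load cuda                                                     # step 1
    rsync -a "/opt/common/cuda/${CUDA_VERSION}/samples/" ~/cuda_samples  # step 2
    cd ~/cuda_samples/1_Utilities/deviceQuery                            # step 3
    make && ./deviceQuery
else
    echo "GNU Modules not found; run this on a UMIACS CUDA host"
fi
```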


Assuming the deviceQuery compilation completed without error, you should now see output listing the details of the GPUs in your system. If desired, you can compile additional samples by switching to their respective directories and running <code>make</code>.
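For example, to build and run one more sample (the vectorAdd directory name below is only an illustration; <code>ls ~/cuda_samples</code> shows what is actually available in your copy):

```shell
# Build and run one additional sample from the rsync'd copy of the samples.
# The sample name is only an example; list ~/cuda_samples to see the
# directories actually present in your copy.
SAMPLE_DIR=~/cuda_samples/0_Simple/vectorAdd
if [ -d "$SAMPLE_DIR" ]; then
    (cd "$SAMPLE_DIR" && make && ./vectorAdd)
else
    echo "no such sample directory: $SAMPLE_DIR"
fi
```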

Latest revision as of 20:39, 26 February 2024
