CUDA

From UMIACS

[http://en.wikipedia.org/wiki/CUDA CUDA] is a parallel computing platform and programming model invented by NVIDIA. It enables dramatic increases in computing performance by harnessing the power of the graphics processing unit (GPU).
  
==Prerequisites==
* NVIDIA GPU device
* NVIDIA Driver
  
{{Note | If you are unsure if your device is CUDA capable, please [[HelpDesk |contact staff]].}}
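If you would like to check from a shell first, something like the following works on most Linux hosts (a sketch; <code>check_nvidia_gpu</code> is a hypothetical helper name, and <code>lspci</code> may require the pciutils package):

```shell
# Hypothetical helper: looks for an NVIDIA device in the PCI listing.
# lspci comes from pciutils; the fallback message covers hosts without it.
check_nvidia_gpu() {
  if command -v lspci >/dev/null 2>&1 && lspci 2>/dev/null | grep -qi nvidia; then
    echo "NVIDIA device found"
  else
    echo "No NVIDIA device detected on this host"
  fi
}

check_nvidia_gpu
```

Note that this only shows whether NVIDIA hardware is present; whether a particular model is CUDA capable is a separate question, so contact staff if unsure.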
  
==Getting Started==
  
# Load the CUDA environment variables via [[Modules | GNU Modules]]
#* Multiple versions are available. See the modules documentation and <code>module list cuda</code> for more information.
#: <pre>module load cuda</pre>
# Obtain a copy of the CUDA samples:
#: <pre>rsync -a /opt/common/cuda/$CUDA Version/samples/ ~/cuda_samples</pre>
# Build and run the device query:
#: <pre>cd ~/cuda_samples/1_Utilities/deviceQuery/ && make && ./deviceQuery</pre>
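For reference, the numbered steps above can be collected into a single script. The sketch below only echoes each command rather than executing it, since the sample path (including the CUDA version) varies by host:

```shell
# Dry-run sketch of the Getting Started steps; run() prints each command
# instead of executing it. Remove the echo to run the steps for real.
run() { echo "+ $*"; }

steps() {
  run module load cuda
  run rsync -a /opt/common/cuda/11.6/samples/ ~/cuda_samples  # 11.6 is only an example version
  run cd ~/cuda_samples/1_Utilities/deviceQuery
  run make
  run ./deviceQuery
}

steps
```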
  
Assuming the deviceQuery compilation completed without error, you should now see output listing the details of the GPUs in your system. If desired, you can compile additional samples by switching to their respective directories and running <code>make</code>.
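As a sketch of that build pattern (the <code>vectorAdd</code> path below matches the stock CUDA samples layout for older toolkits, so the exact directory name may differ on your system):

```shell
# Build and run one sample directory; assumes the binary name matches the
# directory name, which holds for the stock CUDA samples (e.g. vectorAdd).
build_sample() {
  dir="$1"
  if [ -d "$dir" ]; then
    (cd "$dir" && make && "./$(basename "$dir")")
  else
    echo "sample directory not found: $dir"
  fi
}

build_sample ~/cuda_samples/0_Simple/vectorAdd  # example path; varies by toolkit version
```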

Latest revision as of 18:14, 2 March 2022
