CUDA

From UMIACS
[http://en.wikipedia.org/wiki/CUDA CUDA] is a parallel computing platform and programming model invented by NVIDIA. It enables dramatic increases in computing performance by harnessing the power of the graphics processing unit (GPU). CUDA requires a supported NVIDIA GPU and driver; UMIACS has a number of facilities and labs with CUDA-capable hardware available.


=CUDA Software on Linux=
==Prerequisites==
* NVIDIA GPU device
* NVIDIA Driver


==Getting Started==
{{Note | If you are unsure if your device is CUDA capable, feel free to contact staff@umiacs.umd.edu}}

First, check that the NVIDIA driver is running by running '''cat /proc/driver/nvidia/version'''.  If you do not see output like the example below, contact staff to determine whether your hardware is CUDA capable; if it is, we will add the driver to the machine.  You must have driver version '''319.30''' or greater for CUDA 5.5.22.

<pre>
# cat /proc/driver/nvidia/version
NVRM version: NVIDIA UNIX x86_64 Kernel Module  331.20  Wed Oct 30 17:43:35 PDT 2013
GCC version:  gcc version 4.4.7 20120313 (Red Hat 4.4.7-3) (GCC)
</pre>

# Load the CUDA environment variables via [[Modules | GNU Modules]].  This loads the latest version into your environment.  Multiple versions are available; see the modules documentation and <code>module avail cuda</code> for the full list.
#: <pre> module load cuda</pre>
# Obtain a copy of the CUDA samples (set <code>$CUDA_VERSION</code> to the version you loaded, e.g. <code>cuda-5.5.22</code>):
#: <pre>rsync -a /opt/common/cuda/$CUDA_VERSION/samples/ ~/cuda_samples</pre>
# Build and run the device query:
#: <pre> cd ~/cuda_samples/1_Utilities/deviceQuery/ && make && ./deviceQuery</pre>

Assuming the deviceQuery compilation completed without error, you should now see output listing the details of the GPUs in your system.  If desired, you can compile additional samples by switching to their respective directories and running <code>make</code>.

==RHEL==
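The driver-version requirement described in Getting Started can also be checked from a script. This is a minimal sketch, assuming the NVRM output format shown there; the <tt>driver_version</tt> helper and the <tt>VERSION_FILE</tt> override are illustrative, not an existing UMIACS tool:

```shell
#!/bin/sh
# Minimal sketch: compare the running NVIDIA driver version against the
# minimum required for CUDA 5.5.22.  VERSION_FILE defaults to the /proc
# path used above; it can be overridden for testing.
VERSION_FILE=${VERSION_FILE:-/proc/driver/nvidia/version}
MIN=319.30

driver_version() {
  # Pull the first X.Y token from the "NVRM version:" line.
  awk '/NVRM version/ { for (i = 1; i <= NF; i++)
         if ($i ~ /^[0-9]+\.[0-9]+$/) { print $i; exit } }' "$1" 2>/dev/null
}

ver=$(driver_version "$VERSION_FILE")
if [ -z "$ver" ]; then
  echo "NVIDIA driver not loaded; contact staff@umiacs.umd.edu"
elif [ "$(printf '%s\n%s\n' "$MIN" "$ver" | sort -V | head -n1)" = "$MIN" ]; then
  echo "driver $ver is new enough for CUDA 5.5.22"
else
  echo "driver $ver is older than $MIN; ask staff to upgrade"
fi
```

The version comparison uses GNU <tt>sort -V</tt>, which is available on both RHEL and Ubuntu.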
 
===Cuda 5.5===
 
To get started you might want to build and test with the CUDA GPU Computing SDK.  In CUDA 5 the SDK is no longer separate; instead, you make a copy of the samples directory inside the CUDA toolkit directory.  You can do this by running one of the following commands.
* RHEL5 - <tt>rsync -a /opt/stow/cuda/cuda-5.5.22/samples ~/cuda_samples</tt>
* RHEL6 - <tt>rsync -a /opt/common/cuda/cuda-5.5.22/samples ~/cuda_samples</tt>
 
You should now be able to run <tt>make</tt> in the <tt>~/cuda_samples</tt> directory to compile all the examples.  Please note that you will need to load an MPI module for all the examples to compile correctly.  On RHEL6 you can run <tt>module load openmpi-x86_64</tt>; on RHEL5 you will need to have the appropriate MPI includes/libraries in your environment yourself.  Alternatively, you can compile specific samples by going into their respective directories and typing <tt>make</tt>.
 
'''Note''': Compiling requires the <tt>LIBRARY_PATH</tt> environment variable to tell GCC where to find the NVIDIA shared libraries.  If you are using an alternate compiler, please consult its documentation for how to modify the compile-time linker path.
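Beyond the SDK samples, a trivial standalone program is a quick way to confirm that <tt>nvcc</tt> and the linker paths are working. This is a minimal sketch, not part of the SDK; the file name <tt>hello.cu</tt> is arbitrary:

```shell
#!/bin/sh
# Minimal sketch: compile and run a trivial CUDA program outside the
# samples tree.  Assumes "module load cuda" has already put nvcc on
# PATH and set the linker paths.
cat > hello.cu <<'EOF'
#include <cstdio>

__global__ void kernel(void) { }

int main(void) {
    kernel<<<1, 1>>>();          // launch a do-nothing kernel
    cudaDeviceSynchronize();     // wait for it to finish
    printf("kernel launched\n");
    return 0;
}
EOF

if command -v nvcc >/dev/null 2>&1; then
  nvcc -o hello hello.cu && ./hello
else
  echo "nvcc not on PATH; run 'module load cuda' first"
fi
```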
 
===Cuda 6.0===
 
You need to have driver version 331.62 or later installed to use CUDA 6.0 (see above for how to find your driver version); ask staff to upgrade if yours is older.  CUDA 6.0 is currently only available on RHEL6 x86_64.
 
  module load cuda/6.0.37
 
You can then make a copy of the samples to test with.  The same caveats apply as with the 5.5 samples above.
 
* RHEL6 - <tt>rsync -a /opt/common/cuda/cuda-6.0.37/samples ~/cuda6_samples</tt>
 
==Ubuntu 12.04==
First you will need to ensure you have the updated x-updates repository added to your workstation.  Please contact staff@umiacs.umd.edu and we will help get this in place for your workstation.
 
The other component is the toolkit.  On Ubuntu, software that is not compiled locally is made available in /opt/common; this includes the CUDA toolkit.  You can find all the available versions in '''/opt/common/cuda'''.
 
Cuda 4.2.9 and higher are tested on Ubuntu.
 
<pre>
$ ls /opt/common/cuda
cudatoolkit-3.2.16  cudatoolkit-4.0.17  cudatoolkit-4.1.28 cudatoolkit-4.2.9
</pre>
 
* bash/sh
** export PATH=/opt/common/cuda/cudatoolkit-4.2.9/bin:${PATH}
** export LD_LIBRARY_PATH=/opt/common/cuda/cudatoolkit-4.2.9/lib64:/opt/common/cuda/cudatoolkit-4.2.9/lib:${LD_LIBRARY_PATH}
* tcsh/csh
** setenv PATH /opt/common/cuda/cudatoolkit-4.2.9/bin:${PATH}
** setenv LD_LIBRARY_PATH /opt/common/cuda/cudatoolkit-4.2.9/lib64:/opt/common/cuda/cudatoolkit-4.2.9/lib:${LD_LIBRARY_PATH}
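After setting these variables you can sanity-check the environment before installing the SDK. A minimal sketch (bash/sh); the checks are illustrative only:

```shell
#!/bin/sh
# Minimal sketch: verify that the cudatoolkit-4.2.9 exports above took
# effect in the current shell.
TOOLKIT=/opt/common/cuda/cudatoolkit-4.2.9

case ":$PATH:" in
  *:"$TOOLKIT/bin":*) echo "PATH includes $TOOLKIT/bin" ;;
  *)                  echo "$TOOLKIT/bin is not on PATH; re-check the exports" ;;
esac

# nvcc --version prints the toolkit release if everything is in place.
command -v nvcc >/dev/null 2>&1 && nvcc --version || echo "nvcc not found; re-check the exports"
```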
 
To get started you might want to build and test with the GPU Computing SDK.  You can do this by running '''/opt/common/cuda/cudatoolkit-4.2.9/gpucomputingsdk_4.2.9_linux.run''', which will prompt you for where you want to install the SDK.
 
Once it is installed, apply this patch from the directory into which you installed the SDK.
 
<pre>
patch -p1 < /opt/common/cuda/cudatoolkit-4.2.9/UMIACS-CUDA-SDK-Ubuntu-12.04.diff
</pre>
 
You should now be able to run '''make''' and compile all the examples.

Revision as of 17:00, 1 February 2016
