CUDA
CUDA (Compute Unified Device Architecture) is a parallel computing platform and API that allows software to use NVIDIA graphics processing units (GPUs) for accelerated general-purpose processing, significantly broadening their utility in scientific and high-performance computing.
Using CUDA
Load the module to make the CUDA compiler and libraries available:
module load gcc cuda
Check that it loaded correctly:
nvcc --version
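To confirm the toolchain works end to end, you can also compile and run a small test program. The listing below is only a sketch: the file name hello.cu is arbitrary, and it assumes the cuda module is loaded and that you are on a node with a GPU.

// hello.cu - minimal CUDA test program (example file name)
#include <cstdio>

__global__ void hello()
{
    // Each GPU thread prints its index
    printf("Hello from GPU thread %d\n", threadIdx.x);
}

int main()
{
    hello<<<1, 4>>>();        // launch one block of four threads
    cudaDeviceSynchronize();  // wait for the kernel to finish before exiting
    return 0;
}

Compile and run it with:
nvcc hello.cu -o hello
./hello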
The full NVIDIA HPC Software Development Kit (SDK) is also available.
Self Install
If your application requires a version of CUDA that is not available through modules, it is possible to install your own.
- Download the .run variant from NVIDIA's website:
  wget https://developer.download.nvidia.com/compute/cuda/10.2/Prod/local_installers/cuda_10.2.89_440.33.01_linux_ppc64le.run
- Extract only the CUDA components:
  ./cuda_10.2.89_440.33.01_linux_ppc64le.run --toolkit --installpath=/gpfs/u/home/PROJ/PROJname/barn/cuda_10.2/extract
- Continue when prompted, then unselect all of the driver-related components, leaving only the CUDA components, and install.
- Set environment variables: add the following lines to your shell configuration:
  export PATH=/gpfs/u/home/PROJ/PROJname/barn/cuda_10.2/extract/bin:$PATH
  export LD_LIBRARY_PATH=/gpfs/u/home/PROJ/PROJname/barn/cuda_10.2/extract/cuda/lib64:$LD_LIBRARY_PATH
  Alternatively, add these lines to your sbatch script so that your job uses your CUDA install at runtime (see the sketch after this list).
- Verify your install: run
  nvcc --version
  to confirm the install worked and the version is correct.
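If you take the sbatch route, a minimal job script might look like the sketch below. The #SBATCH directives and the final program name (my_cuda_program) are placeholders to adapt to your own job, and the install path simply repeats the example path used above.

#!/bin/bash
# Add your usual #SBATCH directives here (partition, GPU request, time limit, etc.)

# Point this job at the self-installed CUDA toolkit at runtime
export PATH=/gpfs/u/home/PROJ/PROJname/barn/cuda_10.2/extract/bin:$PATH
export LD_LIBRARY_PATH=/gpfs/u/home/PROJ/PROJname/barn/cuda_10.2/extract/cuda/lib64:$LD_LIBRARY_PATH

nvcc --version      # should report the self-installed version
./my_cuda_program   # placeholder for the program you actually want to run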
Thanks to Arpon Biswas for testing this.