CUDA

CUDA (Compute Unified Device Architecture) is a parallel computing platform and API that lets software use NVIDIA graphics processing units (GPUs) for accelerated general-purpose processing, significantly broadening their utility in scientific and high-performance computing.

Using CUDA

Load the module to make the CUDA compiler and libraries available:

module load gcc cuda

Check that it loaded correctly:

nvcc --version
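
As an optional further check, you can compile and run a trivial kernel on a GPU node. The sketch below is illustrative only: the file name hello.cu is arbitrary, and the srun GPU request (--gres=gpu:1) plus any partition or time flags depend on your cluster's Slurm configuration.

# Create a minimal CUDA source file
cat > hello.cu <<'EOF'
#include <cstdio>

__global__ void hello() {
    printf("Hello from GPU thread %d\n", threadIdx.x);
}

int main() {
    hello<<<1, 4>>>();           // launch one block of four threads
    cudaDeviceSynchronize();     // wait for the kernel (and its printf) to finish
    return 0;
}
EOF

# Compile, then run on a GPU node (Slurm flags are site-dependent)
nvcc hello.cu -o hello
srun --gres=gpu:1 ./hello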

The full NVIDIA HPC Software Development Kit (SDK) is also available.
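
The SDK is most likely exposed through the module system as well; the exact module name varies, so list what is installed first. In the sketch below, nvhpc is only a placeholder name, not necessarily what this system uses.

# List available modules and look for the NVIDIA HPC SDK entry
module avail

# Load it using whatever name the listing shows (nvhpc here is a placeholder)
module load nvhpc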

Self Install

If your application requires a version of CUDA that is not available through modules, it is possible to install your own.

  1. Download the .run installer from NVIDIA's website:

    wget https://developer.download.nvidia.com/compute/cuda/10.2/Prod/local_installers/cuda_10.2.89_440.33.01_linux_ppc64le.run

  2. Make the installer executable if needed (chmod +x), then extract only the CUDA toolkit components:

    ./cuda_10.2.89_440.33.01_linux_ppc64le.run --toolkit --installpath=/gpfs/u/home/PROJ/PROJname/barn/cuda_10.2/extract

  3. Continue when prompted, then deselect all of the driver-related components, leaving only the CUDA toolkit selected, and install.

  4. Set environment variables: add the following lines to your shell configuration (for example, ~/.bashrc):

    export PATH=/gpfs/u/home/PROJ/PROJname/barn/cuda_10.2/extract/bin:$PATH
    export LD_LIBRARY_PATH=/gpfs/u/home/PROJ/PROJname/barn/cuda_10.2/extract/lib64:$LD_LIBRARY_PATH

    Alternatively, add these lines to your sbatch script so that your job picks up your CUDA install at runtime; a sketch of such a script follows this list.

  5. Verify your install:

    Run nvcc --version (with the environment variables from step 4 in effect) to confirm that the install worked and that it reports the expected version.
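
For reference, here is a minimal sketch of an sbatch script that points a job at the self-installed toolkit, as mentioned in step 4. The job name, GPU request, time limit, and application command are placeholders; only the two export lines come from the instructions above.

#!/bin/bash
#SBATCH --job-name=cuda-selfinstall   # placeholder job name
#SBATCH --gres=gpu:1                  # GPU request; adjust for your cluster
#SBATCH --time=00:10:00               # placeholder time limit

# Point the job at the self-installed toolkit (same paths as step 4)
export PATH=/gpfs/u/home/PROJ/PROJname/barn/cuda_10.2/extract/bin:$PATH
export LD_LIBRARY_PATH=/gpfs/u/home/PROJ/PROJname/barn/cuda_10.2/extract/lib64:$LD_LIBRARY_PATH

# Confirm the job sees the private install, then run your application
which nvcc
nvcc --version
./your_application                    # placeholder for your actual program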

Thanks to Arpon Biswas for testing this.