If you don’t need to recompile code, you typically don’t need to install CUDA/cuDNN separately. Frameworks come with precompiled CUDA support, and the framework version corresponds to a specific CUDA version. Therefore, you only need to focus on the framework version and not the CUDA version independently.
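For example, the CUDA version a framework was built against can be printed directly (a sketch assuming PyTorch is the installed framework; other frameworks expose similar attributes):

```bash
# Print the CUDA version PyTorch was compiled with (assumes PyTorch is installed)
python3 -c "import torch; print(torch.version.cuda)"
```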
Check Default CUDA/cuDNN Version
The CUDA version displayed by the `nvidia-smi` command only indicates the highest CUDA version the driver supports, not the version of CUDA actually installed on the instance. To see what is installed, check the installation directories (by default under /usr/local/):
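A minimal sketch, assuming CUDA toolkits live in the conventional /usr/local/cuda* directories:

```bash
# Each cuda-X.Y directory under /usr/local/ is an installed CUDA toolkit
ls /usr/local/ | grep cuda
```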
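The cuDNN version can be read from the shared-library filenames (an assumed sketch; the library path may differ on your image):

```bash
# The number after .so is the cuDNN version, e.g. libcudnn.so.8.2.1
ls /usr/local/cuda/lib64 | grep libcudnn.so
```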
The version number appears after `.so` in the output. If you installed CUDA via conda, you can check it using the following commands:
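A sketch of the conda check (assumes conda is on your PATH):

```bash
# List the conda-installed CUDA/cuDNN packages and their versions
conda list | grep cudatoolkit
conda list | grep cudnn
```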
Install Other Versions of CUDA/cuDNN
Method 1: Install using Conda
Advantages: simple and convenient. Disadvantages: header files are typically not included, so if you need to compile code, use Method 2 instead.
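For example (a sketch; the version shown is an assumption, so substitute the version your framework requires):

```bash
# Install a specific CUDA toolkit and cuDNN into the current conda environment
conda install cudatoolkit=11.3 cudnn
```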
Not sure which version numbers are available? Search for them:
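A sketch of the search (results depend on your configured conda channels):

```bash
# List the cudatoolkit and cudnn versions available in your channels
conda search cudatoolkit
conda search cudnn
```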
Method 2: Download and Install the Official Packages
CUDA Download Address: CUDA Toolkit Archive
Download the .run installation package, then execute it to install:
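A hedged sketch (the filename below is a placeholder; use the name of the installer you actually downloaded):

```bash
# Make the downloaded installer executable, then run it with root privileges
chmod +x cuda_11.3.0_465.19.01_linux.run   # example filename; yours will differ
sudo sh cuda_11.3.0_465.19.01_linux.run
```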
cuDNN Download Address: CUDA Deep Neural Network
- Unzip the downloaded file.
- Move the dynamic libraries and header files to the corresponding directories:
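Assuming the archive unpacks into a cuda/ directory (as official cuDNN tarballs conventionally do), the copy step looks like:

```bash
# Copy cuDNN headers and libraries into the CUDA installation, then make them readable
sudo cp cuda/include/cudnn*.h /usr/local/cuda/include/
sudo cp cuda/lib64/libcudnn* /usr/local/cuda/lib64/
sudo chmod a+r /usr/local/cuda/include/cudnn*.h /usr/local/cuda/lib64/libcudnn*
```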
- After installation, add the environment variables:
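A sketch of the typical environment setup (append these lines to ~/.bashrc, or run them in the current shell, so the new CUDA is found first):

```bash
# Put the new CUDA's binaries and libraries ahead of any existing installation
export PATH=/usr/local/cuda/bin:$PATH
export LD_LIBRARY_PATH=/usr/local/cuda/lib64:$LD_LIBRARY_PATH
```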
The default image includes only basic versions of CUDA and cuDNN. If you have installed `cudatoolkit` via conda, it will generally take precedence over the default installation.