All platform images run Ubuntu; most use Ubuntu 18.04, and a few use Ubuntu 20.04.
For RTX 5090 and PRO 6000 GPUs (Blackwell architecture), use PyTorch 2.8.0 or later (stable) or a recent nightly build for proper GPU support, multi-GPU training, and best performance. Older stable releases may run slowly or with limited features.
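As a quick sanity check, the minimum-version requirement above can be encoded in a few lines of Python. The helper below is an illustrative sketch (the function name and version strings are ours, not a platform API); in practice you would pass it the value of `torch.__version__`.

```python
def meets_blackwell_minimum(version: str, minimum=(2, 8, 0)) -> bool:
    """Return True if a PyTorch version string satisfies the 2.8.0+ requirement.

    Handles local build suffixes such as '2.8.0+cu128' and pre-release tags
    such as '2.9.0.dev20250101' by keeping only the leading numeric fields.
    """
    core = version.split("+", 1)[0]   # drop '+cu128'-style suffixes
    parts = []
    for field in core.split("."):
        if field.isdigit():
            parts.append(int(field))
        else:
            break                     # stop at 'dev20250101' and similar tags
    return tuple(parts[:3]) >= minimum

# In practice:
#   import torch
#   meets_blackwell_minimum(torch.__version__)
print(meets_blackwell_minimum("2.8.0+cu128"))  # True: meets the minimum
print(meets_blackwell_minimum("1.9.0"))        # False: too old for Blackwell
```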
Additionally, the initial startup of a Community Image may take a considerable amount of time (potentially over an hour). Please wait for the system to finish initializing.
| Framework | Framework Version | Python Version | CUDA Version |
| --- | --- | --- | --- |
| PyTorch | 1.1.0 | 3.7 | 10.0 |
| PyTorch | 1.5.1 | 3.8 | 10.1 |
| PyTorch | 1.6.0 | 3.8 | 10.1 |
| PyTorch | 1.7.0 | 3.8 | 11.0 |
| PyTorch | 1.8.1 | 3.8 | 11.1 |
| PyTorch | 1.9.0 | 3.8 | 11.1 |
| PyTorch | 1.10.0 | 3.8 | 11.3 |
| PyTorch | 1.11.0 | 3.8 | 11.3 |
| PyTorch | 2.0.0 | 3.8 | 11.8 |
| PyTorch | 2.1.0 | 3.10 | 12.1 |
| PyTorch | 2.1.2 | 3.10 | 11.8 |
| PyTorch | 2.3.0 | 3.12 | 12.1 |
| PyTorch | 2.5.1 | 3.12 | 12.4 |
| PyTorch | 2.7.0 | 3.12 | 12.8 |
| PyTorch | 2.8.0 | 3.12 | 12.8 |
| TensorFlow | 1.15.5 | 3.8 | 11.4 |
| TensorFlow | 2.5.0 | 3.8 | 11.2 |
| TensorFlow | 2.9.0 | 3.8 | 11.2 |
| Miniconda | conda3 | 3.7 | 9.0 |
| Miniconda | conda3 | 3.8 | 10.1 |
| Miniconda | conda3 | 3.8 | 10.2 |
| Miniconda | conda3 | 3.8 | 11.1 |
| Miniconda | conda3 | 3.8 | 11.3 |
| Miniconda | conda3 | 3.8 | 11.3 (cudagl) |
| Miniconda | conda3 | 3.8 | 11.6 |
| Miniconda | conda3 | 3.8 | 11.8 |
| Miniconda | conda3 | 3.10 | 11.8 |
| tritonserver | 24.12 | 3.12 | 12.6 |
| JAX | 0.3.10 | 3.8 | 11.1 |
| PaddlePaddle | 2.2.0 | 3.8 | 11.2 |
| PaddlePaddle | 2.4.0 | 3.8 | 11.2 |
| TensorRT | 8.5.1 | 3.8 | 11.8 |
| TensorRT | 8.6.1 | 3.8 | 11.8 |
| Gromacs | 2022.2 | 3.8 | 11.4 |
| Gromacs | 2023.2 | 3.10 | 11.8 |
  1. First, check if the platform’s pre-installed images include the required versions of PyTorch, TensorFlow, or other frameworks. If available, prioritize using the platform’s built-in images.
  2. If the platform does not have the desired framework versions, determine the required CUDA version for your framework. For example, PyTorch 1.9.0 requires CUDA 11.1. You can then select a platform image with Miniconda and CUDA 11.1 pre-installed. This allows you to install the required framework without the hassle of setting up cudatoolkit. (The pre-installed CUDA on the platform includes .h header files, which is more convenient if you need to compile code.)
  3. If neither of the above conditions is met, you can choose any Miniconda image and install the required frameworks, CUDA, or even other Python versions after the instance is started.
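The three steps above can be sketched as a small lookup. The data below is only an illustrative subset of the compatibility table (the dictionaries and the `pick_image` function are ours, not a platform API):

```python
# Illustrative subsets of the compatibility table above (not exhaustive).
BUILTIN_PYTORCH = {"1.9.0", "2.5.1", "2.8.0"}   # versions with pre-installed images
PYTORCH_REQUIRED_CUDA = {
    "1.8.1": "11.1",
    "1.9.0": "11.1",
    "1.13.0": "11.7",
    "2.5.1": "12.4",
    "2.8.0": "12.8",
}
# CUDA versions available in the platform's Miniconda images.
MINICONDA_CUDA = {"9.0", "10.1", "10.2", "11.1", "11.3", "11.6", "11.8"}

def pick_image(pytorch_version: str) -> str:
    # Step 1: prefer a built-in image with the framework pre-installed.
    if pytorch_version in BUILTIN_PYTORCH:
        return f"built-in PyTorch {pytorch_version} image"
    # Step 2: otherwise, pick a Miniconda image whose pre-installed CUDA
    # matches what the framework requires.
    cuda = PYTORCH_REQUIRED_CUDA.get(pytorch_version)
    if cuda in MINICONDA_CUDA:
        return f"Miniconda image with CUDA {cuda} (then install PyTorch {pytorch_version})"
    # Step 3: fall back to any Miniconda image and install CUDA and the
    # framework manually after the instance starts.
    return "any Miniconda image (install the framework and CUDA yourself)"
```

For example, `pick_image("1.8.1")` falls through to step 2, since CUDA 11.1 is available as a Miniconda image, while a version with no matching CUDA image lands on step 3.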

About 3rd-Party Container Registries

At present, GPUHub DOES NOT support deploying containers from Docker images hosted on 3rd-party container registries, including but not limited to Docker Hub, GitHub Container Registry, and GitLab Container Registry. This limitation is based on the following considerations:
  1. Security and Compliance
Images hosted on third-party registries vary widely in origin and update behavior, making it difficult for the platform to perform consistent security review and risk control. To ensure platform security and data protection, GPUHub currently supports only verified image deployment mechanisms.
  2. Runtime Environment Consistency
Differences in image build standards, dependency management, and runtime configurations across registries may lead to compatibility issues within GPUHub’s execution environment. Restricting image sources helps maintain system stability and consistency.
  3. Operational and Support Complexity
Supporting third-party registries would significantly increase troubleshooting complexity related to image pulling, authentication, network access, and runtime errors, potentially impacting support efficiency and service quality.

GPUHub continues to evaluate more flexible and secure image management solutions and may introduce support for third-party registries in the future, subject to security and stability requirements.