# GPUhub Documentation

## Docs

- [Authentication](https://docs.gpuhub.com/api-reference/api-authentication.md): Create a token in the GPUhub Console and use it for authentication.
- [Elastic Deployment API](https://docs.gpuhub.com/api-reference/api-elastic-deployment.md): A simple, token-authenticated REST API that lets any user programmatically create, scale, monitor, and manage pay-per-second GPU containers for AI/ML workloads.
- [Obtain Balances](https://docs.gpuhub.com/api-reference/api-obtain-balances.md): Authenticate to the GPUhub API to obtain the current account balance and related information.
- [Switch Dedicated NFS / File Storage](https://docs.gpuhub.com/api-reference/api-switch-nfs-fs.md): Switch between Dedicated NFS storage and standard file storage for a specified data center.
- [Computational Precision](https://docs.gpuhub.com/best-practices/computational-precision.md): Significant computational precision errors may be caused by the TF32 numerical type introduced with NVIDIA's Ampere-architecture GPUs; frameworks such as PyTorch may enable TF32 computation automatically.
- [Cost-Saving Tips](https://docs.gpuhub.com/best-practices/cost-saving.md): Tips for saving money on GPUhub by using no-GPU mode, adjusting GPU configurations, automating shutdown, and more.
- [Exposing Multiple Services](https://docs.gpuhub.com/best-practices/exposing-multiple-services.md): Custom services on GPUhub currently expose only a single port. To expose multiple HTTP services from different containers, use a proxy such as Nginx to handle routing and forwarding.
- [FileZilla](https://docs.gpuhub.com/best-practices/filezilla.md): To upload and download files and folders from the instance more conveniently, use FileZilla: it is free and available on all platforms (Mac, Linux, Windows).
- [Gromacs](https://docs.gpuhub.com/best-practices/gromacs.md): GROMACS is a molecular dynamics package used for researching biological molecular systems.
- [HuggingFace](https://docs.gpuhub.com/best-practices/huggingface.md): The platform where the machine learning community collaborates on models, datasets, and applications.
- [Linux Basics](https://docs.gpuhub.com/best-practices/linux-basics.md): Instances rented on GPUhub run Ubuntu Linux by default, so familiarity with basic Linux commands is essential for training models.
- [MPI](https://docs.gpuhub.com/best-practices/mpi.md): Instructions for using MPI on Ubuntu 20.04.
- [Open Ports](https://docs.gpuhub.com/best-practices/open-ports.md): How to expose custom services on GPUhub instances.
- [OpenCL](https://docs.gpuhub.com/best-practices/opencl.md): OpenCL is a framework for parallel general-purpose computing on GPUs and other compliant hardware accelerators.
- [Performance Tips](https://docs.gpuhub.com/best-practices/performance.md): Performance optimization tips for GPU and CPU usage, code checks, and troubleshooting common issues in machine learning tasks.
- [PyCharm](https://docs.gpuhub.com/best-practices/pycharm.md): Learn about developing remotely using PyCharm.
- [R (RStudio)](https://docs.gpuhub.com/best-practices/rstudio.md): Due to network issues associated with installing RStudio Server, these instructions cover only the installation of the R language.
- [SSH Tunnel](https://docs.gpuhub.com/best-practices/ssh-tunnel.md): SSH tunneling can proxy a port of the instance to your local machine, or proxy a local port to the instance.
- [TensorBoard](https://docs.gpuhub.com/best-practices/tensorboard.md): TensorBoard is a visualization toolkit for machine learning experiments.
- [Visdom](https://docs.gpuhub.com/best-practices/visdom.md): Visdom is a visualization tool that generates rich visualizations of live data, helping researchers and developers stay on top of scientific experiments run on remote servers.
- [VS Code](https://docs.gpuhub.com/best-practices/vscode.md): Learn about developing remotely using VS Code.
- [Vulkan](https://docs.gpuhub.com/best-practices/vulkan.md): In machine learning applications, Vulkan can accelerate image processing, data augmentation, and other tasks that require large amounts of parallel computation.
- [Xshell](https://docs.gpuhub.com/best-practices/xshell.md): Xshell is a powerful and convenient remote management tool for Windows.
- [Billing & Pricing](https://docs.gpuhub.com/billing-recharge/billing.md): GPUhub billing and management guide.
- [Invoice Application](https://docs.gpuhub.com/billing-recharge/invoice.md): A receipt is sent automatically once your recharge is complete; if you also need a PDF invoice, provide the details below and we will generate it for you.
- [Recharge](https://docs.gpuhub.com/billing-recharge/recharge.md): Currently, GPUhub supports online payment by bank card only.
- [Change Billing Mode](https://docs.gpuhub.com/container-instance/change-billing-model.md): Pay-as-you-go mode and subscription mode (monthly/yearly).
- [JupyterLab](https://docs.gpuhub.com/container-instance/jupyterlab.md): A guide to using JupyterLab in GPUhub, including its working directory and basic features, to help users work more effectively on data analysis and programming tasks.
- [Migrate Instance](https://docs.gpuhub.com/container-instance/migrate-instance.md): The local data involved in instance migration includes data on the **System Disk and the Data Disk**, both of which can be migrated.
- [Migrate Instance (Same Region)](https://docs.gpuhub.com/container-instance/migrate-instance-sr.md): Refer to this document to migrate instances within the same region; the operation is simple and the migration is fast.
- [Multi-Node Multi-GPU Parallelism](https://docs.gpuhub.com/container-instance/multi-node-multi-gpu.md): This guide recommends single-node multi-GPU setups for efficiency and provides troubleshooting tips for multi-node parallel computing with GPUs.
- [Overview](https://docs.gpuhub.com/container-instance/overview.md): A **Cloud GPU Container Instance** is a container that uses Docker technology for resource allocation and isolation, offering lower performance overhead and higher efficiency than virtual machine instances.
- [Remote Desktop](https://docs.gpuhub.com/container-instance/remote-desktop.md): A guide to setting up VNC for remote GUI access on GPUhub instances without needing a full desktop environment.
- [Reset System](https://docs.gpuhub.com/container-instance/reset-system.md): Resetting the system or changing the image restores the instance to its initial state.
- [Run in Background](https://docs.gpuhub.com/container-instance/run-in-background.md): Several ways to run your program as a daemon.
- [Save Image](https://docs.gpuhub.com/container-instance/save-image.md): Save the [system disk](/container-instance/overview#about-system-disk) (i.e., the system environment) of a container instance as an image for use by other instances, avoiding the need to reconfigure the environment repeatedly.
- [Scale Configuration](https://docs.gpuhub.com/container-instance/scale-configuration.md): Only instances in Pay-as-you-go billing mode support configuration adjustments.
- [SSH](https://docs.gpuhub.com/container-instance/ssh.md): SSH (Secure Shell) remote connection.
- [Compress / Decompress](https://docs.gpuhub.com/data/compress-decompress.md): Guidance on compressing and decompressing files using various formats and tools on GPUhub instances.
- [Download Data](https://docs.gpuhub.com/data/download-data.md): Various methods for downloading data from instances.
- [File Storage](https://docs.gpuhub.com/data/file-storage.md): Store your important data and code here, including models, datasets, and more, and access them across all your instances.
- [Local Data Disk](https://docs.gpuhub.com/data/local-data-disk.md): This document explains Local Data Disks, covering capacity, resizing, and billing details.
- [Overview](https://docs.gpuhub.com/data/overview.md): This document outlines data retention policies, data transfer methods, and public dataset usage for GPUhub instances.
- [3rd-Party Cloud Drives](https://docs.gpuhub.com/data/thirdparty-drives.md): Mount third-party cloud drives and file storage as local disks (simple, secure, and free).
- [Upload Data](https://docs.gpuhub.com/data/upload-data.md): Various methods for uploading data to instances.
- [Overview](https://docs.gpuhub.com/elastic-deployment/elastic-deployment.md): GPUhub Elastic Deployment GPU is a pay-per-second, on-demand service that instantly launches and manages GPU containers with zero maintenance.
Ideal for enterprise AIGC and large-scale inference: just pick an image and GPU spec, and we handle provisioning, startup, scaling, and lifecycle in seconds.
- [Best Practices](https://docs.gpuhub.com/elastic-deployment/elastic-deployment-best.md): Best practices covering efficient deployments, service discovery and load balancing, container management, and features like container reuse for faster startups and lower costs.
- [Monitoring](https://docs.gpuhub.com/elastic-deployment/elastic-deployment-monitoring.md): Actively retrieve monitoring data via the API from inside the container.
- [Update Log](https://docs.gpuhub.com/elastic-deployment/elastic-deployment-updatelog.md): The Elastic Deployment Service was officially released to enterprise customers on Jan 6, 2026.
- [Why Not Serverless](https://docs.gpuhub.com/elastic-deployment/why-not-serverless.md): Why choose elastic containers over pure serverless.
- [CUDA / cuDNN](https://docs.gpuhub.com/environment/cuda-cudnn.md): Install CUDA/cuDNN.
- [Dependency Installation](https://docs.gpuhub.com/environment/dependency-installation.md): In GPUhub, all data, including installed dependencies, is retained after shutdown; there is no need to reinstall anything after a restart.
- [Docker Image](https://docs.gpuhub.com/environment/image.md): How to save, load, and share custom images on GPUhub, allowing users to replicate and reuse their configured environments across instances.
- [Miniconda](https://docs.gpuhub.com/environment/miniconda.md): The platform's built-in images all come with Miniconda installed at /root/miniconda3/.
- [Overview](https://docs.gpuhub.com/environment/overview.md): The platform provides pre-installed images with specific frameworks and versions. If these do not meet your needs, see below for methods to configure other versions.
- [Python 3.X](https://docs.gpuhub.com/environment/python3-x.md): Create virtual environments with other Python versions using Miniconda.
- [PyTorch for Blackwell](https://docs.gpuhub.com/environment/pytorch280.md): Using RTX PRO 6000 / RTX 5090 GPUs with PyTorch.
- [GPU Memory Not Released](https://docs.gpuhub.com/faqs/gpu-memory-not-released.md)
- [JupyterLab Fails to Open](https://docs.gpuhub.com/faqs/jupyterlab-fails-to-open.md): Typically caused by dependency installation or network connection issues.
- [Others](https://docs.gpuhub.com/faqs/others.md): Mysteries, unfolded one question at a time.
- [SD Image Generation Memory Leak](https://docs.gpuhub.com/faqs/sd-image-generation-memory-leak.md): If memory grows continuously while generating images or switching models in the AUTOMATIC1111 SD WebUI, try the following solution.
- [SSH-Based Connection Exception](https://docs.gpuhub.com/faqs/ssh-based-connection-exception.md): If you encounter connection issues with VSCode, PyCharm, or other SSH-based tools, follow the steps below.
- [System Disk Space Insufficient](https://docs.gpuhub.com/faqs/system-disk-space-insufficient.md): First, identify which directory is occupying the system disk space.
- [Unable to Call GPU](https://docs.gpuhub.com/faqs/unable-to-call-gpu.md): Solve the issue of being unable to call the GPU.
- [VSCode Remote Connection Failed](https://docs.gpuhub.com/faqs/vscode-remote-connection-failed.md): Solving VSCode remote connection issues.
- [GPU Benchmarks](https://docs.gpuhub.com/gpu-selection/gpu-benchmarks.md): Official compute specifications alone do not give a full picture of GPU differences and quality.
- [GPU Selection](https://docs.gpuhub.com/gpu-selection/gpu-selection.md): Select the GPU that suits your purpose and scenario.
- [Introduction](https://docs.gpuhub.com/introduction.md): Welcome to the GPUhub documentation, the go-to resource for researchers and developers working with GPU-accelerated computing. Get clear, accurate, step-by-step guides to train, deploy, and optimize faster.
- [Coupons & Vouchers](https://docs.gpuhub.com/promotions/coupons-vouchers.md): How to claim and use coupons and vouchers on GPUhub, including eligibility criteria, usage modes, and maximum discount limits.
- [Membership](https://docs.gpuhub.com/promotions/membership.md): Upon registration, users receive 30 days of membership by default.
- [Student Verification](https://docs.gpuhub.com/promotions/student-verification.md): In support of science and technology education, we offer preferential policies to current students of colleges and universities.
- [Quickstart](https://docs.gpuhub.com/quickstart.md): Launch your cloud container instance in under 5 minutes.
- [Anti-Mining Policy](https://docs.gpuhub.com/terms/anti-mining-agreement.md): Developers, let not the lure of personal gain blind you to the greater good. Act with wisdom and restraint.
- [Cloud Disk Storage License & Service Agreement](https://docs.gpuhub.com/terms/cloud-disk-agreement.md): By continuing to use the GPUhub platform and accessing the cloud disk/storage service, you acknowledge that you have read, understood, and agree to be bound by the terms of this agreement.
- [Cookie Policy](https://docs.gpuhub.com/terms/cookie-policy.md): We are committed to protecting your privacy and ensuring the security of your personal information.
- [Supplementary Agreement for Customized Instance Services](https://docs.gpuhub.com/terms/custom-instance-agreement.md): By continuing to use the GPUhub platform and accessing the customized instance service, you acknowledge that you have read, understood, and agree to be bound by the terms of this agreement.
- [Privacy Policy](https://docs.gpuhub.com/terms/privacy-policy.md): We are committed to protecting your privacy and ensuring the security of your personal information.
- [Terms of Service](https://docs.gpuhub.com/terms/terms-of-service.md): By using GPUhub, you agree to our Terms of Service, which include our commitment to protecting your privacy and ensuring the security of your personal information.
- [Networks](https://docs.gpuhub.com/troubleshooting/networks.md): To reduce the cost of using instances and provide a more flexible experience, GPUhub uses a shared bandwidth scheme for instances in the same region instead of charging separately for network bandwidth and traffic.
- [Troubleshooting](https://docs.gpuhub.com/troubleshooting/troubleshooting.md): Yes. Sh*t happens.

## Optional

- [Discord](https://discord.gg/PVdwkynM75)
- [Videos](https://www.youtube.com/channel/UC2qUAYW-gmjBjs_DAZbpGlg)
- [X](https://x.com/hub_gpu)