Memory bandwidth also has a significant impact on total runtime. Here we compare different GPUs on deep-learning tasks. Test details:
  • Tested GPUs on GPUhub (single-card tests) using PyTorch 1.9.0.
  • Input was pseudo-data created with torch.zeros, so CPU preprocessing and IO did not affect the results; GPU performance was the only factor measured (a minimal reproduction sketch follows below).
  • Tested ResNet50 (many activations, making it sensitive to memory bandwidth) and a ViT Transformer (dominated by large matrix multiplications, making it sensitive to raw compute power).
  • Results include FP32 and FP16 (not mixed precision); compare whichever matches your use case.
  • GPU memory size is also key. Check the document for theoretical parameter comparisons and detailed hardware specs.
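For reference, below is a minimal sketch of how such a single-card inference benchmark could be written in PyTorch. It is not the original test script: the batch size, the number of batches per iteration, the warm-up count, and the use of timm to create the resnet50 and vit_base_patch16_224 models are all assumptions made for illustration.

>>> Benchmark sketch (assumptions noted in comments)
# Minimal single-GPU inference benchmark sketch.
# Hypothetical parameters: batch_size, batches_per_iter, warm-up count, and
# timm as the model source are assumptions; the original script is not shown.
import argparse
import time

import torch
import timm  # assumed source of the resnet50 / vit_base_patch16_224 models


def benchmark(device, model_name, precision, batch_size=64, batches_per_iter=30, iters=5):
    dtype = torch.float16 if precision == "float16" else torch.float32
    model = timm.create_model(model_name, pretrained=False)
    model = model.to(device=device, dtype=dtype).eval()

    # Pseudo-data made with torch.zeros, so CPU preprocessing and IO cost nothing.
    x = torch.zeros(batch_size, 3, 224, 224, device=device, dtype=dtype)

    with torch.no_grad():
        for _ in range(3):  # warm-up passes, excluded from timing
            model(x)
        torch.cuda.synchronize(device)
        for i in range(iters):
            start = time.time()
            for _ in range(batches_per_iter):
                model(x)
            torch.cuda.synchronize(device)
            elapsed = time.time() - start
            images = batch_size * batches_per_iter
            print(f"Iteration {i}, {images / elapsed:.2f} images/s in {elapsed:.3f}s.")


if __name__ == "__main__":
    parser = argparse.ArgumentParser()
    parser.add_argument("--device", type=int, default=0)
    parser.add_argument("--model", default="resnet50")
    parser.add_argument("--precision", default="float16", choices=["float16", "float32"])
    parser.add_argument("--train", action="store_true")  # inference only when omitted
    args = parser.parse_args()
    print(args)
    benchmark(torch.device(f"cuda:{args.device}"), args.model, args.precision)

The measured results on the tested card are shown below.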
>>> ResNet50
Namespace(device=0, model='resnet50', precision='float16', train=False)
Iteration 0, 2294.06 images/s in 0.837s.
Iteration 1, 2391.29 images/s in 0.803s.
Iteration 2, 2396.06 images/s in 0.801s.
Iteration 3, 2394.62 images/s in 0.802s.
Iteration 4, 2402.61 images/s in 0.799s.
Namespace(device=0, model='resnet50', precision='float32', train=False)
Iteration 0, 1453.34 images/s in 1.321s.
Iteration 1, 1490.90 images/s in 1.288s.
Iteration 2, 1491.79 images/s in 1.287s.
Iteration 3, 1493.76 images/s in 1.285s.
Iteration 4, 1494.50 images/s in 1.285s.

>>> ViT Transformer
Namespace(device=0, model='vit_base_patch16_224', precision='float16', train=False)
Iteration 0, 1044.44 images/s in 1.838s.
Iteration 1, 1047.37 images/s in 1.833s.
Iteration 2, 1046.37 images/s in 1.835s.
Iteration 3, 1044.68 images/s in 1.838s.
Iteration 4, 1043.91 images/s in 1.839s.
Namespace(device=0, model='vit_base_patch16_224', precision='float32', train=False)
Iteration 0, 596.59 images/s in 3.218s.
Iteration 1, 599.41 images/s in 3.203s.
Iteration 2, 598.86 images/s in 3.206s.
Iteration 3, 597.92 images/s in 3.211s.
Iteration 4, 597.46 images/s in 3.214s.