PNY NVIDIA A100 40GB PCI-E 4.0 Ampere Tensor Core GPU Video Graphics Card
SKU: 7GC1716
Availability: In Stock
Free shipping
$9,350.00
Brand: PNY
Overview
NVIDIA A100 Tensor Core GPU delivers unprecedented acceleration at every scale to power the world’s highest-performing elastic data centers for AI, data analytics, and HPC. Powered by the NVIDIA Ampere Architecture, A100 is the engine of the NVIDIA data center platform. A100 provides up to 20X higher performance over the prior generation and can be partitioned into seven GPU instances to dynamically adjust to shifting demands. Available in 40GB and 80GB memory versions, A100 80GB debuts the world’s fastest memory bandwidth at over 2 terabytes per second (TB/s) to run the largest models and datasets.
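The Multi-Instance GPU partitioning mentioned above is configured through the NVIDIA driver rather than anything supplied with this card. As a rough illustration only, the hedged Python sketch below uses the nvidia-ml-py (pynvml) bindings to report the GPU name, total memory, and whether MIG mode is currently enabled; the device index 0 and the presence of the pynvml package are assumptions, not part of this listing.

# A minimal sketch, assuming nvidia-ml-py (pynvml) is installed and an
# NVIDIA GPU is visible to the driver.
import pynvml

pynvml.nvmlInit()
try:
    handle = pynvml.nvmlDeviceGetHandleByIndex(0)  # first GPU in the system (assumption)
    name = pynvml.nvmlDeviceGetName(handle)
    if isinstance(name, bytes):                    # older pynvml versions return bytes
        name = name.decode()
    mem = pynvml.nvmlDeviceGetMemoryInfo(handle)
    print(f"GPU: {name}, total memory: {mem.total / 1024**3:.1f} GiB")
    try:
        current, pending = pynvml.nvmlDeviceGetMigMode(handle)
        print(f"MIG mode: current={current}, pending={pending}")
    except pynvml.NVMLError:
        print("MIG mode not supported on this device or driver")
finally:
    pynvml.nvmlShutdown()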
A100 is part of the complete NVIDIA data center solution that incorporates building blocks across hardware, networking, software, libraries, and optimized AI models and applications from NGC. Representing the most powerful end-to-end AI and HPC platform for data centers, it allows researchers to deliver real-world results and deploy solutions into production at scale.
Specification | NVIDIA A100
Product SKU | P1001 SKU 200
Architecture | Ampere |
Process Size | TSMC 7 nm
Transistors | 54 Billion |
Die Size | 826 mm²
CUDA Cores | 6912 |
Streaming Multiprocessors | 108 |
Tensor Cores | 432 (3rd generation)
Multi-Instance GPU (MIG) Support | Yes, up to seven instances per GPU |
FP64 | 9.7 TFLOPS |
FP64 Tensor Core | 19.5 TFLOPS |
FP32 | 19.5 TFLOPS |
TF32 Tensor Core | 156 TFLOPS | 312 TFLOPS* |
BFLOAT16 Tensor Core | 312 TFLOPS | 624 TFLOPS* |
FP16 Tensor Core | 312 TFLOPS | 624 TFLOPS* |
INT8 Tensor Core | 624 TOPS | 1248 TOPS* |
INT4 Tensor Core | 1248 TOPS | 2496 TOPS* |
NVLink | 2-way, 2-slot, low-profile bridge
NVLink Interconnect | 600 GB/s Bidirectional |
GPU Memory | 40 GB HBM2
Memory Interface | 5120-bit |
Memory Bandwidth | 1555 GB/s |
System Interface | PCIe 4.0 x16 |
Thermal Solution | Passive |
vGPU Support | NVIDIA Virtual Compute Server with MIG support |
Secure and Measured Boot Hardware Root of Trust | CEC 1712 |
NEBS Ready | Level 3 |
Power Connector | 8-pin CPU |
Maximum Power Consumption | 250 W
* With sparsity enabled.
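The TF32 Tensor Core figures in the table apply to ordinary FP32 matrix math on Ampere-class GPUs when the framework opts in. As a hedged illustration only (assuming PyTorch built with CUDA support is installed, which is not supplied with this card), the sketch below enables TF32 for matrix multiplies and runs one on the GPU.

# A minimal sketch, assuming PyTorch with CUDA and an Ampere-class GPU
# such as the A100. TF32 is a per-framework setting for FP32 matmuls
# and convolutions executed on Tensor Cores.
import torch

torch.backends.cuda.matmul.allow_tf32 = True   # allow TF32 Tensor Cores for matmul
torch.backends.cudnn.allow_tf32 = True         # and for cuDNN convolutions

a = torch.randn(4096, 4096, device="cuda")     # ordinary FP32 tensors
b = torch.randn(4096, 4096, device="cuda")
c = a @ b                                      # eligible to run on TF32 Tensor Cores
print(c.dtype)                                 # still torch.float32 to the caller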