935-24287-0000-000 - NVIDIA HGX H100 Delta-Next 640GB SXM5 Air Cooled Baseboard - 8 x H100 80GB SXM5

Mfr Part#:
935-24287-0000-000 | See more Graphics Cards
Availability:
In stock
NVIDIA
Manufacturer: NVIDIA
$215,000.00

Seeking a large quantity of 935-24287-0000-000, or have a target price in mind? Contact us at (855) 483-7810 or simply Request a Bulk Quote. Our sales team will promptly respond with a discounted price tailored to your needs.

Overview

Part: 935-24287-0000-000

Weight: 22.00 LBS

Manufacturer: NVIDIA

Condition: New

Top reasons to buy 935-24287-0000-000 NVIDIA from us

100% Low Price Guarantee

We provide high-quality products at low wholesale prices.

What is the lowest price of the NVIDIA 935-24287-0000-000?

Our lowest price for 935-24287-0000-000 is $215,000.00. Buy now.

See more low prices on Graphics Cards and other NVIDIA products.

Description

Purpose-Built for AI and High-Performance Computing
AI, complex simulations, and massive datasets require multiple GPUs with extremely fast interconnections and a fully accelerated software stack. The NVIDIA HGX™ AI supercomputing platform brings together the full power of NVIDIA GPUs, NVIDIA NVLink™, NVIDIA networking, and fully optimized AI and high-performance computing (HPC) software stacks to provide the highest application performance and drive the fastest time to insights.
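
For a concrete sense of how software drives the eight NVLink-connected GPUs on a baseboard like this, here is a minimal sketch assuming PyTorch with the NCCL backend (an illustration, not code supplied with the product) that runs an all-reduce across all eight GPUs; NCCL routes GPU-to-GPU traffic over NVLink/NVSwitch where available.

```python
# Minimal 8-GPU all-reduce sketch (illustrative; assumes PyTorch with CUDA and NCCL).
# Launch with: torchrun --nproc_per_node=8 allreduce_demo.py
import os

import torch
import torch.distributed as dist


def main():
    # torchrun sets LOCAL_RANK for each of the eight processes (one per GPU).
    local_rank = int(os.environ["LOCAL_RANK"])
    torch.cuda.set_device(local_rank)
    dist.init_process_group(backend="nccl")

    # Each rank contributes a tensor; all-reduce sums it across all eight GPUs.
    x = torch.full((1024, 1024), float(dist.get_rank()), device="cuda")
    dist.all_reduce(x, op=dist.ReduceOp.SUM)

    if dist.get_rank() == 0:
        print("after all-reduce, x[0, 0] =", x[0, 0].item())  # 0+1+...+7 = 28
    dist.destroy_process_group()


if __name__ == "__main__":
    main()
```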

Unmatched End-to-End Accelerated Computing Platform
Both the HGX H200 and HGX H100 include advanced networking options—at speeds up to 400 gigabits per second (Gb/s)—utilizing NVIDIA Quantum-2 InfiniBand and Spectrum™-X Ethernet for the highest AI performance. HGX H200 and HGX H100 also include NVIDIA® BlueField®-3 data processing units (DPUs) to enable cloud networking, composable storage, zero-trust security, and GPU compute elasticity in hyperscale AI clouds.

Deep Learning Inference: Performance and Versatility
AI solves a wide array of business challenges using an equally wide array of neural networks. A great AI inference accelerator has to deliver not only the highest performance but also the versatility needed to accelerate these networks in any location customers choose to deploy them, from data center to edge.

Deep Learning Training: Performance and Scalability
NVIDIA H200 and H100 GPUs feature the Transformer Engine, with FP8 precision, that provides up to 5X faster training over the previous GPU generation for large language models. The combination of fourth-generation NVLink—which offers 900GB/s of GPU-to-GPU interconnect—PCIe Gen5, and NVIDIA Magnum IO™ software delivers efficient scalability, from small enterprises to massive unified GPU clusters. These infrastructure advances, working in tandem with the NVIDIA AI Enterprise software suite, make HGX H200 and HGX H100 the world’s leading AI computing platform.
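
As a rough illustration of how FP8 training is typically enabled on Hopper-class GPUs, the sketch below assumes NVIDIA's Transformer Engine Python package (transformer_engine); it wraps a single linear layer in an fp8_autocast context so its GEMMs run through the FP8 Tensor Cores. This is a minimal example under those assumptions, not vendor-supplied code for this product.

```python
# Minimal FP8 training-step sketch (assumes the transformer-engine package
# and an FP8-capable GPU such as the H100).
import torch
import transformer_engine.pytorch as te
from transformer_engine.common import recipe

# Hybrid FP8 recipe: E4M3 for forward tensors, E5M2 for gradients.
fp8_recipe = recipe.DelayedScaling(margin=0, fp8_format=recipe.Format.HYBRID)

model = te.Linear(4096, 4096, bias=True).cuda()
optimizer = torch.optim.AdamW(model.parameters(), lr=1e-4)
inp = torch.randn(8, 4096, device="cuda")

with te.fp8_autocast(enabled=True, fp8_recipe=fp8_recipe):
    out = model(inp)                  # GEMM executes on FP8 Tensor Cores
    loss = out.float().pow(2).mean()  # toy loss for illustration

loss.backward()
optimizer.step()
```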

Accelerating HGX With NVIDIA Networking
The data center is the new unit of computing, and networking plays an integral role in scaling application performance across it. Paired with NVIDIA Quantum InfiniBand, HGX delivers world-class performance and efficiency, which ensures the full utilization of computing resources.
For AI cloud data centers that deploy Ethernet, HGX is best used with the NVIDIA Spectrum-X networking platform, which powers the highest AI performance over Ethernet. It features Spectrum-X switches and BlueField-3 DPUs for optimal resource utilization and performance isolation, delivering consistent, predictable outcomes for thousands of simultaneous AI jobs at every scale. Spectrum-X enables advanced cloud multi-tenancy and zero-trust security. As a reference design, NVIDIA has designed Israel-1, a hyperscale generative AI supercomputer built with Dell PowerEdge XE9680 servers based on the NVIDIA HGX 8-GPU platform, BlueField-3 DPUs, and Spectrum-4 switches.

Specification

Form Factor: 8x NVIDIA H100 SXM
FP8 Tensor Core*: 32 PFLOPS
INT8 Tensor Core*: 32 POPS
FP16/BFLOAT16 Tensor Core*: 16 PFLOPS
TF32 Tensor Core*: 8 PFLOPS
FP32: 540 TFLOPS
FP64: 270 TFLOPS
FP64 Tensor Core: 540 TFLOPS
Memory: 640GB HBM3
GPU Aggregate Bandwidth: 27TB/s
NVLink: Fourth generation
NVSwitch™: Third generation
NVSwitch GPU-to-GPU Bandwidth: 900GB/s
Total Aggregate Bandwidth: 7.2TB/s
* With sparsity.
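
For orientation, the aggregate figures follow from the eight-GPU configuration: 8 x 80GB of HBM3 per H100 SXM5 gives the 640GB total, 8 x roughly 3.35TB/s of HBM3 bandwidth per GPU gives the approximately 27TB/s aggregate memory bandwidth, and 8 x 900GB/s of NVLink bandwidth per GPU gives the 7.2TB/s total aggregate bandwidth.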

Trusted by over 20,000 customers globally
