NVIDIA Highlights Hopper H100 Availability, Ada Lovelace L40 GPU, IGX/OVX Systems & Grace CPU Superchips at GTC 2022

Starting with the flagship Hopper chip, NVIDIA has confirmed that the H100 GPU is now in full production and that its partners will roll out the first wave of products in October this year. The company also confirmed that the global rollout for Hopper will take place in three phases. The first consists of pre-orders for NVIDIA DGX H100 systems and free hands-on labs offered directly by NVIDIA, with systems such as Dell’s PowerEdge servers now available on NVIDIA LaunchPad.

NVIDIA Hopper in Full Production

In the second phase, leading OEM partners will begin shipping systems in the coming weeks, with over 50 server models available in the market by the end of the year. In the final phase, NVIDIA expects dozens more systems to enter the market by the first half of 2023. Per NVIDIA’s announcement:

For customers who want to immediately try the new technology, NVIDIA announced that H100 on Dell PowerEdge servers is now available on NVIDIA LaunchPad, which provides free hands-on labs, giving companies access to the latest hardware and NVIDIA AI software. Customers can also begin ordering NVIDIA DGX H100 systems, which include eight H100 GPUs and deliver 32 petaflops of performance at FP8 precision. NVIDIA Base Command and NVIDIA AI Enterprise software power every DGX system, enabling deployments from a single node to an NVIDIA DGX SuperPOD supporting advanced AI development of large language models and other massive workloads. H100-powered systems from the world’s leading computer makers are expected to ship in the coming weeks, with over 50 server models in the market by the end of the year and dozens more in the first half of 2023. Partners building systems include Atos, Cisco, Dell Technologies, Fujitsu, GIGABYTE, Hewlett Packard Enterprise, Lenovo and Supermicro. Additionally, some of the world’s leading higher education and research institutions will be using H100 to power their next-generation supercomputers. Among them are the Barcelona Supercomputing Center, Los Alamos National Lab, Swiss National Supercomputing Centre (CSCS), Texas Advanced Computing Center and the University of Tsukuba.

via NVIDIA
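As a quick sanity check on the 32-petaflops figure quoted above, the sketch below simply multiplies an assumed per-GPU FP8 peak of roughly 4 petaflops (a commonly cited H100 specification that is not stated in this article) by the eight GPUs in a DGX H100.

```python
# Back-of-the-envelope check of the DGX H100 FP8 figure quoted above.
# Assumption: each H100 delivers roughly 4 petaflops of FP8 throughput
# (the commonly cited per-GPU peak with sparsity); that per-GPU number
# is not stated in this article.

GPUS_PER_DGX_H100 = 8
ASSUMED_FP8_PFLOPS_PER_H100 = 4.0  # assumed per-GPU FP8 peak, in petaflops

dgx_fp8_pflops = GPUS_PER_DGX_H100 * ASSUMED_FP8_PFLOPS_PER_H100
print(f"DGX H100 aggregate FP8 throughput: ~{dgx_fp8_pflops:.0f} petaflops")
# Prints ~32 petaflops, matching the figure NVIDIA quotes for one DGX H100.
```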

The NVIDIA L40, powered by the Ada Lovelace architecture

The second major announcement concerns the L40 GPU, a product aimed at the data center segment and built on the newly announced Ada Lovelace GPU architecture. The L40’s full specifications have not been disclosed yet, but the card comes with 48 GB of GDDR6 memory (with ECC), four DisplayPort 1.4a outputs, a TBP of 300W, and a dual-slot passive cooler measuring 4.4" x 10.5". The card is powered by a single 16-pin CEM5 connector. The NVIDIA L40 supports all major vGPU software, including NVIDIA vPC/vApps and NVIDIA RTX Virtual Workstation (vWS), and offers Level 3 NEBS compliance plus secure boot (root of trust) support. The most notable aspect of this product is that it features three AV1 encode and three AV1 decode units, a bump from the RTX 6000 and the GeForce RTX 40 graphics cards, which feature dual AV1 engines.

Grace Hopper Superchip Is Ideal for Next-Gen Recommender Systems

NVIDIA has also further detailed its Grace Hopper Superchip, which it claims is ideal for recommender systems. NVLink carries data at a whopping 900 gigabytes per second, 7x the bandwidth of PCIe Gen 5, the interconnect most leading-edge upcoming systems will use. That means Grace Hopper can feed recommenders 7x more of the embeddings (data tables packed with context) that they need to personalize results for users.
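For context, the sketch below works through that 7x comparison, assuming the baseline is a PCIe Gen 5 x16 link at roughly 128 GB/s of total bandwidth; the article itself does not spell out which PCIe configuration NVIDIA is comparing against.

```python
# Rough check of the "7x the bandwidth of PCIe Gen 5" claim.
# Assumption: the baseline is a PCIe Gen 5 x16 link at ~128 GB/s total
# (bidirectional) bandwidth; NVIDIA does not state the exact baseline here.

NVLINK_C2C_GB_PER_S = 900          # Grace Hopper NVLink-C2C, as quoted
ASSUMED_PCIE5_X16_GB_PER_S = 128   # assumed PCIe Gen 5 x16 baseline

ratio = NVLINK_C2C_GB_PER_S / ASSUMED_PCIE5_X16_GB_PER_S
print(f"NVLink-C2C vs PCIe Gen 5 x16: ~{ratio:.1f}x the bandwidth")  # ~7.0x
```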

NVIDIA Announces OVX Computing Systems

NVIDIA has also revealed its brand-new OVX system, which makes use of the L40 GPUs mentioned above, combining up to eight Ada Lovelace chips with enhanced networking technology to deliver groundbreaking real-time graphics, AI, and digital twin simulation capabilities. The OVX systems with L40 GPUs are expected to hit the market by early 2023 through leading partners such as Inspur, Lenovo, and Supermicro.

The L40 GPU’s third-generation RT Cores and fourth-generation Tensor Cores will deliver powerful capabilities to Omniverse workloads running on OVX, including accelerated ray-traced and path-traced rendering of materials, physically accurate simulations, and photorealistic 3D synthetic data generation. The L40 will also be available in NVIDIA-Certified Systems servers from major OEM vendors to power RTX workloads from the data center. In addition to the L40 GPU, the new NVIDIA OVX includes the ConnectX-7 SmartNIC, providing enhanced network and storage performance and the precision timing synchronization required for true-to-life digital twins. ConnectX-7 includes support for 200G networking on each port and fast in-line data encryption to speed up data movement and increase security for digital twins.

via NVIDIA

More Memory, Greater Efficiency

The Grace CPU uses LPDDR5X, a type of memory that strikes the optimal balance of bandwidth, energy efficiency, capacity, and cost for recommender systems and other demanding workloads. It provides 50% more bandwidth while using an eighth of the power per gigabyte of traditional DDR5 memory subsystems. Any Hopper GPU in a cluster can access Grace’s memory over NVLink, a feature of Grace Hopper that provides the largest pools of GPU memory ever. In addition, NVLink-C2C requires just 1.3 picojoules per bit transferred, giving it more than 5x the energy efficiency of PCIe Gen 5. The overall result is that recommenders get up to 4x more performance and greater efficiency running on Grace Hopper than on Hopper paired with traditional CPUs (a quick arithmetic sketch of these efficiency figures follows below).

Finally, NVIDIA also introduced its IGX system mainboard, an edge-AI platform purpose-built for industrial and medical environments.
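The sketch below restates those efficiency numbers as simple arithmetic; the implied PCIe Gen 5 and DDR5 baselines are inferred from the ratios quoted above rather than stated directly by NVIDIA.

```python
# Working backwards from the Grace Hopper efficiency figures quoted above.
# Assumptions: "more than 5x the energy efficiency of PCIe Gen 5" refers to
# energy per bit transferred, and "an eighth of the power per gigabyte" is a
# straight ratio against a DDR5 subsystem; neither baseline is given here.

NVLINK_C2C_PJ_PER_BIT = 1.3        # quoted for NVLink-C2C
EFFICIENCY_FACTOR = 5              # "more than 5x"
implied_pcie5_pj_per_bit = NVLINK_C2C_PJ_PER_BIT * EFFICIENCY_FACTOR
print(f"Implied PCIe Gen 5 cost: >{implied_pcie5_pj_per_bit:.1f} pJ per bit")

DDR5_RELATIVE_POWER_PER_GB = 1.0   # normalized DDR5 baseline
LPDDR5X_RELATIVE_POWER_PER_GB = DDR5_RELATIVE_POWER_PER_GB / 8
LPDDR5X_RELATIVE_BANDWIDTH = 1.5   # "50% more bandwidth" than DDR5
print(f"LPDDR5X: {LPDDR5X_RELATIVE_BANDWIDTH:.1f}x bandwidth at "
      f"{LPDDR5X_RELATIVE_POWER_PER_GB:.3f}x the power per GB vs DDR5")
```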
