Server systems with NVIDIA Grace Hopper


WE SUPPLY THE KEY TO BEST PERFORMANCE WITH AI

The demand for artificial intelligence (AI) is constantly increasing and so is the need for computing capacity. AI applications such as deep learning, simulations or predictions require extremely powerful hardware in order to generate valid results from complex data sets. NVIDIA's Grace Hopper perfectly meets these requirements today and in the future: the superchip was specially developed for training large AI models, generative AI, recommender systems and data analytics.

Click Here for First-Hand Superchip Knowledge


NVIDIA whitepaper 'Performance and Productivity for Strong-Scaling HPC and Giant AI Workloads'

AI Today: Ready for Take-off

The progress of smart AI technologies has accelerated rapidly in the past few years: AI applications such as ChatGPT, simulations in domains such as medicine, or recommender systems in online stores solve complex tasks and make our everyday lives easier. More intelligent algorithms and more complex data analyses are multiplying the possible applications. However, AI-driven progress requires computing power, and ever more of it. System architectures such as NVIDIA's Grace Hopper provide the necessary power for rapidly developing AI technologies.

  • Powerful: Grace Hopper's combination of GPU, CPU and NVLink interconnect delivers maximum performance across many benchmarks.
  • Versatile: Many applications already run on Grace Hopper, and the number is growing.
  • Energy-saving: NVIDIA Grace Hopper requires less energy than comparable x86 systems.

Free Webinar April 25, 2024
Best Practices For Your Success


Server systems with NVIDIA Grace Hopper Superchip:
The First Choice for Your CAE Workloads

  • Learn more about the most powerful architecture on the market today to help you
    quickly master large-scale AI calculations and complex CAE simulations.

  • Introducing Grace Hopper:
    Get to know the powerful combination of the NVIDIA Hopper GPU and the Grace CPU
    (Arm architecture), connected by the fast NVLink Chip-to-Chip (C2C) interconnect.
  • Optimized Performance:
    NVIDIA's Grace Hopper superchip delivers tremendous performance to handle
    large CAE/CFD workloads in the shortest possible time.
  • Benchmark Insights:
    Grace Hopper server systems perform particularly well with the CFD software OpenFOAM.
    Selected benchmarks provide the details.
  • Order Now: 
    NVIDIA Grace Hopper Superchip is available from GNS Systems.
    There is a special offer for the International Supercomputing Conference 2024.
    Learn more in the webinar.
  • Watch the webinar on April 25, 2024 at 3 pm
    with Ian Pegler (NVIDIA), Sergey Shatov (NVIDIA) and Dominic McKendry (GNS Systems)
    to learn more about the most powerful architecture for large-scale CAE/CFD workloads
    on the market today. The webinar will be in English.


    Register now

Your Individual Consultation for
5× Higher Performance

Do you have any questions? Then get in touch with us. Our experts will be happy to advise you on all aspects of Grace Hopper and high-performance server systems for efficient AI use.   

Best Performance for AI



Artificial intelligence requires efficient, flexible and scalable system architectures for software and hardware, especially to ensure the iterative processing of
large data sets, for example in deep learning algorithms. The NVIDIA Grace Hopper systems are currently among the most powerful architectures on the market.
The new combination of the NVIDIA Hopper GPU with the Grace CPU (Arm architecture) and the fast NVLink Chip-to-Chip (C2C) interconnect offers up to five times
higher application performance than comparable x86 systems.

Architecture Features

Grace Hopper combines the powerful NVIDIA Hopper GPU with the Grace CPU (Arm architecture) and connects them with the fast NVLink Chip-to-Chip (C2C) interconnect.

The first NVIDIA data center CPU for HPC and AI workloads, the NVIDIA Grace CPU uses 72 Arm Neoverse V2 CPU cores to get the maximum per-thread performance out of the system. Up to 480 GB LPDDR5X memory provides the optimal balance between memory capacity, energy efficiency and performance.

NVIDIA Hopper is the ninth generation of the NVIDIA data center GPU and is specifically designed for large-scale AI and HPC applications. Thread block clusters and thread block reconfiguration improve spatial and temporal data locality and keep the processing units well utilized.
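
To make this mechanism concrete, here is a minimal CUDA sketch of how thread block clusters are exposed on Hopper (compute capability 9.0, CUDA 12 or newer, compiled with nvcc -arch=sm_90): two blocks in a cluster synchronize and read each other's shared memory. The kernel and its names are an illustrative example of ours, not code from this page.

    #include <cooperative_groups.h>
    namespace cg = cooperative_groups;

    // Kernel launched with a compile-time cluster of 2 thread blocks.
    // Blocks in the same cluster can synchronize and address each other's
    // shared memory (distributed shared memory) on Hopper (sm_90).
    __global__ void __cluster_dims__(2, 1, 1) clusterKernel(float *out)
    {
        __shared__ float buf[256];
        cg::cluster_group cluster = cg::this_cluster();

        unsigned int rank = cluster.block_rank();      // 0 or 1 within the cluster
        buf[threadIdx.x] = static_cast<float>(rank);
        cluster.sync();                                // all blocks have written buf

        // Read the neighbouring block's shared memory via the cluster mapping.
        float *peer = cluster.map_shared_rank(buf, rank ^ 1);
        out[blockIdx.x * blockDim.x + threadIdx.x] = peer[threadIdx.x];
        cluster.sync();                                // keep peer memory alive until all reads finish
    }

    int main()
    {
        float *out;
        cudaMalloc(&out, 4 * 256 * sizeof(float));
        clusterKernel<<<4, 256>>>(out);                // grid of 4 blocks = 2 clusters
        cudaDeviceSynchronize();
        cudaFree(out);
        return 0;
    }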

NVIDIA NVLink-C2C is NVIDIA's memory-coherent, low-latency interconnect standard for superchips. It forms the heart of the Grace Hopper superchip and delivers a total bandwidth of up to 900 GB/s.
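
The practical effect of this coherence fits in a few lines of CUDA: on Grace Hopper, a pointer from plain malloc (CPU LPDDR5X memory) can be passed straight to a kernel, and NVLink-C2C keeps both sides coherent with no cudaMemcpy in either direction. This is a minimal sketch assuming a Grace Hopper system with Address Translation Services; on a discrete x86 system, cudaMallocManaged would be the portable substitute.

    #include <cstdio>
    #include <cstdlib>

    // GPU kernel that updates an array in place. On Grace Hopper the pointer
    // may refer to CPU LPDDR5X memory: NVLink-C2C keeps it cache-coherent.
    __global__ void scale(double *x, double s, int n)
    {
        int i = blockIdx.x * blockDim.x + threadIdx.x;
        if (i < n) x[i] *= s;
    }

    int main()
    {
        const int n = 1 << 20;

        // Plain malloc: on Grace Hopper (with ATS) the GPU can dereference
        // this pointer directly over NVLink-C2C.
        double *x = static_cast<double *>(malloc(n * sizeof(double)));
        for (int i = 0; i < n; ++i) x[i] = 1.0;

        scale<<<(n + 255) / 256, 256>>>(x, 2.0, n);
        cudaDeviceSynchronize();

        printf("x[0] = %f\n", x[0]);   // CPU reads the GPU's result coherently
        free(x);
        return 0;
    }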

QuantaGrid S74G-2U Specifications

Processor
  • Processor Family: NVIDIA Grace™ Hopper™ Superchip
  • Processor Type: NVIDIA Grace™, 72 Arm® Neoverse V2 cores
  • Max. TDP Support: 1000 W
  • Number of Processors: (1) processor
  • Internal Interconnect: NVIDIA® NVLink®-C2C, 900 GB/s

Form Factor
  • 2U Rackmount
  • W x H x D (inch): 17.24" x 3.44" x 35.43"
  • W x H x D (mm): 438 x 87.5 x 900 mm

Storage
  • Default Configuration: (4) E1.S NVMe SSD

Memory
  • Capacity: up to 480 GB LPDDR5X plus 96 GB HBM3 GPU memory

Expansion Slots
  • Default Configuration: (3) PCIe 5.0 x16 FHFL dual-width slots

Front I/O
  • Power / ID / Reset buttons
  • Power / ID / Status LEDs
  • (2) USB 3.0 ports
  • (1) VGA port

Storage Controller
  • Broadcom HBA 9500 Series storage adapter
  • Broadcom MegaRAID 9560 Series

Power Supply
  • 1+1 high-efficiency hot-plug 2000 W PSU, 80 PLUS Titanium

Whether deep learning, natural language processing (NLP) or data analytics: NVIDIA's Grace Hopper delivers
enormous performance to enable extensive AI calculations and simulations of complex relationships
in the shortest possible time.

High Speed
for Your Innovations


Grace Hopper for
OpenFOAM 

The NVIDIA Grace Hopper provides developers with a modular system that optimally supports even demanding CFD simulations with OpenFOAM. For the application of complex simulation models in product development, it is necessary to perform the calculations massively in parallel on state-of-the-art computing architectures.

OpenFOAM-based applications on Grace Hopper architectures make optimum use of the servers' performance potential. Compared to systems such as an x86 system without Hopper, the NVIDIA Grace Hopper architecture requires only 15 percent of the runtime instead of 35 percent for the accelerated portion; overall, around 85 percent of the runtime remains on the CPU side, which still provides a good basis for faster and shorter design cycles. No matter how many product variants exist: with the right IT infrastructure for OpenFOAM, engineers increase the quality of simulations and significantly accelerate virtual product development.


Grace Hopper for
Large Language Models

With Grace Hopper, NVIDIA delivers a solid server that scales optimally to the requirements of demanding AI workloads. Large language models are based on billions to quadrillions of data points and therefore require enormous computing power to understand and generate language.

NVIDIA Grace Hopper was developed specifically for training large AI models, and its architecture enables high data throughput. Thanks to the HBM3 memory used, Grace Hopper achieves almost 100 percent memory bandwidth utilization even at a batch size of 1. While other systems, such as x86-based ones, see performance drop off from a batch size of 4, the NVLink in Grace Hopper sustains these workloads with up to 4.5 times the throughput. This enables large language models running on Grace Hopper to capture complex queries and process huge data sets in the shortest possible time.
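
A back-of-the-envelope calculation illustrates why memory bandwidth dominates at batch size 1: each generated token must stream essentially the whole weight set from memory, so token throughput is bounded by bandwidth divided by model size. The model size and bandwidth figures in this sketch are illustrative assumptions, not benchmark results from this page.

    #include <cstdio>

    // Rough roofline-style bound for single-batch LLM inference:
    // tokens/s <= memory bandwidth / bytes of weights read per token.
    // All numbers below are illustrative assumptions.
    int main()
    {
        const double params       = 70e9;     // hypothetical 70B-parameter model
        const double bytes_per_p  = 2.0;      // FP16 weights
        const double hbm3_bw      = 4000e9;   // ~4 TB/s HBM3, rounded

        const double model_bytes  = params * bytes_per_p;    // ~140 GB of weights
        const double tokens_per_s = hbm3_bw / model_bytes;   // bandwidth-bound limit

        printf("approx. upper bound: %.1f tokens/s at batch size 1\n", tokens_per_s);
        return 0;
    }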


Grace Hopper for
Recommender Systems

Recommender systems (also known as recommendation services or filters) use powerful AI infrastructures to help end users and customers find the content, products and services they are most interested in. The underlying combination of sophisticated AI models and large data sets often requires extensive computing resources.

NVIDIA Grace Hopper is the right infrastructure for managing large models and massive data in recommender systems and takes interaction with user-generated content to the next level. NVIDIA Grace Hopper drives high-throughput recommender system pipelines by delivering outstanding CPU memory performance. This is made possible by NVIDIA's NVLink-C2C: the direct, high-performance communication bridge between the Grace CPU and the Hopper GPU delivers high-bandwidth access to both the local HBM3 memory and the LPDDR5X memory. This accelerates processing and helps the AI models deliver more accurate, more relevant results to end users faster.
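
A typical pattern this enables is sketched below: a large embedding table stays in CPU LPDDR5X memory, while a GPU kernel gathers the rows for a batch of lookups directly over the coherent link. The sizes and names are illustrative assumptions, and the sketch again presumes a Grace Hopper system where the GPU can dereference malloc'd memory.

    #include <cstdio>
    #include <cstdlib>

    // Gather kernel: each thread block copies one embedding row. On Grace
    // Hopper the table can live in the large LPDDR5X CPU memory while the
    // GPU reads it directly over NVLink-C2C.
    __global__ void gather(const float *table, const int *ids,
                           float *out, int dim)
    {
        int row = ids[blockIdx.x];                   // which embedding to fetch
        for (int j = threadIdx.x; j < dim; j += blockDim.x)
            out[blockIdx.x * dim + j] = table[(size_t)row * dim + j];
    }

    int main()
    {
        const int rows = 10000000, dim = 128, batch = 4096;

        // Table in CPU memory (LPDDR5X on Grace), GPU-accessible via
        // NVLink-C2C; contents left uninitialized in this sketch.
        float *table = static_cast<float *>(malloc((size_t)rows * dim * sizeof(float)));
        int   *ids;   cudaMallocManaged(&ids, batch * sizeof(int));
        float *out;   cudaMalloc(&out, (size_t)batch * dim * sizeof(float));

        for (int i = 0; i < batch; ++i) ids[i] = rand() % rows;

        gather<<<batch, 128>>>(table, ids, out, dim);  // one block per lookup
        cudaDeviceSynchronize();

        cudaFree(out); cudaFree(ids); free(table);
        return 0;
    }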


Grace Hopper for
Graph Neural Networks

Graph neural networks (GNNs) already offer impressive performance and high interpretability, especially in areas where relationships and interactions between data points play an important role. However, high computing power is required for the models to analyze and predict large amounts of data accurately in a short period of time.

NVIDIA Grace Hopper offers the right infrastructure for GNN use cases. Grace Hopper provides users with the structural basis to construct the numerical representation of a graph quickly and efficiently, which can later be used to train machine learning algorithms. For processing structured data and training GNNs, users have high-bandwidth access to LPDDR5X memory over the NVLink-C2C connection. Grace Hopper's large memory capacities solve graph-based machine learning tasks in the shortest possible time.
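
As a concrete picture of that 'numerical representation of a graph', the short host-side sketch below builds the compressed sparse row (CSR) arrays that GNN frameworks typically hand to the GPU. The toy graph and layout are illustrative, not tied to any specific framework.

    #include <cstdio>
    #include <vector>

    // Build a CSR (compressed sparse row) adjacency from an edge list:
    // rowptr[v]..rowptr[v+1] indexes node v's neighbours in colidx.
    int main()
    {
        const int n = 4;                                  // 4 nodes
        std::vector<std::pair<int,int>> edges = {{0,1},{0,2},{1,2},{2,3},{3,0}};

        std::vector<int> rowptr(n + 1, 0), colidx(edges.size());
        for (auto &e : edges) ++rowptr[e.first + 1];      // count out-degrees
        for (int v = 0; v < n; ++v) rowptr[v + 1] += rowptr[v];  // prefix sum

        std::vector<int> cursor(rowptr.begin(), rowptr.end() - 1);
        for (auto &e : edges) colidx[cursor[e.first]++] = e.second;

        for (int v = 0; v < n; ++v) {
            printf("node %d ->", v);
            for (int k = rowptr[v]; k < rowptr[v + 1]; ++k) printf(" %d", colidx[k]);
            printf("\n");
        }
        return 0;
    }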

Would you like to learn more?

You will find further information in the NVIDIA whitepaper
'Performance and Productivity for Strong-Scaling HPC and Giant AI Workloads'.

Our competencies for your projects


Together with our long-standing partners, we provide you with holistic, detailed advice and ensure that your AI infrastructure
is implemented in a proven, practical manner.


Machine Learning
& Artificial Intelligence
by GNS Systems

The right choice if …

Our services