Introduction

In early 2025, the High-Performance Computing Center Stuttgart (HLRS) introduced “Hunter,” the latest in its line of supercomputing systems designed for complex simulations, data analytics, and AI workloads. Replacing the older “Hawk” system, Hunter marks a significant architectural departure, prioritizing energy efficiency, tight CPU-GPU integration, and scalable performance for heterogeneous workloads.

This week, in collaboration with AMD and HPE, we had the chance to visit HLRS and explore the hardware, software, and infrastructure underlying Hunter. Like most supercomputers, Hunter isn’t a single machine but a massively distributed system built from hundreds of interconnected nodes—less a standalone server than a networked ecosystem working in concert. The system is housed in HLRS’s purpose-built facility in my hometown of Stuttgart, where attention to power distribution, thermal management, and high-speed interconnects is paramount.
With a theoretical peak performance of 48.1 PFlop/s, Hunter is nearly twice as fast as its predecessor, Hawk, which relied on a purely CPU-based architecture.
This article provides an overview of the system architecture, computing components, and operational model behind Hunter.