Description
The growing proliferation of GPUs, and the HPDA and HPC-based AI workloads they enable, is challenging HPC storage architectures that were built around conventional CPU-based designs and traditional HPC modeling/simulation workloads. Conventional HPC storage systems manage the well-understood needs of largely independent and segregated home directories and scratch files, keeping the system fully utilized for optimal performance. However, data-intensive HPDA and AI workloads, with a much greater variety of heterogeneous I/O profiles, are stressing the performance capabilities of these conventional storage systems and their underlying architectures.
