Abstract
High-Performance Computing (HPC) systems excel at managing distributed workloads, and the growing interest in Artificial Intelligence (AI) has created a surge in demand for faster methods of Machine Learning (ML) model training and inference. In the past, research on HPC I/O focused on optimizing the underlying storage system for modeling and simulation applications and on checkpointing their results, making writes the dominant I/O operation. These applications typically access large portions of the data written by simulations or experiments. ML workloads, in contrast, perform many small I/O reads spread across a large number of randomly accessed files. This shift in I/O access patterns poses several challenges to HPC storage systems. In this paper, we survey I/O in ML applications on HPC systems, targeting literature within a six-year window from 2019 to 2024. We provide an overview of the common phases of ML, review available profilers and benchmarks, examine the I/O patterns encountered during ML training, and explore the I/O optimizations used in modern ML frameworks and proposed in recent literature. Lastly, we summarize the common practices ML applications use to access data and expose research gaps that could spawn further research and development.
URL
https://arxiv.org/abs/2404.10386