Copyright Notice:
The documents distributed by this server have been provided by the contributing authors as a means to ensure timely dissemination of scholarly and technical work on a noncommercial basis. Copyright and all rights therein are maintained by the authors or by other copyright holders, notwithstanding that they have offered their works here electronically. It is understood that all persons copying this information will adhere to the terms and constraints invoked by each author's copyright. These works may not be reposted without the explicit permission of the copyright holder.
Publications of SPCL
L. Trümper, P. Schaad, B. Ates, A. Calotoiu, M. Copik, T. Hoefler:
A Priori Loop Nest Normalization: Automatic Loop Scheduling in Complex Applications (In CGO '25: Proceedings of the 23rd ACM/IEEE International Symposium on Code Generation and Optimization, pages 418-430, Association for Computing Machinery, ISBN: 9798400712753, Mar. 2025)

Abstract
The same computations are often expressed differently across software projects and programming languages. In particular, how computations involving loops are expressed varies due to the many possible ways to permute and compose loops. Since each variant may have unique performance properties, automatic approaches to loop scheduling must support many different optimization recipes. In this paper, we propose a priori loop nest normalization to align loop nests and reduce the variation before the optimization. Specifically, we define and apply normalization criteria, mapping loop nests with different memory access patterns to the same canonical form. Since the memory access pattern is susceptible to loop variations and critical for performance, this normalization allows many loop nests to be optimized by the same optimization recipe. To evaluate our approach, we apply the normalization with optimizations designed for only the canonical form, improving the performance of many different loop nest variants. Across multiple implementations of 15 benchmarks using different languages, we outperform a baseline compiler in C on average by a factor of 21.13, state-of-the-art auto-schedulers such as Polly and the Tiramisu auto-scheduler by 2.31 and 2.89, as well as performance-oriented Python-based frameworks such as NumPy, Numba, and DaCe by 9.04, 3.92, and 1.47. Furthermore, we apply the concept to the CLOUDSC cloud microphysics scheme, an actively used component of the Integrated Forecasting System, achieving a 10% speedup over the highly-tuned Fortran code.

Documents
download article · access preprint on arXiv
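The idea of mapping loop nests with different memory access patterns to one canonical form can be illustrated with a minimal sketch (this is not the paper's algorithm, just an example of the kind of variation it targets): two loop orderings of the same computation, where a loop interchange yields the canonical variant whose innermost loop walks the contiguous (fastest-varying) axis.

```python
# Illustrative sketch: two loop-order variants of the same computation.
# Both are semantically identical, but their memory access patterns differ;
# a normalization pass would map the first onto the second (canonical) form.

def scale_column_major(a, factor):
    """Column-major traversal: the inner loop strides across rows."""
    n, m = len(a), len(a[0])
    for j in range(m):          # outer loop over columns
        for i in range(n):      # inner loop jumps between rows (strided)
            a[i][j] *= factor
    return a

def scale_normalized(a, factor):
    """Interchanged (canonical) form: the inner loop is contiguous."""
    n, m = len(a), len(a[0])
    for i in range(n):          # outer loop over rows
        for j in range(m):      # inner loop walks contiguous elements
            a[i][j] *= factor
    return a

grid = [[1, 2], [3, 4]]
assert scale_column_major([r[:] for r in grid], 2) == \
       scale_normalized([r[:] for r in grid], 2)
```

Once variants like the first are rewritten into the canonical second form, a single optimization recipe (e.g. tiling or vectorizing the contiguous inner loop) applies to all of them.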
BibTeX