Publication

Enabling rule-based application of correctness analyses in MUST

  • Ermöglichung der regelbasierten Anwendung von Korrektheitsanalysen in MUST

Nguyen, Dinh Thuy Vy; Müller, Matthias S. (Thesis advisor); Lankes, Stefan (Thesis advisor); Tomski, Felix (Consultant)

Aachen : RWTH Aachen University (2025)
Bachelor Thesis

Bachelorarbeit, RWTH Aachen University, 2025

Abstract

The Message Passing Interface (MPI) is a standard for enabling communication among processes in parallel and distributed systems. As the MPI standard evolves, new routines are continually introduced, and existing ones often require updates. This makes it increasingly important to keep MPI tools up to date so that their behavior remains consistent with the standard. MUST, a correctness analysis tool, is no exception. The tool currently relies on MPI API specifications in XML format, in which developers must manually decide which routines require which correctness checks. This manual process is error-prone, difficult to maintain, and becomes especially inconvenient as the MPI API grows in size and complexity, making it demanding to extend and update the specifications. This work addresses the problem by further developing a system that uses a custom grammar to define correctness rules, which apply specific checks to MPI routines when certain conditions are satisfied. These conditions ensure that the routines or their parameters exhibit specific properties. In this way, the generation of such specifications is automated. Key parts of the work involve designing rules that capture correctness patterns, mapping correctness analysis parameters to suitable MPI routine parameters, and pre/post classification to decide which checks should be called before or after the actual MPI routine in the wrappers. This automation reduces the manual effort required to update API specifications and ensures consistency across them. The evaluation focuses on two aspects: the accuracy of the mapping between generated and manual specifications, and the accuracy of the pre/post classification of analysis functions. Together, these measures indicate how closely the automatically generated specification aligns with the manually written one. In the evaluation, the system produces a specification of 19,515 lines, compared with 36,726 lines in the manually written XML specification. When tested against the MUST test suite of 826 tests, 178 tests (21.55%) passed with the generated specification, highlighting the remaining gap between the automatically generated and manually maintained specifications.
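The core mechanism described above, rules that attach correctness checks to MPI routine parameters when a condition holds, and that classify each check as running before or after the wrapped call, can be sketched in a few lines. The sketch below is purely illustrative: the rule names, parameter kinds, and check names are invented for this example and do not reflect MUST's actual grammar or API.

```python
# Hypothetical sketch of rule-based check selection; not MUST's actual grammar.
from dataclasses import dataclass
from typing import Callable, List

@dataclass
class Param:
    name: str
    kind: str          # e.g. "communicator", "rank", "request" (invented kinds)

@dataclass
class Routine:
    name: str
    params: List[Param]

# A rule attaches a correctness check to every parameter matching its condition
# and classifies the check as "pre" (before the MPI call) or "post" (after it).
@dataclass
class Rule:
    check: str
    matches: Callable[[Param], bool]
    phase: str         # "pre" or "post"

RULES = [
    Rule("CheckCommNotNull",      lambda p: p.kind == "communicator", "pre"),
    Rule("CheckRankValid",        lambda p: p.kind == "rank",         "pre"),
    Rule("CheckRequestCompleted", lambda p: p.kind == "request",      "post"),
]

def generate_spec(routine: Routine) -> dict:
    """Map each routine parameter to the checks whose conditions it satisfies."""
    spec = {"pre": [], "post": []}
    for rule in RULES:
        for p in routine.params:
            if rule.matches(p):
                spec[rule.phase].append((rule.check, p.name))
    return spec

send = Routine("MPI_Send", [Param("comm", "communicator"), Param("dest", "rank")])
print(generate_spec(send))
# {'pre': [('CheckCommNotNull', 'comm'), ('CheckRankValid', 'dest')], 'post': []}
```

In this toy form, the "grammar" is just a list of condition/check/phase triples; the thesis's contribution lies in expressing such rules declaratively so that the XML specification, and the pre/post placement of analysis calls in the wrappers, can be generated automatically instead of maintained by hand.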

Institutions

  • Faculty of Computer Science [120000]
  • Chair of High Performance Computing (Computer Science 12) [123010]