Development Effort & Productivity Estimation in HPC


In the pursuit of exaflop computing, the expenses of high-performance computing (HPC) centers increase in terms of acquisition, energy, personnel, and programming. Thus, a quantifiable metric of productivity, defined as value per cost, becomes more important for making informed decisions on how to invest available budgets. The productivity of a computing center is influenced by the runtime of its parallel applications and by the center's total cost of ownership (TCO). TCO, in turn, depends on developer costs and therefore on development effort. Factors affecting development effort include project size, the kind of application, personnel factors (e.g., knowledge of a certain programming model), the programming model itself, the architecture, and the programming environment.

This work investigates the impact and importance of these factors in order to create a methodology for estimating software and total ownership costs in high-performance computing. One focus is the interaction between the parallel programming model, the kind of algorithm, and development productivity.


Since metrics for comparing or assessing clusters depend on many parameters (e.g., hardware costs, application runtime, tuning effort, system lifetime), we are working on tools that enable other computing centers to define their own parameters and model their own productivity. This includes the costs of developing parallel applications, which are determined by development effort.

EffortLog [1] - Tool to Track Development Effort & Performance

We have developed a small effort-logging tool that asks at a configurable interval (e.g., every 15 minutes or every 2 hours) what kind of work was done in the last interval and which changes occurred (e.g., compiler, hardware, performance). It is similar to a developer diary and may also help developers keep track of tuning knobs and other modifications. Additionally, developers can record milestones and corresponding performance data. All data is stored locally in a JSON file that the developer can inspect at any time.
The tool can be found at the EffortLog GitHub repository. We provide pre-compiled versions for Windows and OS X (see "releases") as well as sources for compilation under Linux (and other operating systems).
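Since all data is stored locally as JSON, a logged diary can be analyzed with a few lines of scripting. The following sketch shows the idea; note that the entry fields used here (`interval_minutes`, `activity`, `comment`) are assumptions for illustration and not EffortLog's actual JSON schema:

```python
import json

# Hypothetical effort-log content -- the real EffortLog schema may differ.
log = json.loads("""
[
  {"interval_minutes": 30, "activity": "tuning",    "comment": "loop blocking"},
  {"interval_minutes": 30, "activity": "debugging", "comment": "race condition"},
  {"interval_minutes": 30, "activity": "tuning",    "comment": "OpenMP scheduling"}
]
""")

# Sum the logged time per activity to see where development effort went.
effort = {}
for entry in log:
    effort[entry["activity"]] = effort.get(entry["activity"], 0) + entry["interval_minutes"]

print(effort)  # {'tuning': 60, 'debugging': 30}
```

The same pattern extends to milestones and performance data, e.g., correlating achieved performance with the effort invested up to each milestone.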

aixH(PC)2 - Aachen HPC Productivity Calculator [3]

In Wienke et al. [3], we extended the previous approach [4] to model the productivity of HPC systems at a computing-center scale. Productivity is a quantifiable metric, defined as value per cost, that can be used to make informed decisions on how to invest available budgets.
We have developed an online tool that illustrates our approach and allows other computing centers to model their own productivity. It computes productivity for a varying number of compute nodes, system lifetime, or person-days needed for application tuning. Besides exporting the results in CSV format, the tool also plots the productivity function in 2D or 3D figures.
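The value-per-cost idea behind such a calculator can be sketched in a few lines. The cost breakdown, parameter names, and numbers below are simplified placeholders, not the actual aixH(PC)2 model:

```python
# Illustrative productivity model: value per cost.
# All parameters and the cost split are assumptions for this sketch.

def tco(nodes, lifetime_years, node_cost, node_energy_per_year,
        tuning_person_days, person_day_cost):
    """Simplified total cost of ownership: one-time acquisition and
    development costs plus energy costs over the system lifetime."""
    one_time = nodes * node_cost + tuning_person_days * person_day_cost
    running = nodes * node_energy_per_year * lifetime_years
    return one_time + running

def productivity(nodes, lifetime_years, runs_per_node_year, **costs):
    """Value per cost: application runs completed over the lifetime
    divided by the total cost of ownership."""
    value = nodes * runs_per_node_year * lifetime_years
    return value / tco(nodes, lifetime_years, **costs)

p = productivity(nodes=100, lifetime_years=5, runs_per_node_year=200,
                 node_cost=5000, node_energy_per_year=800,
                 tuning_person_days=60, person_day_cost=500)
```

Sweeping `nodes`, `lifetime_years`, or `tuning_person_days` over ranges yields exactly the kind of 2D/3D productivity surfaces the online tool plots.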
Here you can find the current version 1.1, pre-filled with the parameter values of the case study psOpen [3] for easy usage.
If you have feedback or if you are willing to share your parameter set and results, please let us know. The contact address can be found below.

TCO Spreadsheet [4]

In the context of the work of Wienke et al. [4], we have developed a spreadsheet that computes the costs per application run and derives break-even investments, which can be used to compare different computer architectures and programming models.
Download the TCO spreadsheet: if you are asked for a user name and password, just click 'Cancel' and the spreadsheet will open.
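The break-even idea can be sketched as follows. All numbers and function names are made up for illustration and do not reproduce the spreadsheet's exact formulas:

```python
# Sketch of the break-even reasoning behind the TCO spreadsheet:
# if porting to an accelerated architecture costs extra up front but
# makes each application run cheaper, after how many runs does the
# investment pay off? Values below are purely illustrative.

def cost_per_run(tco_total, runs):
    """Total cost of ownership divided over the runs it enables."""
    return tco_total / runs

def break_even_runs(extra_investment, cost_run_base, cost_run_accel):
    """Runs needed until the per-run saving amortizes the extra investment."""
    saving_per_run = cost_run_base - cost_run_accel
    return extra_investment / saving_per_run

runs = break_even_runs(extra_investment=20000,
                       cost_run_base=50.0, cost_run_accel=30.0)
print(runs)  # 1000.0 runs to amortize the investment
```

Comparing two architectures or programming models then reduces to asking whether the expected workload exceeds this break-even point within the system lifetime.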

Development Effort Methodologies

Since statistical methods for analyzing development effort rely on a large sample size, we hope that the HPC community will contribute to this data-gathering effort by applying our methodology in settings such as student courses, hackathons, or HPC-focused development teams.

Knowledge Survey (KS) [1]

To investigate the impact of developers' prior knowledge and learning progress on development effort, we use so-called knowledge surveys (KS). A knowledge survey is not a test; instead, participants state their confidence (on a 3-point scale) in answering questions on parallel programming. We introduced their usage and analysis in Wienke et al. [1]. Sample KS questions can be found in the following spreadsheet. We group questions into categories such as shared-memory programming, distributed-memory programming, and GPU programming.
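Aggregating the answers of one participant could, for example, look like the following sketch. The scale mapping (1 = cannot answer, 3 = fully confident) and the sample values are assumptions for illustration, not our actual survey data or analysis:

```python
# One participant's 3-point confidence ratings, grouped by the question
# categories named in the text. Ratings here are invented examples.
answers = {
    "shared-memory programming":      [3, 2, 3],
    "distributed-memory programming": [1, 2, 2],
    "GPU programming":                [1, 1, 2],
}

# Mean confidence per category gives a simple pre-knowledge profile;
# repeating the survey later makes learning progress visible.
mean_confidence = {cat: sum(v) / len(v) for cat, v in answers.items()}
```

Comparing such profiles before and after a course or hackathon is one way to relate prior knowledge and learning progress to the effort data logged alongside.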

Please feel free to use these questions for your own knowledge surveys (citing Wienke et al. [1]). We hope to jointly gather sufficient data for more insight into the impact of prior knowledge on development effort. For feedback and cooperation, contact Sandra Wienke (), RWTH Aachen University.

Impact Factors on Development Effort [1]

To identify which impact factors are the key drivers of development time in HPC, we conduct surveys asking participants to rank different impact factors against each other. Our current list comprises 11 impact factors, since our experiments show that people cannot easily handle a larger number of parameters. Nevertheless, the given impact factors might not be the final list of parameters: we also ask for other important impact factors and will adapt the list based on these responses. From interviews with people in the HPC domain, we see that these impact factors may vary across work fields.
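One simple way to aggregate such rankings across participants is by mean rank. The factor names and rankings below are illustrative only, not our actual list of 11 factors or survey data:

```python
from collections import defaultdict

# Each inner list is one participant's ranking, most important first.
# Factor names are invented examples, not the surveyed list.
rankings = [
    ["programming model", "pre-knowledge", "tool support"],
    ["pre-knowledge", "programming model", "tool support"],
    ["programming model", "tool support", "pre-knowledge"],
]

# Collect the rank position each factor received from each participant.
positions = defaultdict(list)
for ranking in rankings:
    for pos, factor in enumerate(ranking, start=1):
        positions[factor].append(pos)

# Lower mean rank = considered more important on average.
mean_rank = {f: sum(p) / len(p) for f, p in positions.items()}
ordered = sorted(mean_rank, key=mean_rank.get)
print(ordered)  # ['programming model', 'pre-knowledge', 'tool support']
```

More robust rank-aggregation schemes (e.g., Borda counts or pairwise comparisons) follow the same pattern; mean rank is just the simplest illustration.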

Feel free to use and adapt the current list of impact factors, summarized in the following spreadsheet (please cite Wienke et al. [1]). We hope to jointly gather sufficient data for more insight into the impact factors on development effort. For feedback and cooperation, contact Sandra Wienke (), RWTH Aachen University.

Related Publications

  1. S. Wienke: Productivity and Software Development Effort Estimation in High-Performance Computing. Dissertation, RWTH Aachen University, Apprimus Verlag 2017, ISBN 978-3-86359-572-2.
  2. F. P. Schneider, S. Wienke, and M.S. Müller: Operational Concepts of GPU Systems in HPC Centers: TCO and Productivity. In: Euro-Par 2017: Parallel Processing Workshops, pp. 452-464, Springer International Publishing (2018)
  3. J. Miller, S. Wienke, M. Schlottke-Lakemper, M. Meinke, and M.S. Müller: Applicability of the Software Cost Model COCOMO II to HPC Projects. International Journal of Computational Science and Engineering (2017)
  4. M. Nicolini, J. Miller, S. Wienke, M. Schlottke-Lakemper, M. Meinke, M.S. Müller: Software Cost Analysis of GPU-Accelerated Aeroacoustics Simulations in C++ with OpenACC. In: M. Taufer, B. Mohr, and J. M. Kunkel (eds.) High Performance Computing: ISC High Performance 2016 International Workshops, ExaComm, E-MuCoCoS, HPC-IODC, IXPUG, IWOPH, P^3MA, VHPC, WOPSSS, Frankfurt, Germany, June 19-23, 2016, Revised Selected Papers, pp. 524-543, Springer International Publishing (2016)

  5. S. Wienke, J. Miller, M. Schulz, M.S. Müller: Development Effort Estimation in HPC. ACM/IEEE International Conference for High Performance Computing, Networking, Storage and Analysis (SC16), November 2016, Salt Lake City, UT, USA.
  6. S. Wienke, T. Cramer, M.S. Müller, M. Schulz: Quantifying Productivity - Towards Development Effort Estimation in HPC. Scientific poster at the International ACM/IEEE International Conference for High Performance Computing, Networking, Storage and Analysis (SC15), November 2015, Austin, TX, USA.
  7. S. Wienke, H. Iliev, D. an Mey, M.S. Müller: Modeling the Productivity of HPC Systems on a Computing Center Scale. In: J. Kunkel, T. Ludwig (eds.) ISC 2015, Lecture Notes in Computer Science, vol. 9137, pp. 358-375. Springer International Publishing (2015)
  8. S. Wienke, D. an Mey, M.S. Müller: Accelerators for Technical Computing: Is It Worth the Pain? A TCO Perspective. In: J. Kunkel, T. Ludwig, H. Meuer (eds.) ISC 2013, Lecture Notes in Computer Science, vol. 7905, pp. 330-342. Springer Berlin Heidelberg (2013)



+49 241 80 24761