List of Projects
Heuristics for Heterogeneous Memory (H2M)
In the DFG-funded project "Heuristics for Heterogeneous Memory" (H2M), RWTH Aachen University and the French project partner Inria are jointly developing support for new memory technologies such as High Bandwidth Memory (HBM) and Non-Volatile Memory (NVRAM). These technologies are increasingly used in HPC systems as additional memory alongside the common DRAM. To use them, applications currently have to be heavily modified and must rely on platform- or manufacturer-specific APIs.
H2M pursues the goal of providing portable interfaces to identify the available memories in a system and their properties, and to access them. Based on this, abstractions and heuristics are developed that give application developers and runtime systems control over which memory to place data in and when to move data between the different types of memory.
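As a deliberately simplified illustration of the kind of placement heuristic described above, the following C sketch assigns data to a memory tier based on its size and access intensity. The tier names, thresholds, and the function itself are hypothetical assumptions for this example and are not part of the H2M interfaces:

```c
#include <stddef.h>

/* Hypothetical memory tiers; real systems expose them through the
 * platform- or manufacturer-specific APIs that H2M aims to abstract. */
typedef enum { TIER_DRAM, TIER_HBM, TIER_NVRAM } mem_tier_t;

/* Toy heuristic: bandwidth-hungry hot data goes to HBM, large and
 * rarely touched data to NVRAM, everything else stays in DRAM.
 * The thresholds are illustrative assumptions, not measured values. */
static mem_tier_t choose_tier(size_t bytes, double accesses_per_byte) {
    if (accesses_per_byte > 10.0)
        return TIER_HBM;    /* bandwidth-critical working set */
    if (bytes > ((size_t)1 << 30) && accesses_per_byte < 0.1)
        return TIER_NVRAM;  /* large, cold data */
    return TIER_DRAM;       /* default tier */
}
```

A real heuristic would additionally have to account for the capacities actually available in each tier and for data already placed there, which is part of what makes the problem interesting.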
Further information is available at the project web page.
Interactive collaborative parallel programming (IkapP)
Within the framework Fellowships for Innovation in Digital Higher Education NRW, the Stifterverband has accepted the project of Lena Oden (FernUniversität in Hagen) and Christian Terboven (RWTH Aachen) for funding.
Parallel programming and data processing on multicore and manycore architectures is becoming increasingly important in many subjects, such as deep learning or large simulations. Unfortunately, students' motivation to use high-performance computers is limited by many technical hurdles, which often leads to passive participation in the practical exercises of the courses. The aim of the project is therefore to create a browser-based environment with suitable teaching units. This enables easier access to the systems and supports students with appropriate teaching modules so that they can learn the concepts of parallel programming independently, without having to deal with the technical details of the systems beforehand. The platform is also intended to enable collaborative work, so that students at different locations can work together. Through interactive visualization and targeted incentives such as "performance competitions", students are also to be motivated to develop their own, better solutions.
Further information is available on the website of the Stifterverband.
Development of a Scalable Data-Mining-based Prediction Model for ICT and Power Systems – ScaMPo
The complexity of power grids and, at the same time, of supercomputers is permanently increasing. In particular, the growing share of renewable energy in power generation implies fundamental changes in the power grid: a growing number of participants is able to both produce and consume power, so every behavioral change influences the grid.
In the area of supercomputing, system complexity increases as well. For instance, the new RWTH cluster CLAIX, scheduled to start operation in November 2016, will have 600 two-socket compute nodes, and each CPU will have 24 cores (48 cores including hyperthreading). Parts of the system will be accelerated by GPUs, and the power consumption of the whole system as well as of each component will be continuously monitored. Changes to the software stack and any replacement of a defective hardware component influence the performance, the power consumption, and also the failure rate. Forecasting the impact of such changes and their long-term effects, such as the reduction of failure rates, is especially challenging.
Data mining, in principle a computational process of discovering patterns in large data sets, is the key technology for handling such complex systems and, in general, one of the key technologies for the digital society. Based on the two examples, the handling of complex power grids and of supercomputers, the project ScaMPo creates a scalable framework to collect the data and store it in a cloud infrastructure. Afterwards, the data will be analyzed, patterns will be discovered, and the understanding of the systems will be improved. In the case of supercomputers, the operating costs will be reduced, while in the power grid, stability and the penetration of renewable energy will be increased. This project will not develop new data mining techniques.
Rather, the project will build on open-source approaches for data mining and focus on the strength of the project partners: the design of a scalable and robust approach. The long-term vision of the project is to generalize the approach to other research areas and to create a competence center for scalable data mining technologies.
Jülich Aachen Research Alliance (JARA)
The Jülich Aachen Research Alliance, JARA for short, a cooperation between RWTH Aachen University and Forschungszentrum Jülich, provides a research environment in the top international league that is attractive to the brightest researchers worldwide. In six sections, JARA conducts research in translational brain medicine, nuclear and particle physics, soft matter science, future information technologies, high-performance computing, and sustainable energy.
The RWTH Aachen IT Center provides support to the JARA-HPC section, which first and foremost aims to contribute to making full use of the opportunities offered by high-performance computers and computer simulations to address current research issues. Furthermore, it seeks to provide a joint infrastructure for research and teaching in the fields of high-performance computing and visualization.
MUST Correctness Checking for YML and XMP Programs – MYX
Exascale systems challenge the programmer to write multi-level parallel programs, that is, to employ multiple different paradigms to address each individual level of parallelism in the system. The long-term challenge is to evolve existing programming models and to develop new ones that better support application development on exascale machines. In the multi-level programming paradigm FP3C, users express high-level parallelism in the YvetteML workflow language (YML) and employ parallel components written in the XcalableMP (XMP) paradigm. XMP is a PGAS language specified by Japan's PC Cluster Consortium for high-level programming and the main research vehicle for Japan's post-petascale programming model research targeting exascale. YML is used to describe the parallelism of an application at a very high level, in particular to couple complex applications. By developing correctness checking techniques for both paradigms, and by investigating the fundamental requirements to first design for and then verify the correctness of parallelization paradigms, MYX aims to combine the know-how and lessons learned from different areas to derive the input necessary to guide the development of future programming models and software engineering methods.
In MYX, we will investigate the application of scalable correctness checking methods to YML, XMP, and selected features of MPI. This will result in clear guidelines on how to limit the risk of introducing errors and how to best express parallelism so as to catch errors that, for fundamental reasons, can only be detected at runtime, as well as in extended and scalable correctness checking methods.
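The class of errors that only a runtime tool can catch may be illustrated with a toy message-matching checker. MUST itself intercepts real MPI calls (via the PMPI profiling interface) and scales the analysis across distributed processes, so the following is only a conceptual sketch with made-up function names:

```c
/* Toy runtime checker: record posted "sends" by tag, match incoming
 * "receives" against them, and report anything left unmatched.
 * Whether a receive can match depends on runtime values, which is
 * exactly why static analysis alone cannot catch such errors. */
#define MAX_PENDING 64

static int pending_tags[MAX_PENDING];
static int n_pending = 0;

static void record_send(int tag) {
    if (n_pending < MAX_PENDING)
        pending_tags[n_pending++] = tag;
}

/* Returns 1 if a matching send existed, 0 for an unmatchable receive. */
static int record_recv(int tag) {
    for (int i = 0; i < n_pending; i++) {
        if (pending_tags[i] == tag) {
            pending_tags[i] = pending_tags[--n_pending];
            return 1;
        }
    }
    return 0;
}

/* Sends still pending at "finalize" hint at lost messages or deadlock. */
static int unmatched_sends(void) { return n_pending; }
```

A real checker must additionally track communicators, datatypes, and non-blocking request handles, and must do so with low overhead at scale, which is the research core of MUST.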
A Task-based Programming Environment to Develop Reactive HPC Applications – Chameleon
The architecture of HPC systems is becoming increasingly complex. The BMBF-funded Chameleon project is dedicated to the aspect of dynamic variability in HPC systems, which is constantly increasing. Today's programming approaches are often not suitable for highly variable systems and, in the future, may be able to exploit only part of the true performance capabilities of modern systems.
To this end, Chameleon develops a task-based programming environment that is better prepared for systems with dynamic variability than bulk synchronous programming models commonly used today. Results from Chameleon are expected to influence the OpenMP programming model.
The ability of the Chameleon runtime environment to react to dynamic variability is evaluated using two applications. SeisSol simulates complex earthquake scenarios and the resulting propagation of seismic waves. Its parallel processing at the node level is based on an explicitly implemented task queue that takes into account priority relations between the tasks. Chameleon's reactive task-based implementation is designed to simplify this task queue and improve scaling. sam(oa)² enables finite volume and finite element simulations on dynamically adaptive triangular grids. It implements load balancing with the help of space-filling curves and can be used, among other things, for the simulation of tsunami events. Chameleon will enable dynamic execution of tasks on remote MPI processes and develop a reactive infrastructure for general 1D load balancing problems.
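The 1D load balancing idea mentioned above can be sketched as follows. Note that sam(oa)² uses a Sierpinski curve on triangular grids, while this illustration uses the simpler Morton (Z-order) curve on a square grid, and the greedy partitioning scheme shown is a generic textbook approach, not the project's actual algorithm:

```c
#include <stdint.h>

/* Interleave the bits of (x, y) into a Morton (Z-order) index:
 * nearby cells in 2D receive nearby indices along the 1D curve,
 * which turns grid partitioning into a 1D load balancing problem. */
static uint32_t morton2d(uint16_t x, uint16_t y) {
    uint32_t z = 0;
    for (int b = 0; b < 16; b++) {
        z |= (uint32_t)((x >> b) & 1) << (2 * b);
        z |= (uint32_t)((y >> b) & 1) << (2 * b + 1);
    }
    return z;
}

/* Greedy 1D partition: walk the cells in curve order and advance to
 * the next rank once the running load passes that rank's ideal share. */
static void partition(const int *load, int n, int ranks, int *owner) {
    int total = 0;
    for (int i = 0; i < n; i++) total += load[i];
    double share = (double)total / ranks, acc = 0.0;
    int r = 0;
    for (int i = 0; i < n; i++) {
        if (acc >= share * (r + 1) && r < ranks - 1)
            r++;                 /* start the next rank's chunk */
        owner[i] = r;
        acc += load[i];
    }
}
```

Because each rank owns a contiguous segment of the curve, rebalancing after a load change only shifts the cut points, which is what makes this formulation attractive for reactive runtimes.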
Further information can be found on the project homepage.
Process-oriented Performance Engineering – ProPE
ProPE is a project funded by the German Research Foundation (DFG) from 2017 to 2020. It aims at developing a blueprint for a sustainable, structured, and process-oriented service infrastructure for performance engineering (PE) of high-performance applications in German tier-2 and tier-3 scientific computing centers.
The vision of ProPE is to have a nationwide support infrastructure which allows application scientists to develop and use code with provably optimal hardware resource utilization on high performance systems, thus reducing IT costs of scientific progress.
Further information can be found on the project homepage.
Performance, Optimization and Productivity - POP
Using performance analysis tools and optimizing code for HPC architectures is a cumbersome task that often requires in-depth HPC expertise. Given the current trends in computing architectures toward accelerators, more cores, and deeper memory hierarchies, this complexity will increase further in the foreseeable future. The POP (Performance, Optimization and Productivity) project therefore offers performance analysis and optimization services to code developers in industry and academia, connecting code developers with HPC experts. This allows performance optimization to be integrated into the software development process of HPC applications. The POP project gathers experts from the Barcelona Supercomputing Center (BSC), the High Performance Computing Center Stuttgart (HLRS), the Jülich Supercomputing Centre (JSC), the Numerical Algorithms Group (NAG), TERATEC, and the IT Center of RWTH Aachen University.
POP is one of the eight Centers of Excellence in HPC supported by the European Commission within Horizon 2020.
Further information on the project and how you can engage the services of POP can be found here.