Lecture Concepts and Models of Parallel and Data-centric Programming



We will implement the flipped-classroom concept: the lecture has been broken up into subsections of 20-30 minutes each, which will be provided here as videos, together with supplemental material, quizzes, and so forth. The scheduled lecture slots will be used as interactive question-and-answer sessions via Zoom, addressing the lecture parts made available during the preceding week.


The lecture will start on Monday, April 19th, at 10:30 a.m. and will take place via Zoom videoconferencing. The link will be published in the Moodle course room. Please register via RWTHonline for the lecture to get access to the course room (see below).

Participation in any Zoom meeting is optional.

Q&A Sessions

  • Mon, 10:30 a.m. - 12:00 p.m.
  • Thu, 10:30 a.m. - 12:00 p.m.


Registration for the lecture is organized via RWTHonline.


Materials are published in the Moodle course room. You get access by registering via RWTHonline.


Traditional simulation is increasingly enhanced by modeling based on (large) data sets. Methods and tools of parallel programming foster the setup, evaluation, and adoption of compute-intensive and complex models. In the parallel processing of large amounts of data, novel programming models offer additional functionality and performance improvements by adopting data-centric approaches instead of focusing on computations. This lecture presents the corresponding programming models, selected algorithms, and tools used in high-performance computing and the data sciences.
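As a small illustration of the data-centric style mentioned above, here is a minimal MapReduce-like word count in plain Python. The phase names (`map_phase`, `shuffle`, `reduce_phase`) are illustrative, not part of the course material; real frameworks distribute these phases across a cluster:

```python
from collections import defaultdict
from itertools import chain

def map_phase(line: str):
    # Map: emit a (word, 1) pair for each word in the input line.
    return [(word, 1) for word in line.split()]

def shuffle(pairs):
    # Shuffle: group the emitted values by key (word).
    groups = defaultdict(list)
    for key, value in pairs:
        groups[key].append(value)
    return groups

def reduce_phase(groups):
    # Reduce: sum the counts collected for each word.
    return {word: sum(counts) for word, counts in groups.items()}

lines = ["to be or not to be", "to parallelize or not"]
counts = reduce_phase(shuffle(chain.from_iterable(map(map_phase, lines))))
print(counts["to"])  # 3
```

The point of the data-centric formulation is that each phase operates on independent records or key groups, so a framework can parallelize it without the programmer writing explicit synchronization.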

The objectives of the lecture are to classify parallel systems used in high-performance computing and in the processing of large data sets (big data), and to understand the optimization and parallelization concepts needed to exploit these systems. This includes knowledge and comprehension of various programming models and their implementation, of selected parallel algorithms, and of their respective complexity characteristics and scalability limits.


  • Architecture of parallel computers (clusters) for the application in high-performance computing and in the processing of large amounts of data (big data)
  • Parallel programming models: instruction level, accelerators, shared memory, distributed memory, MapReduce concept
  • Parallel processing of I/O
  • Synchronization concepts used by parallel programming models
  • Realization of commonly used (abstract) data types in parallel programming models
  • Selected parallel algorithms from different application areas
  • Modeling of parallelism (speedup, efficiency, scalability limits) and performance
  • Further selected topics
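As a taste of the performance-modeling topic listed above (speedup, efficiency, scalability limits), the following sketch evaluates Amdahl's law; the function names and the 5% serial fraction are illustrative assumptions, not material from the course:

```python
def amdahl_speedup(serial_fraction: float, p: int) -> float:
    """Amdahl's law: speedup on p processors when a fraction of
    the work (serial_fraction) cannot be parallelized."""
    return 1.0 / (serial_fraction + (1.0 - serial_fraction) / p)

def efficiency(serial_fraction: float, p: int) -> float:
    """Parallel efficiency: speedup divided by processor count."""
    return amdahl_speedup(serial_fraction, p) / p

# With 5% serial work, speedup saturates well below p
# and efficiency drops as processors are added:
for p in (1, 4, 16, 64):
    print(p, round(amdahl_speedup(0.05, p), 2), round(efficiency(0.05, p), 2))
```

Note the scalability limit: for any processor count, the speedup is bounded by 1/serial_fraction (here at most 20x), which is exactly the kind of limit the lecture's performance models make explicit.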