
Seminar Advanced Topics in Parallel Computing (SS19)


General Information

Important Information
Topics will be assigned on 29 April 2019.
SCC Süd 20.21 - R217
15:45 - 17:15

Description


Efficient use of high-end supercomputing resources for simulations of phenomena from physics, chemistry, biology, financial modelling, neural networks, or signal processing is only possible if the corresponding applications are designed using modern, advanced computational methods in parallel programming. Often, an application's ability to use the newest computing hardware, such as accelerators or high-speed transmission technology, plays a central role in being granted access to large supercomputers.

Furthermore, improving the existing algorithms of simulation codes with advanced parallelization techniques can yield crucial efficiency gains: either in time, by simply speeding up the generation of results, or even in energy, when the optimised application can produce the same results by redistributing the main computation to low-power components such as graphics co-processors, local disks, or caches.

Students attending this seminar will be assigned topics related to up-to-date technology in the field of advanced parallel programming for distributed- and shared-memory systems, using MPI, OpenMP, CUDA, OpenCL, and OpenACC. Tools for analysing the scalability, efficiency, and runtime behaviour of an application will also be studied, and topics in parallel file systems and high-speed communication may be investigated.

The following topics can be chosen:

Parallelization strategies for neural networks
Parallel Computing with MATLAB
OpenACC Programming on Graphics Cards
Code Parallelisation with OpenMP
OpenMP 4.0 - Programming Standard for CPUs and GPUs
High-Order Asynchronous Finite Difference Schemes
AVX SIMD
Can hardware performance counters reliably be used to detect performance patterns?
Swapping the out-vertices and the in-vertices of a graph in GPU-accelerated data analytics
Implementing single-source breadth-first search, multi-source breadth-first search, and weighted breadth-first search on a graph
Fast and Accurate Summation with Finite Precision - Theory and Practice
ROCm, a New Era in Open GPU Computing
Efficient data layouts and interpolation schemes for particle-based simulation methods: CPU and GPU
Available programming models for GPUs: alternatives to CUDA
VR from the web: WebVR for the visualization of scientific data

General

Language
German
Copyright
All rights to this work are reserved by the owner.

Availability

Access
Unlimited
Admittance
You have to request membership to access this course. Please describe your interest in becoming a member in the message form. You will be notified as soon as an administrator has accepted or declined your request.
Registration Period
Unlimited

Visible Personal Data for Course Administrators

Data Types of the Personal Profile
Username
First Name
Last Name
E-Mail
Matriculation number

Additional Information

Object-Id
1317007
Permanent Link