Blog

  • When the Dam Breaks

    SIDE as a Matchmaker Facilitating CIRCE-JSC Collaboration for HPC-Based Flood Research in Duisburg

    The loss of life and property brought on by unforeseen events or calamities, whether man-made or natural, has never been easy for humanity to prevent. It may be clear where the highest risk areas are, such as residential areas near a river bank, but knowing in advance what measures to take and how to strike the ideal balance between safety and missed opportunities is open to interpretation. This struggle is further complicated by the advent of climate change, which increases both the severity and unpredictability of extreme weather.

High-performance computing (HPC) has the potential to help humanity minimize these catastrophe-related losses in life and property by simulating scenarios accurately enough to quantify the trade-offs between safety and missed opportunities. As part of the ‘Computational Immediate Response Centre for Emergencies (CIRCE)’ project, the High-Performance Computing Center Stuttgart (HLRS) and the Duisburg Fire Service developed a simulation to predict flooding on the Rhine. Meanwhile, the Simulation and Data Laboratory Terrestrial Systems at the Jülich Supercomputing Centre (JSC) and HLRS’s Department of Numerical Methods and Libraries both aimed to use HPC-powered numerical simulations to analyze flood scenarios and identify high-risk evacuation areas.

Recognizing this shared objective, SIDE, as a member of the EuroCC2 project, brought these two research groups together. By fostering information exchange, SIDE enabled the development of a more effective solution for predicting the impact of dam break scenarios in Duisburg, Germany.

    The simulations are performed in two phases. The first phase is called the flood test phase, in which the water flow is simulated under normal conditions to achieve a steady-state flow, i.e., when there is no temporal evolution of water depth or velocity. This first phase provides realistic initial conditions for the state of the river prior to the dam break simulation. Figure 1 shows some results of such a simulation. The second phase then introduces a breach in the dam, where a surge of water is simulated entering the downstream areas, as shown in Figure 2.
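The steady-state criterion of the first phase can be checked numerically: once the boundary mass flux stops evolving in time, the flood test phase is complete. A minimal sketch of such a check, with a hypothetical tolerance and synthetic data rather than the project's actual tooling:

```python
import math

def is_steady_state(mass_flux, window=100, tol=1e-3):
    """Flag steady state when the relative spread of the boundary
    mass-flux time series over the last `window` samples drops
    below `tol` (hypothetical convergence criterion)."""
    recent = mass_flux[-window:]
    mean = sum(recent) / len(recent)
    if mean == 0.0:
        return False
    return (max(recent) - min(recent)) / abs(mean) < tol

# Synthetic example: a flux that decays toward a constant inflow value.
flux = [50.0 + 10.0 * math.exp(-t / 200.0) for t in range(2000)]
print(is_steady_state(flux))        # flattened tail -> True
print(is_steady_state(flux[:300]))  # still decaying  -> False
```

In practice the solver would evaluate such a criterion on the inlet, outlet, and south boundaries shown in Figure 1 before switching to the dam break phase.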

Side-by-side outputs of a flood simulation. Left: a time-series plot of the mass flux of water versus time with three colored lines and a legend: Inlet (blue), South (orange), and Outlet (green). The orange South line lies highest and decreases slowly over the course of the simulation, while the green Outlet line lies much lower and shows only small fluctuations; the blue Inlet line lies close to the green one and is hard to distinguish. Right: a flood map with a dark blue, meandering river and surrounding terrain colored from green through yellow to red, labeled “South” at the top left, “Outlet” at the top right, and “Inlet” at the bottom. A vertical color bar on the right indicates that the colors represent different flood-related simulation values, but its numeric labels are too small to read in this image, so the exact meaning and direction of the color scale cannot be confirmed from this file.
FIGURE 1: (Left) Net mass flux of water flowing at the boundaries of the computational domain (inlet, outlet, and south), which remains almost constant over time, indicating that steady state has been achieved. (Right) Visualization of the flood test phase under steady-state conditions; black boxes mark the boundaries of the computational domain (south, outlet, and inlet, respectively) where the water flows.
A scale tabletop model illustrating the downstream spread of flooding in residential areas after a dam break. The bright green base represents the terrain surface, and numerous small white rectangular blocks of varying size and height represent houses and other buildings. The dense grid of gaps between the blocks forms street-like channels, showing how floodwater released by the breach can flow downstream, spread through the neighborhoods, and pool in low-lying or constricted areas as it passes between the buildings.
    FIGURE 2: Visualization of downstream flood propagation in residential areas following a dam breach.

Figure 2 shows the visualization of downstream flood propagation following a dam breach. The dam breach point is marked with a black circle. The red-colored region indicates the river, and the green-colored region shows the water flowing downstream through residential areas. A small blue patch near the dam breach point marks ground that the water has not yet reached.

Another outcome of this SIDE matchmaking was the evaluation of two open-source codes, SERGHEI and OpenFOAM, for performing numerical simulations of dam break scenarios. Although OpenFOAM offers reasonable results, SERGHEI was chosen to model the dam break use case because of its high degree of accuracy, optimized performance, and ability to run on graphics processing units (GPUs), which can be significantly faster than the central processing units (CPUs) typically used by OpenFOAM. Table 1 shows the difference in simulation times for a flood test phase using CPUs and GPUs. The SERGHEI simulation runs roughly eight times faster on the GPU, which saves energy both through the shorter runtime and because GPUs are more energy efficient than CPUs for this type of simulation.

TABLE 1: Runtime of the flood test phase simulated with SERGHEI on CPU vs. GPU.

    Resource used by SERGHEI                 Runtime
    CPU (AMD EPYC 7742 64-core processor)    18,995.9 s
    GPU (NVIDIA A100-SXM4-40GB)              2,344.27 s
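The GPU speedup follows directly from the runtimes in Table 1; a small sketch of the arithmetic:

```python
# Runtimes from Table 1 (flood test phase with SERGHEI).
cpu_runtime_s = 18995.9   # AMD EPYC 7742 64-core processor
gpu_runtime_s = 2344.27   # NVIDIA A100-SXM4-40GB

speedup = cpu_runtime_s / gpu_runtime_s
print(f"GPU speedup: {speedup:.1f}x")  # -> GPU speedup: 8.1x
```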

    Thanks to SIDE matchmaking, these German research groups not only strengthened their collaboration but also shared their expertise, solved their problem faster, and made better use of HPC and energy resources. Their collaboration is still ongoing, with the potential for future scientific and academic benefits as they learn more about making simulations faster and more energy efficient. Moreover, collaborations like this could eventually help better guide local authorities through their planning for similar unforeseen events and crises.

  • Post-Event Report – 2nd Forum for Supercomputing & Future Technologies

    Services & Applications for Industry and Public Institutions

    On October 21, 2025, the High-Performance Computing Center Stuttgart (HLRS) hosted the second Forum for Supercomputing & Future Technologies. Under the motto “Services & Applications for Industry and Public Institutions,” experts from research, industry, and the public sector came together to explore how high-performance computing (HPC) is driving digital innovation and transformation across domains.

After a warm welcome by Dr. Andreas Wierse (SIDE / SICOS BW GmbH), the day began with industrial use cases highlighting the digital transformation of SMEs. Erwin Schnell (AeroFEM GmbH) opened with “Der Weg ist das Ziel” (“the journey is the destination”), illustrating how small and medium-sized enterprises can leverage simulation and HPC to navigate the path toward digital maturity. Dr. Andreas Arnegger (OSORA Medical GmbH) followed with an impressive insight into HPC-assisted therapy planning for bone fracture treatment, showing how computational power directly benefits patient care.

    In another striking example, Dr. Sebastian Mayer and Dr. Andrey Lutich (PropertyExpert GmbH) demonstrated how AI-based image recognition is revolutionizing automated invoice verification – a clear intersection between data science and high-performance computing.

After a short coffee break, Paul von Berg (Urban Monkeys GmbH / DataMonkey) shared his experience fine-tuning a geospatial LLM on HPC systems, sparking lively discussions among attendees. Daniel Gröger (alitiq GmbH) presented an FFplus-supported project using machine learning for short-term PV power forecasting, followed by Dr. Xin Liu (SIDE / Jülich Supercomputing Centre), who showcased dam-break simulations and German Bight operation models – tangible examples of HPC applications in the public sector.

    Before lunch, several key initiatives were introduced, including SIDE, FFplus, JAIF, HammerHAI, EDIH Südwest and EDIH-AICS. Together, they illustrated how research, funding, and industry are closely collaborating to enhance digital innovation and technological sovereignty in Germany and Europe.

The afternoon program combined practical experience with networking. Participants could either join Speeddating with HPC, AI, and funding experts or take a data center tour to see HLRS infrastructure in action. Later sessions included one-on-one expert consultations, a hands-on workshop “How to Use a Supercomputer: The Basics” by Dr. Maksym Deliyergiyev, and a visualization workshop led by the HLRS Visualization Department, where participants experienced immersive data environments.

    In closing, Dr. Andreas Wierse offered a look ahead to upcoming SIDE and EuroCC activities, emphasizing the growing role of collaboration and accessibility in supercomputing. The forum once again proved that HPC is no longer an exclusive domain of research institutions but a practical tool for innovation in both industry and the public sector.

    The morning program of the second SIDE Forum can now be viewed below.

    Watch video

  • HPC for AI-based trading robots: A success story with Smart-Markets GmbH

    Technical/scientific Challenge

In the ever-changing financial markets, adaptability and innovation are crucial for sustained success. Smart-Markets GmbH is an SME that develops and offers automated trading robots for medium- to long-term stock trading and foreign exchange (forex) day trading. Since market dynamics change over time, the performance of a trading algorithm diminishes when it cannot adapt to market changes. Maintaining the continuous effectiveness of the trading robots is therefore one of the major challenges for Smart-Markets, currently requiring continuous back-testing and recalibration of the trading robot algorithms.

    Solution

To address this challenge, Smart-Markets collaborated with SIDE in a Proof-of-Concept (PoC) study to explore using advanced machine learning techniques, specifically reinforcement learning, to improve the adaptability of their trading robots. As shown in Figure 1, the robot traded the EUR/USD currency pair. More than 10 years of high-frequency tick data, which records every price change in trading, were used for the training and the subsequent test-trading of the agent.

Figure 2 depicts the results for a simplified scenario in which no trading fee was applied to the transactions. After an initial random-action phase in the first years of trading, during which the net worth of 100,000 USD did not change significantly, the agent started making its own trading decisions. Evidently, the predictions of the agent were sufficient to achieve a continuous profit over several years of trading, even in periods of overall negative trends.
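The agent–environment loop behind such a robot follows the standard reinforcement-learning pattern: the agent observes the market, acts, and is rewarded by the change in its net worth. A minimal, self-contained sketch of that loop, where the environment, action set, and synthetic price series are simplified stand-ins and not Smart-Markets' actual model:

```python
import random

class ToyForexEnv:
    """Toy stand-in for the trading environment: the agent holds
    either cash (USD) or a EUR position, fee-free as in Figure 2."""

    def __init__(self, prices, start_cash=100_000.0):
        self.prices = prices
        self.start_cash = start_cash

    def reset(self):
        self.t = 0
        self.cash = self.start_cash
        self.position = 0.0  # units of EUR held
        return self.prices[0]

    def step(self, action):
        # action: 0 = hold, 1 = buy all-in, 2 = sell everything
        price = self.prices[self.t]
        if action == 1 and self.cash > 0:
            self.position = self.cash / price
            self.cash = 0.0
        elif action == 2 and self.position > 0:
            self.cash = self.position * price
            self.position = 0.0
        self.t += 1
        next_price = self.prices[self.t]
        # Reward = change in net worth caused by the price move.
        reward = self.position * (next_price - price)
        done = self.t == len(self.prices) - 1
        return next_price, reward, done

# Synthetic EUR/USD tick series (random walk), not real market data.
random.seed(0)
prices = [1.10]
for _ in range(500):
    prices.append(prices[-1] * (1 + random.gauss(0, 0.001)))

env = ToyForexEnv(prices)
obs, done = env.reset(), False
while not done:
    obs, reward, done = env.step(random.choice([0, 1, 2]))  # random policy
print(f"final net worth: {env.cash + env.position * prices[-1]:,.2f} USD")
```

In the actual PoC, a learned policy replaces the random action choice, and the observation would carry richer tick-level features than a single price.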

Diagram showing an interaction loop between two labeled boxes: the agent and the trading environment.
Figure 1: AI agent using reinforcement learning to trade the EUR/USD currency pair.
Figure 2: Net worth of the trading robot over time (left) and the course of the EUR/USD training data (right).

    Benefits 

    • SIDE helped Smart-Markets leverage HPC resources for processing and analyzing large-scale, high-frequency financial data.
• The PoC enabled the testing of AI-based trading robots, which could be adapted to changing market conditions within Smart-Markets' trading strategies.
    • This PoC serves as a model for exploring broader adoption of advanced computing in the financial sector and beyond.

    Results

With AI expertise provided by SIDE, this PoC allowed Smart-Markets to explore a new technology without first needing to acquire AI experience. The results show that an AI-based trading robot has the potential to trade profitably over multiple years by dynamically adapting to market changes in real time. Within the scope of this project, however, it was not possible to train a robot that makes a profit in realistic scenarios where a fee is charged for each action. To adapt the trading robot to realistic scenarios in the future, the scope of this PoC could be significantly expanded, for example by incorporating data from several trading prices into the training model.

  • State-of-the-art advancements in quantitative MRI using HPC

    Technical/scientific Challenge

Quantitative MRI (qMRI) measures underlying MRI parameters, enhancing sensitivity to physiological changes and enabling reliable test-retest comparability, so that observed changes reflect true physiological differences rather than scanner variability. However, translating qMRI to ultra-high-field (UHF) MRI, which produces higher-resolution images in shorter acquisition times, entails increased field inhomogeneities and a higher specific absorption rate. Novel methods developed at INM-4 address these challenges but incur significantly higher reconstruction complexity and prohibitively long reconstruction times.

    Solution

    To address these prohibitively long reconstruction times, INM-4, in collaboration