"Flussdiagramm, das den Prozess des Zugangs und der Nutzung eines HPC-Systems (High-Performance Computing) darstellt. Das Diagramm ist in mehrere Abschnitte unterteilt, die jeweils durch ein Kästchen mit Symbolen und Text repräsentiert werden. 1. Der linke Abschnitt trägt den Titel ""NCC-Kontakt"" und enthält Symbole von einer Person und einem Headset. Es werden die initiale Beratung, Bedarfsanalyse und die Bestimmung geeigneter HPC-Lösungen erwähnt. 2. Der nächste Abschnitt, ""Login HPC-System,"" zeigt ein Symbol einer Person am Computer. Es wird der Zugang zum System über SSH und webbasierte Portale besprochen, sowie die interaktive Arbeit auf Login-Knoten, meist über Kommandozeile/Linux. 3. ""Daten-Management"" ist der folgende Abschnitt, mit Symbolen für Datenspeicherung. Es werden Datenübertragungsmethoden wie SCP und webbasierte Portale behandelt, sowie Speichersysteme für Dokumente, Software und parallele Dateisysteme für HPC-Jobs. 4. ""Post-Processing"" enthält Symbole von Servern, einer Cloud und einem Auge. Es wird über die Weiterverarbeitung der Daten in Workflows auf HPC- oder Cloud-Systemen und die Visualisierung gesprochen. 5. Der rechte Abschnitt, ""Ergebnis,"" zeigt eine 3D-Visualisierung eines Turbinen- oder ähnlichen Objekts, das das Ergebnis des HPC-Prozesses darstellt. Unter den Hauptabschnitten gibt es zusätzliche Kästchen mit den Titeln ""Bereitstellung,"" ""Projektstart,"" ""Software-Umgebung,"" und ""Job-Management,"" die jeweils Schritte wie die Projekterstellung, Softwareanforderungen und das Job-Management im HPC detailliert beschreiben. Das gesamte Diagramm ist durch einen horizontalen Pfeil verbunden, der mit ""Wie erhalte ich Zugang zu HPC? Wie nutze ich das HPC-System?"" beschriftet ist und den Fluss vom Interesse an HPC bis zum Zugang und der Nutzung des Systems anzeigt."

Welcome to our information page on using high-performance computers! Here you will find information on various topics in the world of high-performance computing.

Please note: This page collects links and information on specific topics. If you have individual questions or need more basic information, please visit our Trainings or use our contact form.

HPC access

HPC resources – typically specified in core hours – are available not only for science, but also for SMEs, industry and the public sector. Scientists have the opportunity to apply for HPC resources from public funding. Explanations of this are linked in this category. We are happy to advise users outside the scientific sector on access options – simply contact us using the contact form.

Once the HPC resources have been granted, the systems can be used. Depending on requirements and experience, this is done either classically via the command line over a secure connection with Secure Shell (SSH) or interactively, for example via JupyterLab. Login typically takes place via a front end (login node), from which so-called jobs are submitted; in these, the user-defined tasks are processed with the requested resources (compute nodes/cores, run time). Both the jobs and the entire project can be monitored continuously (job monitoring, user portal). Explanations and examples of technical HPC access are listed below.
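
For illustration, the sketch below submits such a job from Python. It assumes a Slurm batch system; the actual scheduler, partition name and resource limits are system-specific and are documented by each centre.

    import subprocess
    import tempfile

    # A minimal batch script: the directives request the resources
    # (nodes/cores, wall time). "mypartition" is a placeholder that
    # must be replaced with a partition of the actual HPC system.
    batch_script = """#!/bin/bash
    #SBATCH --job-name=hello-hpc
    #SBATCH --nodes=1
    #SBATCH --ntasks-per-node=4
    #SBATCH --time=00:10:00
    #SBATCH --partition=mypartition

    srun hostname
    """

    # Write the script to a file and hand it to the scheduler.
    with tempfile.NamedTemporaryFile("w", suffix=".sh", delete=False) as f:
        f.write(batch_script)
        script_path = f.name

    # sbatch prints the job ID on success; squeue/sacct can then be
    # used for monitoring, alongside the web-based user portals.
    result = subprocess.run(["sbatch", script_path], capture_output=True, text=True)
    print(result.stdout or result.stderr)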

JSC | LRZ | HLRS
Requirements (computing time project, JUDOOR account), instructions for SSH login, GUI via X-forwarding, PuTTY, VNC (JUWELS, JURECA), or via JupyterLab (slides and video)
Job monitoring and user portal
Example of use:
Deep Learning on Supercomputers – An Introduction (MNIST tutorial with Keras/TensorFlow, adaptation for distributed training with Horovod; Git repo)
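
The following is a minimal sketch of the Horovod adaptation mentioned above: a small Keras MNIST model whose optimizer is wrapped for distributed training. It is not the tutorial code itself; exact APIs and versions are documented in the linked Git repo.

    import tensorflow as tf
    import horovod.tensorflow.keras as hvd

    hvd.init()

    # Pin each worker process to one GPU, if GPUs are present.
    gpus = tf.config.experimental.list_physical_devices("GPU")
    if gpus:
        tf.config.experimental.set_visible_devices(gpus[hvd.local_rank()], "GPU")

    (x_train, y_train), _ = tf.keras.datasets.mnist.load_data()
    x_train = x_train[..., None] / 255.0

    model = tf.keras.Sequential([
        tf.keras.layers.Flatten(input_shape=(28, 28, 1)),
        tf.keras.layers.Dense(128, activation="relu"),
        tf.keras.layers.Dense(10, activation="softmax"),
    ])

    # Scale the learning rate by the number of workers and wrap the
    # optimizer so gradients are averaged across all processes.
    opt = tf.keras.optimizers.SGD(0.01 * hvd.size())
    opt = hvd.DistributedOptimizer(opt)

    model.compile(loss="sparse_categorical_crossentropy",
                  optimizer=opt, metrics=["accuracy"])

    # Broadcast initial weights from rank 0 so all workers start identically.
    callbacks = [hvd.callbacks.BroadcastGlobalVariablesCallback(0)]

    model.fit(x_train, y_train, batch_size=64, epochs=1,
              callbacks=callbacks, verbose=1 if hvd.rank() == 0 else 0)

Such a script is typically launched with one process per GPU, e.g. via horovodrun or srun inside a batch job.
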
Project:
At LRZ, projects can only be submitted by academic institutions, although industry may participate as project partners.
Project application:
Supercomputers
Linux Cluster 
What happens after approval?
Guide to project approval (GCS)
Guide to project approval (PRACE)
Hunter Platform: 
Hunter access
HPC access:
SSH login via classic console with fixed IP address, alternatively VPN
Two-factor authentication required
Vulcan Platform:
NEC Cluster access (vulcan) – HLRS Platforms
HPC access:
SSH login via classic console with fixed IP address
SSH:
SSH at LRZ HPC systems
SSH Tutorial + Video 
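
As an illustration of scripted SSH access, the sketch below uses the Python library Paramiko to run a command on a login node. Host and user names are placeholders; note that some centres additionally require a registered (fixed) IP address or two-factor authentication, as listed above.

    import os
    import paramiko

    # Placeholders: replace with the login node and account of the
    # actual centre, as given in its documentation.
    HOST = "login.example-hpc.de"
    USER = "myaccount"

    client = paramiko.SSHClient()
    # Load known host keys; HPC centres publish their host key
    # fingerprints, which should be verified on first contact.
    client.load_system_host_keys()
    client.connect(HOST, username=USER,
                   key_filename=os.path.expanduser("~/.ssh/id_ed25519"))

    # Run a simple command on the login node, as one would interactively.
    stdin, stdout, stderr = client.exec_command("hostname && groups")
    print(stdout.read().decode())
    client.close()
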
Overview and status of HPC systems 
User portal
Graphically displayed HPC job monitoring:
Portal, documentation 
Working environments:
Compute Cloud: Access information, presentation overview 
Jupyter in the cloud
VNC:
Use of a server-side preconfigured remote desktop on dedicated nodes of the Linux cluster or supercomputer
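
Web-based sessions such as JupyterLab or a VNC remote desktop often run behind an SSH tunnel. The following sketch opens such a tunnel from Python; the host name and ports are placeholders and depend on the respective system and session.

    import subprocess

    # Placeholders: the real host name and ports come from the centre's
    # documentation or from the session started on the remote side.
    LOGIN_NODE = "login.example-hpc.de"
    REMOTE_PORT = 8888   # e.g. a JupyterLab instance on the login node
    LOCAL_PORT = 8888

    # Forward localhost:8888 through the SSH connection; the browser
    # (or a VNC viewer, for a VNC port) then connects to
    # http://localhost:8888 on the local machine.
    tunnel = subprocess.Popen([
        "ssh", "-N",
        "-L", f"{LOCAL_PORT}:localhost:{REMOTE_PORT}",
        LOGIN_NODE,
    ])
    print(f"Tunnel running (pid {tunnel.pid}); press Ctrl+C to stop.")
    try:
        tunnel.wait()
    except KeyboardInterrupt:
        tunnel.terminate()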

Software

Without software, there would be no high-performance computing, AI or cloud computing! Modern HPC systems differ significantly from consumer computers in their architecture (e.g. type and number of processing cores, size of RAM), their storage media and, above all, their usually complex network topology. For this reason, operating systems adapted to HPC systems are used. On top of these, users are provided with a broad software portfolio. This includes basic software tailored to the HPC system, such as compilers or communication libraries for parallel computing, as well as HPC applications such as simulation software and scientific libraries. The software offering is usually made available to users via a module system, through which software components in the desired versions can be loaded, compiled against and used in computing jobs.
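
As a rough illustration of such a module system, the sketch below drives typical module commands from Python through a login shell. The module names are placeholders; the available portfolio and the module tool itself differ between systems.

    import subprocess

    # "module" is usually a shell function, so it must run inside a
    # login shell; the names below are placeholders.
    commands = """
    module avail 2>&1 | head -n 20   # list part of the software portfolio
    module load gcc                  # load a compiler (name/version site-specific)
    module list                      # show what is active in the environment
    """
    subprocess.run(["bash", "-lc", commands], check=False)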

In addition to open source packages and community applications, the offering also includes commercial software. Furthermore, users are free to install software packages or self-developed programmes in their private directory. On request, software can also be added to the portfolio, and support can be provided for installation and configuration on the HPC system.
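
One common way to install software in a private directory, sketched below, is a user-owned Python environment; no administrative rights are needed, and the package name is only an example.

    import subprocess
    import venv
    from pathlib import Path

    # Create a private Python environment in the user's home directory;
    # a module-provided Python can also serve as the base interpreter.
    env_dir = Path.home() / "my-venv"
    venv.create(env_dir, with_pip=True)

    # Install a package into that environment only; "numpy" stands in
    # for whatever the project actually requires.
    subprocess.run([str(env_dir / "bin" / "pip"), "install", "numpy"],
                   check=True)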

Explanations and examples of software are listed below.

JSC | LRZ | HLRS
Provided libraries and application software
Provided software environment: modular system and Spack: documentation, presentation overview
Application software packages – HLRS Platforms
Interactive HPC/development environment JupyterLab (overview)
Spack in user space for users to install their own software (a sketch follows below this list)
Jupyter on HPC systems
Software with remote access functionality, e.g.: Visual Studio Code, Matlab 
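
The Spack-in-user-space workflow mentioned above can look roughly as follows; the paths and the example package are placeholders, and the linked documentation is authoritative for the respective system.

    import subprocess

    # Spack is driven from the command line; these calls assume Spack
    # is cloned into the home directory and its setup script sourced.
    # "zlib" is just an example package.
    shell_snippet = """
    git clone --depth 1 https://github.com/spack/spack.git ~/spack
    . ~/spack/share/spack/setup-env.sh
    spack install zlib        # build the package in user space
    spack find                # list installed packages
    """
    subprocess.run(["bash", "-lc", shell_snippet], check=False)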

Data management

HPC projects typically generate large amounts of data. Storage systems with the right features ensure optimal performance when handling this data. For example, parallel file systems enable fast, parallel reading and writing of data during an HPC job, which can then be moved to long-term archives for permanent backup. Explanations of the various storage systems and their use, as well as the possibilities for data transfer between different file systems, are linked in this category.
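
A typical transfer step, sketched below in Python, copies results from a (usually purged) scratch file system to project or archive storage with rsync; the host and path names are placeholders.

    import subprocess

    # Placeholders: scratch file systems are fast but usually purged,
    # so results are copied onward to project or archive storage.
    SRC = "results/"                          # local or scratch directory
    DEST = "myaccount@login.example-hpc.de:/project/myproject/results/"

    # rsync transfers only changed files and can resume; scp would also
    # work for single files: scp result.h5 user@host:/path/
    subprocess.run(["rsync", "-avz", "--progress", SRC, DEST], check=True)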

JSC | LRZ | HLRS
Use and properties of the available file systems (Home, Project, Scratch, (Fast)Data, Archive): JUWELS, JURECA; data transfer with scp, rsync, etc., as well as use of Git (JUWELS, JURECA)
Juelich Storage Cluster (‘tiered’ storage infrastructure): Video, slides
Features and use of file systems: Supercomputers, Linux Cluster 
Data Science Storage
Data Science Archive
NEC → NEC Cluster Disk Storage (vulcan) – HLRS Platforms
Workspace → Workspace mechanism – HLRS Platforms

High Performance Storage System (HPSS) – HLRS Platforms
Data transfer between centres, or between user workstations and centres, via Globus Online or scp: overview and use

Visualisation & post-processing

We support engineers, scientists, industrial users and the public sector in the visual analysis of data typically generated by simulations on high-performance computers.
Various visualisation methods and technologies from our portfolio can be used for this purpose. To visualise results, separate access to dedicated visualisation nodes is required.

Access is via remote desktop (in a web browser) or in client-server mode (software required). Another option is a VNC viewer (software required) or JupyterLab (web browser). Further information and documentation can be found here.

JSC | LRZ | HLRS
JUWELS
JURECA 
Remote Visualisation 2021 as a web-based front end, working on a remote desktop: Portal, documentation
VNC access:
Graphic Environment
in computing jobs (batch mode) or for interactive work on computing nodes (client-server mode)
ParaView → login via client: Hunter PrePostProcessing (a client-server connection sketch follows at the end of this section)
VisIt
COVISE Online Documentation (hlrs.de)

COVISE tutorial.pdf (hlrs.de)
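
To illustrate the client-server mode mentioned above: once a pvserver runs on a visualisation node and its port is reachable (for example through an SSH tunnel), ParaView's Python interface can connect to it. This is a minimal sketch; host, port and the launch procedure for pvserver are system-specific.

    # Runs inside ParaView's own Python interpreter (pvpython); assumes
    # a pvserver has been started on a visualisation node, typically
    # through a batch job, and that its port is reachable locally.
    from paraview.simple import Connect, Sphere, Show, Render

    Connect("localhost", 11111)   # tunnelled endpoint of the remote pvserver

    # Minimal test scene: the rendering happens on the server side.
    Show(Sphere())
    Render()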