Practical: High-Performance Computing System Administration
High-Performance Computing System Administration teaches you to manage HPC resources not only as a user but also as a cluster administrator. As part of this practical course, you will take part in a hands-on one-week block course, which introduces the basics of Linux and of using HPC resources and then goes into depth on HPC system administration. At the end of the block course you will choose a topic in the form of a tool related to HPC system administration, evaluate that tool, and hand in a report at the end of the semester. For this, a supervisor who is an expert on the assigned tool will be assigned to you and will guide you.
Key information
Contact | Julian Kunkel, Jonathan Decker
Location | Virtual (Main Room, Support Room)
Time | 07.10.24-11.10.24, 5-day block course
Language | English
Module | M.Inf.1831: High-Performance Computing System Administration
SWS | 4
Credits | 6 (+3 with M.Inf.1834)
Contact time | up to 84 hours (63 full hours), depending on the course
Independent study | up to 186 hours
Please note that we plan to record sessions (lectures and seminar talks) with the intent of providing the recordings via BBB to other students but also to publish and link the recordings on YouTube for future terms. If you appear in any of the recordings via voice, camera or screen share, we need your consent to publish the recordings. See also this Slide.
Required Prior Knowledge
- No prior skills or knowledge are required
- An understanding of Linux basics, prior experience with Linux, and the ability to operate a Bash shell are beneficial
- We will provide a short crash course at the beginning of the course and link supplementary training material
Learning Objectives
- Discuss theoretical concepts related to networking, compute, and storage resources
- Integrate cluster hardware consisting of multiple compute and storage nodes into a “supercomputer”
- Configure system services that allow the efficient management of the cluster hardware and software including network services such as DHCP, DNS, NFS, IPMI, SSHD.
- Install software and provide it to multiple users
- Compile end-user applications and execute them on multiple nodes
- Analyze system and application performance using benchmarks and tools
- Formulate security policies and good practice for administrators
- Apply tools for hardening the system such as firewalls and intrusion detection
- Describe and document the system configuration
Topics for Practical Works
- LLM RAG Agent based on ChatAI
- Automating Simple Maintenance Tasks in HPC Systems Using Python and Shell Scripts
- Extending the Linux kernel scheduler
- Confidential Computing (HPC/Cloud)
- Python Performance Optimization leveraging Native Implementations (Numba/CPython/PyO3/Nuitka/transpyle)
- Parallel filesystems performance optimization & benchmarking (incl AI/ML)
- Longhorn as a Kubernetes persistent storage in the HPC environment
- AI for monitoring
- I/O Performance for ML models
- Nvidia Nsight Systems on an HPC cluster, profiling AI workloads remotely
- Web testing Shiny applications
- Neuromorphic Computing
- Effective intrusion detection systems (IDS) Strategies in HPC Environments
- Regression Testing for HPC
- Global Optimization (of Clusters) with Genetic Algorithms
- FPGA Computing with SciEngines
- RISC-V: State of the union
- Benchmarking of HPC Systems
- Security in Cloud and HPC
- GPU Computing with WebAssembly
- Parallelization with Dask + Xarray
- What's new in the Kubernetes ecosystem
- Containers in HPC
- Function-as-a-service in HPC
- Encryption tools
- Image Management and network booting with Warewulf
- Software Management with modules/spack
- Resource Management with Slurm
- Managing object storage
- Managing cluster file systems in user space (GlusterFS, FUSE, SeaWeedFS)
- File system management (NFSv4, Ceph, BeeGFS)
- Performance analysis tools
- Monitoring system performance
- Application and system benchmarks
- Virtualization tools for HPC (e.g., CharlieCloud, Singularity, Shifter)
- Scalable databases with e.g., Elasticsearch, Postgres
- Kernel compilation and configuration
- Forensic tools
- WebAssembly in Kubernetes
- Vector database performance comparison with Postgres
- Confidential Container Attestation
Agenda
Block Seminar 07.10.24-11.10.24
This part is attended by BSc/MSc students and GWDG academy participants
Note: Breaks are only scheduled between lecture slots. You can take a break during exercises as necessary. Preparation sheets: Preparation
Monday 07.10.2024
-
- Agenda of the week
- Forming support groups
- Format of the “group work”
- Exercise (10 min): Introduce yourself in the “learning groups”
- Tutorial (10 min): Demo; setting up cloud resources from a fresh account
- Exercise (20 min): Is your cloud setup working?
- Plenary (10 min): Discussion of the format, Q&A
- 10:00 - 11:30 Linux Crash Course – Kevin Lüdemann slides
- Command Line
- Some basic commands
- Remote access to the Scientific Compute Cluster
- 11:30 - 12:00 Linux Exercise – Kevin Lüdemann exercise
- 12:00 - 12:45 Lunch Break
- 12:45 - 13:15 First steps running applications on the cluster using Slurm – Patrick Höhn slides
- Running applications on multiple nodes using srun (see the example sketch after Monday's agenda)
- Getting an overview of the available hardware (documentation, sinfo)
- Outlook on running a parallel program and measuring different types of applications
- 13:15 - 13:45 Slurm Exercise exercise
- 13:45 - 14:00 break
- 14:00 - 14:30 Introduction to Git – Christian Köhler slides
- 14:30 - 15:00 Git Exercise exercise
- 15:00 - 15:40 Compilation of applications via cmake, Autotools, make – Trevor Khwam slides
- Exercise for cmake, Autotools, make exercise (see the build sketch after Monday's agenda)
- 16:00 - 16:15 break
- 16:15 - 16:45 Running containers with Singularity – Azat Khuziyakhmetov slides (see the container sketch after Monday's agenda)
- 16:45 - 18:00 Exercise - Virtual Machine and Slurm – Jonathan Decker exercise primes.zip
- Performance of parallel primes
- Resolve issues with preparation of the cloud environment
- Complete other unfinished exercises
- (Optional) Virtual machine setup on personal workstation
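For the Slurm session on Monday, a minimal sketch of how a parallel job might be launched, first interactively with srun and then as a batch script. The partition name, node counts, and the `primes` binary are placeholders and will differ on the actual cluster.

```bash
# Interactive: run the `hostname` command with 4 tasks spread over 2 nodes
srun --nodes=2 --ntasks=4 hostname

# Batch: submit the same workload through a job script
cat <<'EOF' > job.sh
#!/bin/bash
#SBATCH --job-name=primes          # job name shown in squeue
#SBATCH --nodes=2                  # number of nodes
#SBATCH --ntasks-per-node=2        # tasks per node
#SBATCH --time=00:10:00            # wall-clock limit
#SBATCH --partition=medium         # placeholder partition name
srun ./primes                      # launch the (hypothetical) primes binary on all tasks
EOF
sbatch job.sh                      # submit the script
squeue -u $USER                    # check the queue for your jobs
sinfo                              # overview of partitions and node states
```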
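For the compilation session, a sketch of a typical out-of-source CMake build and the equivalent Autotools workflow. The project layout and install prefix are generic assumptions, not taken from the exercise material.

```bash
# Out-of-source CMake build of a project that ships a CMakeLists.txt
cmake -S . -B build -DCMAKE_BUILD_TYPE=Release -DCMAKE_INSTALL_PREFIX=$HOME/sw/myapp
cmake --build build -j 4          # compile with 4 parallel jobs
cmake --install build             # install into the chosen prefix

# Equivalent Autotools workflow for projects that ship a configure script
./configure --prefix=$HOME/sw/myapp
make -j 4 && make install
```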
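For the Singularity session, a sketch of pulling and running a container; the image name is only an example and integration with Slurm depends on the cluster configuration.

```bash
# Pull an image from Docker Hub and convert it to a SIF file
singularity pull docker://ubuntu:22.04        # creates ubuntu_22.04.sif

# Run a command inside the container; $HOME is bind-mounted by default
singularity exec ubuntu_22.04.sif cat /etc/os-release

# Containers can also be launched under Slurm, one instance per task
srun --nodes=2 --ntasks=2 singularity exec ubuntu_22.04.sif hostname
```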
Tuesday 08.10.2024
-
- Lecture(15 min): Introduction to firewalls
- Exercise(35 min): Exploring firewall rules, port scanning with nmap, internet access for the nodes using NAT (example commands after Tuesday's agenda)
- Plenary Discussion(10 min)
-
- Lecture(45 min): Introduction to Certificates and PKI
- Exercise(55 min): Create, inspect and install certificates into a web server (example commands after Tuesday's agenda)
- Plenary Discussion(20 min)
- 12:00 - 12:45 Lunch Break
-
- “How to boot a thousand nodes”
- Lecture (20 min): Motivation, components of cluster management (DNS, DHCP, PXE-Boot process, images, resource management, monitoring, hardware-components)
- Management Demo
- Exercise (30 min): Describing the responsibility of Warewulf components and the boot process
- Lecture: Technical details and administration of dnsmasq, DHCP, and investigating logfiles
- Exercise 1
- Lecture: Warewulf configuration
- Demo: Image creation and deployment (example wwctl commands after Tuesday's agenda)
- Exercise 2
- 14:45 - 15:00 Break
-
- Lecture(15 min): NFS Introduction
- Exercise(30 min): Setup of a basic NFS server and client (example configuration after Tuesday's agenda)
- Plenary Discussion(15 min)
-
- Slurm installation, basic configuration, testing
- Lecture: introduction to Slurm
- Tutorial server installation, basic configuration and testing (flexible break)
- Exercise: adjustments of the configuration, integration of the cluster nodes, testing (example slurm.conf fragment after Tuesday's agenda)
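For the firewall session, a sketch of the kind of commands used in the exercise, assuming firewalld on the nodes and iptables-based NAT on the head node; interface names and addresses are placeholders.

```bash
# Inspect and adjust firewalld rules on a node (port is just an example)
sudo firewall-cmd --list-all                       # show active zone, services, ports
sudo firewall-cmd --permanent --add-port=8080/tcp  # open a TCP port persistently
sudo firewall-cmd --reload

# Scan a host from the outside to verify which ports are actually reachable
nmap -sT -p 1-1024 10.0.0.10

# NAT on the head node so compute nodes (10.0.0.0/24) reach the internet;
# eth0 = external interface (assumed name)
sudo sysctl -w net.ipv4.ip_forward=1
sudo iptables -t nat -A POSTROUTING -s 10.0.0.0/24 -o eth0 -j MASQUERADE
```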
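For the certificates session, a sketch of creating and inspecting a self-signed certificate with OpenSSL; the hostname and web server paths are assumptions for illustration.

```bash
# Create a self-signed certificate and key (for testing only)
openssl req -x509 -newkey rsa:4096 -days 365 -nodes \
  -keyout server.key -out server.crt -subj "/CN=node01.cluster.local"

# Inspect the certificate contents
openssl x509 -in server.crt -noout -text

# In nginx, the certificate would be referenced roughly like this (paths illustrative):
#   ssl_certificate     /etc/ssl/certs/server.crt;
#   ssl_certificate_key /etc/ssl/private/server.key;

# Verify the TLS handshake from a client
openssl s_client -connect node01.cluster.local:443 -servername node01.cluster.local
```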
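For the Warewulf session, a rough sketch of how node provisioning might look with Warewulf v4's `wwctl`; the image source, node names, addresses, and flags are assumptions and vary between Warewulf versions and installations.

```bash
# Import a node image, regenerate supporting service configs, and register a node
sudo wwctl container import docker://rockylinux:9 rocky9   # image source is an assumption
sudo wwctl configure --all          # (re)write DHCP/dnsmasq, TFTP and NFS configuration
sudo wwctl node add n01 --ipaddr 10.0.0.11 --hwaddr aa:bb:cc:dd:ee:01 --container rocky9
sudo wwctl overlay build            # rebuild overlays for the registered nodes
sudo wwctl node list -a             # inspect the resulting node configuration

# dnsmasq/DHCP debugging: watch the PXE boot requests arrive in the logs
sudo journalctl -u dnsmasq -f
```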
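For the NFS session, a minimal sketch of exporting a directory from the head node and mounting it on a client; the network range, paths, and the package name (which assumes an EL-based distribution) are placeholders.

```bash
# --- on the NFS server (head node) ---
sudo dnf install -y nfs-utils                       # package name assumes an EL-based distro
sudo mkdir -p /shared
echo '/shared 10.0.0.0/24(rw,sync,no_subtree_check)' | sudo tee -a /etc/exports
sudo systemctl enable --now nfs-server
sudo exportfs -ra                                   # re-read /etc/exports
sudo exportfs -v                                    # verify what is exported

# --- on a client node ---
sudo mkdir -p /shared
sudo mount -t nfs 10.0.0.1:/shared /shared
df -h /shared
```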
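For the Slurm installation session, a sketch of the kind of slurm.conf fragment the exercise works towards; cluster name, node names, CPU counts, and memory values are assumptions for a small two-node test setup and a complete configuration needs more entries.

```bash
# Minimal slurm.conf fragment for a two-node test cluster (names/values are assumptions)
sudo tee -a /etc/slurm/slurm.conf <<'EOF'
ClusterName=teachingcluster
SlurmctldHost=head
NodeName=n[01-02] CPUs=2 RealMemory=1800 State=UNKNOWN
PartitionName=debug Nodes=n[01-02] Default=YES MaxTime=01:00:00 State=UP
EOF

sudo systemctl restart slurmctld           # on the head node
sudo systemctl restart slurmd              # on each compute node
sinfo                                      # nodes should appear as idle
srun -N2 hostname                          # quick end-to-end test
```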
Wednesday 09.10.2024
- 9:00 - 11:00 Setting Up Containers – Freja Nordsiek Slides Tutorial 1 Exercise 1 Tutorial 2 Exercise 2
- Lecture (15 min): Introduction to containers and their management
- Demo + Q&A (10 min): Outlook - the scope of container management in Docker and Singularity ecosystems
- Lecture (15 min): Setting up Podman and testing it (Podman/Apptainer example sketch after Wednesday's agenda)
- Exercise (30 min)
- Lecture (15 min): Installing and configuring Apptainer and testing it
- Exercise (30 min)
- Plenary discussion (15 min)
-
- Lecture (20 min): processes and management, documentation, frameworks: ITIL, PRINCE2
- Exercise (20 min): Discussion of best practices, searching for related work, critical discussion of your own experience with the setup of Warewulf and Slurm
- Plenary discussion (20 min)
- 12:00 - 12:45 Lunch Break
-
- Lecture(15 min): Monitoring introduction and software stacks
- Lecture(5 min): InfluxDB
- Exercise(20 min): Installing InfluxDB
- Lecture(5 min): Telegraf
- Exercise(20 min): Installing Telegraf (example configuration after Wednesday's agenda)
- Lecture(5 min): Grafana
- Exercise(35 min): Installing Grafana and setting up a dashboard for an example application (Slurm)
- Plenary discussion (15 min)
- 14:45 - 15:00 Break
-
- Lecture(15 min): Service catalogue introduction, privacy concerns and risk management
- Exercise(10 min): Describing an application for a service catalogue (Telegraf, Influx, Slurm, …)
- Plenary discussion (5 min)
-
- Lecture(30 min): Security introduction + Demo
- Discussing an existing service and its security implications
- Exercise(15 min): Theoretical investigation of an existing service (the one from before)
- Exercise(30 min): Describe a new service and its security implications and add it to a service catalogue
- Plenary discussion (15 min)
- 17:30 - 18:00 Intelligent Platform Management Interface (IPMI) – Nils Kanning Slides
- Lecture(15 min): IPMI introduction (example ipmitool commands after Wednesday's agenda)
- Plenary discussion (15 min)
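For the container session, a sketch of rootless Podman usage and of building a small Apptainer image from a definition file; the base image is an example and, depending on the installation, `apptainer build` may additionally need `--fakeroot`.

```bash
# Rootless Podman: pull and run an image as an unprivileged user
podman pull docker.io/library/alpine:latest
podman run --rm alpine:latest cat /etc/os-release
podman ps -a                                # list containers

# Apptainer: build a SIF image from a definition file and run it
cat <<'EOF' > alpine.def
Bootstrap: docker
From: alpine:latest
%post
    apk add --no-cache bash
EOF
apptainer build alpine.sif alpine.def       # may require --fakeroot on some systems
apptainer exec alpine.sif cat /etc/os-release
```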
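For the monitoring session, a sketch of a minimal Telegraf configuration feeding InfluxDB v2; the URL, token, organization, and bucket are placeholders for the values created during the InfluxDB setup, and package installation differs per distribution.

```bash
# Start InfluxDB and Grafana once the packages are installed
sudo systemctl enable --now influxdb grafana-server

# Minimal Telegraf config: collect CPU/memory metrics and write them to InfluxDB v2
sudo tee /etc/telegraf/telegraf.d/cluster.conf <<'EOF'
[[inputs.cpu]]
[[inputs.mem]]

[[outputs.influxdb_v2]]
  urls = ["http://localhost:8086"]
  token = "REPLACE_WITH_API_TOKEN"
  organization = "hpc-course"
  bucket = "metrics"
EOF
sudo systemctl restart telegraf

# Grafana then reads from InfluxDB as a data source, configured in its web UI (port 3000)
```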
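For the IPMI session, a sketch of common ipmitool queries against a node's BMC over the network; the BMC address and credentials are placeholders.

```bash
# Query a node's BMC over the LAN interface (address and credentials are placeholders)
ipmitool -I lanplus -H 10.0.1.11 -U admin -P secret power status
ipmitool -I lanplus -H 10.0.1.11 -U admin -P secret sensor list        # temperatures, fans, voltages
ipmitool -I lanplus -H 10.0.1.11 -U admin -P secret sel list           # system event log
ipmitool -I lanplus -H 10.0.1.11 -U admin -P secret chassis power cycle  # remote power cycle
```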
Thursday 10.10.2024
-
- Lecture(10 min): GitLab introduction
- Exercise(25 min): Installing GitLab Community Edition (example installation after Thursday's agenda)
- Lecture(15 min): Best-practices for using Git for issue tracking and collaboration
- Examples from GWDG
- Exercise(25 min): Discussing practices for issue tracking
- Plenary Discussion(15 min)
-
- Lecture(10 min): Introduction to ticketing systems and ticket workflows
- Tutorial(10 min): Demonstration of features
- Exercise(30 min): Install Znuny
- Plenary Discussion(10 min)
- Exercise (20 min): Testing out Znuny
- Plenary Discussion(10 min)
- 12:00 - 12:45 Lunch Break
-
- Lecture(35 min): Benchmarking
- Exercise(15 min): Real system benchmarking on your VMs (example commands after Thursday's agenda)
- Plenary Discussion(10 min)
-
- Lecture(20 min): Hardware characteristics and performance estimates in distributed systems
- Exercise(35 min): Theoretical performance assessment
- Plenary Discussion(15 min)
- 15:00 - 15:15 Break
-
- Lecture(15 min): Providing a joint software environment with environment modules and Spack
- Exercise(45 min): Installing MPI and Gromacs and providing module descriptions for other group members to test (example Spack commands after Thursday's agenda)
- Plenary Discussion(15 min)
-
- Lecture (10 min): Introduction
- Exercise (15 min): Installation and testing
- Plenary Discussion(5 min)
- 17:15 - 17:30 Break
- 17:30 - 18:00 General Q&A session and organisational information for students slides
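For the GitLab session, a sketch of installing the GitLab CE Omnibus package on a Debian/Ubuntu VM; the external URL is a placeholder.

```bash
# Add the GitLab CE package repository and install the Omnibus package (Debian/Ubuntu)
curl -fsSL https://packages.gitlab.com/install/repositories/gitlab/gitlab-ce/script.deb.sh | sudo bash
sudo EXTERNAL_URL="http://gitlab.example.com" apt-get install -y gitlab-ce

# Apply later configuration changes via /etc/gitlab/gitlab.rb
sudo gitlab-ctl reconfigure
sudo gitlab-ctl status
```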
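For the benchmarking session, a sketch of simple disk, network, and CPU micro-benchmarks that can be run on the VMs; the peer address and sizes are placeholders and the results are only rough indicators.

```bash
# Rough single-node disk write benchmark (direct I/O, 1 GiB)
dd if=/dev/zero of=/tmp/testfile bs=1M count=1024 oflag=direct
rm /tmp/testfile

# Point-to-point network throughput between two VMs
iperf3 -s &                 # on the server node
iperf3 -c 10.0.0.11 -t 10   # on the client node, 10-second test

# CPU micro-benchmark with sysbench
sysbench cpu --threads=4 run
```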
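For the software environment session, a sketch of installing packages with Spack and exposing them as environment modules; the exact specs and module names depend on the Spack configuration, and the refresh command may ask for confirmation.

```bash
# Install OpenMPI and GROMACS with Spack and expose them as environment modules
spack install openmpi
spack install gromacs ^openmpi          # build GROMACS against the Spack-built OpenMPI
spack module tcl refresh                # (re)generate Tcl module files

# What another user on the same system would then do
module avail
module load gromacs
gmx --version
```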
Friday 11.10.2024
RzGö live hardware demonstration and hands-on session. If you are a remote participant, we request that you revisit the previous material and prepare questions for the Q&A sessions.
On-site participation is limited to 20 participants.
- Group 1
- 09:00 Group 1 Meet at GWDG Burckhardtweg 4, 37077 Göttingen in the lobby - (bus stop Burckhardtweg)
- Lecture(20 min): HPC Interconnects, Fabric Manager, RDMA, VLAN, LATP
- Exercise(20 min): Cable planning
- 10:15 Group 1 Introduction to our onsite hardware – Sebastian Krey
- 10:30-13:00 Group 1 Hands-on Hardware Exercises (Smartboard Group 1)
- 13:00-14:00 Group 1 Tour in the data center
- Group 2
- 11:45 Group 2 Meet at GWDG Burckhardtweg 4, 37077 Göttingen in the lobby - (bus stop Burckhardtweg)
- 12:00-13:00 Group 2 Tour in the data center
- Lecture(20 min): HPC Interconnects, Fabric Manager, RDMA, VLAN, LATP
- Exercise(20 min): Cable planning
- 14:30 Group 2 Introduction to our onsite hardware – Sebastian Krey
- 14:45-17:30 Group 2 Hands-on Hardware Exercises (Smartboard Group 2)
- Setting up hardware
- Plugging together a small cluster
- BIOS settings
- Installation of Warewulf
- Mounting of InfiniBand cards
- Configuration of InfiniBand
- RDMA performance test (example commands below)
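For the hands-on hardware exercises, a sketch of checking the InfiniBand fabric and running an RDMA bandwidth test with the perftest suite; node addresses are placeholders and the exact tools available depend on the installed OFED stack.

```bash
# Check that the InfiniBand HCA is up and visible on the fabric
ibstat                       # port state, rate, LID
ibhosts                      # hosts visible on the fabric
ibdiagnet                    # basic fabric diagnostics (optional)

# RDMA write bandwidth test between two nodes (perftest suite)
ib_write_bw                  # on the server node
ib_write_bw 10.0.2.11        # on the client node, pointing at the server
```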
Student Project Work
- 2024-11-01 - Send us your requested topic by this day
- 2024-11-08 - We assign a supervisor per student by this day
- Contact your supervisor
- Work on your topic
- Write your reports
- Get feedback from supervisor
- 2025-03-31 - Submit final report as PDF per email to jonathan.decker@uni-goettingen.de
Examination
The exam is conducted through a report. The report should cover the evaluation of the assigned tool. The report should describe:
- What the tool is, what it is used for
- How the tool was set up
- How you evaluated it
- The results of your evaluation
- Discussion of problems and potential of the tool
- Conclusion
The report should not exceed 15 pages (only counting raw text in the main part, the full report including cover pages and appendix may be longer). It is not sufficient to repeat the documentation of the tool in your own words.
We recommend using the LaTeX templates provided by us here: https://hps.vi4io.org/teaching/ressources/start#templates
Examination Requirement
In order to be allowed to take the examination, you have to show that you have attended the majority of the sessions of the block course. To prove this, please send us 1-2 pages of notes on the course. These can be the personal notes you took during the sessions; they do not need to be a formatted document and only serve to prove that you took the course. They do NOT need to be complete solutions to the exercises; a few sentences on your takeaways per section are enough.
If you joined the course late or had to miss some of the sessions, you can find the recordings on BBB and the materials on this web page. The exercises can be completed on a personal VM.
Topic Distribution
Student | Supervisor | Topic | Submissions
Your Name | Your Supervisor | Your Topic | Report
Abdallah Abdelnaby | Azat Khuziyakhmetov | Containers in HPC |
Henrik Jonathan Seeliger | Aasish Kumar Sharma | Monitoring System Performance or Scalable Databases (with K8s) |
Valerius Albert Gongjus Mattfeld | Lars Quentin | Python Performance Optimization leveraging Native Implementations (Numba/CPython/PyO3/Nuitka/transpyle) |
Ashutosh Kumar Jaiswal | Azat Khuziyakhmetov | Containers in HPC |
Karan Sharma | Sadgeh Kshtkar | LLM RAG Agent based on ChatAI |
Jan Lenke | Jonathan Decker | WebAssembly in Kubernetes |
Pinar Haskul | Mirac Aydin | Hardware optimization using genetic algorithms |