The ICOMEX Project
The project name stands for ICOsahedral-grid Models for EXascale earth system simulations. The goal is to develop scalable methods for high-resolution climate models based on icosahedral grids. The ICOMEX project is funded by the G8 initiative and includes partners from four countries.
Leading Principal Investigator
- Günther Zängl, Deutscher Wetterdienst
Principal Investigators
- Thomas Dubos, Ecole Polytechnique, Institut Pierre Simon Laplace
- Leonidas Linardakis, Max Planck Institute for Meteorology
- Thomas Ludwig, University of Hamburg
University of Hamburg
At the University of Hamburg, we are responsible for three of the seven workpackages. This part of the project is funded by the DFG (GZ: LU 1353/5-1).
Contact & principal investigator: Dr. Julian Kunkel
Workpackage 2
Abstract model description scheme for efficient use of memory bandwidth on a variety of platforms
The goals of this package are:
- To provide an abstraction scheme in the form of a Domain Specific Language (DSL) for the ICON atmosphere-ocean modelling system that is capable of generalizing memory management operations
- To construct a source-to-source translator infrastructure able to generate code that uses the memory bandwidth efficiently on different hardware architectures
- To base the DSL on Fortran in order to ease acceptance by the numerical weather prediction and climate science communities
Up to now we have:
- Defined a format to express distinct index interchanges and optimizations (illustrated by the sketch after this list)
- Defined DSL keywords to be used in Fortran code
- Developed a tool that
  - Converts DSL-annotated code to an Intermediate Representation (IR)
  - Performs changes on the IR according to a specification
  - Generates Fortran code back from the IR
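
To illustrate the index interchange the translator performs: the DSL and the generated code are Fortran, but the effect of the transformation is the same in this minimal C analogue (array sizes and names here are invented for illustration):

    /* C analogue of an index interchange; the real DSL operates on Fortran. */
    #include <stddef.h>

    #define NLEV   90       /* vertical levels       */
    #define NCELLS 20480    /* horizontal grid cells */

    /* Before: the inner loop strides through memory with a step of NCELLS
     * elements, so almost every access misses the cache. */
    void scale_strided(double field[NLEV][NCELLS], double factor) {
        for (size_t cell = 0; cell < NCELLS; cell++)
            for (size_t lev = 0; lev < NLEV; lev++)
                field[lev][cell] *= factor;
    }

    /* After the interchange: the inner loop walks consecutive addresses,
     * so the memory bandwidth is spent on fully used cache lines. */
    void scale_interchanged(double field[NLEV][NCELLS], double factor) {
        for (size_t lev = 0; lev < NLEV; lev++)
            for (size_t cell = 0; cell < NCELLS; cell++)
                field[lev][cell] *= factor;
    }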
Next steps will focus on:
- Inlining (see the sketch below)
- Abstraction of loops
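
A minimal sketch, with invented names, of why inlining matters for such a translator: only after the callee is inlined do both loop nests become visible together, so they can be fused into a single pass over the data:

    #include <stddef.h>

    /* Two passes over `field`, one of them hidden behind a call. */
    static void add_offset(double *field, size_t n, double offset) {
        for (size_t i = 0; i < n; i++)
            field[i] += offset;
    }

    void update(double *field, size_t n, double offset, double factor) {
        add_offset(field, n, offset);     /* first pass over field  */
        for (size_t i = 0; i < n; i++)    /* second pass over field */
            field[i] *= factor;
    }

    /* After inlining and loop fusion: one pass, half the memory traffic. */
    void update_fused(double *field, size_t n, double offset, double factor) {
        for (size_t i = 0; i < n; i++)
            field[i] = (field[i] + offset) * factor;
    }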
Workpackage 6
Parallel I/O
In this workpackage we are researching ways to optimize the I/O-related parts of the models. Our main focus is on the regularly invoked output routines for the produced data and for checkpointing. We also look at how the data is actually stored on disk or tape and what can be done to allow faster access or better compression.
Results
We developed a compression algorithm for climate data.
- A transparent integration into HDF5 (MAFISC) is ready, but we are still waiting for HDF5 to include a general-purpose module loading mechanism. Until this is included in the library, only programs explicitly compiled with our filter are able to read compressed datasets (see the usage sketch after this list).
- A proposal for such a module mechanism has been developed and submitted to the HDF5 developers.
- The compression is better than pure lzma, on which our algorithm is based.
- We chose lzma because it yielded the best compression among the standard algorithms we tested.
- We could easily replace it with a better general-purpose compressor, should one be developed.
- Using computational hardware for compression/decompression does pay off financially.
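
A minimal sketch of what using the filter looks like from a C program today, i.e. one compiled and linked with the filter. The helper function is our own illustration, not part of MAFISC; we assume 32002 as the filter id registered for MAFISC with the HDF Group:

    #include <hdf5.h>

    #define MAFISC_FILTER_ID 32002  /* assumed: MAFISC's registered filter id */

    /* Create a chunked dataset compressed with the MAFISC filter;
     * HDF5 filters always require a chunked layout. */
    hid_t create_compressed_dataset(hid_t file, const char *name,
                                    hsize_t rows, hsize_t cols) {
        hsize_t dims[2]  = { rows, cols };
        hsize_t chunk[2] = { rows < 64 ? rows : 64, cols };
        hid_t space = H5Screate_simple(2, dims, NULL);
        hid_t dcpl  = H5Pcreate(H5P_DATASET_CREATE);
        H5Pset_chunk(dcpl, 2, chunk);
        /* H5Z_FLAG_MANDATORY: fail if the filter is not linked in,
         * instead of silently storing uncompressed data. */
        H5Pset_filter(dcpl, MAFISC_FILTER_ID, H5Z_FLAG_MANDATORY, 0, NULL);
        hid_t dset = H5Dcreate2(file, name, H5T_NATIVE_DOUBLE, space,
                                H5P_DEFAULT, dcpl, H5P_DEFAULT);
        H5Pclose(dcpl);
        H5Sclose(space);
        return dset;
    }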
Instrumentation of ICON to trace the I/O-related parts is complete.
- First traces using the waterplanet experiment have been created.
We created a NetCDF variant that avoids double buffering (download: cachelessNetcdf).
MultifileHDF5 splits a logical HDF5 file into one file per process (download: multifileHdf5); the idea is sketched below.
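
Reduced to a minimal sketch, the file-per-process idea looks as follows (this is not the MultifileHDF5 API, which presents the parts as one logical file; names are invented):

    #include <hdf5.h>
    #include <mpi.h>
    #include <stdio.h>

    int main(int argc, char **argv) {
        MPI_Init(&argc, &argv);
        int rank;
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);

        /* One physical file per rank: no file-level locking and no
         * collective metadata operations between the processes. */
        char name[64];
        snprintf(name, sizeof name, "output.%05d.h5", rank);
        hid_t file = H5Fcreate(name, H5F_ACC_TRUNC, H5P_DEFAULT, H5P_DEFAULT);

        /* ... each rank writes its local part of the domain here ... */

        H5Fclose(file);
        MPI_Finalize();
        return 0;
    }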
A benchmark for testing a variety of libraries with output similar to the ICON model is available here: ICON-output imitating benchmark
For our compression experiments, we have also developed a testfile generator that produces test datasets with very precisely controlled characteristics (the principle is sketched below). Currently, it supports 15 different modes, most of which also take a number of arguments. Out of the box, a simple make will only produce seven different datasets; more can easily be added to the Makefile. If povray is installed, it is also possible to render images that give a visual impression of the type of data produced. Download: testfilegen.tar.gz
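
The principle behind the generator, reduced to a few lines (the mode and all names are illustrative, not the actual tool's interface): each mode is a function from grid position and mode arguments to a value, which gives precise control over smoothness, noise level and hence compressibility:

    #include <math.h>
    #include <stdio.h>
    #include <stdlib.h>

    /* Illustrative mode: a smooth 2D wave with tunable frequency plus
     * white noise of tunable amplitude; the noise level directly steers
     * how well the resulting dataset compresses. */
    static double sample(size_t x, size_t y, double freq, double noise) {
        double smooth = sin(freq * (double)x) * cos(freq * (double)y);
        double jitter = noise * ((double)rand() / RAND_MAX - 0.5);
        return smooth + jitter;
    }

    int main(void) {
        const size_t nx = 256, ny = 256;
        double *data = malloc(nx * ny * sizeof *data);
        for (size_t y = 0; y < ny; y++)
            for (size_t x = 0; x < nx; x++)
                data[y * nx + x] = sample(x, y, 0.05, 0.01);
        fwrite(data, sizeof *data, nx * ny, stdout);  /* raw binary dataset */
        free(data);
        return 0;
    }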
Within this workpackage, we gladly support the Exascale I/O Working Group (EIOW) in prototyping the I/O routines of ICON to utilize the next generation of storage systems.
Future plans
The traces are the basis for our I/O access pattern analysis.
- This includes temporal and spatial patterns.
- This is currently deferred since the ICON I/O is still under reconstruction.
Create a benchmark resembling the typical I/O access patterns (a minimal sketch of the core pattern follows this list).
- This benchmark is designed to scale to exascale systems.
- This is the basis for evaluating the scaling of the I/O method.
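
A minimal sketch of the core access pattern such a benchmark would issue (sizes and file name invented): every rank writes a disjoint block of one shared file per time step, as the ICON output ranks do:

    #include <mpi.h>
    #include <stdlib.h>

    int main(int argc, char **argv) {
        MPI_Init(&argc, &argv);
        int rank;
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);

        const size_t block = 1 << 20;                    /* 1 MiB per rank */
        const int count = (int)(block / sizeof(double));
        double *buf = malloc(block);
        for (int i = 0; i < count; i++)
            buf[i] = (double)rank;

        MPI_File fh;
        MPI_File_open(MPI_COMM_WORLD, "pattern.out",
                      MPI_MODE_CREATE | MPI_MODE_WRONLY, MPI_INFO_NULL, &fh);
        /* Disjoint, rank-ordered offsets in one shared file; the collective
         * call lets the MPI-IO layer aggregate the requests. */
        MPI_File_write_at_all(fh, (MPI_Offset)rank * (MPI_Offset)block, buf,
                              count, MPI_DOUBLE, MPI_STATUS_IGNORE);
        MPI_File_close(&fh);

        free(buf);
        MPI_Finalize();
        return 0;
    }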
Compare the measured I/O performance with the theoretical peak performance.
- Bottlenecks will be localized based on a model of the parallel file system (a deliberately simple version of such a model is sketched after this list).
- Measurements will be done with different access patterns and file formats (NetCDF, HDF5 and GRIB).
- The discrepancies between theoretical and measured performance will be used to rate the efficiency of I/O optimization on all layers.
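
A deliberately simple version of such a model (all numbers invented): the achievable peak is bounded by whichever side of the file system saturates first, and the efficiency rating is measured throughput over that bound:

    #include <stdio.h>

    /* Peak of a parallel file system modelled as the minimum of the
     * aggregate server bandwidth and the aggregate client network
     * bandwidth (all values in GiB/s). */
    static double model_peak(int servers, double server_bw,
                             int clients, double client_bw) {
        double s = servers * server_bw;
        double c = clients * client_bw;
        return s < c ? s : c;
    }

    int main(void) {
        double peak = model_peak(10, 0.5, 128, 0.1);  /* invented numbers */
        double measured = 1.8;                        /* GiB/s, invented  */
        printf("I/O efficiency: %.0f%% of the modelled peak of %.1f GiB/s\n",
               100.0 * measured / peak, peak);
        return 0;
    }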
Develop optimization strategies.
- Focused on the most promising layers.
- Findings will be fed back to the vendors and developers.
Workpackage 7
Collaboration with hardware vendors
The aim of this workpackage is communication with the HPC hardware vendors and the developer communities who provide products used in climate model codes. We hope to get guidance from the vendors and developers on how to best utilize their products. In turn, we intend to provide them with valuable insights into the needs of our model code, enabling them to tailor their products to the climate computing community.