ESiWACE2

The Centre of Excellence in Simulation of Weather and Climate in Europe (ESiWACE2) is an H2020-funded project and the successor of the ESiWACE project.

Within the project, the research group is responsible for various contributions to work packages, particularly WP4.

Please see the official web page of ESiWACE for further information.

Contact: Dr. Julian Kunkel

Project partners:
  • Deutsches Klimarechenzentrum GmbH (coordinator)
  • Centre National de la Recherche Scientifique
  • European Centre for Medium-Range Weather Forecasts
  • Barcelona Supercomputing Center
  • Max-Planck-Gesellschaft zur Förderung der Wissenschaften e.V./ Max-Planck-Institut für Meteorologie
  • Sveriges meteorologiska och hydrologiska institut
  • Centre Européen de Recherche et de Formation Avancée en Calcul Scientifique
  • National University of Ireland Galway (Irish Centre for High End Computing)
  • Met Office
  • Fondazione Centro Euro-Mediterraneo sui Cambiamenti Climatici
  • The University of Reading
  • Science and Technology Facilities Council
  • BULL SAS
  • Seagate Systems UK Limited
  • ETH Zürich
  • The University of Manchester
  • Netherlands eScience Center
  • Federal Office of Meteorology and Climatology
  • DataDirect Networks
  • Mercator Océan

While we are involved in various work packages, the main focus lies on WP4, which will provide the necessary toolchain to handle data at pre-exascale and exascale, for single simulations and ensembles.

Specifically, we will

  1. Support data reduction in ensembles and avoid unnecessary subsequent data manipulations by providing tools to carry out ensemble statistics “in-flight” and compress ensemble members on the way to storage (see the sketch after this list).
  2. Provide tools to: a) transparently hide the complexity of multiple storage tiers from applications at runtime by developing middleware that lies between the familiar NetCDF interface and storage, and prototype commercially credible storage appliances which can appear at the backend of such middleware; and b) support manual migration of semantically important content between primary storage on disk, tape, and object stores, including appropriate user-space caching tools (thus allowing some portable data management within weather and climate workflows).
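
To make the “in-flight” ensemble statistics idea in item 1 concrete, here is a minimal sketch (not the project's actual toolchain) of folding ensemble members into a running mean and variance with Welford's algorithm, so that aggregates rather than every raw member need to travel to storage. Class name, field shapes, and the random example data are purely illustrative.

```python
# Minimal sketch of "in-flight" ensemble statistics: a running mean/variance
# over ensemble members (Welford's algorithm), so aggregates can be written
# to storage instead of every raw member. Purely illustrative.
import numpy as np

class StreamingEnsembleStats:
    def __init__(self, field_shape):
        self.n = 0
        self.mean = np.zeros(field_shape)
        self.m2 = np.zeros(field_shape)   # sum of squared deviations

    def add_member(self, field):
        """Fold one ensemble member (e.g. one model run's 2D field) into the stats."""
        self.n += 1
        delta = field - self.mean
        self.mean += delta / self.n
        self.m2 += delta * (field - self.mean)

    def variance(self):
        return self.m2 / (self.n - 1) if self.n > 1 else np.zeros_like(self.m2)

# Example: fold in ten hypothetical ensemble members as they are produced.
stats = StreamingEnsembleStats((96, 192))
for _ in range(10):
    stats.add_member(np.random.rand(96, 192))
print(stats.mean.mean(), stats.variance().mean())
```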

The work builds upon the prototypes developed in ESiWACE1.

ESiWACE2 Architecture Milestone Document (architecture-d4.2.pdf)

ESDM builds upon a data model similar to NetCDF and utilizes a self-describing on-disk data format for storing structured data. We aim to deliver the NetCDF-integrated version by the end of the ESiWACE1 project. This version can then be used as a drop-in replacement for typical use cases without changing anything from the application perspective. While the current version relies on manual configuration by data-center experts, the ultimate long-term goal is to employ machine learning to automate the decision making and reduce the burden for users and experts.
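
As an illustration of the drop-in idea, the sketch below is an ordinary netCDF4-python write; with the NetCDF-integrated ESDM sitting behind the NetCDF library, application code like this is expected to stay unchanged. The file name, variable, and dimensions are made up for illustration, and how the ESDM backend is selected and configured is not shown here.

```python
# Minimal sketch: an ordinary netCDF4-python write. With the NetCDF-integrated
# ESDM acting as the storage layer underneath, application code like this is
# expected to stay unchanged; only the storage target/configuration differs
# (how that target is selected is an assumption here and not shown).
import numpy as np
from netCDF4 import Dataset

ds = Dataset("tas_example.nc", "w")
ds.createDimension("time", None)        # unlimited time dimension
ds.createDimension("lat", 96)
ds.createDimension("lon", 192)
tas = ds.createVariable("tas", "f4", ("time", "lat", "lon"))
tas.units = "K"

tas[0, :, :] = 273.15 + 10 * np.random.rand(96, 192)  # one timestep of data
ds.close()
```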

Here are some results achieved in the ESiWACE1 project. We ran our ESDM prototype on the Mistral supercomputer at DKRZ with large numbers of processes. The results for running the benchmarks on 200 nodes with varying numbers of processes are shown in Figure 1. The figure shows the results for different numbers of processes per node (x-axis), considering ten timesteps of 300 GB each.

As the baseline for exploring the efficiency, we ran the IOR benchmark with optimal settings (i.e., large sequential I/O). The figure shows two IOR results: storing data file-per-process (fpp) on Lustre (ior-fpp), as this yielded better performance than shared-file access, and storing fpp on local storage (ior-fpptmp).

Mistral has two Lustre file systems (Lustre01 and Lustre02), and five ESDM configurations were tested: storing data only on Lustre02, settings where data are stored on both Lustre file systems concurrently (both), and environments with in-memory storage (local tmpfs). We also explored whether fragmenting the data into 100 MB or 500 MB files is beneficial (the large configurations). Note that the performance achieved on a single file system is slightly faster than the best-case performance achieved with the optimally configured benchmarks. We conclude that fragmenting the data into chunks accelerates the benchmark.

By effectively utilizing the two file systems, which resemble a heterogeneous environment, we can improve the performance from 150 GB/s to 200 GB/s (133% of a single file system). While this was just benchmark testing, it shows that we are able to exploit the performance available in such heterogeneous storage environments.
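
For orientation, the figures quoted above (ten timesteps of 300 GB, 150 GB/s vs. 200 GB/s) can be combined in a short back-of-the-envelope check; the derived run times below are arithmetic consequences of those numbers, not separately measured values.

```python
# Back-of-the-envelope check using the values quoted in the text:
# ten timesteps of 300 GB each, written at 150 GB/s (single file system)
# versus 200 GB/s (both file systems used concurrently).
volume_gb = 10 * 300                  # total data volume per benchmark run
single_fs_gbps = 150
both_fs_gbps = 200

print(volume_gb / single_fs_gbps)     # ~20 s per run on a single file system
print(volume_gb / both_fs_gbps)       # ~15 s when both file systems are used
print(both_fs_gbps / single_fs_gbps)  # ~1.33, i.e. 133% of a single file system
```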

Publications

  • Toward Understanding I/O Behavior in HPC Workflows (Jakob Lüttgau, Shane Snyder, Philip Carns, Justin M. Wozniak, Julian Kunkel, Thomas Ludwig), 2019-02-11 BibTeX DOI
  • Beating data bottlenecks in weather and climate science (Bryan N. Lawrence, Julian Kunkel, Jonathan Churchill, Neil Massey, Philip Kershaw, Matt Pritchard), 2019-01-25 BibTeX URL PDF
  • Cost and Performance Modeling for Earth System Data Management and Beyond (Jakob Lüttgau, Julian Kunkel), 2019-01-25 BibTeX DOI PDF
  • A Survey of Storage Systems for High-Performance Computing (Jakob Lüttgau, Michael Kuhn, Kira Duwe, Yevhen Alforov, Eugen Betke, Julian Kunkel, Thomas Ludwig), 2018-04 BibTeX URL DOI PDF
  • Poster: Adaptive Tier Selection for NetCDF and HDF5 (Jakob Lüttgau, Eugen Betke, Olga Perevalova, Julian Kunkel, Michael Kuhn), 2017-11-14 BibTeX PDF
  • Simulation of Hierarchical Storage Systems for TCO and QoS (Jakob Lüttgau, Julian Kunkel), 2017 BibTeX DOI PDF
  • Poster: Modeling and Simulation of Tape Libraries for Hierarchical Storage Systems (Jakob Lüttgau, Julian Kunkel), 2016-11-15 BibTeX URL PDF

Talks
