====== ESiWACE2 ======

{{:research:projects:esiwace-logo.png?400&nolink}}

The Centre of Excellence in Simulation of Weather and Climate in Europe (ESiWACE2) is an H2020-funded project and the successor of the [[research:projects:hamburg:esiwace|ESiWACE project]]. Within the project, the research group is responsible for various contributions to the work packages, particularly WP4. Please see the official web page of [[https://www.esiwace.eu/|ESiWACE]] for further information.

**Contact** [[about:people:julian kunkel]]

===== People =====

  * [[about:people:julian kunkel]] (contact)
  * [[about:people:alumni:luciana_rocha_pedro]]
  * [[about:people:alumni:nathanael_huebbe]]

===== Collaboration =====

  * Deutsches Klimarechenzentrum GmbH (coordinator)
  * Centre National de la Recherche Scientifique
  * European Centre for Medium-Range Weather Forecasts
  * Barcelona Supercomputing Center
  * Max-Planck-Gesellschaft zur Förderung der Wissenschaften e.V. / Max-Planck-Institut für Meteorologie
  * Sveriges meteorologiska och hydrologiska institut
  * Centre Européen de Recherche et de Formation Avancée en Calcul Scientifique
  * National University of Ireland Galway (Irish Centre for High End Computing)
  * Met Office
  * Fondazione Centro Euro-Mediterraneo sui Cambiamenti Climatici
  * **The University of Reading**
  * Science and Technology Facilities Council
  * BULL SAS
  * Seagate Systems UK Limited
  * ETH Zürich
  * The University of Manchester
  * Netherlands eScience Center
  * Federal Office of Meteorology and Climatology
  * DataDirect Networks
  * Mercator Océan

===== Goals for the University of Reading =====

While we are involved in various work packages, the main focus lies on WP4, which will provide the necessary toolchain to handle data at pre-exascale and exascale, for single simulations and ensembles. Specifically, we will

  - Support data reduction in ensembles and avoid unnecessary subsequent data manipulations by providing tools to carry out ensemble statistics “in-flight” and compress ensemble members on the way to storage.
  - Provide tools to: a) transparently hide the complexity of multiple storage tiers from applications at runtime by developing middleware that sits between the familiar NetCDF interface and the storage, and prototype commercially credible storage appliances which can appear at the backend of such middleware; and b) support manual migration of semantically important content between primary storage on disk, tape, and object stores, including appropriate user-space caching tools (thus allowing some portable data management within weather and climate workflows).

===== Flexible Storage Layout for Earth-System Data =====

The work builds upon the prototypes developed in ESiWACE1.

{{:research:projects:esd.png?400|}}

{{research:projects:architecture-d4.2.pdf}}

[[https://zenodo.org/record/3724217|ESiWACE2 Architecture Milestone Document]]

ESDM builds upon a data model similar to NetCDF and utilizes a self-describing on-disk data format for storing structured data. We aim to deliver the NetCDF-integrated version by the end of the ESiWACE1 project. The integrated version can then be used as a drop-in replacement for typical use cases without changing anything from the application perspective (see the sketch in the Performance results section below). While the current version relies on manual configuration by data-centre experts, the ultimate long-term goal is to employ machine learning to automate the decision making and reduce the burden on users and experts.

===== Performance results =====

Here are some results achieved in the ESiWACE1 project.
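Conceptually, the benchmarks below exercise the per-timestep write pattern an earth-system model issues through the NetCDF interface, which ESDM is designed to serve transparently. The following minimal sketch of such a write path uses the standard NetCDF-C API; the file name, variable name, and sizes are illustrative only and far smaller than in the actual runs. With an ESDM-enabled NetCDF build, code of this kind would run unchanged.

<code c>
/* Minimal sketch (illustrative names and sizes): one gridded variable
 * written timestep by timestep through the plain NetCDF-C interface. */
#include <netcdf.h>
#include <stdio.h>

#define NX 1024
#define NY 1024
#define NT 10   /* ten timesteps, as in the benchmark setup */

int main(void) {
    int ncid, dimids[3], varid;
    static float field[NX][NY];

    if (nc_create("output.nc", NC_CLOBBER | NC_NETCDF4, &ncid)) return 1;
    nc_def_dim(ncid, "time", NC_UNLIMITED, &dimids[0]);
    nc_def_dim(ncid, "x", NX, &dimids[1]);
    nc_def_dim(ncid, "y", NY, &dimids[2]);
    nc_def_var(ncid, "temperature", NC_FLOAT, 3, dimids, &varid);
    nc_enddef(ncid);

    for (size_t t = 0; t < NT; t++) {
        /* fill the field with dummy data for this timestep */
        for (size_t i = 0; i < NX; i++)
            for (size_t j = 0; j < NY; j++)
                field[i][j] = (float)(t + i + j);

        size_t start[3] = {t, 0, 0};
        size_t count[3] = {1, NX, NY};
        nc_put_vara_float(ncid, varid, start, count, &field[0][0]);
    }
    nc_close(ncid);
    return 0;
}
</code>

Such a program is simply compiled against the NetCDF-C library (e.g. ''gcc -o writer writer.c -lnetcdf''); only the build and backend configuration decide whether the data ends up in a classic NetCDF file or in ESDM-managed storage.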
We ran our ESDM prototype on Mistral with larger numbers of processes. The results for running the benchmarks on 200 nodes with varying numbers of processes are shown in Figure 1. The figure shows the results for different numbers of processes per node (x-axis), considering ten timesteps of 300 GB data each. {{ :research:projects:esiwace:200-nodes.png?400|}} As the baseline for exploring the efficiency, we ran the IOR benchmark using optimal settings (i.e., large sequential I/O). The graphic shows two IOR results: storing file-per-process (fpp) on Lustre (ior-fpp), as this yielded better performance than shared file access, and storing fpp on local storage (ior-fpptmp).

Mistral has two file systems (Lustre01 and Lustre02), and five configurations with ESDM were tested: storing data only in Lustre02, settings where data are stored on both Lustre file systems concurrently (both), and environments with in-memory storage (local tmpfs). We also explored whether fragmenting data into 100 MB or 500 MB files is beneficial (the large configurations). Note that the performance ESDM achieves on a single file system is slightly faster than the best-case performance achieved by the benchmarks with optimal settings. We conclude that the fragmentation into chunks accelerates the benchmark. By effectively utilizing the two file systems, resembling a heterogeneous environment, we can improve the performance from 150 GB/s to 200 GB/s (133% of a single file system). While this was just a benchmark test, it shows that we are able to exploit the available performance.
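For reference, the headline numbers quoted above follow from simple arithmetic; the short snippet below (illustrative only) reproduces the aggregate data volume per run and the relative gain of using both file systems.

<code c>
/* Back-of-the-envelope check of the figures quoted above (illustrative only). */
#include <stdio.h>

int main(void) {
    const double timesteps     = 10;    /* timesteps per run             */
    const double gb_per_step   = 300;   /* GB written per timestep       */
    const double single_fs_gbs = 150;   /* ~throughput on one Lustre FS  */
    const double both_fs_gbs   = 200;   /* ~throughput using both FSs    */

    printf("Total volume per run: %.0f GB (= %.1f TB)\n",
           timesteps * gb_per_step, timesteps * gb_per_step / 1000.0);
    printf("Using both file systems: %.0f%% of a single file system\n",
           100.0 * both_fs_gbs / single_fs_gbs);
    return 0;
}
</code>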