In recent years, classic HPC users have shown an ever-increasing interest in using the public cloud as part of traditional HPC workflows. There are many reasons for this; for example, special hardware components such as TPUs or specialized GPUs often become available in the cloud earlier than in a local data center. In addition, users need to store data for analysis with AI methods across different data silos and to access it flexibly from both HPC and cloud systems. Flexible data migration into, and provisioning from, the data lake plays a central role in data analytics workflows. For this purpose, highly scalable object storage, mostly accessed via an S3 interface, has long been established in the cloud. A further advantage of a consistent data management strategy, as offered by a data lake, is the uniform and consistent view it gives users of the individual data silos.
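To make the S3 interface mentioned above concrete, the following is a minimal sketch of how a user might move data in and out of an S3-compatible object store from Python using the boto3 library. The endpoint URL, bucket name, object keys, and credentials are hypothetical placeholders, not an actual service configuration:

```python
import boto3

# Connect to an S3-compatible object store. The endpoint and credentials
# below are placeholders; substitute the values for your data-lake service.
s3 = boto3.client(
    "s3",
    endpoint_url="https://s3.example-datalake.org",  # hypothetical endpoint
    aws_access_key_id="YOUR_ACCESS_KEY",
    aws_secret_access_key="YOUR_SECRET_KEY",
)

# Upload a local analysis result into the data lake...
s3.upload_file("results.csv", "my-project-bucket", "experiments/results.csv")

# ...and list what the bucket holds, e.g. from an HPC job or a cloud VM.
response = s3.list_objects_v2(Bucket="my-project-bucket", Prefix="experiments/")
for obj in response.get("Contents", []):
    print(obj["Key"], obj["Size"])
```

Because the interface is the same from both sides, the same few lines work on an HPC login node and in a cloud VM, which is precisely what makes the data lake a bridge between the two worlds.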
This workshop aims to bring together researchers who believe that a data lake solution can improve their projects' workflows. At this event, we would like to share our services and future plans regarding the data-lake pipeline.
More importantly, we would like to have a fruitful discussion with you on how your ideas and your needs could be realized. Hence, we kindly invite you to contribute to this event by presenting a few slides about your goals and big-data use case(s).
Our motto is: Let's build a bridge to the data lake together
| | |
| --- | --- |
| Date | Friday, 12 November 2021 |
| Time | 14:00-18:00 |
| Venue | Virtual (BBB room) |
This workshop is funded by the GWDG and supported by the NHR.
The workshop is organized by:
Prospective agenda:
Please register here. If you would like to give a talk, please contact Julian Kunkel.