Design and Implementation of an Analysis Pipeline for Heterogeneous Data

Bibliographic Details
Title: Design and Implementation of an Analysis Pipeline for Heterogeneous Data
Authors: Sarker, Arup Kumar; Alsaadi, Aymen; Perera, Niranda; Staylor, Mills; von Laszewski, Gregor; Turilli, Matteo; Kilic, Ozgur Ozan; Titov, Mikhail; Merzky, Andre; Jha, Shantenu; Fox, Geoffrey
Publication Year: 2024
Collection: Computer Science
Subject Terms: Computer Science - Distributed, Parallel, and Cluster Computing; H.2.4; D.2.7; D.2.2
Description: Managing and preparing complex data for deep learning, a prevalent approach in large-scale data science, can be challenging. Data transfer for model training also presents difficulties, impacting scientific fields such as genomics, climate modeling, and astronomy. A large-scale solution such as Google Pathways, which provides a distributed execution environment for deep learning models, exists but is proprietary. Integrating existing open-source, scalable runtime tools and data frameworks on high-performance computing (HPC) platforms is crucial to address these challenges. Our objective is to establish a smooth and unified method of combining data engineering and deep learning frameworks with diverse execution capabilities that can be deployed on various high-performance computing platforms, including clouds and supercomputers. We aim to support heterogeneous systems with accelerators, where Cylon and other data engineering and deep learning frameworks can utilize heterogeneous execution. To achieve this, we propose Radical-Cylon, a heterogeneous runtime system with a parallel and distributed data framework that executes Cylon as a task of Radical Pilot. We thoroughly explain Radical-Cylon's design and development and the execution process of Cylon tasks using Radical Pilot. This approach enables the use of heterogeneous MPI communicators across multiple nodes. Radical-Cylon achieves better performance than Bare-Metal Cylon with minimal and constant overhead. Radical-Cylon achieves 4~15% faster execution time than batch execution when performing the same join and sort operations on 35 million to 3.5 billion rows with the same resources. The approach aims to excel on HPC systems used for scientific and engineering research while demonstrating robust performance on cloud infrastructures. This dual capability fosters collaboration and innovation within the open-source scientific research community.
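The abstract describes running Cylon data-engineering operations as RADICAL-Pilot tasks. The sketch below illustrates that general pattern only; it is a minimal example assuming the standard radical.pilot Session/PilotManager/TaskManager API and a hypothetical MPI-launched PyCylon join script (cylon_join.py), not the paper's actual Radical-Cylon implementation, resource labels, or task parameters.

    # Sketch: submitting a Cylon-style MPI task through RADICAL-Pilot.
    # Assumptions: radical.pilot 1.x API; 'cylon_join.py' is a hypothetical
    # PyCylon script that performs a distributed join across its MPI ranks.
    import radical.pilot as rp

    session = rp.Session()
    try:
        pmgr = rp.PilotManager(session=session)
        tmgr = rp.TaskManager(session=session)

        # Acquire a pilot (placeholder resource label and sizes).
        pdesc = rp.PilotDescription({'resource': 'local.localhost',
                                     'runtime' : 30,   # minutes
                                     'cores'   : 8})
        pilot = pmgr.submit_pilots(pdesc)
        tmgr.add_pilots(pilot)

        # Describe one Cylon task as an MPI job with 4 ranks.
        td = rp.TaskDescription()
        td.executable = 'python3'
        td.arguments  = ['cylon_join.py']   # hypothetical join script
        td.ranks      = 4                   # rank-count field name may vary by rp version

        tmgr.submit_tasks(td)
        tmgr.wait_tasks()
    finally:
        session.close()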
Comment: 14 pages, 16 figures, 2 tables
Document Type: Working Paper
Open Access: http://arxiv.org/abs/2403.15721
Accession Number: edsarx.2403.15721
Database: arXiv