Exploiting Machine Learning for Improving In-Memory Execution of Data-Intensive Workflows on Parallel Machines

Bibliographic details
Title: Exploiting Machine Learning for Improving In-Memory Execution of Data-Intensive Workflows on Parallel Machines
Authors: Riccardo Cantini, Paolo Trunfio, Alessio Orsino, Fabrizio Marozzo, Domenico Talia
Source: Future Internet
Volume 13
Issue 5
Future Internet, Vol 13, Iss 5, p 121 (2021)
Subject terms: in-memory, data-intensive, workflow, scheduling, machine learning, Apache Spark, Speedup, Computer Networks and Communications, Computer science, Information technology, Scheduling (computing), Job shop scheduling, Robustness (computer science), Process (computing), Task (computing), Parallel processing (DSP implementation), Artificial intelligence
Description: Workflows are largely used to orchestrate complex sets of operations required to handle and process huge amounts of data. Parallel processing is often vital to reduce execution time when complex data-intensive workflows must be run efficiently, and at the same time, in-memory processing can bring important benefits to accelerate execution. However, optimization techniques are necessary to fully exploit in-memory processing, avoiding performance drops due to memory saturation events. This paper proposes a novel solution, called the Intelligent In-memory Workflow Manager (IIWM), for optimizing the in-memory execution of data-intensive workflows on parallel machines. IIWM is based on two complementary strategies: (1) a machine learning strategy for predicting the memory occupancy and execution time of workflow tasks; (2) a scheduling strategy that allocates tasks to a computing node, taking into account the (predicted) memory occupancy and execution time of each task and the memory available on that node. The effectiveness of the machine learning-based predictor and the scheduling strategy was demonstrated experimentally using Spark, a high-performance Big Data processing framework that exploits in-memory computing to speed up the execution of large-scale applications, as a testbed. In particular, two synthetic workflows were prepared for testing the robustness of the IIWM in scenarios characterized by a high level of parallelism and a limited amount of memory reserved for execution. Furthermore, a real data analysis workflow was used as a case study to better assess the benefits of the proposed approach. Thanks to its high accuracy in predicting the resources used at runtime, the IIWM was able to avoid disk writes caused by memory saturation, outperforming a traditional strategy in which only dependencies among tasks are taken into account. Specifically, the IIWM achieved a reduction in makespan of up to 31% and 40%, and a performance improvement of up to 1.45× and 1.66×, on the synthetic workflows and the real case study, respectively.
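To make the memory-aware scheduling idea in the abstract concrete, the following is a minimal sketch, not the IIWM implementation described in the paper: tasks annotated with predicted memory occupancy (the output of the learned predictors) are greedily packed into sequential stages so that tasks running concurrently on a node never exceed its available memory. The Task and Stage classes, the first-fit-decreasing packing heuristic, and the memory figures are hypothetical illustrations.

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class Task:
    name: str
    predicted_memory_mb: float   # hypothetical output of a memory-occupancy predictor
    predicted_time_s: float      # hypothetical output of an execution-time predictor

@dataclass
class Stage:
    tasks: List[Task] = field(default_factory=list)

    @property
    def memory_mb(self) -> float:
        return sum(t.predicted_memory_mb for t in self.tasks)

def schedule(ready_tasks: List[Task], node_memory_mb: float) -> List[Stage]:
    """Greedily pack tasks whose dependencies are already satisfied into
    sequential stages so that the total predicted memory of the tasks
    running concurrently in a stage never exceeds the node's memory."""
    stages: List[Stage] = []
    # Consider the most memory-hungry tasks first (first-fit decreasing).
    for task in sorted(ready_tasks, key=lambda t: t.predicted_memory_mb, reverse=True):
        for stage in stages:
            if stage.memory_mb + task.predicted_memory_mb <= node_memory_mb:
                stage.tasks.append(task)
                break
        else:
            stages.append(Stage(tasks=[task]))
    return stages

if __name__ == "__main__":
    # Illustrative tasks and node capacity (values are made up).
    tasks = [Task("filter", 2048, 30), Task("join", 6144, 120), Task("aggregate", 3072, 45)]
    for i, stage in enumerate(schedule(tasks, node_memory_mb=8192)):
        print(f"stage {i}: {[t.name for t in stage.tasks]} ({stage.memory_mb} MB predicted)")
```

In this sketch, running stages one after another instead of launching all ready tasks at once is what avoids memory-saturation spills, which is the effect the abstract attributes to accurate runtime-resource prediction.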
File description: application/pdf
Language: English
ISSN: 1999-5903
DOI: 10.3390/fi13050121
Open access: https://explore.openaire.eu/search/publication?articleId=doi_dedup___::0ec6ebff76689fcd0706b0388c758f55
Rights: OPEN
Accession number: edsair.doi.dedup.....0ec6ebff76689fcd0706b0388c758f55
Database: OpenAIRE