RINAS: Training with Dataset Shuffling Can Be General and Fast

Bibliographic Details
Title: RINAS: Training with Dataset Shuffling Can Be General and Fast
Authors: Tianle Zhong, Jiechen Zhao, Xindi Guo, Qiang Su, Geoffrey Fox
Publication Year: 2023
Collection: Computer Science
Subject Terms: Computer Science - Databases; Computer Science - Distributed, Parallel, and Cluster Computing; Computer Science - Machine Learning; Computer Science - Performance
Description: Deep learning datasets are expanding at an unprecedented pace, creating new challenges for data processing in model training pipelines. A crucial aspect of these pipelines is dataset shuffling, which significantly improves unbiased learning and convergence accuracy by adhering to the principles of random sampling. However, loading shuffled data from large datasets incurs significant overhead in the deep learning pipeline and severely impacts end-to-end training throughput. To mitigate this, current deep learning systems often resort to partial dataset shuffling, sacrificing global randomness to maintain acceptable training throughput on large datasets and leaving the efficiency of global shuffling largely unexplored. In this work, we present RINAS, a data loading framework that systematically addresses the performance bottleneck of loading globally shuffled datasets. Our key contribution is an intra-batch unordered data fetching approach, which unleashes previously unexplored parallelism in data loading. We implement RINAS under the PyTorch framework for the common dataset libraries HuggingFace and TorchVision. Our experimental results show that RINAS improves the throughput of language model training and vision model training by up to 59% and 89%, respectively.
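The core idea described above, intra-batch unordered fetching, can be illustrated with a minimal sketch. This is not the RINAS implementation; it is a hypothetical Python example showing how collecting a batch's samples in completion order, rather than index order, lets slow reads overlap instead of serializing behind one another. The `load_sample` function and its simulated latency are assumptions for illustration only.

```python
import random
import time
from concurrent.futures import ThreadPoolExecutor, as_completed

# Hypothetical per-sample loader: a real pipeline would read and decode one
# record from storage; here we only simulate uneven per-sample I/O latency.
def load_sample(index):
    time.sleep(random.uniform(0.001, 0.005))
    return index  # stand-in for the decoded sample

def fetch_batch_unordered(indices, workers=8):
    """Fetch all samples of one globally shuffled batch, collecting each
    sample as soon as it is ready instead of waiting in index order."""
    batch = []
    with ThreadPoolExecutor(max_workers=workers) as pool:
        futures = [pool.submit(load_sample, i) for i in indices]
        for fut in as_completed(futures):  # completion order, not submit order
            batch.append(fut.result())
    return batch

indices = random.sample(range(1_000_000), 32)  # one globally shuffled batch
batch = fetch_batch_unordered(indices)
assert sorted(batch) == sorted(indices)  # same samples, arbitrary arrival order
```

Because gradient computation treats a mini-batch as an unordered set, delivering its samples in arrival order preserves the statistics of global random sampling while removing the head-of-line blocking that ordered fetching imposes.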
Document Type: Working Paper
Open Access: http://arxiv.org/abs/2312.02368
Accession Number: edsarx.2312.02368
Database: arXiv