Academic Journal

The Nebula Benchmark Suite: Implications of Lightweight Neural Networks.

Bibliographic Details
Title: The Nebula Benchmark Suite: Implications of Lightweight Neural Networks.
Authors: Kim, Bogil1 (AUTHOR) bogilkim@yonsei.ac.kr, Lee, Sungjae1 (AUTHOR) sungjae.lee@yonsei.ac.kr, Park, Chanho1 (AUTHOR) ch.park@yonsei.ac.kr, Kim, Hyeonjin1 (AUTHOR) hyeonjin_kim@yonsei.ac.kr, Song, William J.1 (AUTHOR) wjhsong@yonsei.ac.kr
Source: IEEE Transactions on Computers. Nov 2021, Vol. 70, Issue 11, p1887-1900. 14p.
Subject Terms: *C++, NEBULAE, MOTIVATION (Psychology)
Abstract: This article presents a benchmark suite named Nebula that implements lightweight neural network benchmarks. Recent neural networks tend to form deeper and more sizable networks to enhance accuracy and applicability. However, the massive volume of heavy networks makes them highly challenging to use in conventional research environments such as microarchitecture simulators. We notice that neural network computations mainly comprise matrix and vector calculations that repeat on multi-dimensional data encompassing batches, channels, layers, etc. This observation motivates us to develop a variable-sized neural network benchmark suite that provides users with options to select an appropriate benchmark size for different research purposes or experimental conditions. Inspired by the implementations of well-known benchmarks such as the PARSEC and SPLASH suites, Nebula offers various size options from large to small datasets for diverse types of neural networks. The Nebula benchmark suite comprises seven representative neural networks built on a C++ framework. The variable-sized benchmarks can be executed i) with acceleration libraries (e.g., BLAS, cuDNN) for faster and more realistic application runs or ii) without the external libraries if execution environments do not support them, e.g., microarchitecture simulators. This article presents a methodology to develop the variable-sized neural network benchmarks, and their performance and characteristics are evaluated based on hardware measurements. The results demonstrate that the Nebula benchmarks reduce execution time by as much as 25x while preserving architectural behaviors similar to those of the full-fledged neural networks. [ABSTRACT FROM AUTHOR]
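The abstract's core idea, shrinking a network's matrix and vector computations along batch/channel/layer dimensions so the same workload runs at selectable sizes, can be sketched as below. This is a hypothetical illustration only, not Nebula's actual interface: the `BenchmarkSize` presets, scale factors, and `fc_forward` helper are invented for the example.

```cpp
// Hypothetical sketch (not the actual Nebula API): a fully-connected layer
// whose dimensions shrink with a chosen size preset, mirroring the idea of
// variable-sized benchmarks that scale batch and channel dimensions.
#include <cassert>
#include <cstddef>
#include <vector>

enum class BenchmarkSize { Large, Medium, Small };

// Scale factor applied to each dimension; smaller presets shrink the
// matrices and hence the benchmark's execution time. The 4x/16x divisors
// are illustrative assumptions, not Nebula's published ratios.
std::size_t scale(std::size_t dim, BenchmarkSize s) {
    switch (s) {
        case BenchmarkSize::Large:  return dim;      // full-fledged network
        case BenchmarkSize::Medium: return dim / 4;
        case BenchmarkSize::Small:  return dim / 16;
    }
    return dim;
}

// Naive GEMM: C(m x n) = A(m x k) * B(k x n). A real run would route this
// through BLAS/cuDNN when available, or fall back to plain loops inside a
// microarchitecture simulator, as the abstract describes.
std::vector<float> gemm(const std::vector<float>& A,
                        const std::vector<float>& B,
                        std::size_t m, std::size_t k, std::size_t n) {
    std::vector<float> C(m * n, 0.0f);
    for (std::size_t i = 0; i < m; ++i)
        for (std::size_t p = 0; p < k; ++p)
            for (std::size_t j = 0; j < n; ++j)
                C[i * n + j] += A[i * k + p] * B[p * n + j];
    return C;
}

// Forward pass of one fully-connected layer at the chosen size preset,
// using dummy data so the computation shape is the only variable.
std::vector<float> fc_forward(BenchmarkSize s, std::size_t batch,
                              std::size_t in_dim, std::size_t out_dim) {
    std::size_t b = scale(batch, s);
    std::size_t in = scale(in_dim, s);
    std::size_t out = scale(out_dim, s);
    std::vector<float> input(b * in, 1.0f);     // dummy activations
    std::vector<float> weights(in * out, 0.5f); // dummy weights
    return gemm(input, weights, b, in, out);
}
```

Switching the preset changes only the matrix dimensions, so the small variant performs the same kind of work as the large one at a fraction of the cost, which is the property the paper exploits to keep architectural behavior representative.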
Copyright of IEEE Transactions on Computers is the property of IEEE and its content may not be copied or emailed to multiple sites or posted to a listserv without the copyright holder's express written permission. However, users may print, download, or email articles for individual use. This abstract may be abridged. No warranty is given about the accuracy of the copy. Users should refer to the original published version of the material for the full abstract. (Copyright applies to all Abstracts.)
Database: Business Source Index
Description
ISSN: 0018-9340
DOI: 10.1109/TC.2020.3029327