Academic Journal

Demonstration of transfer learning using 14 nm technology analog ReRAM array

Bibliographic Details
Title: Demonstration of transfer learning using 14 nm technology analog ReRAM array
Authors: Fabia Farlin Athena, Omobayode Fagbohungbe, Nanbo Gong, Malte J. Rasch, Jimmy Penaloza, SoonCheon Seo, Arthur Gasasira, Paul Solomon, Valeria Bragaglia, Steven Consiglio, Hisashi Higuchi, Chanro Park, Kevin Brew, Paul Jamison, Christopher Catano, Iqbal Saraf, Claire Silvestre, Xuefeng Liu, Babar Khan, Nikhil Jain, Steven McDermott, Rick Johnson, I. Estrada-Raygoza, Juntao Li, Tayfun Gokmen, Ning Li, Ruturaj Pujari, Fabio Carta, Hiroyuki Miyazoe, Martin M. Frank, Antonio La Porta, Devi Koty, Qingyun Yang, Robert D. Clark, Kandabara Tapily, Cory Wajda, Aelan Mosden, Jeff Shearer, Andrew Metz, Sean Teehan, Nicole Saulnier, Bert Offrein, Takaaki Tsunomura, Gert Leusink, Vijay Narayanan, Takashi Ando
Source: Frontiers in Electronics, Vol 4 (2024)
Publication Information: Frontiers Media S.A., 2024.
Publication Year: 2024
Collection: LCC:Electrical engineering. Electronics. Nuclear engineering
Subject Terms: resistive random access memory, HfOx, deep learning, analog hardware, transfer learning, open loop training, Electrical engineering. Electronics. Nuclear engineering, TK1-9971
Description: Analog memory presents a promising solution in the face of the growing demand for energy-efficient artificial intelligence (AI) at the edge. In this study, we demonstrate efficient deep neural network transfer learning utilizing hardware and algorithm co-optimization in an analog resistive random-access memory (ReRAM) array. For the first time, we illustrate that in open-loop deep neural network (DNN) transfer learning for image classification tasks, convergence rates can be accelerated by approximately 3.5 times through the utilization of co-optimized analog ReRAM hardware and the hardware-aware Tiki-Taka v2 (TTv2) algorithm. A simulation based on statistical 14 nm CMOS ReRAM array data provides insights into the performance of transfer learning on larger network workloads, exhibiting notable improvement over conventional training with random initialization. This study shows that analog DNN transfer learning using an optimized ReRAM array can achieve faster convergence with a smaller dataset compared to training from scratch, thus augmenting AI capability at the edge.
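Note: The abstract refers to the transfer-learning workflow only at a high level. The following is a minimal, hedged PyTorch sketch of the generic step it names: reuse pretrained feature layers and retrain only a fresh classifier head on a small target dataset. All layer sizes, names, and the synthetic data below are illustrative assumptions, not the paper's setup; the analog ReRAM mapping and the hardware-aware Tiki-Taka v2 (TTv2) open-loop updates (available, for example, in IBM's aihwkit toolkit) are not reproduced here.

    # Conceptual sketch of transfer learning (assumed sizes and synthetic data).
    import torch
    import torch.nn as nn

    # "Source" network, assumed to be pretrained on a large dataset.
    source_model = nn.Sequential(
        nn.Linear(784, 256), nn.ReLU(),
        nn.Linear(256, 128), nn.ReLU(),
        nn.Linear(128, 10),
    )

    # Target model: reuse the pretrained feature layers, re-initialize the head.
    target_model = nn.Sequential(
        source_model[0], nn.ReLU(),
        source_model[2], nn.ReLU(),
        nn.Linear(128, 10),  # fresh classifier for the target task
    )

    # Freeze the transferred feature layers; only the new head is trained.
    for layer in (target_model[0], target_model[2]):
        for p in layer.parameters():
            p.requires_grad = False

    optimizer = torch.optim.SGD(
        [p for p in target_model.parameters() if p.requires_grad], lr=0.1
    )
    loss_fn = nn.CrossEntropyLoss()

    # Small synthetic target dataset stands in for the edge workload.
    x = torch.randn(64, 784)
    y = torch.randint(0, 10, (64,))

    for _ in range(10):
        optimizer.zero_grad()
        loss = loss_fn(target_model(x), y)
        loss.backward()
        optimizer.step()

Reusing the pretrained layers and updating only the new head is what lets the target task converge with fewer samples than training from scratch; the study quantifies this effect when the trained weights reside in an analog ReRAM array updated with TTv2.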
Document Type: article
File Description: electronic resource
Language: English
ISSN: 2673-5857
Relation: https://www.frontiersin.org/articles/10.3389/felec.2023.1331280/full; https://doaj.org/toc/2673-5857
DOI: 10.3389/felec.2023.1331280
Open Access: https://doaj.org/article/251ccb96d4dd435e83908250513bf0fa
Accession Number: edsdoj.251ccb96d4dd435e83908250513bf0fa
Database: Directory of Open Access Journals