A new non-convex framework to improve asymptotical knowledge on generic stochastic gradient descent

Bibliographic Details
Title: A new non-convex framework to improve asymptotical knowledge on generic stochastic gradient descent
Authors: Fest, Jean-Baptiste; Repetti, Audrey; Chouzenoux, Emilie
Contributors: OPtimisation Imagerie et Santé (OPIS), Inria Saclay - Ile de France, Institut National de Recherche en Informatique et en Automatique (Inria)-Institut National de Recherche en Informatique et en Automatique (Inria)-Centre de vision numérique (CVN), Institut National de Recherche en Informatique et en Automatique (Inria)-CentraleSupélec-Université Paris-Saclay-CentraleSupélec-Université Paris-Saclay, Centre de vision numérique (CVN), Institut National de Recherche en Informatique et en Automatique (Inria)-CentraleSupélec-Université Paris-Saclay, Heriot-Watt University Edinburgh (HWU), European Project: ERC-2019-STG-850925,MAJORIS(2020)
Source: Proceedings of the IEEE International Workshop on Machine Learning for Signal Processing (MLSP 2023) ; https://inria.hal.science/hal-04165342 ; MLSP 2023 - IEEE International Workshop on Machine Learning for Signal Processing, Sep 2023, Rome, Italy
Publication Information: HAL CCSD
Publication Year: 2023
Collection: Archive ouverte HAL (Hyper Article en Ligne, CCSD - Centre pour la Communication Scientifique Directe)
Subject Terms: Stochastic gradient descent, non-convex optimization, Kurdyka-Łojasiewicz, convergence analysis, [MATH.MATH-OC]Mathematics [math]/Optimization and Control [math.OC]
Geographic Terms: Rome, Italy
Description: International audience ; Stochastic gradient optimization methods are widely used to minimize non-convex smooth objective functions, for instance when training deep neural networks. However, theoretical guarantees on the asymptotic behaviour of these methods remain scarce. In particular, ensuring almost-sure convergence of the iterates to a stationary point is challenging. In this work, we introduce a new Kurdyka-Łojasiewicz theoretical framework to analyze the asymptotic behaviour of stochastic gradient descent (SGD) schemes when minimizing non-convex smooth objectives. In particular, our framework provides new almost-sure convergence results for iterates generated by any SGD method satisfying mild conditional descent conditions. We illustrate the proposed framework on several toy simulation examples, showing the role of the considered theoretical assumptions and investigating how the SGD iterates behave when these assumptions are fully or only partially satisfied.
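The central object named in the abstract is the Kurdyka-Łojasiewicz (KL) property. The paper's exact formulation is not reproduced in this record; the standard textbook form of the inequality reads as follows.

```latex
% Standard KL inequality (textbook form; the paper's precise variant may differ).
% f has the KL property at a stationary value f* if there exist \eta > 0 and a
% concave desingularizing function \varphi : [0, \eta) -> [0, +\infty),
% \varphi(0) = 0, continuously differentiable with \varphi' > 0 on (0, \eta),
% such that
\[
  \varphi'\bigl(f(x) - f^{\star}\bigr)\,\bigl\|\nabla f(x)\bigr\| \;\geq\; 1
\]
% for every x in a neighbourhood of the stationary set with
% f^{\star} < f(x) < f^{\star} + \eta.
```

To make the setting concrete, here is a minimal, self-contained sketch (not the authors' code) of SGD on a smooth non-convex toy objective, with Robbins-Monro step sizes and bounded-variance gradient noise. The objective, step-size exponent, and noise level are illustrative assumptions, not taken from the paper.

```python
import numpy as np

# Toy smooth non-convex objective: f(x) = sum_i ( x_i^2 / 2 + 2 cos(x_i) ).
# It is real-analytic, hence satisfies a Kurdyka-Lojasiewicz inequality,
# and non-convex since f''_i(x) = 1 - 2 cos(x_i) < 0 near the origin.
def grad_f(x):
    return x - 2.0 * np.sin(x)

rng = np.random.default_rng(0)
x = rng.normal(scale=3.0, size=10)            # random initialization

for k in range(1, 20_001):
    gamma_k = 0.5 / k ** 0.6                  # Robbins-Monro steps: sum diverges, sum of squares converges
    noise = 0.1 * rng.normal(size=x.shape)    # zero-mean, bounded-variance gradient noise (assumption)
    x -= gamma_k * (grad_f(x) + noise)        # stochastic gradient update

print("final gradient norm:", np.linalg.norm(grad_f(x)))  # small if x has neared a stationary point
```

Under such conditions, classical descent-type analyses typically give convergence of the gradient norm in expectation; per the abstract, the paper's contribution is stronger, namely almost-sure convergence of the iterates themselves under a KL assumption.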
Document Type: conference object
Language: English
Relation: info:eu-repo/grantAgreement//ERC-2019-STG-850925/EU/Majoration-Minimization algorithms for Image Processing/MAJORIS; hal-04165342; https://inria.hal.science/hal-04165342; https://inria.hal.science/hal-04165342/document; https://inria.hal.science/hal-04165342/file/MLSP_2023.pdf
Availability: https://inria.hal.science/hal-04165342
https://inria.hal.science/hal-04165342/document
https://inria.hal.science/hal-04165342/file/MLSP_2023.pdf
Rights: http://creativecommons.org/licenses/by/ ; info:eu-repo/semantics/OpenAccess
Accession Number: edsbas.4431CF23
Database: BASE