Showing 1 - 2 of 2 results for '"Shen Han"', query time: 0.64s
  1.
    Journal

    Source: IEEE Transactions on Signal Processing; 2023, Vol. 71, Issue 1, p. 2579-2594, 16p.

    Abstract: Asynchronous and parallel implementation of standard reinforcement learning (RL) algorithms is a key enabler of the tremendous success of modern RL. Among many asynchronous RL algorithms, arguably the most popular and effective one is the asynchronous advantage actor-critic (A3C) algorithm. Although A3C is becoming the workhorse of RL, its theoretical properties are still not well understood, including its non-asymptotic analysis and the performance gain of parallelism (a.k.a. linear speedup). This paper revisits the A3C algorithm and establishes its non-asymptotic convergence guarantees. Under both i.i.d. and Markovian sampling, we establish the local convergence guarantee for A3C in the general policy approximation case and the global convergence guarantee under softmax policy parameterization. Under i.i.d. sampling, A3C obtains a sample complexity of $\mathcal {O}(\epsilon ^{-2.5}/N)$ per worker to achieve $\epsilon$ accuracy, where $N$ is the number of workers. Compared to the best-known sample complexity of $\mathcal {O}(\epsilon ^{-2.5})$ for two-timescale AC, A3C achieves linear speedup, which theoretically justifies the advantage of parallelism and asynchrony in AC algorithms for the first time. Numerical tests on a synthetic environment, OpenAI Gym environments, and Atari games are provided to verify our theoretical analysis.
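    For context, the following is a minimal, hypothetical sketch of the asynchronous actor-critic pattern the abstract describes: several worker threads run episodes on independent copies of a toy chain MDP and apply lock-free advantage-based updates to shared softmax-policy logits and a shared value table. The environment, step sizes, and tabular parameterization are illustrative assumptions, not the paper's setup or code.

```python
# Hypothetical minimal sketch of the A3C pattern (tabular; not the paper's code):
# worker threads run episodes on independent copies of a toy chain MDP and apply
# lock-free (Hogwild-style) advantage actor-critic updates to shared parameters.
import threading
import numpy as np

N_STATES, N_ACTIONS = 6, 2                 # chain of 6 states; actions: left, right
GAMMA, ALPHA_PI, ALPHA_V = 0.95, 0.1, 0.1  # discount and step sizes (assumed)
N_WORKERS, EPISODES_PER_WORKER = 4, 500

theta = np.zeros((N_STATES, N_ACTIONS))    # shared softmax-policy logits (actor)
values = np.zeros(N_STATES)                # shared state-value table (critic)

def step(state, action):
    """Toy chain MDP: move left (0) or right (1); reward 1 at the right end."""
    nxt = max(0, state - 1) if action == 0 else min(N_STATES - 1, state + 1)
    done = nxt == N_STATES - 1
    return nxt, (1.0 if done else 0.0), done

def softmax(x):
    z = np.exp(x - x.max())
    return z / z.sum()

def worker(rng):
    for _ in range(EPISODES_PER_WORKER):
        state, done, trajectory = 0, False, []
        while not done:
            action = rng.choice(N_ACTIONS, p=softmax(theta[state]))
            nxt, reward, done = step(state, action)
            trajectory.append((state, action, reward))
            state = nxt
        ret = 0.0                          # Monte-Carlo return as a simple stand-in
        for s, a, r in reversed(trajectory):
            ret = r + GAMMA * ret
            advantage = ret - values[s]
            grad_log_pi = -softmax(theta[s])   # grad of log softmax w.r.t. logits
            grad_log_pi[a] += 1.0              # equals onehot(a) - pi(.|s)
            theta[s] += ALPHA_PI * advantage * grad_log_pi  # asynchronous actor step
            values[s] += ALPHA_V * advantage                # asynchronous critic step

threads = [threading.Thread(target=worker, args=(np.random.default_rng(i),))
           for i in range(N_WORKERS)]
for t in threads:
    t.start()
for t in threads:
    t.join()

print("P(right | state):",
      np.round([softmax(theta[s])[1] for s in range(N_STATES - 1)], 2))
```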

  2.
    Academic Journal

    Authors: Wu, Zhaoxian (wuzhx23@mail2.sysu.edu.cn); Shen, Han (shenh5@rpi.edu); Chen, Tianyi (chentianyi19@gmail.com); Ling, Qing (lingqing556@mail.sysu.edu.cn)

    Source: IEEE Transactions on Signal Processing; 11/15/2021, p. 3839-3853, 15p.

    Abstract: In this paper, we consider the policy evaluation problem in reinforcement learning with agents on a decentralized and directed network. In order to evaluate the quality of a fixed policy in this decentralized setting, one option is for agents to run decentralized temporal-difference (TD) learning collaboratively. To account for the practical scenarios where the state and action spaces are large and malicious attacks emerge, we focus on decentralized TD learning with linear function approximation in the presence of malicious agents (often termed Byzantine agents). We propose a trimmed-mean-based Byzantine-resilient decentralized TD algorithm to perform policy evaluation in this setting. We establish the finite-time convergence rate, as well as the asymptotic learning error that depends on the number of Byzantine agents. Numerical experiments corroborate the robustness of the proposed algorithm. [ABSTRACT FROM AUTHOR]
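    As a rough illustration of the aggregation idea, here is a hypothetical sketch of decentralized TD(0) with linear function approximation in which honest agents combine received parameter vectors with a coordinate-wise trimmed mean. The Markov reward process, fully connected topology, attack model, and step sizes are assumptions made for the example, not the paper's exact setting or algorithm.

```python
# Hypothetical sketch of trimmed-mean-based Byzantine-resilient decentralized
# TD(0) with linear function approximation; the paper's network model, step
# sizes, and exact update rule may differ.
import numpy as np

rng = np.random.default_rng(0)

# Small Markov reward process (a fixed policy is already folded in).
N_S, DIM = 5, 3
P = rng.dirichlet(np.ones(N_S), size=N_S)     # transition matrix (rows sum to 1)
R = rng.uniform(0, 1, size=N_S)               # expected rewards
PHI = rng.standard_normal((N_S, DIM))         # linear features per state
GAMMA, ALPHA = 0.9, 0.05

N_AGENTS, N_BYZANTINE, TRIM_B = 8, 2, 2       # trim as many as there are attackers
HONEST = range(N_BYZANTINE, N_AGENTS)         # agents 0..1 act as Byzantine

def trimmed_mean(vectors, b):
    """Coordinate-wise trimmed mean: drop the b largest and b smallest entries."""
    stacked = np.sort(np.stack(vectors), axis=0)
    return stacked[b:len(vectors) - b].mean(axis=0)

weights = np.zeros((N_AGENTS, DIM))
states = rng.integers(N_S, size=N_AGENTS)

for _ in range(3000):
    # 1) Local TD(0) step on each honest agent's own transition sample.
    for i in HONEST:
        s = states[i]
        s_next = rng.choice(N_S, p=P[s])
        delta = R[s] + GAMMA * PHI[s_next] @ weights[i] - PHI[s] @ weights[i]
        weights[i] += ALPHA * delta * PHI[s]
        states[i] = s_next
    # 2) Byzantine agents broadcast arbitrary (here: large random) parameters.
    for i in range(N_BYZANTINE):
        weights[i] = 10.0 * rng.standard_normal(DIM)
    # 3) Honest agents aggregate all received parameters (fully connected
    #    network for simplicity) with a coordinate-wise trimmed mean.
    aggregated = trimmed_mean(list(weights), TRIM_B)
    for i in HONEST:
        weights[i] = aggregated

print("Consensus parameters of honest agents:", np.round(weights[-1], 3))
```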
