Academic Journal

SPIDE: A purely spike-based method for training feedback spiking neural networks.

Bibliographic Details
Title: SPIDE: A purely spike-based method for training feedback spiking neural networks.
Authors: Xiao, Mingqing1, mingqing_xiao@pku.edu.cn; Meng, Qingyan2,3, qingyanmeng@link.cuhk.edu.cn; Zhang, Zongpeng4, zhangzongpeng@stu.pku.edu.cn; Wang, Yisen1,5, yisen.wang@pku.edu.cn; Lin, Zhouchen1,5,6, zlin@pku.edu.cn
Source: Neural Networks. Apr 2023, Vol. 161, p. 9-24. 16 p.
Subject Terms: *ARTIFICIAL neural networks, *ACTION potentials, *SUPERVISED learning, *APPROXIMATION error, *FLEXIBLE structures, *ENERGY industries
Abstract: Spiking neural networks (SNNs) with event-based computation are promising brain-inspired models for energy-efficient applications on neuromorphic hardware. However, most supervised SNN training methods, such as conversion from artificial neural networks or direct training with surrogate gradients, require complex computation rather than the spike-based operations of spiking neurons during training. In this paper, we study spike-based implicit differentiation on the equilibrium state (SPIDE), which extends the recently proposed training method, implicit differentiation on the equilibrium state (IDE), to supervised learning with purely spike-based computation, demonstrating the potential for energy-efficient training of SNNs. Specifically, we introduce ternary spiking neuron couples and prove that implicit differentiation can be solved by spikes based on this design, so the whole training procedure, including both forward and backward passes, is carried out as event-driven spike computation, and weights are updated locally with two-stage average firing rates. We then propose to modify the reset membrane potential to reduce the approximation error of spikes. With these key components, we can train SNNs with flexible structures in a small number of time steps and with firing sparsity during training, and a theoretical estimation of energy costs demonstrates the potential for high efficiency. Meanwhile, experiments show that even with these constraints, our trained models can still achieve competitive results on MNIST, CIFAR-10, CIFAR-100, and CIFAR10-DVS.
• Novel method with purely spike-based computation to train spiking neural networks.
• Analysis of the approximation error of spikes and a method to reduce the error.
• Much lower energy costs through low latency and firing sparsity during training.
• Competitive performance on static and neuromorphic datasets.
[ABSTRACT FROM AUTHOR]
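To make the implicit differentiation the abstract refers to concrete, below is a minimal NumPy sketch of IDE-style training for a single feedback layer. It is not the authors' spike-based implementation: the clipped-linear rate map, the dimensions, the toy loss, and all names (W, F, f, the solve loops) are illustrative assumptions. The point of SPIDE is that the same linear fixed point in the backward pass can be solved by ternary spiking neuron couples instead of the floating-point iteration shown here.

import numpy as np

rng = np.random.default_rng(0)
d_in, d_hid = 8, 16  # illustrative sizes

# Hypothetical single feedback layer: the equilibrium average firing
# rate a* satisfies a* = f(a*) with f(a) = clip(W a + F x + b, 0, 1).
W = 0.1 * rng.standard_normal((d_hid, d_hid))  # feedback (recurrent) weights
F = rng.standard_normal((d_hid, d_in))         # input weights
b = np.zeros(d_hid)
x = rng.standard_normal(d_in)

def f(a):
    # Bounded rate map: average firing rates live in [0, 1].
    return np.clip(W @ a + F @ x + b, 0.0, 1.0)

# Forward pass: converge to the equilibrium state. In the SNN this is
# realized by averaging spikes over time steps, not explicit iteration.
a = np.zeros(d_hid)
for _ in range(100):
    a = f(a)

# Toy loss on the equilibrium rates: L = 0.5 * ||a - y||^2.
y = rng.random(d_hid)
dL_da = a - y

# Jacobian of f at the equilibrium: J = D W, with D a 0/1 mask from clip.
pre = W @ a + F @ x + b
mask = ((pre > 0.0) & (pre < 1.0)).astype(float)
J = mask[:, None] * W

# Implicit differentiation: the adjoint g = (I - J)^{-T} dL/da is the
# solution of the linear fixed point g = J^T g + dL/da. SPIDE solves
# this same system with spikes; here we iterate in floats.
g = dL_da.copy()
for _ in range(100):
    g = J.T @ g + dL_da

# Local weight updates from the masked adjoint and equilibrium rates.
delta = mask * g
dL_dW = np.outer(delta, a)
dL_dF = np.outer(delta, x)

print("equilibrium residual:", float(np.linalg.norm(a - f(a))))
print("adjoint residual:", float(np.linalg.norm(g - (J.T @ g + dL_da))))

Both loops converge only when the spectral radius of J is below one, which is why the sketch scales W by 0.1; the paper's analysis of the approximation error of spikes concerns the additional error incurred when these iterations are carried out with a finite number of discrete spikes.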
Database: Academic Search Index
Description
ISSN: 0893-6080
DOI: 10.1016/j.neunet.2023.01.026