Article details | |
Publication | 2018 article |
English article length | 47 pages |
Cost | The English article is free to download. |
Published in | Elsevier |
Writing type | Research article |
Base article | This is a base article. |
Article type | ISI |
Article title | Deep Neural Networks with Weighted Spikes |
English article format | |
Related disciplines | Computer engineering |
Related specializations | Artificial intelligence |
Journal | Neurocomputing |
University | Department of Electrical and Computer Engineering – Seoul National University – Korea |
Keywords | Spiking neural network, weighted spike, supervised learning |
DOI | https://doi.org/10.1016/j.neucom.2018.05.087 |
Product code | E8625 |
Translation status | No ready translation of this article is available; a translation can be ordered. |
Excerpt from the article: |
1. Introduction
Nowadays deep neural networks (DNNs) are continuously expanding their influence on application areas such as image classification [1], speech recognition [2], natural language processing [3], and many others. However, their heavy computational load and high energy consumption still block the broader use of DNNs in practical applications that require large-scale data processing in real time [4]. As an alternative, spiking neural networks (SNNs) have been studied by many researchers for the purpose of building neuromorphic hardware [5, 6, 7, 8, 9] with low energy consumption. A spiking neural network consists of spiking neurons, each of which fires an output spike only when its membrane potential is charged above a certain threshold [10]. The generated spike is propagated to the neurons in the next layer and increases or decreases their membrane potentials. In this manner, communication between neurons is performed by spikes. In SNNs, processing each input spike in a neuron requires only a single, simple addition to the membrane potential, whereas conventional artificial neural networks (ANNs) require multi-bit input signals and a multiplication in addition to the accumulation, which consumes much more energy.

There are various approaches to training a spiking neural network. Among them, one popular way is to train an ANN with the same topology and then convert its synaptic weights into those of the SNN. The resulting SNN achieves classification accuracy comparable to the ANN even for a deep topology. Most ANN-to-SNN conversion approaches use Poisson-distributed rate coding [11], where the spike firing rate, or the number of spikes generated within a certain time interval, approximates the signal intensity. Rate coding inevitably requires a long time to represent high-precision information, which means it has a low information capacity. As a result, the classification latency grows considerably as the depth of the SNN increases, since spikes can be propagated to the next neurons only after the membrane potentials of the current neurons are charged above the given threshold. Moreover, the information represented by the spike firing rate becomes more error-prone in deeper layers of the network [12], so a larger number of spikes is required to reduce the approximation errors in deep networks. Since the number of additions is proportional to the number of spikes arriving at neurons in an SNN, a larger number of spikes incurs more dynamic energy consumption and significantly diminishes the merit of low energy consumption in deep SNNs.
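As a concrete illustration of the mechanism described above, the following Python sketch (not taken from the paper) simulates a single layer of integrate-and-fire neurons driven by Poisson rate-coded inputs. The layer sizes, weights, simulation window T, firing threshold, and the reset-by-subtraction rule are illustrative assumptions, not the authors' configuration.

```python
# Minimal sketch of rate-coded inputs feeding integrate-and-fire neurons.
# All sizes, weights, and the time window are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(0)

T = 100                      # simulation time steps (assumed)
n_in, n_out = 4, 3           # toy layer sizes (assumed)
threshold = 1.0              # firing threshold of the membrane potential (assumed)

x = rng.uniform(0.0, 1.0, n_in)          # input intensities in [0, 1]
W = rng.normal(0.0, 0.5, (n_in, n_out))  # synaptic weights (taken from a trained ANN
                                         # in the conversion approach)

v = np.zeros(n_out)                      # membrane potentials
out_spike_count = np.zeros(n_out)

for t in range(T):
    # Poisson-distributed rate coding: an input neuron fires with probability
    # proportional to its intensity, so its firing rate approximates the signal.
    in_spikes = (rng.uniform(size=n_in) < x).astype(float)

    # Each arriving spike contributes its synaptic weight to the membrane
    # potential by a single addition (or subtraction for negative weights);
    # no multiplication by a multi-bit activation is needed.
    v += W.T @ in_spikes

    # A neuron emits a spike only when its potential crosses the threshold;
    # reset-by-subtraction is one common choice in conversion-based SNNs.
    fired = v >= threshold
    out_spike_count += fired
    v[fired] -= threshold

# The output firing rates approximate the corresponding ANN activations.
print("output firing rates:", out_spike_count / T)
```

Lengthening the window T sharpens the rate-coded approximation but also increases latency and the number of spike-driven additions, which is exactly the trade-off the introduction highlights.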
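The claim that rate coding needs a long window to represent high-precision information can be seen with an equally small sketch (again illustrative; the intensity x and the window lengths are assumptions): the spike count over T steps estimates x with an error that shrinks only roughly as 1/sqrt(T), so each additional bit of precision costs a much longer window and proportionally more spike-driven additions.

```python
# Illustrative precision/latency trade-off of rate coding: the spike count in a
# window of T steps approximates an intensity x, and the error shrinks slowly
# with T. The value of x and the window lengths are arbitrary assumptions.
import numpy as np

rng = np.random.default_rng(1)
x = 0.63                                  # signal intensity to encode (assumed)

for T in (10, 100, 1000, 10000):
    spikes = rng.uniform(size=T) < x      # Poisson-like spike train at rate x
    estimate = spikes.sum() / T           # rate-coded reconstruction of x
    print(f"T={T:6d}  estimate={estimate:.4f}  |error|={abs(estimate - x):.4f}")
```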