Free English article on the effect of dilution in asymmetric recurrent neural networks – Elsevier 2018

 

Article details
Publication year: 2018
Length: 10 pages (English)
Download of the English article is free.
Publisher: Elsevier
Article type: Research article
Indexing: ISI
English title: Effect of dilution in asymmetric recurrent neural networks
Format: PDF
Related fields: Biomedical Engineering, Information Technology
Related specialties: Bioelectrics, Computer Networks
Journal: Neural Networks
Institution: Center for Life Nanoscience – Istituto Italiano di Tecnologia – Italy
Keywords: Recurrent neural networks, McCulloch–Pitts neurons, Memory models, Maximum memory storage
DOI: https://doi.org/10.1016/j.neunet.2018.04.003
Product code: E8768
Translation status: no prepared Persian translation of this article is available.

 

Excerpt from the article:
1. Introduction

Recurrent neural networks are able to store stimulus–response associations, and serve as a model of how living neural networks store and recall behaviors as responses to given stimuli. A discrete-time deterministic recurrent neural network of N binary neurons is completely characterized by its N^2 edges, and its instantaneous state is defined by a neuron activation vector σ, a binary vector of size N. In this paper, we consider a specific kind of recurrent neural network, which is initialized, analogously to a Hopfield network, by assigning to the network's neurons an initial pattern, which is the network stimulus or input.

The collection of all possible neuron activation vectors contains 2^N allowed vectors σ. These vectors can be partitioned into three categories: steady states, limit cycles, and transient states. Steady states are neuron activation states that do not change in time, and limit cycles are sequences of neuron activation vectors that repeat cyclically, with a period that we call the cycle length. From now on, we will consider a steady state as a limit cycle of length 1. Given any initial activation vector, a network always evolves to a limit cycle, which for this reason we also refer to as an attractor. In other words, a network associates a limit cycle to any initial neural activation state that is given as an input. For this reason, limit cycles can be considered as behaviors stored as responses to initial stimuli. In the case of length-1 cycles, the limit behavior is a single activation state, which in the case of Hopfield networks corresponds to the recollection of a memory. In the case of cycles of length greater than 1, the stored limit behaviors are sequences of activation patterns, which may correspond to a stored dynamical sequence, such as the performance of a complex motor task, or a dynamic sequence of static memories.
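The dynamics described above can be illustrated with a minimal sketch: a network of ±1 McCulloch–Pitts neurons with an asymmetric random weight matrix, iterated until a previously visited state recurs, at which point the transient length and cycle length are read off. The update rule (sign activation, synchronous updates) and the helper name `simulate` are assumptions for illustration; the paper's exact model may differ in details such as thresholds or the state alphabet.

```python
import numpy as np

def simulate(W, sigma0, max_steps=1000):
    """Iterate sigma(t+1) = sign(W @ sigma(t)) until a state repeats.

    Because the dynamics are deterministic on a finite set of 2^N states,
    the trajectory must eventually enter a limit cycle. Returns the pair
    (transient_length, cycle_length); a steady state gives cycle_length 1.
    """
    seen = {}                       # state tuple -> time it was first visited
    sigma = tuple(int(s) for s in sigma0)
    for t in range(max_steps):
        if sigma in seen:
            return seen[sigma], t - seen[sigma]
        seen[sigma] = t
        nxt = np.sign(W @ np.array(sigma))
        nxt[nxt == 0] = 1           # break ties deterministically
        sigma = tuple(nxt.astype(int))
    raise RuntimeError("no cycle found within max_steps")

rng = np.random.default_rng(0)
N = 8
W = rng.standard_normal((N, N))     # asymmetric: W[i, j] != W[j, i] in general
sigma0 = rng.choice([-1, 1], size=N)
transient, cycle_len = simulate(W, sigma0)
```

Since the state space has only 2^N elements, the sum of the transient and cycle lengths can never exceed 2^N, so for small N the cycle is found quickly.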
In principle, a recurrent neural network stores a certain number of limit behaviors as vectors from a 2^N set in a data structure defined by N^2 parameters. Furthermore, these vectors can be recovered in response to input stimuli. This clearly has intriguing analogies with content-addressable memory systems capable of indexing large strings of bits (Carpenter, 1989; Hopfield, 1987).
