Free English Article on the Decomposition of Rotor Hopfield Neural Networks – IEEE 2017
Article specifications
Article title | Decomposition of Rotor Hopfield Neural Networks Using Complex Numbers
Publication year | 2017
Number of pages (English article) | 5 pages
Cost | The English article is free to download.
Database | IEEE publication
Base paper | This article is not a base paper
Index | Scopus – Master Journals – JCR – MedLine
Article type | ISI
English article format |
Impact factor (IF) | 7.982 in 2017
Related fields | Computer and information technology engineering
Related specializations | Computer networks and artificial intelligence
Presentation type | Journal
Journal / Conference | IEEE Transactions on Neural Networks and Learning Systems
University | Mathematical Science Center – University of Yamanashi – Japan
Keywords | Hebbian learning rule, rotor Hopfield neural networks (RHNNs), symmetric complex-valued Hopfield neural networks (SCHNNs), widely linear estimation
Digital identifier (DOI) | https://doi.org/10.1109/TNNLS.2017.2657781
Product code | E9512
Translation status | No ready translation of this article is available. You can order one via the button below.
Free article download | Download the free English article
Order article translation | Order a translation of this article
Article table of contents:
Abstract
I. Introduction
II. Complex-Valued Hopfield Neural Networks
III. Symmetric Complex-Valued Hopfield Neural Networks
IV. Rotor Hopfield Neural Networks
V. Representation of CHNN Using RHNN
VI. Representation of SCHNN Using RHNN
VIII. Conclusion
References
Excerpt from the article:
Abstract
A complex-valued Hopfield neural network (CHNN) is a multistate model of a Hopfield neural network. It has the disadvantage of low noise tolerance. Meanwhile, a symmetric CHNN (SCHNN) is a modification of a CHNN that improves noise tolerance. Furthermore, a rotor Hopfield neural network (RHNN) is an extension of a CHNN. It has twice the storage capacity of CHNNs and SCHNNs, and much better noise tolerance than CHNNs, although it requires twice as many connection parameters. In this brief, we investigate the relations between CHNNs, SCHNNs, and RHNNs: an RHNN is uniquely decomposed into a CHNN and an SCHNN. In addition, the Hebbian learning rule for RHNNs is decomposed into those for CHNNs and SCHNNs.

Introduction
A complex-valued Hopfield neural network (CHNN) is a multistate model of a Hopfield neural network [1]–[3]. It is often applied to image storage [4], [5]. Several modifications and extensions of CHNNs have been proposed [6]–[16]. A symmetric CHNN (SCHNN) is a modification of a CHNN [17]. An SCHNN has the same number of connection parameters and almost the same storage capacity as a CHNN; since it does not have rotational invariance, however, it has much better noise tolerance. A rotor Hopfield neural network (RHNN) is an extension of a CHNN [18]–[20]. Like an SCHNN, an RHNN does not have rotational invariance and improves noise tolerance significantly. In addition, an RHNN has twice the storage capacity of a CHNN, although twice as many connection parameters are necessary [20]–[22]. RHNNs have been applied to dynamic associative memories, such as chaotic associative memories that never recall rotated patterns [23], [24].

In this brief, the relations between CHNNs, SCHNNs, and RHNNs are studied. CHNNs and SCHNNs are organized using complex numbers, whereas RHNNs are organized using 2-D vectors and 2 × 2 matrices. RHNNs include both CHNNs and SCHNNs; that is, CHNNs and SCHNNs can be represented by RHNNs. It is shown that an RHNN is uniquely decomposed into a CHNN and an SCHNN. The decomposition matches a widely linear estimation [25], with one difference: in a widely linear estimation the connections are one way, whereas in an RHNN they are mutual. Widely linear estimation with complex numbers has been utilized in communications and adaptive filters [26]–[28]. In recent years, applications of widely linear estimation have been extended to quaternionic signal processing [29]. Moreover, the Hebbian learning rule for RHNNs is decomposed into those for CHNNs and SCHNNs. This brief provides a new perspective on RHNNs and extends the strategy of learning algorithms.
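To make the central claim concrete, here is a minimal NumPy sketch of the decomposition the abstract describes: any 2 × 2 rotor (RHNN) connection matrix splits uniquely into a complex-linear part, which commutes with rotations and plays the role of the CHNN component, and a conjugate-linear part, which plays the role of the SCHNN component, mirroring the widely linear form w·z + v·conj(z). The function name, sign conventions, and the 1/2 normalization appearing in the Hebbian check are illustrative assumptions, not taken from the paper.

import numpy as np

def decompose_rotor_weight(W):
    # Split a 2x2 rotor (RHNN) connection matrix W into a complex-linear
    # part w (the CHNN component) and a conjugate-linear part v (the
    # SCHNN component), so that W @ [x, y] equals w*z + v*conj(z)
    # written as a 2-D vector, where z = x + 1j*y.
    p, q = W[0]
    r, s = W[1]
    w = (p + s) / 2 + 1j * (r - q) / 2  # commutes with rotations
    v = (p - s) / 2 + 1j * (q + r) / 2  # anti-commutes with rotations
    return w, v

rng = np.random.default_rng(0)

# Any 2x2 weight matrix decomposes this way, and the two parts
# reproduce the widely linear form w*z + v*conj(z).
W = rng.standard_normal((2, 2))
w, v = decompose_rotor_weight(W)
z = complex(*rng.standard_normal(2))
out = w * z + v * np.conj(z)
assert np.allclose(W @ [z.real, z.imag], [out.real, out.imag])

# The Hebbian outer product of two rotor states decomposes into the
# CHNN Hebbian term z_j * conj(z_k) and the SCHNN Hebbian term z_j * z_k
# (each picking up a factor 1/2 under this matrix convention).
zj, zk = np.exp(1j * rng.uniform(0.0, 2.0 * np.pi, 2))
H = np.outer([zj.real, zj.imag], [zk.real, zk.imag])
wH, vH = decompose_rotor_weight(H)
assert np.isclose(wH, zj * np.conj(zk) / 2)
assert np.isclose(vH, zj * zk / 2)

The two assertions mirror the brief's two results under these assumed conventions: the unique CHNN plus SCHNN split of an RHNN connection matrix, and the corresponding split of the Hebbian learning rule.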