Free English article on back propagation neural networks in MapReduce – IEEE 2019
Article details
Translated title | Training back propagation neural networks in MapReduce on high-dimensional big datasets with global evolution
English title | Training Back Propagation Neural Networks in MapReduce on High-Dimensional Big Datasets With Global Evolution
Publication year | 2019
English article page count | 13 pages
Cost | The English article is free to download.
Database | IEEE
Article type | Research Article
Base article | This is not a base article
Indexing | Scopus – Master Journals List – JCR
Article category | ISI
English article format |
Impact Factor (IF) | 4.641 in 2018
H-index | 56 in 2019
SJR | 0.609 in 2018
ISSN | 2169-3536
Quartile | Q2 in 2018
Conceptual model | None
Questionnaire | None
Variables | None
References | Included
Related fields | Computer engineering, information technology engineering
Related specializations | Computer networks
Presentation type | Journal
Journal / Conference | IEEE Access
University | College of Computer Science and Engineering, Northwest Normal University, Lanzhou 730070, China
Keywords | Convergency, distributed data-parallelism, evolution, MapReduce, neural network
DOI | https://doi.org/10.1109/ACCESS.2019.2951189
Product code | E13978
Translation status | A ready translation of this article is not available.
Article table of contents:
Abstract
I. Introduction
II. Related Work
III. BPNN and its Traditional Training
IV. Data-Parallel Training With the Evolution of Local BPNNs
V. Experiments and Evaluation
Authors
Figures
References
Excerpt from the article:
Abstract
Owing to its scalability and high fault tolerance even in a distributed environment built from personal computers, MapReduce has been introduced to parallelise the training of Back Propagation Neural Networks (BPNNs) on high-dimensional big datasets. Based on the evolution of local BPNNs produced by distributed Map tasks with different data splits, this paper proposes a novel approach to the distributed data-parallel training of BPNNs in MapReduce. The approach provides a reasonable way to obtain globally convergent BPNN candidates from local BPNNs that converge only on their specific data splits. Further, it not only reduces the number of iterations needed to reach the globally convergent BPNN, but also shows great advantages in preventing the training from getting trapped in a local optimum on high-dimensional big datasets. To improve training efficiency further, local BPNNs from the same computing node are merged by averaging their weight matrices before they act as individuals in the population for the global evolution. Our approach also leverages Random Projection based sampling techniques to evaluate the fitness of each individual in order to lower the computation cost of the evolution stage. Experiments show that the proposed approach greatly improves training efficiency compared to stand-alone or traditional MapReduce BPNN training, and improves model accuracy on larger datasets. A comparison with 23 other popular classification approaches also shows that the proposed approach has clear advantages in accuracy.

Introduction

An Artificial Neural Network (ANN) is a computational model that essentially mimics the knowledge-acquisition and organisational skills of the human brain; it consists of a number of interconnected processing elements called neurons [1]. The neurons of an ANN are usually arranged logically into two or more layers and interact with each other via weighted connections. These scalar weights determine the nature and strength of the influence between the interconnected neurons. Each neuron can be connected to all the neurons in the next layer. There is an input layer where data are presented to the neural network, and an output layer that holds the response of the network to the input. It is the intermediate layers, also known as hidden layers, that enable these networks to represent and compute complicated associations between patterns. Neural networks essentially learn through the adaptation of their connection weights according to the input data [2]. The Back Propagation Neural Network (BPNN), one of the most popular ANNs, employs the back-propagation algorithm to adapt its connection weights and can approximate any continuous nonlinear function to arbitrary precision given a sufficient number of neurons [3]. We call this process the training of a neural network, and the input data containing the potential patterns are called training samples. In the past decades, ANNs have been widely used to model uncertain nonlinear functions [4], [5], and have shown great advantages in pattern recognition, classification and modelling of nonlinear relationships involving a multitude of variables [6].
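The abstract describes the training workflow only at a high level. The sketch below is a minimal, self-contained illustration of that idea, assuming a single-hidden-layer BPNN and plain random sampling in place of Random Projection: each simulated Map task trains a local BPNN with back-propagation on its own data split, local networks from the same node are merged by averaging their weight matrices, and the merged networks form the population from which the fittest candidate is selected on a data sample. All function names, the architecture and the toy data are illustrative assumptions, not the authors' MapReduce implementation.

import numpy as np

# A minimal single-hidden-layer BPNN, used only to illustrate the workflow.
def init_bpnn(n_in, n_hidden, n_out, rng):
    return {"W1": rng.standard_normal((n_in, n_hidden)) * 0.1,
            "W2": rng.standard_normal((n_hidden, n_out)) * 0.1}

def forward(net, X):
    H = np.tanh(X @ net["W1"])                          # hidden activations
    return H, 1.0 / (1.0 + np.exp(-(H @ net["W2"])))    # sigmoid outputs

def train_local(net, X, y, lr=0.1, epochs=20):
    # Plain back-propagation on one data split (one Map task's share of the work).
    for _ in range(epochs):
        H, out = forward(net, X)
        err = out - y                                   # cross-entropy gradient at the sigmoid output
        dW2 = H.T @ err
        dH = (err @ net["W2"].T) * (1.0 - H ** 2)       # back-propagate through tanh
        dW1 = X.T @ dH
        net["W2"] -= lr * dW2 / len(X)
        net["W1"] -= lr * dW1 / len(X)
    return net

def merge_by_average(nets):
    # Merge local BPNNs from the same node by averaging their weight matrices.
    return {k: np.mean([n[k] for n in nets], axis=0) for k in nets[0]}

def fitness(net, X_sample, y_sample):
    # Fitness on a small sample (stands in for the Random Projection based sampling).
    _, out = forward(net, X_sample)
    return -np.mean((out - y_sample) ** 2)              # higher is better

# Toy driver: simulate Map-side local training followed by a global selection step.
rng = np.random.default_rng(0)
X = rng.standard_normal((600, 8))
y = (X[:, :1] + X[:, 1:2] > 0).astype(float)
splits = np.array_split(np.arange(600), 6)              # 6 "Map tasks", 3 per "node"

locals_per_node = [
    [train_local(init_bpnn(8, 16, 1, rng), X[idx], y[idx]) for idx in splits[i:i + 3]]
    for i in range(0, 6, 3)
]
population = [merge_by_average(nets) for nets in locals_per_node]

sample = rng.choice(600, size=100, replace=False)        # fitness is evaluated on a sample
best = max(population, key=lambda net: fitness(net, X[sample], y[sample]))

In the paper's setting, the splits would be the input splits handled by Map tasks on a cluster, and the final step would be a full evolutionary loop (selection, crossover, mutation) rather than a single selection; the sketch only mirrors the data flow described in the abstract.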