
Free English Article: A Review on Neural Networks with Random Weights – Elsevier 2018

 

Article Details
Publication year: 2018
Length (English article): 10 pages
Download cost: Free
Publisher: Elsevier
Article type: Research article
Indexing: ISI
English title: A review on neural networks with random weights
Format: PDF
Related discipline: Computer Engineering
Related specialization: Artificial Intelligence
Journal: Neurocomputing
Affiliation: College of Computer Science and Software Engineering, Shenzhen University, China
Keywords: Feed-forward neural networks, Training mechanism, Neural networks with random weights
DOI: http://dx.doi.org/10.1016/j.neucom.2017.08.040
Product code: E8629
Translation status: No prepared Persian translation of this article is available; one can be ordered via the button below.

 

Excerpt from the article:
1. Introduction

Artificial neural networks (ANNs) have received considerable attention due to their powerful capabilities in image processing, speech recognition, natural language processing, etc. The performance of ANN models depends largely on the quantity and quality of data, on computing power, and on the efficiency of algorithms. Traditional ANNs are trained by iteratively tuning all the weights and biases to minimize a loss function, defined as the difference between model predictions and real observations. During training, the derivatives of the loss function are back-propagated to each layer to guide parameter adjustment [1]. Unfortunately, this method has several critical drawbacks, such as slow convergence, the local-minima problem, and model-selection uncertainty.

Deep learning refers to training a multilayer neural network with gradient-based techniques. It became an unprecedentedly hot research topic after AlphaGo, an artificial intelligence program based on deep learning technology, beat Lee Sedol, the famous 18-time Go world champion [2]. Deep learning trains models in much the same way as traditional ANNs: all parameters are first initialized using unsupervised methods and then tuned with the Back Propagation (BP) technique [1]. The multilayer architecture is treated as a whole, and all internal parameters must be fine-tuned iteratively. As the depth increases, training a deep learning model requires a tremendous amount of time, even on powerful GPU-based computers [3–7]. In addition, deep learning with BP inherits all the weaknesses of traditional ANNs.

The neural network with random weights (NNRW) offers a solution to the problems of traditional ANNs and BP-based deep learning. NNRW is a non-iterative training algorithm in which the hidden weights and biases are randomly selected from a given range and kept fixed throughout training, while the weights between the hidden layer and the output layer are obtained analytically. Compared with learning schemes that globally tune all parameters, such as BP-based deep learning, NNRW achieves much faster training with acceptable accuracy. In addition, NNRW is easy to implement, and its universal approximation capability has been proven in theory [8–10].

In recent years, several review articles about NNRW have been published. Deng et al. [11] provided an overview of extreme learning machine (ELM) theory and its variants, especially online sequential ELM (OS-ELM), incremental ELM (I-ELM), and ELM ensembles. In addition, [11] covered some early forms of deep ELM architectures, such as the ELM Auto-encoder (ELM-AE) and the Multilayer ELM (ML-ELM). With the rapid development of ELM, many improved algorithms and diverse applications have emerged recently. Huang et al. [12] have shown that, apart from classification and regression, ELM can be extended to handle compression, feature learning, and clustering. Hardware implementations of ELM and parallel-computation techniques for ELM are also discussed in [12].
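To make the non-iterative training scheme concrete, below is a minimal NumPy sketch of a single-hidden-layer NNRW: the hidden weights and biases are drawn once from a fixed range and never updated, and the output weights are obtained analytically by least squares. The sigmoid activation, the [-1, 1] sampling range, and all function names here are illustrative assumptions, not details taken from the paper.

```python
import numpy as np

def train_nnrw(X, Y, n_hidden=100, seed=0):
    """Train a single-hidden-layer NNRW: random fixed hidden weights,
    analytic (least-squares) output weights."""
    rng = np.random.default_rng(seed)
    n_features = X.shape[1]
    # Hidden weights and biases are sampled once and kept fixed.
    W = rng.uniform(-1.0, 1.0, size=(n_features, n_hidden))
    b = rng.uniform(-1.0, 1.0, size=n_hidden)
    H = 1.0 / (1.0 + np.exp(-(X @ W + b)))   # sigmoid hidden-layer output
    # Output weights solve min ||H @ beta - Y||^2 via the pseudoinverse,
    # so no iterative tuning (and no back-propagation) is needed.
    beta = np.linalg.pinv(H) @ Y
    return W, b, beta

def predict_nnrw(X, W, b, beta):
    H = 1.0 / (1.0 + np.exp(-(X @ W + b)))
    return H @ beta

# Toy usage: fit y = sin(x) from random samples.
X = np.random.rand(200, 1) * 6.0
Y = np.sin(X)
W, b, beta = train_nnrw(X, Y, n_hidden=50)
print("train MSE:", np.mean((predict_nnrw(X, W, b, beta) - Y) ** 2))
```

Because the only learned parameters are solved in closed form, training cost is dominated by one matrix pseudoinverse, which is why NNRW-style methods are typically much faster than BP-based training.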
