Free English paper on fuzzy c-means-based architecture reduction of a probabilistic neural network – Elsevier 2018
Article details
Paper title | Fuzzy c-means-based architecture reduction of a probabilistic neural network
Publication year | 2018
Number of pages (English paper) | 31
Cost | The English paper can be downloaded free of charge.
Database | Elsevier
Article type | Research Article
Base paper | This is a base paper (suitable as the basis for a thesis).
Indexing | MEDLINE – Scopus – Master Journal List – JCR
Paper type | ISI
English paper format |
Impact Factor (IF) | 8.446 (2017)
H-index | 121 (2019)
SJR | 2.359 (2017)
ISSN | 0893-6080
Quartile | Q1 (2017)
Related fields | Computer engineering, information technology
Related specializations | Computer architecture, artificial intelligence, computer networks
Presentation type | Journal
Journal | Neural Networks
University | Faculty of Electrical and Computer Engineering – Rzeszow University of Technology – Poland
Keywords | probabilistic neural network, fuzzy c-means, architecture reduction, classification
DOI | https://doi.org/10.1016/j.neunet.2018.07.012
Product code | E10733
Translation status | No ready-made translation of this paper is available; a translation can be ordered.
Table of contents:
Abstract
1. Introduction
2. Probabilistic neural network
3. Fuzzy c-means algorithm
4. Proposed algorithm
5. Input data sets
6. Parameter settings
7. Simulation experiments
8. Conclusions
References
Excerpt from the paper:
Abstract

The efficiency of the probabilistic neural network (PNN) is very sensitive to the cardinality of the input data set under consideration. This results from the design of the network's pattern layer: its neurons are activated by all input records, which makes the PNN architecture complex, especially for big-data classification tasks. In this paper, a new algorithm for the structure reduction of the PNN is put forward. The solution relies on performing fuzzy c-means clustering of the data and selecting the PNN's pattern neurons on the basis of the obtained centroids. Then, to activate the pattern neurons, the algorithm chooses the input vectors with the highest membership coefficients. The proposed approach is applied to classification tasks on repository data sets. The PNN is trained by three different procedures: conjugate gradients, reinforcement learning and the plug-in method. Two types of kernel estimators are used to activate the neurons of the network. The 10-fold cross-validation errors of the original and the reduced PNNs are compared. The obtained results confirm the validity of the introduced algorithm.

Introduction

It is known that the complexity of the PNN architecture proposed by Specht (1990) is high. This complexity is an effect of using all of the input vectors to activate the neurons in the network's pattern layer. Therefore, to date, considerable research attention has been paid to the structure optimization of the PNN. For example, Burrascano (1990) applies the learning vector quantization procedure to find representative patterns that can be used to build the neurons of a PNN. This procedure defines a number of reference vectors that approximate the probability density functions of the input classes. Chtioui et al. (1996) reduce the cardinality of the input data for a PNN by hierarchical clustering. The solution utilizes the technique of reciprocal neighbours, which allows the examples closest to each other to be merged. Zaknich (1997) introduces a quantization method for PNN structure simplification. The input space is split into a fixed-size hypergrid, and a representative cluster center is determined for each hypercube; the number of training vectors in each hypercube is thereby reduced to one. Chang et al. (2008) propose an expectation-maximization (EM) method as the training algorithm for a PNN. The idea relies on predefining a deterministic number of clusters for the input data set; a global k-means algorithm is used for this purpose. Kusy and Kluska (2013, 2017) present an application of k-means clustering and support vector machines to simplify the PNN's architecture. The appropriately selected centroids and the support vectors are chosen to construct the pattern neurons of the network.
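To make the reduction idea concrete, the following Python code is a minimal sketch, not the authors' implementation: it assumes the fuzzy c-means clustering is run separately within each class, that a Gaussian kernel activates the pattern neurons (the paper's two kernel estimators are not specified in this excerpt), and the values of `clusters_per_class` and the smoothing parameter `sigma` are arbitrary illustrative choices.

```python
# Sketch of FCM-based PNN reduction (illustrative assumptions noted above).
import numpy as np

def fuzzy_c_means(X, c, m=2.0, max_iter=100, tol=1e-5, seed=0):
    """Plain NumPy fuzzy c-means; returns (centroids V, membership matrix U)."""
    rng = np.random.default_rng(seed)
    U = rng.random((X.shape[0], c))
    U /= U.sum(axis=1, keepdims=True)            # memberships sum to 1 per sample
    for _ in range(max_iter):
        Um = U ** m
        V = (Um.T @ X) / Um.sum(axis=0)[:, None]  # membership-weighted centroids
        d = np.linalg.norm(X[:, None, :] - V[None, :, :], axis=2) + 1e-12
        U_new = 1.0 / d ** (2.0 / (m - 1.0))      # standard FCM membership update
        U_new /= U_new.sum(axis=1, keepdims=True)
        if np.abs(U_new - U).max() < tol:
            return V, U_new
        U = U_new
    return V, U

def reduce_patterns(X, y, clusters_per_class=3):
    """Per class and per cluster, keep the training vector with the highest
    membership coefficient as a pattern neuron (class-wise clustering is an
    assumption made for this sketch)."""
    Xr, yr = [], []
    for label in np.unique(y):
        Xc = X[y == label]
        _, U = fuzzy_c_means(Xc, clusters_per_class)
        for j in range(clusters_per_class):
            Xr.append(Xc[np.argmax(U[:, j])])     # best representative of cluster j
            yr.append(label)
    return np.array(Xr), np.array(yr)

def pnn_predict(X_train, y_train, X_test, sigma=0.5):
    """Gaussian-kernel PNN: average kernel activation per class, then argmax."""
    labels = np.unique(y_train)
    scores = np.empty((X_test.shape[0], labels.size))
    for k, label in enumerate(labels):
        P = X_train[y_train == label]             # pattern neurons of this class
        d2 = ((X_test[:, None, :] - P[None, :, :]) ** 2).sum(axis=2)
        scores[:, k] = np.exp(-d2 / (2.0 * sigma ** 2)).mean(axis=1)
    return labels[np.argmax(scores, axis=1)]

if __name__ == "__main__":
    # Two synthetic Gaussian classes; the reduced pattern layer has
    # 2 * clusters_per_class neurons instead of 200.
    rng = np.random.default_rng(1)
    X = np.vstack([rng.normal(0, 1, (100, 2)), rng.normal(3, 1, (100, 2))])
    y = np.repeat([0, 1], 100)
    Xr, yr = reduce_patterns(X, y, clusters_per_class=3)
    print(pnn_predict(Xr, yr, np.array([[0.0, 0.0], [3.0, 3.0]])))
```

After the reduction, the pattern layer's size depends on the chosen number of clusters rather than on the cardinality of the training set, which is the source of the complexity discussed in the abstract.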