Article specifications | |
Article title | Improving deep learning performance using random forest HTM cortical learning algorithm
Publication | 2018 paper
Number of pages (English article) | 6 pages
Cost | The English article is free to download.
Database | IEEE
Base paper | This is not a base paper
English article format | |
Related disciplines | Computer engineering
Related specializations | Artificial intelligence; algorithms and computation
Journal / conference | First International Workshop on Deep and Representation Learning
University | Faculty Of Eng. Delta Univ. for Science and Technology – Gamasa City – Egypt
Keywords | Deep learning; Random Forest algorithm; HTM algorithm; mean absolute percentage error; duty cycle
DOI | https://doi.org/10.1109/IWDRL.2018.8358209
Product code | E9509
Translation status | A prepared translation of this article is not available.
Article table of contents:
Abstract
I. INTRODUCTION
II. LITERATURE REVIEW
III. PROBLEM FORMULATION
IV. PROPOSED METHODOLOGY
V. RESULTS AND DISCUSSION
CONCLUSION
References
Excerpt from the article text:
Abstract
Deep learning is an artificial intelligence function that imitates the workings of the human brain in processing data and creating patterns for use in decision making. The objective of the paper is to improve the performance of deep learning using a proposed algorithm called RFHTMC, a merged version of the Random Forest and HTM cortical learning algorithms. The methodology for improving the performance of deep learning relies on minimizing the mean absolute percentage error, a low value of which indicates a high-performing forecast procedure, and on the overlap duty cycle, a high percentage of which indicates fast processing by the classifier. The results show that the proposed algorithm reduces the mean absolute percentage error by half and increases the overlap duty cycle by 15%.

INTRODUCTION

Deep learning is part of a broader family of machine learning methods based on representations of data. Its learning can be supervised, semi-supervised, or unsupervised [1]. Deep learning models are loosely related to information processing and communication patterns in a biological nervous system, such as neural coding, which attempts to define a relationship between various stimuli and the associated neuronal responses in the brain [2]. Deep learning architectures such as deep neural networks, deep belief networks, and recurrent neural networks have been applied to fields including speech recognition, natural language processing, and social network filtering [3].

One of the fundamental purposes of unsupervised learning is to provide good representations of data that can be used for detection, recognition, prediction, or visualization. Good representations remove irrelevant variability in the input data while preserving the information that is useful for the final task. One reason for the recent revival of unsupervised learning is the ability to build deep feature hierarchies by stacking unsupervised modules on top of each other [3]. The unsupervised module at one level of the hierarchy is fed with the representation vectors produced by the level below. Higher-level representations capture high-level dependencies between input variables, thereby improving the system's ability to capture underlying regularities in the data. The output of the final layer in the hierarchy can be fed to a conventional supervised classifier.

From this perspective, increasing the performance of deep learning is an important topic for better knowledge gain and efficient data classification. Learning algorithms can effectively increase the performance of deep learning; one of the most important of these is Random Forest. Sparse representation has also shown considerable potential in dealing with these issues. Random forest is a general term for an ensemble of decision trees; a simplified random forest is depicted in Figure 1 [4]. The forest chooses the class having the most votes. If the number of cases in the training set is N, then a sample of N cases is taken at random, but with replacement. This sample will be the training set for growing the tree.
If there are M input variables, a number m << M is specified such that at each node, m variables are selected at random out of the M, and the best split on these m is used to split the node.
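
The construction just described (a bootstrap sample of the N training cases per tree, m << M randomly chosen variables considered at each split, and majority voting across trees) can be sketched in a few lines. This is a minimal illustration, not the paper's implementation: it assumes scikit-learn's DecisionTreeClassifier for the individual trees, a default m of sqrt(M) (the paper's excerpt does not fix m), and hypothetical helper names grow_forest and forest_predict.

import numpy as np
from collections import Counter
from sklearn.tree import DecisionTreeClassifier

def grow_forest(X, y, n_trees=100, m=None, seed=0):
    # Grow a random forest: bootstrap the N cases for each tree,
    # and consider only m of the M variables at every node split.
    rng = np.random.default_rng(seed)
    n_samples, n_features = X.shape
    m = m or max(1, int(np.sqrt(n_features)))  # assumed default for m << M
    forest = []
    for _ in range(n_trees):
        idx = rng.choice(n_samples, size=n_samples, replace=True)  # N cases, with replacement
        tree = DecisionTreeClassifier(max_features=m)  # m random variables per split
        tree.fit(X[idx], y[idx])
        forest.append(tree)
    return forest

def forest_predict(forest, X):
    # Each tree votes; the forest returns the class with the most votes.
    votes = np.array([tree.predict(X) for tree in forest])
    return np.array([Counter(col).most_common(1)[0][0] for col in votes.T])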
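
The abstract's main performance measure, the mean absolute percentage error, is not defined in the excerpt. The sketch below assumes the standard definition (the average of |actual − forecast| / |actual|, expressed as a percentage) and a hypothetical helper name mape; a smaller value indicates a better forecast, so the reported improvement corresponds to the MAPE being halved.

import numpy as np

def mape(y_true, y_pred):
    # Mean absolute percentage error; assumes all actual values are non-zero.
    y_true = np.asarray(y_true, dtype=float)
    y_pred = np.asarray(y_pred, dtype=float)
    return 100.0 * np.mean(np.abs((y_true - y_pred) / y_true))

# Example with made-up numbers: mape([100, 200, 300], [110, 180, 330]) returns 10.0 (percent).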