Article details | |
Article title | A comprehensive review on ensemble deep learning: Opportunities and challenges |
Publisher | Elsevier |
Publication year | 2023 |
Length of English article | 18 pages |
Cost | The English article is free to download. |
Article type | Review Article |
Base article | This is not a base article. |
Indexing | Scopus – Master Journals List – JCR – Master ISC |
Article category | ISI |
English article format | |
Impact Factor (IF) | 6.053 in 2022 |
H-index | 48 in 2023 |
SJR | 0.924 in 2022 |
ISSN | 2213-1248 |
Quartile | Q1 in 2022 |
Hypothesis | None |
Conceptual model | None |
Questionnaire | None |
Variables | None |
References | Included |
Related disciplines | Computer science |
Related specializations | Artificial intelligence – Algorithm and computation engineering |
Presentation venue | Journal |
Journal | Journal of King Saud University – Computer and Information Sciences |
Affiliation | Department of Computer Science, Faculty of Graduate Studies for Statistical Research, Cairo University, Cairo, Egypt |
Keywords | Ensemble learning – Ensemble methods – Machine learning – Deep learning – Ensemble deep learning |
DOI | https://doi.org/10.1016/j.jksuci.2023.01.014 |
Source link | https://www.sciencedirect.com/science/article/pii/S1319157823000228 |
Product code | e17413 |
Translation status | No prepared translation of this article is available; you can order one via the button below. |
Free download | Download the English article for free |
Order a translation | Order a translation of this article |
Article table of contents: |
Abstract
1 Introduction
2 Trends of ensemble learning
3 Foundations of ensemble learning
4 Ensemble methods
5 Evaluating ensembles
6 Application domains
7 Conclusion
Declaration of Competing Interest
References |
Excerpt from the article: |
Abstract In machine learning, two approaches outperform traditional algorithms: ensemble learning and deep learning. The former refers to methods that integrate multiple base models within the same framework to obtain a stronger model that outperforms each of them. The success of an ensemble method depends on several factors, including how the baseline models are trained and how they are combined. In the literature, there are common approaches to building an ensemble model that have been successfully applied in several domains. On the other hand, deep learning-based models have improved the predictive accuracy of machine learning across a wide range of domains. Despite the diversity of deep learning architectures, their ability to deal with complex problems, and their ability to extract features automatically, the main challenge in deep learning is that tuning the optimal hyper-parameters requires a great deal of expertise and experience, which makes it a tedious and time-consuming task. Numerous recent research efforts have sought to bring ensemble learning into deep learning to overcome this challenge. Most of these efforts focus on simple ensemble methods that have some limitations. Hence, this review paper provides a comprehensive review of the various strategies for ensemble learning, especially in the case of deep learning. It also explains in detail the various features and factors that influence the success of ensemble methods. In addition, it presents and categorizes several research efforts that have used ensemble learning in a wide range of domains.
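The combination step the abstract describes can be illustrated with soft voting, one of the simplest fusion rules: average the class probabilities produced by the base models and predict the highest-scoring class. A minimal sketch in plain Python, where the probability matrices are hypothetical stand-ins for trained base models:

```python
# Hypothetical probability outputs of three base classifiers for a
# binary task; rows are samples, columns are class probabilities.
p1 = [[0.9, 0.1], [0.4, 0.6], [0.2, 0.8], [0.7, 0.3]]
p2 = [[0.8, 0.2], [0.6, 0.4], [0.3, 0.7], [0.6, 0.4]]
p3 = [[0.7, 0.3], [0.3, 0.7], [0.1, 0.9], [0.8, 0.2]]

def soft_vote(*prob_matrices):
    """Average class probabilities across base models, then take the argmax."""
    preds = []
    for rows in zip(*prob_matrices):
        avg = [sum(col) / len(col) for col in zip(*rows)]
        preds.append(avg.index(max(avg)))
    return preds

predictions = soft_vote(p1, p2, p3)  # one predicted label per sample
```

With these illustrative numbers the ensemble predicts [0, 1, 1, 0]; sample 1 is assigned class 1 even though the second model, taken alone, would predict class 0.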
Introduction In a world full of diverse and varied data sources, machine learning has become one of the most important and dominant branches of artificial intelligence, applied in many fields. There are many different learning algorithms and methods, and each method's pitfalls and drawbacks are measured in terms of several factors, including performance and scalability. Based on a large body of machine learning research, two methods dominate learning algorithms: deep learning (Deng et al., 2014) and ensemble learning (Polikar, 2012, Sagi and Rokach, 2018, Rokach, 2019). Deep learning techniques can scale, handle complex problems, and offer automatic feature extraction from unstructured data (Kamilaris and Prenafeta-Boldú, 2018). Deep learning also encompasses several types of network architectures for different tasks, such as feedforward neural networks (Bebis and Georgiopoulos, 1994), convolutional neural networks (Collobert and Weston, 2008), recurrent neural networks (Yu et al., 2019), and many others (Ain et al., 2017). However, training deep learning models requires massive effort, and tuning the optimal hyper-parameters demands expertise and extensive trial and error, which is a tedious and time-consuming task. Moreover, training a more complex deep neural network increases the chance of overfitting.
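The tuning burden described above is easy to quantify: even a modest hyper-parameter grid multiplies into hundreds of full training runs. A small sketch of that arithmetic (the grid values below are hypothetical, not taken from the paper):

```python
from itertools import product

# Hypothetical hyper-parameter grid for a small network; the number of
# configurations grows multiplicatively with each added dimension.
grid = {
    "learning_rate": [0.1, 0.01, 0.001],
    "batch_size": [32, 64, 128],
    "hidden_units": [64, 128, 256, 512],
    "dropout": [0.0, 0.3, 0.5],
}

# Enumerate every combination an exhaustive search would have to train.
configs = [dict(zip(grid, values)) for values in product(*grid.values())]
print(len(configs))  # 3 * 3 * 4 * 3 = 108 training runs for one model
```

Each of those 108 configurations would require a full training run, which is why exhaustive tuning of deep models quickly becomes tedious and time-consuming.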
Conclusion In machine learning, reducing the bias and the variance of models is one of the key factors determining the success of the learning process. In the literature, it has been proven that merging the outputs of different classification algorithms can decrease the generalization error without increasing the variance of the model. This is the essence of so-called ensemble learning. Numerous research efforts have preferred ensemble learning over single-model learning in various domains. The main advantage of ensemble learning is that combining several individual models improves prediction performance and yields a stronger model that outperforms each of them. In the literature, there are several ensemble techniques for boosting classification algorithms; the main difference between any two ensemble methods lies in how the baseline models are trained and how they are combined. Several research efforts have introduced ensemble learning into deep learning models to remedy the problems that appear during their training. The main challenge of deep learning models is that tuning the optimal hyperparameters to reach a global minimum error requires a great deal of knowledge and experience, and finding them requires an exhaustive search of the hyperparameter space, which in turn becomes a tedious and time-consuming task. Thus, several research efforts have applied deep ensemble learning in many fields, and most of these efforts are articulated around simple ensemble methods. This paper provided a comprehensive review of the various strategies for ensemble learning, especially in the case of deep learning. The paper also illustrated recent trends in ensemble learning through a quantitative analysis of several research papers.
Moreover, the paper described the various factors that influence the success of ensemble methods, including how the training data are sampled, how the baseline models are trained, and the techniques for fusing the baseline models. The paper also discussed the pros and cons of each ensemble method. Additionally, it introduced and presented several research efforts that used ensemble learning in a wide range of domains, and categorized these efforts according to whether the baseline classifiers are traditional machine learning or deep learning models. It is worth noting that an ensemble of deep learning models that uses simple averaging is not a smart choice, as it is very sensitive to biased baseline models. Injecting diversity into an ensemble of deep learning models, on the other hand, can make it robust to biased baseline models. Diversity can be achieved by training different baseline deep learning architectures over several data samples; it is limited, however, by the computational cost and the availability of suitable data to sample. |
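The diversity mechanism described above, training base learners on different data samples, is the idea behind bagging. A toy sketch in plain Python, using decision stumps as hypothetical base learners on a synthetic 1-D dataset:

```python
import random
from collections import Counter

random.seed(0)  # reproducible bootstrap samples

# Synthetic 1-D dataset: the true label is 1 when the feature exceeds 5.
data = [(x, int(x > 5)) for x in range(11)]

def train_stump(sample):
    """Fit a decision stump: pick the threshold with the fewest training errors."""
    best_t, best_err = 0, len(sample) + 1
    for t in range(11):
        err = sum((x > t) != y for x, y in sample)
        if err < best_err:
            best_t, best_err = t, err
    return best_t

# Bagging: each base model is trained on its own bootstrap sample,
# which injects the diversity discussed in the text.
stumps = [train_stump([random.choice(data) for _ in data]) for _ in range(7)]

def predict(x):
    """Majority vote over the base stumps."""
    votes = Counter(int(x > t) for t in stumps)
    return votes.most_common(1)[0][0]
```

Because every stump sees a slightly different bootstrap sample, their thresholds differ, yet the majority vote still recovers the underlying rule; this is the robustness to individual biased models that the conclusion highlights.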