Article details | |
Publication year | 2017 |
Length of English article | 26 pages |
Publisher | Wiley |
Article type | ISI |
English title | Interactive evolutionary approaches to multiobjective feature selection |
Related field | Industrial Engineering and Management |
Related subfield | Operations Research |
Journal | International Transactions in Operational Research |
University | Middle East Technical University, Turkey |
Keywords | feature selection; subset selection; interactive approach; evolutionary algorithm |
Product code | E5965 |
Excerpt from the article: |
1. Introduction
In classification problems, supervised learning algorithms such as decision trees, support vector machines (SVMs), and neural networks are used to predict the class (or output variable) of an instance by observing its feature (or input variable) values. A supervised learning algorithm trains a prediction model over a dataset of past observations, in which both feature and class values are provided, by learning the relationship between the features and the classes. The prediction model can then be used to classify a new instance based on its features. The classification performance of a learning algorithm depends on how accurately it detects the relationship between the input and output variables. However, features that are irrelevant to the class, or redundancy among the features, can degrade the classification performance of the learning algorithm (Kohavi and John, 1997). Yu and Liu (2004) categorize features by their relevance to the output as strongly relevant, weakly relevant, and irrelevant. A feature is strongly relevant if its presence affects classification performance regardless of the other features used, weakly relevant if its effect on classification performance depends on the other features used, and irrelevant if it does not affect classification performance at all. They argue that the subset of features that is optimal in terms of classification performance includes all strongly relevant features together with the weakly relevant but nonredundant ones. Selecting such a subset to be used in the prediction model of the learning algorithm (or classifier), instead of using all features, is known as the feature selection problem. Feature selection aims to improve classification performance by eliminating irrelevant and redundant features.
Reducing the number of features used in the prediction model is also useful for cutting storage requirements, improving time efficiency, and simplifying the prediction model itself (Guyon and Elisseeff, 2003). Feature selection methods are therefore applied in many areas, such as handwritten digit recognition (Oliveria et al., 2003), medical diagnosis (Chyzhyk et al., 2014), and gene marker recognition (Banerjee et al., 2007).