Article details | |
Publication | 2016 article |
English article length | 10 pages |
Cost | The English article is free to download. |
Publisher | Springer |
Article type | ISI |
Article title | An ordered clustering algorithm based on K-means and the PROMETHEE method |
English article format | |
Related fields | Computer engineering and information technology |
Related specialties | Algorithm and computation engineering, computer networks |
Journal | International Journal of Machine Learning and Cybernetics |
University | PLA University of Science and Technology – China |
Keywords | Ordered cluster, PROMETHEE method, Weight vector, K-means clustering |
Product code | E7612 |
Excerpt from the article:
1 Introduction
Clustering is a fundamental problem in data analysis, widely applied in machine learning, pattern recognition, information retrieval and data mining [1–5]. The main idea of clustering is to divide a set of data into a certain number of clusters (groups, subsets, or categories) such that data in the same cluster have high similarity according to the clustering objective. One of the most well-known clustering algorithms is the K-means algorithm [6], which minimizes the sum of the distances of all the alternatives to their corresponding cluster centers. K-means has become one of the most popular clustering algorithms because it is fast and simple to implement. Many studies have focused on this category of algorithms. For instance, Melnykov [7] discussed the K-means clustering algorithm under Mahalanobis distances and proposed a novel approach for initializing covariance matrices. Bezdek [8] proposed the fuzzy c-means algorithm, based on the K-means clustering algorithm and fuzzy logic, to deal with nontrivial data and the uncertainties encountered in real life [9, 10]. More generally, Xu and Wu [11] extended the fuzzy c-means algorithm to the intuitionistic fuzzy environment [12] and proposed the intuitionistic fuzzy c-means algorithm. Chen et al. [13] investigated the K-means clustering algorithm under a hesitant fuzzy environment. To produce nonlinear separating hypersurfaces between clusters, the kernel K-means clustering algorithm and the fuzzy kernel K-means clustering algorithm have been developed [2]. In recent years, K-means algorithms have been employed and extended to benefit big data processing. Deng et al. [14] used K-means clustering to separate a big dataset into several parts. In Mashayekhy et al. [15], Bolon-Canedo et al. [16] and Duan et al. [17], the K-means algorithm was implemented for real-time big data processing.
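The K-means iteration described above (alternating assignment to the nearest center with recomputation of each center as the mean of its cluster) can be sketched as follows. This is a minimal illustration, not the paper's algorithm; all names (`kmeans`, `points`, `k`) are illustrative, and the first-`k`-points initialization stands in for the random or k-means++ seeding used in practice.

```python
def kmeans(points, k, iters=20):
    """Lloyd's K-means sketch: points are tuples of floats, all criteria Euclidean."""
    # Illustrative initialization: take the first k points as centers.
    # Real implementations use random restarts or k-means++ seeding.
    centers = list(points[:k])
    clusters = [[] for _ in range(k)]
    for _ in range(iters):
        # Assignment step: each point goes to the nearest center
        # by squared Euclidean distance.
        clusters = [[] for _ in range(k)]
        for p in points:
            j = min(range(k),
                    key=lambda c: sum((a - b) ** 2 for a, b in zip(p, centers[c])))
            clusters[j].append(p)
        # Update step: each center moves to the mean of its assigned points.
        for j, cl in enumerate(clusters):
            if cl:  # guard against an empty cluster
                centers[j] = tuple(sum(xs) / len(cl) for xs in zip(*cl))
    return centers, clusters
```

Each iteration can only decrease the within-cluster sum of squared distances, which is why the loop converges to a (local) minimum of the K-means objective.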
This algorithm has also been considered for funds classification [18] and electron microscopy [19] in big data settings. A generalized version of the K-means algorithm has also been proposed for processing temporal data [20]. However, the existing K-means clustering algorithms are mainly used to cluster the data into several groups that have no relations among them. In multi-criteria decision aid (MCDA), the decision maker (DM) may wish to obtain ''ordered clusters'', among which ordered relations exist. Such problems can be referred to as multi-criteria ordered clustering problems. The identification of ordered clusters provides priority relations among alternatives for the DMs. Although ordered clusters cannot provide relations of alternatives as precise as complete rankings of all the alternatives, ordered clustering is still necessary in real-life problems. For example, in the ranking of world universities, the DMs may not give accurate rankings for some universities because they show no obvious differences. It is therefore reasonable to partition the alternatives with no significant differences into ordered clusters. For multi-criteria ordered clustering problems, De Smet et al. [21] proposed an exact algorithm (we call it De Smet et al.'s method) to find a completely ordered partition based on the valued preference degrees. However, De Smet et al. [21] only used the ordinal properties of the pairwise preference relations to obtain the ordered clusters; they did not fully exploit the underlying structure of the data set to produce better ordered clustering results.
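To make the notion of valued preference degrees concrete, the sketch below computes PROMETHEE II net flows: each pairwise preference degree aggregates per-criterion comparisons with a weight vector, and the resulting net flows induce the kind of priority relation that ordered clusters coarsen. This is an illustration of the standard PROMETHEE II computation, not the paper's or De Smet et al.'s algorithm; the "usual" 0/1 preference function and all data are assumptions of the example.

```python
def net_flows(scores, weights):
    """scores[i][j]: value of alternative i on criterion j (to be maximized).
    Returns the PROMETHEE II net flow of each alternative."""
    n = len(scores)

    def pi(a, b):
        # Valued preference degree of a over b: weighted sum of per-criterion
        # comparisons under the "usual" preference function (1 if better, else 0).
        return sum(w * (1.0 if sa > sb else 0.0)
                   for w, sa, sb in zip(weights, scores[a], scores[b]))

    # Net flow = positive (outgoing) flow minus negative (incoming) flow,
    # averaged over the other n - 1 alternatives.
    return [sum(pi(a, b) - pi(b, a) for b in range(n) if b != a) / (n - 1)
            for a in range(n)]
```

Sorting alternatives by net flow gives a complete ranking; cutting that ranking where adjacent flows differ only negligibly would yield ordered clusters of alternatives with no significant differences, in the spirit of the university-ranking example above.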