Article details |
Publication | 2018 article
Number of pages (English article) | 13 pages
Cost | The English article is free to download.
Published by | Elsevier
Article type | ISI
English title | Scalable transfer support vector machine with group probabilities
Persian title | انتقال مقیاس پذیر ماشین بردار پشتیبانی با احتمالات گروهی
English article format |
Related fields | Computer engineering
Related specializations | Artificial intelligence
Journal | Neurocomputing
University | School of Information Science and Engineering, Changzhou University, China
Keywords | Large datasets, classification, support vector machine, transfer learning, group probabilities
Product code | E5640
Translation status | No prepared translation of this article is available. You can order one via the button below.
Excerpt from the article:
1. Introduction
Learning from group probabilities [1–6] is attractive in scenarios where the samples are provided as groups and only the label proportion of each group is available. One of the most natural applications is the analysis of political election outcomes, where the population of all voters in an electoral district is known but only the total number of votes per party in each district is revealed. An analysis of such data, e.g. of the dependence of votes on variables such as income or household type, can reveal interesting connections and may be used to uncover election fraud when outliers from the resulting model are detected. This setting is quite different from the traditional supervised, unsupervised and semi-supervised learning problems. Fig. 1 illustrates the differences among several traditional learning paradigms: supervised learning, unsupervised learning, semi-supervised learning and learning from group probabilities. Learning from group probabilities can be regarded as lying somewhere between supervised learning and semi-supervised learning.

Various algorithms have been developed that exploit group probabilities. For example, Quadrianto et al. [2] applied consistent estimators that can reconstruct the correct labels with high probability in a uniform convergence sense. Later, Rüping [3] proposed a parametric classifier, called IC-SVM, which integrated inverse calibration into support vector regression (SVR). Stolpe and Morik [4] developed a clustering-based algorithm for learning from label proportions. More recently, Qi et al. [6] introduced an effective model called learning from label proportions via nonparallel support vector machine (LLP-NPSVM), which determines the labels of samples according to two nonparallel hyperplanes under the supervision of label-proportion information.

Current studies on learning from group probabilities often assume that the training set is large enough to train a robust classifier [3]. However, this assumption does not always hold. In practice, the data features or data distributions may differ, which leads to a lack of group probabilities because the corresponding data do not contain enough groups.
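To make the setting concrete, the minimal sketch below illustrates what "learning from group probabilities" means in practice: individual labels are hidden, and each group of samples comes only with its observed fraction of positives. This is not the paper's LLP-NPSVM or any method cited above; it is a toy illustration, with hypothetical synthetic data, that fits a simple linear classifier by matching each group's mean predicted positive probability to its given label proportion.

```python
import numpy as np

rng = np.random.default_rng(0)

# Build groups of 2-D samples. Individual labels are discarded;
# the only supervision kept is each group's proportion of positives.
def make_group(n_pos, n_neg):
    pos = rng.normal(loc=[2.0, 2.0], scale=1.0, size=(n_pos, 2))
    neg = rng.normal(loc=[-2.0, -2.0], scale=1.0, size=(n_neg, 2))
    X = np.vstack([pos, neg])
    proportion = n_pos / (n_pos + n_neg)   # group label proportion
    return X, proportion

groups = [make_group(30, 10), make_group(10, 30), make_group(20, 20)]

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

w = np.zeros(2)
b = 0.0
lr = 0.1

# Gradient descent on the squared gap between each group's mean
# predicted positive probability and its observed label proportion.
for _ in range(500):
    grad_w = np.zeros(2)
    grad_b = 0.0
    for X, p in groups:
        q = sigmoid(X @ w + b)        # per-sample P(y = 1 | x)
        err = q.mean() - p            # group-level residual
        dq = q * (1 - q) / len(X)     # d mean(q) / d logits
        grad_w += 2 * err * (dq @ X)
        grad_b += 2 * err * dq.sum()
    w -= lr * grad_w
    b -= lr * grad_b

# The fitted model's predicted proportions should match the given ones.
for X, p in groups:
    print(f"given proportion {p:.2f}, predicted {sigmoid(X @ w + b).mean():.2f}")
```

The sketch only demonstrates the supervision structure; SVM-based approaches such as IC-SVM or LLP-NPSVM replace this proportion-matching objective with margin-based formulations.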