Article specifications |
Article title | Exploiting multiplex data relationships in Support Vector Machines
Publication | 2019 article
English article page count | 8 pages
Cost | The English article is free to download.
Database | Elsevier
Article type | Research article
Thesis-base article | This is not a thesis-base article.
Index | Scopus – Master Journal List – JCR
Article category | ISI
English article format |
Impact Factor (IF) | 3.962 in 2017
H-index | 168 in 2019
SJR | 1.065 in 2019
Related fields | Computer engineering
Related specializations | Artificial intelligence, software engineering
Presentation type | Journal
Journal / Conference | Pattern Recognition
University | Department of Informatics – Aristotle University of Thessaloniki – Greece
Keywords | Multiplex data relationships, Support Vector Machine, Graph-based regularization, Multiple Kernel Learning
DOI | https://doi.org/10.1016/j.patcog.2018.07.032
Product code | E9447
Translation status | A ready translation of this article is not available; it can be ordered.
Article table of contents:
Abstract
1 Introduction
2 Related work
3 Multiplex data relationships in Support Vector Machines
4 Experiments
5 Conclusion
References
Excerpt from the article text:
Abstract
In this paper, a novel method for introducing multiplex data relationships into the SVM optimization process is presented. Different properties of the training data are encoded in graph structures, in the form of pairwise data relationships. These are then incorporated into the SVM optimization problem as modified graph-regularized base kernels, each highlighting a different property of the training data. The contribution of each graph-regularized kernel to the SVM classification problem is estimated automatically. Thereby, the solution of the proposed modified SVM optimization problem lies in a regularized space, where data similarity is expressed by a linear combination of multiple single-graph regularized kernels. The proposed method exploits and extends the findings of the Multiple Kernel Learning and graph-based SVM method families: it is shown that the available kernel options of the former can be broadened, and the exhaustive parameter tuning of the latter can be eliminated. Moreover, both method families can be considered special cases of the proposed formulation. Our experimental evaluation on visual data classification problems demonstrates the superiority of the proposed method. The obtained classification performance gains can be explained by the exploitation of multiplex data relationships during the classifier optimization process.

Introduction

Computer vision/visual analysis methods have found industrial applications in several areas, such as robotic systems (e.g., unmanned aerial vehicles) and virtual reality, and their growth over the past few years has been immense. Visual analysis applications, including face recognition, object recognition, human action recognition, and human/object tracking, are commonly addressed as classification problems [1,2].

One of the most widely studied classification methods in visual analysis applications is the Support Vector Machine (SVM) classifier. SVM-based methods and extensions have been employed in mathematical/engineering problems including one-class and multiclass classification, regression, and semi-supervised learning [3–6]. In its simplest form, SVM learns, from labeled data examples originating from two classes, the hyperplane that separates them with the maximum margin in the training-data input (or feature) space. After its first proposal, SVM was extended to determine decision functions in feature spaces obtained by employing non-linear data mappings, where data similarity is implicitly expressed by a kernel function. The explicit data mapping need not be known, provided the adopted kernel function satisfies Mercer's conditions [7].

Common practice for determining a feature space in which SVM provides satisfactory performance on a given classification/regression problem involves selecting a kernel function from a set of widely adopted kernel functions (e.g., polynomial, sigmoid, Radial Basis Function (RBF)) and then tuning the corresponding hyperparameters, e.g., by cross-validation, based on prior knowledge about the problem at hand. In every case, the performance of SVM depends heavily on the adopted kernel function, since the optimal solution for each problem might lie in unknown feature spaces.
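To make this kernel-selection practice concrete, here is a minimal sketch (not part of the paper) using Python and scikit-learn: a grid of candidate kernel functions and hyperparameters is evaluated by cross-validation, and the best configuration is refit. The dataset, parameter grids, and CV settings are illustrative assumptions.

```python
# Minimal sketch: kernel selection + hyperparameter tuning by cross-validation.
from sklearn.datasets import load_digits
from sklearn.model_selection import GridSearchCV, train_test_split
from sklearn.svm import SVC

X, y = load_digits(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# Candidate kernels and hyperparameters; SVM performance hinges on this choice.
param_grid = [
    {"kernel": ["rbf"], "gamma": [1e-3, 1e-2, 1e-1], "C": [1, 10, 100]},
    {"kernel": ["poly"], "degree": [2, 3], "C": [1, 10, 100]},
]

# 5-fold cross-validation over the grid; the best model is refit on all folds.
search = GridSearchCV(SVC(), param_grid, cv=5)
search.fit(X_train, y_train)

print("best kernel/params:", search.best_params_)
print("test accuracy:", search.score(X_test, y_test))
```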
In order to determine the optimal feature space for SVM operation, Multiple Kernel Learning (MKL) methods have been proposed. Their basic assumption is that the optimal underlying data mapping, i.e., the optimal kernel function, is a weighted combination (either linear or non-linear) of multiple kernel functions, the so-called base kernels [8–11]. The participation of each kernel in the optimal solution is determined by a parameter vector, i.e., the base-kernel weights. The base-kernel weights are estimated in an automated fashion, along with the SVM hyperplane, by following an additional optimization procedure (e.g., single-step sequential optimization, two-step optimization). Standard MKL methods employ Lp or L1 norms for determining the kernel weights, with the latter producing sparse solutions and the former providing fast convergence [12,13].
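As a toy illustration of the linear-combination idea (not the paper's algorithm, and not a full MKL solver such as SimpleMKL), the sketch below mixes several precomputed base kernels with nonnegative weights obtained from a simple kernel-target-alignment heuristic, then trains an SVM on the combined Gram matrix. The dataset and kernel parameters are assumptions.

```python
# Toy sketch: linear combination of base kernels with heuristic weights.
import numpy as np
from sklearn.datasets import load_breast_cancer
from sklearn.metrics.pairwise import polynomial_kernel, rbf_kernel
from sklearn.svm import SVC

X, y = load_breast_cancer(return_X_y=True)

# Base kernels: Gram matrices on the training data, each a different "view".
base_kernels = [
    rbf_kernel(X, gamma=1e-4),
    rbf_kernel(X, gamma=1e-3),
    polynomial_kernel(X, degree=2),
]

# Kernel-target alignment: cosine similarity between each Gram matrix and yy^T.
# (A full MKL method would instead optimize the weights jointly with the SVM.)
s = 2 * y - 1  # labels in {-1, +1}
yy = np.outer(s, s)
weights = np.array(
    [np.sum(K * yy) / (np.linalg.norm(K) * np.linalg.norm(yy)) for K in base_kernels]
)
weights = np.clip(weights, 0, None)
weights /= weights.sum()  # nonnegative weights summing to one

# Weighted combination of base kernels, then a precomputed-kernel SVM.
K_combined = sum(w * K for w, K in zip(weights, base_kernels))
clf = SVC(kernel="precomputed").fit(K_combined, y)
print("base-kernel weights:", np.round(weights, 3))
print("training accuracy:", clf.score(K_combined, y))
```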