Article details | |
Translated article title | Insensitive stochastic gradient twin support vector machines for large-scale problems |
English article title | Insensitive stochastic gradient twin support vector machines for large scale problems |
Publication | 2018 article |
English article page count | 34 pages |
Cost | The English article is free to download. |
Database | Elsevier journal |
Writing type | Research article |
Base article | This is not a base article |
Index | Scopus – Master Journals – JCR |
Article type | ISI |
English article format | |
Impact factor (IF) | 4.305 in 2017 |
H-index | 142 in 2018 |
SJR | 1.635 in 2018 |
Related fields | Computer engineering |
Related specializations | Artificial intelligence |
Presentation type | Journal |
Journal / conference | Information Sciences |
University | School of Mathematical Sciences – Inner Mongolia University – P.R. China |
Keywords | Classification, support vector machine, twin support vector machine, stochastic gradient descent, large scale problem |
DOI | https://doi.org/10.1016/j.ins.2018.06.007 |
Product code | E10197 |
Article table of contents: |
Abstract · Keywords · 1 Introduction · 2 Related works · 3 SGTSVM · 4 Experiments · 5 Conclusion · Acknowledgments · References |
Excerpt from the article: |
Abstract
Within the large scale classification problem, the stochastic gradient descent method called PEGASOS has been successfully applied to support vector machines (SVMs). In this paper, we propose a stochastic gradient twin support vector machine (SGTSVM) based on the twin support vector machine (TWSVM). Compared to PEGASOS, our method is insensitive to stochastic sampling. Furthermore, we prove the convergence of SGTSVM and the approximation between TWSVM and SGTSVM under uniform sampling, whereas PEGASOS is almost surely convergent and only has an opportunity to obtain an approximation to SVM. In addition, we extend SGTSVM to nonlinear classification problems via a kernel trick. Experiments on artificial and publicly available datasets show that our method has stable performance and can handle large scale problems easily.

Introduction

As a powerful classification tool, support vector machines (SVMs) [4, 42] have been widely used in various practical problems [19, 14, 9]. SVM searches for parallel hyperplanes with the maximum margin between them to achieve classification. By dropping the parallelism condition, the twin support vector machine (TWSVM) [10, 33], which uses a pair of nonparallel hyperplanes, has been proposed. Benefiting from the nonparallel hyperplanes, TWSVM handles certain types of heterogeneous data better than SVM. TWSVM has therefore been studied in depth and enhanced, resulting in the development of, e.g., the twin bounded support vector machine (TBSVM) [33], the twin parametric margin support vector machine (TPMSVM) [22], and the weighted Lagrangian twin support vector machine (WLTSVM) [31]. These classifiers have been widely applied in many practical problems [32, 39, 17, 38, 3, 30, 26, 25, 24]. Because both SVM and TWSVM must solve quadratic programming problems (QPPs), they have difficulty handling large scale problems [21, 36]. To accelerate the training of SVM, many improvements have been proposed.
On the one hand, sequential minimal optimization (SMO) [23, 2], successive over-relaxation (SOR) [18] and the dual coordinate descent method (DCD) [6] were proposed to solve the dual problem of SVM.
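The PEGASOS method mentioned in the abstract runs stochastic gradient descent on the regularized hinge loss of a linear SVM with a 1/(λt) step size. The sketch below is illustrative only: the function name `pegasos_hinge` and its parameters are assumptions, not taken from the paper, and it omits refinements such as the optional projection step.

```python
import numpy as np

def pegasos_hinge(X, y, lam=0.01, n_iters=1000, seed=0):
    """PEGASOS-style SGD sketch for a linear SVM (bias omitted).

    Minimizes lam/2 * ||w||^2 + mean(max(0, 1 - y_i * w.x_i))
    over a weight vector w, with labels y in {-1, +1}.
    """
    rng = np.random.default_rng(seed)
    n, d = X.shape
    w = np.zeros(d)
    for t in range(1, n_iters + 1):
        i = rng.integers(n)        # uniform stochastic sampling of one example
        eta = 1.0 / (lam * t)      # decreasing step size 1/(lambda * t)
        if y[i] * (X[i] @ w) < 1:  # example violates the margin: hinge subgradient
            w = (1 - eta * lam) * w + eta * y[i] * X[i]
        else:                      # only the regularization term contributes
            w = (1 - eta * lam) * w
    return w
```

Because each iteration touches a single randomly drawn example, the per-step cost is independent of the dataset size, which is what makes this family of solvers attractive for the large scale problems the paper targets.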