Article specifications |
Translated article title | Application of a MapReduce-based parallel SVM technique for large-scale spam filtering
English article title | A MapReduce based parallel SVM for large scale spam filtering
Publication | Article published in 2011
Number of pages (English article) | 4 pages
Cost | The English article can be downloaded free of charge.
Database | IEEE
Article writing type | –
Base article | This is not a base article
Article type | ISI
Format of the English article |
Related disciplines | Computer Engineering
Related specializations | Software Engineering, Computer Architecture
Presentation type | Conference
Journal / Conference | Eighth International Conference on Fuzzy Systems and Knowledge Discovery
University | School of Engineering and Design, Brunel University, Uxbridge, Middlesex, UB8 3PH, UK
Keywords | Machine learning, classification, ontology concepts, support vector machine, parallel computing
English keywords | Machine Learning, Classification, Ontology, Semantics, Support Vector Machine, Parallel Computing
DOI | https://doi.org/10.1109/FSKD.2011.6020074
Product code | E11890
Article translation status | A ready translation of this article is not available; a translation can be ordered.
Article table of contents:
Abstract
I. Introduction
II. Distributing SVM with MapReduce
III. Ontology for Accuracy Augmentation
IV. Experimental Results
V. Conclusions and Future Work
Excerpt from the article:
Abstract
Spam continues to inflict increased damage. Varying approaches including Support Vector Machine (SVM) based techniques have been proposed for spam classification. However, SVM training is a computationally intensive process. This paper presents a parallel SVM algorithm for scalable spam filtering. By distributing, processing and optimizing the subsets of the training data across multiple participating nodes, the distributed SVM reduces the training time significantly. Ontology based concepts are also employed to minimize the impact of accuracy degradation when distributing the training data amongst the SVM classifiers.

INTRODUCTION
Support Vector Machine (SVM) based approaches have persistently gained popularity in terms of their application for text classification and machine learning [1], [2]. Classification in SVM based approaches is founded on the notion of hyperplanes [3]. The hyperplanes act as class separators in common binary classification, such as spam or ham in the context of spam filtering. SVM training is a computationally intensive process. Numerous SVM formulations, solvers and architectures for improving SVM performance have been explored and proposed [4], [5], including distributed and parallel computing techniques. SVM decomposition is another widespread technique for improving the performance of SVM training [6], [7]. Decomposition approaches work on the basis of identifying a small number of optimization variables and tackling a set of fixed-size problems. Another widespread and effective practice is to split the training data into smaller fragments and use a number of SVMs to process the individual data chunks, which in turn reduces the overall training time. Various forms of summarization and aggregation are then performed to produce the final set of global support vectors [8]. Many forms of decomposition based on a data-splitting strategy can suffer from issues including convergence and accuracy. Challenges related to chunk aliasing as well as outlier accumulation tend to intensify these problems in a distributed SVM context. Adopting a training-data splitting strategy commonly amplifies issues related to data imbalance and data distribution instability.
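The chunk-and-merge training described above can be illustrated with a short sketch. The following is a minimal single-machine simulation of the general idea, not the paper's actual MapReduce implementation: the use of scikit-learn's SVC, the helper names map_train_chunk and reduce_merge, and the synthetic data are all assumptions made for illustration. In the paper's setting the map and reduce steps would run on separate participating nodes over e-mail feature vectors.

# Minimal sketch of the data-chunking idea (assumptions noted above),
# simulating the map and reduce phases on a single machine.
import numpy as np
from sklearn.svm import SVC

def map_train_chunk(X_chunk, y_chunk):
    """Map step: train a local SVM on one data chunk and emit its
    support vectors (and their labels) as the intermediate result."""
    clf = SVC(kernel="linear").fit(X_chunk, y_chunk)
    return X_chunk[clf.support_], y_chunk[clf.support_]

def reduce_merge(partial_results):
    """Reduce step: pool the support vectors from all chunks and
    retrain a single SVM on the much smaller pooled set."""
    X_sv = np.vstack([xs for xs, _ in partial_results])
    y_sv = np.concatenate([ys for _, ys in partial_results])
    return SVC(kernel="linear").fit(X_sv, y_sv)

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    # Toy stand-in for a feature matrix of e-mails (rows) and spam/ham labels.
    X = rng.normal(size=(4000, 20))
    y = (X[:, 0] + 0.5 * X[:, 1] > 0).astype(int)

    n_chunks = 4  # one chunk per participating node
    partials = [
        map_train_chunk(Xc, yc)
        for Xc, yc in zip(np.array_split(X, n_chunks), np.array_split(y, n_chunks))
    ]
    global_model = reduce_merge(partials)
    print("training accuracy:", global_model.score(X, y))

Retraining on the pooled support vectors keeps only the points that define each local decision boundary, which is why the global training step is much cheaper than training on the full data set; the ontology-based accuracy augmentation discussed in Section III of the paper is not modelled in this sketch.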