Article Specifications |
Translated article title | Towards a more reliable privacy-preserving recommender system
English article title | Towards a more reliable privacy-preserving recommender system
Publication | 2019 article
Number of pages (English article) | 18 pages
Cost | Downloading the English article is free.
Database | Elsevier
Writing type | Research Article
Base article | This is a base article.
Index | Scopus – Master Journals List – JCR
Article type | ISI
English article format |
Impact Factor (IF) | 6.774 in 2018
H-index | 154 in 2019
SJR | 1.620 in 2018
ISSN | 0020-0255
Quartile | Q1 in 2018
Conceptual model | No
Questionnaire | No
Variables | Yes
References | Yes
Related fields | Computer Engineering
Related specializations | Information Security, Software Engineering, Algorithms and Computation Engineering
Presentation type | Journal
Journal | Information Sciences
University | Department of Computer Science and Information Engineering, National Taiwan University, Taiwan
Keywords | Privacy-preserving recommendation, Differential privacy, Secure distributed matrix factorization, Randomized response algorithms
DOI | https://doi.org/10.1016/j.ins.2018.12.085
Product code | E11561
Translation status | A ready translation of this article is not available. You can order one via the button below.
Free article download | Download the English article for free
Order a translation of this article | Order a translation of this article
Article table of contents:
Abstract
1- Introduction
2- Preliminaries
3- Methodology
4- Evaluation
5- Application to factorization machine
6- Conclusions and future work
References
Excerpt from the article text:
Abstract
This paper proposes a privacy-preserving distributed recommendation framework, Secure Distributed Collaborative Filtering (SDCF), which preserves the privacy of values, the model, and existence altogether. That is, not only the ratings given by users to items, but also the existence of those ratings and the learned recommendation model are kept private in our framework. Our solution relies on a distributed client-server architecture and a two-stage Randomized Response algorithm, along with an implementation on the popular recommendation model Matrix Factorization (MF). We further prove that SDCF meets the guarantee of Differential Privacy, so that clients are allowed to specify arbitrary privacy levels. Experiments conducted on numerical rating prediction and one-class rating action prediction show that SDCF does not sacrifice much accuracy for privacy.

Introduction
Collaborative filtering (CF) is one of the most popular models for recommending items [27]. Its basic idea is to make recommendations based on the similarity between users or between items. CF-based models are trained on user feedback on items and can be divided into two categories by the type of feedback: numerical and one-class. Numerical feedback consists of numeric values (e.g., ratings between 1 and 5). One-class feedback is a record of a specific action (e.g., purchasing an item or not). Because training CF-based models relies on large-scale data, service providers must collect a huge number of user feedback records. However, if the servers are untrusted or contain vulnerabilities, the collected feedback may lead to privacy liability through data leakage. Even if the servers are curious-but-honest, meaning the services function normally, leaked feedback data can still allow private attributes, and even the real identities, of users to be inferred by attackers [7,10].

There are three categories of data leakage in CF-based recommenders. The first is Value Leakage: exposure of the values of feedback, such as rating scores. The second is Existence Leakage: exposure of the existence of feedback, such as whether a user rated an item. For example, if an attacker knows that user u rated a book i, they can be quite certain that u has read i. The third is Model Leakage: exposure of the trained model. Models are important because, given the CF model, attackers can estimate the rating from any user for any item.
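The abstract's central privacy mechanism is randomized response under differential privacy. As a rough illustration only, and not the paper's two-stage SDCF algorithm, the classic one-bit randomized response below perturbs the existence of a rating on the client side, with a flip probability governed by a privacy budget epsilon; the function name and the NumPy-based setup are assumptions made for this sketch.

```python
import numpy as np

def randomized_response(bit: int, epsilon: float, rng: np.random.Generator) -> int:
    """Classic (Warner-style) randomized response: report the true bit with
    probability e^eps / (e^eps + 1), otherwise report its flip.
    This mechanism satisfies eps-differential privacy for a single bit."""
    p_true = np.exp(epsilon) / (np.exp(epsilon) + 1.0)
    return bit if rng.random() < p_true else 1 - bit

# Example: a client perturbs its rating-existence vector before sharing it,
# so the server cannot be certain which items were actually rated.
rng = np.random.default_rng(42)
existence = np.array([1, 0, 0, 1, 1])  # 1 = user rated the item, 0 = did not
noisy = np.array([randomized_response(int(b), epsilon=1.0, rng=rng) for b in existence])
print("true :", existence)
print("noisy:", noisy)
```

A smaller epsilon pushes the reported bits closer to uniform noise (stronger privacy, lower utility), which mirrors the accuracy-for-privacy trade-off the paper evaluates.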
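For context on the underlying recommendation model mentioned in the abstract and introduction, a minimal, non-private matrix factorization trained by stochastic gradient descent is sketched below. SDCF's contribution is keeping such training private in a distributed client-server setting, which this plain sketch does not attempt; the function name and hyperparameters are illustrative assumptions.

```python
import numpy as np

def train_mf(ratings, n_users, n_items, k=8, lr=0.01, reg=0.05, epochs=20, seed=0):
    """Plain (non-private) matrix factorization via SGD.
    ratings: iterable of (user_index, item_index, rating_value) triples."""
    rng = np.random.default_rng(seed)
    U = rng.normal(scale=0.1, size=(n_users, k))   # user latent factors
    V = rng.normal(scale=0.1, size=(n_items, k))   # item latent factors
    for _ in range(epochs):
        for u, i, r in ratings:
            u_old = U[u].copy()
            err = r - u_old @ V[i]                  # prediction error for this rating
            U[u] += lr * (err * V[i] - reg * u_old)
            V[i] += lr * (err * u_old - reg * V[i])
    return U, V

# Example: three observed ratings; predict user 0's score for item 2.
U, V = train_mf([(0, 0, 5.0), (0, 1, 3.0), (1, 2, 4.0)], n_users=2, n_items=3)
print(U[0] @ V[2])
```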