Free English article on More Than Privacy – Applying Differential Privacy in Artificial Intelligence – IEEE 2022
Article details
Translated article title | More Than Privacy: Applying Differential Privacy in Key Areas of Artificial Intelligence
English article title | More Than Privacy: Applying Differential Privacy in Key Areas of Artificial Intelligence
Publisher | IEEE
Year of publication | 2022
Number of pages (English article) | 20 pages
Cost | The English article is free to download.
Writing type | Research article
Base article | This is not a base article
Index | JCR – Master Journal List – Scopus
Article type | ISI
English article format |
Impact factor (IF) | 6.093 in 2020
H-index | 183 in 2022
SJR | 2.431 in 2020
ISSN | 1041-4347
Quartile | Q1 in 2020
Hypothesis | None
Conceptual model | None
Questionnaire | None
Variables | Present
References | Present
Related disciplines | Computer engineering
Related specializations | Information security – Software engineering – Artificial intelligence
Presentation type | Journal
Journal / Conference | IEEE Transactions on Knowledge and Data Engineering
Affiliation | School of Computer Science, China University of Geosciences, China
Keywords | Differential privacy – Artificial intelligence – Machine learning – Deep learning – Multi-agent systems
DOI | https://doi.org/10.1109/TKDE.2020.3014246
Source page | https://ieeexplore.ieee.org/document/9158374
Product code | e17088
Translation status | No ready translation of this article is available; you can order one using the button below.
Free article download | Download the free English article
Order translation | Order a translation of this article
Table of contents:
Abstract
1 Introduction
2 Preliminary
3 Differential Privacy in Machine Learning
4 Differential Privacy in Deep Learning
5 Differential Privacy in Multi-Agent Systems
6 Future Research Directions
7 Conclusion
References
Excerpt from the article:
Abstract

Artificial Intelligence (AI) has attracted a great deal of attention in recent years. However, alongside all its advancements, problems have also emerged, such as privacy violations, security issues and model fairness. Differential privacy, as a promising mathematical model, has several attractive properties that can help solve these problems, making it quite a valuable tool. For this reason, differential privacy has been broadly applied in AI, but to date no study has documented which differential privacy mechanisms can be, or have been, leveraged to overcome these issues, or the properties that make this possible. In this paper, we show that differential privacy can do more than just preserve privacy. It can also be used to improve security, stabilize learning, build fair models, and impose composition in selected areas of AI. With a focus on regular machine learning, distributed machine learning, deep learning, and multi-agent systems, the purpose of this article is to deliver a new view of the many possibilities for improving AI performance with differential privacy techniques.

Introduction

Artificial Intelligence (AI) is one of the most prevalent research topics today across almost every scientific field. For example, multi-agent systems can be applied to distributed control systems [1], while distributed machine learning has been adopted by Google for mobile users [2]. However, as AI becomes more and more reliant on data, several new problems have emerged, such as privacy violations, security issues, model instability, model fairness and communication overheads. As just a few of the tactics used to derail AI, adversarial samples can fool machine learning models into producing incorrect results, and multi-agent systems may receive false information from malicious agents. As a result, many researchers have been exploring new and existing security and privacy tools to tackle these emerging problems. Differential privacy is one of these tools.

Differential privacy is a prevalent privacy preservation model which guarantees that whether or not an individual's information is included in a dataset has little impact on the aggregate output. Fig. 1 in the paper illustrates a basic differential privacy framework with the following example. Consider two datasets that are almost identical but differ in only one record, and suppose that access to the datasets is provided via a query function f. If we can find a mechanism that queries both datasets and obtains essentially the same outputs, we can claim that differential privacy is satisfied. In that scenario, an adversary cannot associate the query outputs with either of the two neighbouring datasets, so the one differing record is safe. Hence, differential privacy guarantees that, even if an adversary knows all the other records in a dataset except for one, they still cannot infer the information of that unknown record.
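Such a mechanism is typically realized by adding calibrated random noise to the query output. As a minimal sketch (the datasets, counting query and ε value below are illustrative assumptions, not taken from the paper), the classic Laplace mechanism adds Lap(Δf/ε) noise to a query f with sensitivity Δf, which satisfies the standard ε-differential-privacy guarantee Pr[M(D1) ∈ S] ≤ e^ε · Pr[M(D2) ∈ S] for neighbouring datasets D1, D2 and any output set S:

```python
import numpy as np

def laplace_mechanism(true_answer, sensitivity, epsilon, rng):
    # Perturb the exact query answer with Laplace noise of scale
    # sensitivity / epsilon; for a query with this sensitivity, the
    # perturbed answer satisfies epsilon-differential privacy.
    return true_answer + rng.laplace(loc=0.0, scale=sensitivity / epsilon)

# Two neighbouring datasets: identical except for one record (illustrative values).
D1 = [41, 52, 67, 73, 85]
D2 = [41, 52, 67, 73]

f = len            # a counting query: adding or removing one record changes it by at most 1
sensitivity = 1.0  # hence the sensitivity of f is 1
epsilon = 0.5      # privacy budget (illustrative)

rng = np.random.default_rng(seed=0)
print("noisy count on D1:", laplace_mechanism(f(D1), sensitivity, epsilon, rng))
print("noisy count on D2:", laplace_mechanism(f(D2), sensitivity, epsilon, rng))
```

Because the noise scale is calibrated to the query's sensitivity, the distributions of the two noisy counts differ by a factor of at most e^ε, so an adversary observing a single output cannot reliably tell whether the differing record is present.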
Conclusion

In this paper, we investigated the use of differential privacy in selected areas of AI. We described the critical issues facing AI and the basic concepts of differential privacy, highlighting how differential privacy can be applied to solve some of these problems. We discussed the strengths and limitations of the current studies in each of these areas and also pointed out the potential research areas of AI where the benefits of differential privacy remain untapped.

In addition to the three areas of focus in this article – machine learning, deep learning and multi-agent learning – there are many other interesting areas of AI research that have also leveraged differential privacy, such as natural language processing, computer vision and robotics. Surveying differential privacy in these areas is something we intend to do in future work.