Article details |
Article title | Inter-class angular margin loss for face recognition
Publication year | 2020
English article length | 6 pages
Cost | The English article is free to download.
Database | Elsevier
Writing type | Research Article
Base article | This is not a base article.
Indexing | Scopus – Master Journals List – JCR
Article type | ISI
English article format |
Impact Factor (IF) | 3.809 in 2019
H-index | 72 in 2020
SJR | 0.562 in 2019
ISSN | 0923-5965
Quartile | Q2 in 2019
Conceptual model | None
Questionnaire | None
Variables | None
References | Included
Related fields | Computer science
Related subfields | Artificial intelligence, software engineering, algorithms and computation
Presentation type | Journal
Journal | Signal Processing: Image Communication
University | Shenzhen Key Lab. of Info. Sci&Tech / Shenzhen Engineering Lab. of IS&DCP, Department of Electronic Engineering / Graduate School at Shenzhen, Tsinghua University, China
Keywords | Face recognition, IAM loss, Inter-class variance, Intra-class distance, Softmax loss
DOI | https://doi.org/10.1016/j.image.2019.115636
Product code | E14827
Translation status | No ready translation of this article is available. You can order one via the button below.
Free download | Download the English article for free
Order a translation | Order a translation of this article
Table of contents:
Abstract
1- Introduction
2- Related work
3- The proposed method
4- Discussion
5- Experiments
6- Conclusions
References
Excerpt from the article:
Abstract

Increasing inter-class variance and shrinking intra-class distance are two main concerns in face recognition. In this paper, we propose a new loss function, termed the inter-class angular margin (IAM) loss, which aims to enlarge the inter-class variance. Instead of restricting the inter-class margin to a constant, as existing methods do, our IAM loss adaptively penalizes smaller inter-class angles more heavily and thereby enlarges the angular margin between classes, which can significantly enhance the discriminative power of facial features. The IAM loss can be readily introduced as a regularization term for the widely used Softmax loss and its recent variants to further improve their performance. We also analyze and verify the appropriate range of the regularization hyper-parameter from the perspective of backpropagation. For illustrative purposes, our model is trained on CASIA-WebFace and tested on the LFW, CFP, YTF, and MegaFace datasets; the experimental results show that the IAM loss is quite effective in improving state-of-the-art algorithms.

Introduction

Convolutional neural networks (CNNs) are widely used for face recognition [1–15], and recent research has focused on increasing the inter-class variance and reducing the intra-class distance. A typical pipeline of training a network on WebFace is shown in Fig. 1: the network is trained with the loss function in the last layer, and the representation in the penultimate layer is used as the facial feature. The recent efforts toward increasing the inter-class variance and reducing the intra-class distance fall into two categories. The first optimizes the Euclidean distance between facial features, mainly through regularization. For example, the Triplet loss [6] makes the intra-class Euclidean distance of features shorter than the inter-class distance. Wen et al. [16] reduce the intra-class Euclidean distance by adding an extra penalty. The Marginal loss of [17] and our past work [18] limit both intra-class and inter-class Euclidean distances to improve recognition accuracy. The Range loss [19] overcomes the problem of long-tailed data by equalizing the intra-class Euclidean distance and increasing the inter-class Euclidean distance. Except for the Triplet loss, all of the above methods add a regularization term to the Softmax loss, which is generally adjusted via a regularization hyper-parameter.
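To make the mechanism concrete, below is a minimal PyTorch sketch of the general scheme the abstract describes: a standard Softmax (cross-entropy) loss plus a regularization term that penalizes small inter-class angles more heavily than large ones. The excerpt does not give the exact IAM formula, so the specific penalty used here (an exponential of the pairwise cosine similarity between the last-layer class weight vectors), as well as the names `SoftmaxWithIAM` and `lam`, are illustrative assumptions rather than the authors' implementation.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class SoftmaxWithIAM(nn.Module):
    """Softmax loss plus an assumed IAM-style inter-class angular penalty.

    The penalty grows as class weight directions become more parallel,
    so smaller inter-class angles are punished more heavily than under
    a constant-margin scheme.
    """

    def __init__(self, feat_dim, num_classes, lam=0.1):
        super().__init__()
        # One class weight vector per identity (last fully-connected layer).
        self.weight = nn.Parameter(torch.empty(num_classes, feat_dim))
        nn.init.xavier_uniform_(self.weight)
        self.lam = lam  # regularization hyper-parameter

    def forward(self, features, labels):
        # Standard Softmax (cross-entropy) loss on the class logits.
        logits = F.linear(features, self.weight)
        ce = F.cross_entropy(logits, labels)

        # Cosine of the angle between every pair of class weight vectors.
        w = F.normalize(self.weight, dim=1)
        cos = w @ w.t()                                   # shape (C, C)
        off_diag = ~torch.eye(cos.size(0), dtype=torch.bool,
                              device=cos.device)

        # Assumed adaptive penalty: exp(cos) is largest where two class
        # directions are nearly parallel, i.e. where the inter-class
        # angle is smallest.
        iam = torch.exp(cos[off_diag]).mean()

        return ce + self.lam * iam
```

For a 512-d embedding trained on CASIA-WebFace (roughly 10.5k identities), usage would look like `loss = SoftmaxWithIAM(512, 10575)(embeddings, labels)`. The weighting `lam` plays the role of the regularization hyper-parameter whose admissible range the paper analyzes from the backpropagation perspective.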