Free English Article on a Deep Transfer Network with 3D Morphable Models in Face Recognition – IEEE 2018

 

Article Details
Translated article title: Deep transfer network with 3D morphable models in face recognition
English article title: Deep Transfer Network with 3D Morphable Models for Face Recognition
Publication year: 2018
Number of pages (English article): 7 pages
Download cost: The English article is free to download.
Journal database: IEEE
Base article: This is not a base article.
English article format: PDF
Related fields: Computer Engineering
Related specializations: Artificial Intelligence, Software Engineering
Presentation type: Conference
Journal / Conference: IEEE International Conference on Automatic Face & Gesture Recognition
University: Beijing University of Posts and Telecommunications – China
DOI: https://doi.org/10.1109/FG.2018.00067
Product code: E10404
Translation status: A ready translation of this article is not available. You can order one via the button below.
Free article download: Download the free English article
Order a translation: Order a translation of this article

 

Article table of contents:
Abstract
I. Introduction
II. Data Augmentation
III. Deep Transfer Network for Face Recognition
IV. Experiments
V. Conclusions
REFERENCES

 

Excerpt from the article:
Abstract

Data augmentation using 3D face models to synthesize faces has been demonstrated to be effective for face recognition. However, a model trained directly on the synthesized faces together with the original real faces is not optimal. In this paper, we propose a novel approach that uses a deep transfer network (DTN) with 3D morphable models (3DMMs) for face recognition to overcome the shortage of labeled face images and the dataset bias between synthesized images and corresponding real images. We first utilize the 3DMM to synthesize faces with various poses to augment the training dataset. Then, we train a deep neural network using the synthesized face images and the original real face images. The results obtained on LFW show that the accuracy of the model utilizing synthesized data only is lower than that of the model using the original data, although the synthesized dataset contains considerably more images with unconstrained poses. This result shows that a dataset bias exists between the synthesized faces and the real faces. We treat the synthesized faces as the source domain and the real faces as the target domain, and we use the DTN to alleviate the discrepancy between the two domains. The DTN attempts to project source-domain and target-domain samples into a new space where they are fused together such that one cannot distinguish which domain a specific image comes from. We optimize our DTN based on the maximum mean discrepancy (MMD) of the shared feature extraction layers and the discrimination layers. We choose AlexNet and Inception-ResNet-V1 as our benchmark models. The proposed method is also evaluated on the LFW and SLLFW databases. The experimental results show that our method can effectively address the domain discrepancy. Moreover, the dataset bias between the synthesized data and the real data is remarkably reduced, which improves the performance of the convolutional neural network (CNN) model.
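To make the MMD criterion mentioned in the abstract concrete, below is a minimal PyTorch sketch of a Gaussian-kernel MMD loss between source-domain (synthesized) and target-domain (real) feature batches. This is an illustrative reimplementation, not the authors' code; the kernel bandwidth and batch sizes are assumptions.

import torch

def gaussian_kernel(a, b, sigma=1.0):
    # Pairwise RBF kernel: k(a_i, b_j) = exp(-||a_i - b_j||^2 / (2 * sigma^2)).
    dist2 = torch.cdist(a, b).pow(2)
    return torch.exp(-dist2 / (2 * sigma ** 2))

def mmd_loss(source_feats, target_feats, sigma=1.0):
    # Biased estimate of MMD^2 between the two feature distributions:
    # E[k(s, s')] + E[k(t, t')] - 2 * E[k(s, t)].
    k_ss = gaussian_kernel(source_feats, source_feats, sigma).mean()
    k_tt = gaussian_kernel(target_feats, target_feats, sigma).mean()
    k_st = gaussian_kernel(source_feats, target_feats, sigma).mean()
    return k_ss + k_tt - 2 * k_st

# Example: features from the shared layers for a batch of synthesized faces
# (source domain) and a batch of real faces (target domain).
synth_feats = torch.randn(32, 128)
real_feats = torch.randn(32, 128)
print(mmd_loss(synth_feats, real_feats).item())

In training, a term of this form would typically be added to the classification loss with a weighting factor, so that the shared feature extraction layers learn representations whose source and target distributions cannot be told apart.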

INTRODUCTION

Deep learning, particularly convolutional neural networks (CNNs), has achieved promising results in face recognition in recent years. Though CNNs are impressive, training a robust and reliable neural network requires large-scale labeled data. The reported CNNs, [1], [2], [3], [4] and so on, are trained on different face databases; unfortunately, most of these databases are not publicly available. One commonly used face dataset that is publicly available is the CASIA-WebFace collection [5], with only 495K images, which is not enough to train many large CNNs such as FaceNet [2]. Therefore, in most real-world applications, harvesting and labeling large datasets has become an effective approach to enhance the performance of CNNs. Not only the quantity but also the variation of the data is important in data collection. To train a model with good generalization ability, the training data should simultaneously cover inter-class variations (differences between different people) and intra-class variations (differences within the same person), which is difficult and requires considerable effort. Masi et al. [6] recognize that collecting and labeling massive training sets to improve networks is not easy. They instead synthesize training data using a generic 3D face model to augment the training dataset. The idea that face images can be synthetically generated using 3D rendering technology to aid face recognition systems was proposed long ago: it originated in [7] and was then effectively used in [1], [8], [9]. In contrast to those methods, Masi et al. [6] use other transformations to generate new images (e.g., other poses, different shapes, and facial expressions) rather than generating frontal faces, which increases the size of the CASIA-WebFace collection to several times its original size. Experimental results demonstrated the effectiveness of this approach, achieving state-of-the-art performance on the LFW and IJB-A datasets. Later, Masi et al. [10] considered the computational cost of rendering and proposed a new method for rapidly synthesizing massive face sets for face recognition. However, there are two limitations in the aforementioned methods.
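To make the augmentation step concrete, the following is a schematic Python sketch of the pose-synthesis idea the excerpt describes. The functions fit_3dmm and render_at_yaw are hypothetical placeholders, not the API of any named library, since the excerpt does not specify the rendering tooling.

def fit_3dmm(image):
    # Hypothetical: estimate 3DMM shape/texture parameters from a 2D face image.
    raise NotImplementedError("stand-in for a real 3DMM fitting routine")

def render_at_yaw(params, yaw_degrees):
    # Hypothetical: render the fitted face model rotated to the given yaw angle.
    raise NotImplementedError("stand-in for a real 3DMM renderer")

def augment_dataset(images, labels, yaws=(-45, -22, 22, 45)):
    # Pool the original real faces with renderings at several new poses.
    # Each synthesized image keeps the identity label of its source face,
    # so the dataset grows several times over without extra labeling effort.
    aug_images, aug_labels = list(images), list(labels)
    for img, lbl in zip(images, labels):
        params = fit_3dmm(img)
        for yaw in yaws:
            aug_images.append(render_at_yaw(params, yaw))
            aug_labels.append(lbl)
    return aug_images, aug_labels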
