Free English paper: Face Inpainting via Nested Generative Adversarial Networks – IEEE 2019


 

Paper details
Paper title: Face Inpainting via Nested Generative Adversarial Networks
Year of publication: 2019
Length of English paper: 10 pages
Download cost: the English paper is free to download.
Publisher database: IEEE
Article type: Research Article
Base article: this is not a base article
Indexing: Scopus – Master Journals List – JCR
Paper category: ISI
Format of English paper: PDF
Impact factor (IF): 4.641 (2018)
H-index: 56 (2019)
SJR: 0.609 (2018)
ISSN: 2169-3536
Quartile: Q2 (2018)
Conceptual model: none
Questionnaire: none
Variables: none
References: included
Related fields: Computer Engineering, Information Technology Engineering
Related specializations: Computer Networks
Presentation type: Journal
Journal / Conference: IEEE Access
Affiliation: School of Printing and Packaging, Wuhan University, Wuhan 430072, China
Keywords: Face inpainting, deep neural network, nested GAN
DOI: https://doi.org/10.1109/ACCESS.2019.2949614
Product code: E13922
Translation status: no ready Persian translation of this paper is available; a translation can be ordered.

 

Paper table of contents:
Abstract
I. Introduction
II. Related Work
III. Approaches
IV. Experimental Results
V. Conclusion
Authors
Figures
References

 

Excerpt from the paper:
Abstract

Face inpainting aims to repair images damaged by occlusion or covering. In recent years, deep learning based approaches have shown promising results for the challenging task of image inpainting. However, there are still limitations in reconstructing reasonable structures, because the results are over-smoothed and/or blurred. The distorted structures or blurred textures are inconsistent with the surrounding areas and require further post-processing to blend the results. In this paper, we present a novel generative model-based approach consisting of two nested Generative Adversarial Networks (GANs): a sub-confrontation GAN inside the generator and a parent-confrontation GAN. The sub-confrontation GAN, which sits in the image generator of the parent-confrontation GAN, locates the missing area and acts as a prior constraint that reduces mode collapse. To avoid generating vague details, a novel residual structure is designed in the sub-confrontation GAN to deliver richer original-image information to the deeper layers. The parent-confrontation GAN includes an image generation part and a discrimination part. The discrimination part of the parent-confrontation GAN includes a global and a local discriminator, which benefits the overall coherence of the repaired image while preserving local details. The experiments are carried out on the publicly available CelebA dataset, and the results show that our method outperforms current state-of-the-art techniques quantitatively and qualitatively.
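As an editor's illustration of the global/local discrimination idea described in the abstract, the sketch below shows how the whole repaired image and the cropped repaired region could each be scored by a separate discriminator. It is a minimal PyTorch-style sketch: the class names, layer sizes, and the bounding-box convention are assumptions for illustration, not the authors' implementation.

import torch
import torch.nn as nn

class LocalDiscriminator(nn.Module):
    # Scores only the cropped repaired region, encouraging sharp local detail.
    def __init__(self, in_channels=3):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(in_channels, 64, kernel_size=4, stride=2, padding=1),
            nn.LeakyReLU(0.2, inplace=True),
            nn.Conv2d(64, 128, kernel_size=4, stride=2, padding=1),
            nn.LeakyReLU(0.2, inplace=True),
            nn.AdaptiveAvgPool2d(1),
        )
        self.classifier = nn.Linear(128, 1)

    def forward(self, x):
        return self.classifier(self.features(x).flatten(1))

class GlobalDiscriminator(LocalDiscriminator):
    # Same backbone, but fed the whole repaired image for overall coherence.
    pass

def discriminator_scores(d_global, d_local, repaired, box):
    # box = (top, left, height, width) of the repaired (previously missing) area.
    t, l, h, w = box
    local_patch = repaired[:, :, t:t + h, l:l + w]
    return d_global(repaired), d_local(local_patch)

For example, discriminator_scores(GlobalDiscriminator(), LocalDiscriminator(), torch.randn(1, 3, 128, 128), (32, 32, 64, 64)) returns one realism score per discriminator; in an adversarial setup both scores would feed the generator's loss, so the repair is judged both as a whole and at the detail level.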

Introduction

Face inpainting is the challenging task of recovering details of facial features at the level of high-level image semantics. It can be applied in many face recognition scenarios, for example when the face is occluded by sunglasses, by a microphone during a performance, or by a mask. The purpose of inpainting technology is to repair the broken part of an image using the known image information. The most important goal of this task is to avoid introducing noise into non-repaired areas while generating reliable content in the repaired areas. With this technique, noise, gaps, and scratches can be removed. Because of the strong correlation between pixels within an image, lost image information can be restored, as far as possible, from the undamaged (un-occluded) areas of the image and its pattern prior. During the inpainting process, the content information of the whole image is considered, including low-level texture information and high-level semantic information.

Traditional inpainting methods rely on low-level cues to find the best matching patches from the uncorrupted sections of the same image [1]–[3]. These methods work well for background completion and repetitive texture patterns. However, low-level features are limited for the face inpainting task, as a face image consists of many unique components and the inpainting process needs to be carried out at a high semantic level [4]–[6]. Traditional methods based on finding patches with a similar appearance therefore do not always perform well.
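To make the low-level patch-matching idea behind these traditional methods concrete, here is a toy NumPy sketch (an editor's addition, not taken from references [1]–[3]): it fills one small hole patch by copying the fully known patch with the lowest sum of squared differences over the pixels that are already known. The function name, the fixed 9x9 patch size, and the assumption that the hole lies in the image interior are all illustrative choices.

import numpy as np

def fill_one_patch(image, mask, patch_size=9):
    # image: HxW grayscale array; mask: HxW bool array, True where pixels are missing.
    # Toy example: assumes the hole patch lies in the interior of the image.
    half = patch_size // 2
    ys, xs = np.where(mask)
    ty, tx = int(ys[0]), int(xs[0])                       # centre of one hole patch
    t_sl = np.s_[ty - half:ty + half + 1, tx - half:tx + half + 1]
    target, known = image[t_sl], ~mask[t_sl]

    best_patch, best_score = None, np.inf
    H, W = image.shape
    for y in range(half, H - half):
        for x in range(half, W - half):
            s_sl = np.s_[y - half:y + half + 1, x - half:x + half + 1]
            if mask[s_sl].any():                          # source patch must be fully known
                continue
            score = float(((image[s_sl] - target)[known] ** 2).sum())
            if score < best_score:
                best_patch, best_score = image[s_sl].copy(), score

    repaired = image.copy()
    repaired[t_sl] = np.where(known, repaired[t_sl], best_patch)  # keep known pixels
    return repaired

A full exemplar-based method repeats this greedily along the hole boundary; as the paragraph above notes, such low-level matching works for repetitive textures but struggles when the missing region contains unique facial components.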
