Free English Article on Domain-Specific Face Synthesis for Face Recognition – IEEE 2019


Article Details
Article title: Domain-Specific Face Synthesis for Video Face Recognition from a Single Sample Per Person
Publication year: 2019
Length of English article: 16 pages
Download cost: The English article is free to download.
Publisher database: IEEE
Base article: This is not a base article.
Indexing: Scopus – Master Journals – JCR
Article type: ISI
English article format: PDF
Impact factor (IF): 5.824 (2017)
H-index: 85 (2018)
SJR: 1.274 (2018)
Related fields: Computer Engineering
Related specializations: Artificial Intelligence, Software Engineering
Presentation type: Journal
Journal / Conference: IEEE Transactions on Information Forensics and Security
University: École de technologie supérieure – Université du Québec – Canada
Keywords: Face Recognition, Single Sample Per Person, Face Synthesis, 3D Face Reconstruction, Illumination Transferring, Sparse Representation-Based Classification, Video Surveillance
DOI: https://doi.org/10.1109/TIFS.2018.2866295
Product code: E10403
Translation status: No ready translation of this article is available; a translation can be ordered.

 

Table of Contents:
Abstract
I. Introduction
II. Related Work – Still-to-Video Face Recognition from a Single Still
III. Domain-Specific Face Synthesis
IV. Domain-Invariant Still-to-Video Face Recognition with DSFS
V. Experimental Methodology
VI. Results and Discussion
VII. Conclusions
References

 

Excerpt from the article:
Abstract

In video surveillance, face recognition (FR) systems are employed to detect individuals of interest appearing over a distributed network of cameras. The performance of still-to-video FR systems can decline significantly because faces captured in the unconstrained operational domain (OD) over multiple video cameras have a different underlying data distribution compared to faces captured under controlled conditions in the enrollment domain (ED) with a still camera. This is particularly true when individuals are enrolled to the system using a single reference still. To improve the robustness of these systems, it is possible to augment the reference set by generating synthetic faces based on the original still. However, without knowledge of the OD, many synthetic images must be generated to account for all possible capture conditions. FR systems may, therefore, require complex implementations and yield lower accuracy when training on many less relevant images. This paper introduces an algorithm for domain-specific face synthesis (DSFS) that exploits the representative intra-class variation information available from the OD. Prior to operation (during camera calibration), a compact set of faces from unknown persons appearing in the OD is selected through affinity propagation clustering in the captured condition space (defined by pose and illumination estimation). The domain-specific variations of these face images are then projected onto the reference still of each individual by integrating an image-based face relighting technique inside the 3D reconstruction framework. A compact set of synthetic faces is generated that resemble individuals of interest under the capture conditions relevant to the OD.
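The condition-selection step above can be illustrated with a minimal sketch using scikit-learn's affinity propagation. All data here is hypothetical: the pose angles and illumination values stand in for the paper's estimated capture-condition descriptors, and the cluster structure is synthetic.

```python
import numpy as np
from sklearn.cluster import AffinityPropagation

# Hypothetical capture-condition descriptors for faces observed in the
# operational domain: each row is (yaw deg, pitch deg, illumination level).
rng = np.random.default_rng(0)
conditions = np.vstack([
    rng.normal([-30.0, 0.0, 0.2], [2.0, 2.0, 0.05], size=(20, 3)),  # left profile, dim
    rng.normal([0.0, 0.0, 0.8],   [2.0, 2.0, 0.05], size=(20, 3)),  # frontal, bright
    rng.normal([30.0, 10.0, 0.5], [2.0, 2.0, 0.05], size=(20, 3)),  # right profile
])

# Affinity propagation selects exemplar faces without presetting the
# number of clusters; each exemplar is a representative capture condition.
ap = AffinityPropagation(random_state=0).fit(conditions)
exemplars = ap.cluster_centers_indices_
print(f"{len(exemplars)} representative capture conditions selected")
```

In the paper's pipeline, the faces corresponding to these exemplars are the ones whose pose and illumination variations get transferred onto each reference still.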
In a particular implementation based on sparse representation classification, the synthetic faces generated with the DSFS are employed to form a cross-domain dictionary that accounts for structured sparsity where the dictionary blocks combine the original and synthetic faces of each individual. Experimental results obtained with videos from the Chokepoint and COX-S2V datasets reveal that augmenting the reference gallery set of still-to-video FR systems using the proposed DSFS approach can provide a significantly higher level of accuracy compared to state-of-the-art approaches, with only a moderate increase in its computational complexity.
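The block-structured dictionary idea can be sketched as follows. This is a deliberate simplification of the structured-sparsity SRC described above: instead of a sparse solver, it scores each identity block (original still plus synthetic faces) by its least-squares reconstruction residual and assigns the probe to the block with the smallest residual. All dimensions and data are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(1)
d, n_ids, per_block = 64, 5, 4  # feature dim; identities; stills + synthetics per block

# Hypothetical cross-domain dictionary: one block per identity, whose columns
# hold the original reference still plus DSFS-style synthetic variants.
blocks = [rng.normal(size=(d, per_block)) for _ in range(n_ids)]

def classify(probe):
    """Assign the probe to the identity block with the smallest
    least-squares reconstruction residual (a simplified stand-in
    for structured-sparsity SRC)."""
    residuals = []
    for B in blocks:
        coef, *_ = np.linalg.lstsq(B, probe, rcond=None)
        residuals.append(np.linalg.norm(probe - B @ coef))
    return int(np.argmin(residuals))

# A probe lying near identity 2's block should be assigned back to it.
true_id = 2
probe = blocks[true_id] @ rng.normal(size=per_block) + 0.01 * rng.normal(size=d)
print(classify(probe))  # expected: 2
```

Grouping each person's original and synthetic faces into one block is what lets the residual capture identity rather than capture conditions, mirroring the role of the dictionary blocks in the paper.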

INTRODUCTION

Still-to-video face recognition (FR) is an important function in several video surveillance applications, particularly for watch-list screening. Given one or more reference still images of a target individual of interest, still-to-video FR systems seek to accurately detect their presence in videos captured over multiple distributed surveillance cameras [1]. Despite the recent progress in computer vision and machine learning, designing a robust system for still-to-video FR remains a challenging problem in real-world surveillance applications. One key issue is the visual domain shift between faces from the enrollment domain (ED), where reference still images are typically captured under controlled conditions, and those from the operational domain (OD), where video frames are captured under uncontrolled conditions with variations in pose, illumination, blurriness, etc. The appearance of faces captured in videos corresponds to multiple non-stationary data distributions that can differ considerably from faces captured during enrollment [2]. Another key issue is the limited number of reference stills that are available per target individual to design facial models. Although still faces from the cohort or other non-target persons, and trajectories of video frames from unknown individuals, are typically available, in many surveillance applications (e.g., watch-list screening) only a single reference still per person is available for design, which corresponds to the so-called Single Sample Per Person (SSPP) problem. The performance of still-to-video FR systems can decline significantly due to the limited information available to represent the intra-class variations seen in video frames. Many discriminant subspaces and manifold learning algorithms cannot be directly employed with a SSPP problem. It is also difficult to apply representation-based FR methods such as sparse representation-based classification (SRC) [3].
