Free English article on the fusion of electroencephalography and facial expressions – IEEE 2019


Article details
Title: The Fusion of Electroencephalography and Facial Expression for Continuous Emotion Recognition
Year of publication: 2019
Length: 13 pages (English)
Download fee: Free
Publisher database: IEEE
Article type: Research Article
Base article: No
Indexing: Scopus – Master Journals List – JCR
Article category: ISI
File format: PDF
Impact Factor (IF): 4.641 (2018)
H-index: 56 (2019)
SJR: 0.609 (2018)
ISSN: 2169-3536
Quartile: Q2 (2018)
Conceptual model: None
Questionnaire: None
Variables: None
References: Included
Related discipline: Biomedical engineering
Related subfield: Bioelectrics
Publication type: Journal
Journal: IEEE Access
Affiliation: Tianjin Key Laboratory for Control Theory and Applications in Complicated Systems, Tianjin University of Technology, Tianjin 300384, China
Keywords: Continuous emotion recognition, EEG, facial expressions, signal processing, decision level fusion, temporal dynamics
DOI: https://doi.org/10.1109/ACCESS.2019.2949707
Product code: E13914
Translation status: A prepared Persian translation of this article is not available; it can be ordered via the button below.

 

Article table of contents:
Abstract
I. Introduction
II. Related Work
III. Materials and Methods
IV. Temporal Dynamics of Emotions
V. Results
Authors
Figures
References

 

Excerpt from the article:
Abstract

Recently, the study of emotion recognition has attracted increasing attention, driven by the rapid development of noninvasive sensor technologies, machine learning algorithms, and the computing capability of modern hardware. Compared with single-modal emotion recognition, the multimodal paradigm introduces complementary information. Hence, in this work, we present a decision-level fusion framework for detecting emotions continuously by fusing electroencephalography (EEG) and facial expressions. Three types of movie clips (positive, negative, and neutral) were utilized to elicit specific emotions in subjects, while the EEG and facial expression signals were recorded simultaneously. Power spectral density (PSD) features of the EEG were extracted by time-frequency analysis, and a subset of these features was then selected for regression. For the facial expression modality, geometric features were computed from facial landmark localization. Long short-term memory (LSTM) networks were utilized to accomplish the decision-level fusion and to capture the temporal dynamics of emotions. The results show that the proposed method achieves outstanding performance for continuous emotion recognition, yielding a concordance correlation coefficient (CCC) of 0.625±0.029. The fusion of the two modalities outperformed EEG and facial expression used separately. Furthermore, different numbers of LSTM time-steps were applied to analyze how temporal dynamics are captured.
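Since the paper reports its result as a concordance correlation coefficient, a minimal sketch of that metric may help. The function name and sample data below are illustrative, not from the paper; the formula is the standard CCC definition (2·cov / (var₁ + var₂ + squared mean difference)):

```python
import numpy as np

def concordance_ccc(y_true, y_pred):
    """Concordance correlation coefficient (CCC) between two
    continuous annotation/prediction sequences."""
    y_true = np.asarray(y_true, dtype=float)
    y_pred = np.asarray(y_pred, dtype=float)
    mean_t, mean_p = y_true.mean(), y_pred.mean()
    var_t, var_p = y_true.var(), y_pred.var()  # population variance (ddof=0)
    cov = np.mean((y_true - mean_t) * (y_pred - mean_p))
    return 2 * cov / (var_t + var_p + (mean_t - mean_p) ** 2)

# Perfect agreement gives CCC = 1; perfect anti-correlation gives -1.
print(concordance_ccc([0.1, 0.5, 0.9], [0.1, 0.5, 0.9]))  # → 1.0
print(concordance_ccc([1.0, 2.0, 3.0], [3.0, 2.0, 1.0]))  # → -1.0
```

Unlike plain Pearson correlation, CCC also penalizes differences in mean and scale, which is why it is the usual choice for continuous valence/arousal prediction.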

Introduction

Emotion is a psychophysiological process of perception and cognition of an object or situation, and it plays an important role in natural human-human communication. However, emotion recognition was long neglected in the field of human-computer interaction (HCI). With the explosion of machine learning in cognitive science, affective computing has emerged to integrate emotion recognition into HCI. Today, an emotion recognition system aims to establish harmonious HCI by endowing computers with the ability to recognize, understand, express, and adapt to human emotions [1]. This opens up potential applications in many fields, such as human-robot interaction (HRI) [2], safe driving [3], social networking [4], and distance education [5]. These applications benefit from the interaction of different modalities, which provides complementary information to improve the precision and robustness of an emotion recognition system.

To represent emotions, psychologists have proposed discrete models and dimensional models [6]. Discrete emotion models turn emotion recognition into a classification problem: six basic emotions (happiness, anger, sadness, surprise, fear, and disgust) are treated as prototypes from which other emotions are derived [7]. However, the emotions expressed in communication are complex, and a single basic emotion can hardly describe a human feeling in a given situation. Alternatively, emotions can be mapped into multi-dimensional spaces that capture the largest variance across all possible emotions [8]. The valence-arousal plane is one of the best-known dimensional models of emotion, mapping emotions into a two-dimensional circular space.
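To make the valence-arousal representation concrete, the sketch below treats an emotional state as a 2-D point and reads off a coarse category from its quadrant. The quadrant labels and example coordinates are illustrative assumptions of mine, not values taken from the paper:

```python
def quadrant(valence, arousal):
    """Coarse emotion category from a point in the valence-arousal plane.
    Labels are illustrative; real dimensional models are continuous."""
    if valence >= 0 and arousal >= 0:
        return "excited/happy"   # positive valence, high arousal
    if valence < 0 and arousal >= 0:
        return "angry/afraid"    # negative valence, high arousal
    if valence < 0:
        return "sad/bored"       # negative valence, low arousal
    return "calm/relaxed"        # positive valence, low arousal

print(quadrant(0.7, 0.6))    # → excited/happy
print(quadrant(-0.6, 0.8))   # → angry/afraid
```

This coarse mapping also shows why continuous recognition regresses valence/arousal values over time rather than predicting a single discrete label per clip.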
