Article details |
Publication year | 2018
English article length | 9 pages
Cost | The English article is free to download.
Published in | IEEE
Article type | ISI
English title | Smart Wearable Hand Device for Sign Language Interpretation System With Sensors Fusion
Related fields | Information and communication technology; biomedical engineering
Related subfields | Bioelectrics; ICT applications
Journal | IEEE Sensors Journal
Affiliation | Department of Electronic Engineering, Keimyung University, South Korea
Keywords | Gesture recognition, machine learning, mobile application, sign language recognition, wearable computing
DOI | https://doi.org/10.1109/JSEN.2017.2779466
Product code | E8527
Translation status | A ready translation of this article is not available.
Excerpt from the article:
I. INTRODUCTION
SIGN language plays a vital role for deaf and mute people to communicate among themselves or with normal people in a non-verbal manner. Gestures are the primary method to convey messages, which are usually conducted in a three-dimensional space, known as a signing space [1], through an integration of manual and non-manual signals. Manual signals commonly correspond to hand motions and hand postures, whereas non-manual signals correspond to external appearance such as mouth movements, facial expressions, and body orientation [2]. Nevertheless, sign language has not been standardized globally. Each nation has developed its own sign language, such as American Sign Language (ASL) and German Sign Language (GSL). Moreover, each sign language varies slightly within different regions of the same country. Hence, it can be a challenge to develop a standardized sign language interpretation system for use worldwide.

In previous studies, sign languages have been recognized using two major techniques: vision-based and non-vision-based approaches [3]. In fact, vision-based methods have been the major technique applied to sign recognition over the past decades. A system that uses a camera to observe the information obtained through hand and finger motions is the most widely adopted vision-based approach [4]. Tremendous effort and study have gone into the development of vision-based sign recognition systems worldwide.

Indeed, vision-based gesture recognition systems can be subdivided into direct and indirect approaches. A direct approach detects hand gestures based on the RGB color space of the skin color. For instance, Goyal et al. [5] identified Indian Sign Language (ISL) using the scale-invariant feature transform (SIFT) algorithm, searching for key points matched between the input image and images stored in a database. A similar method was applied by More et al. [6], also using the SIFT algorithm but further reducing the dimensionality of the feature vector with principal component analysis (PCA) to speed up processing. To detect the dynamic hand gestures used in Japanese Sign Language (JSL), Murakami et al. [7] proposed recurrent neural networks capable of recognizing the JSL finger alphabet, which has 42 symbols. In contrast, Chowdary et al. [8] used a simple scanning method to compute the orientation and movement of fingers in binary-converted images captured by a web camera. Khan et al. [9] proposed a more sophisticated gesture recognition system for digital images, comprising image filtering (pre-processing), image segmentation, color segmentation, skin detection (finger and hand detection using binary images), and template matching. Meanwhile, an indirect approach identifies finger and hand gestures using a data glove, segmenting the RGB color space by the distinct color assigned to each finger.
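To make the direct approach above concrete, here is a minimal sketch of skin-color segmentation, the first step such systems use to isolate the hand region before classification. It is not taken from any cited paper: the YCrCb color space (a common, lighting-robust alternative to thresholding raw RGB) and the numeric bounds are illustrative assumptions.

```python
import cv2
import numpy as np

# Hypothetical camera frame; OpenCV loads images in BGR order.
frame = cv2.imread("frame.png")

# Convert to YCrCb, which separates luminance (Y) from chrominance (Cr, Cb)
# and makes skin thresholds less sensitive to lighting than raw RGB.
ycrcb = cv2.cvtColor(frame, cv2.COLOR_BGR2YCrCb)

# Assumed skin-tone bounds (Y, Cr, Cb); real systems tune these per setup.
lower = np.array([0, 133, 77], dtype=np.uint8)
upper = np.array([255, 173, 127], dtype=np.uint8)

mask = cv2.inRange(ycrcb, lower, upper)            # binary skin mask
hand = cv2.bitwise_and(frame, frame, mask=mask)    # keep only skin pixels
```

The key-point-matching idea attributed to Goyal et al. [5] can be sketched in the same spirit: detect SIFT key points in the input image and in each stored database image, and report the sign whose stored image yields the most matches. The file paths, the one-image-per-sign database layout, and the ratio-test threshold are hypothetical, not details from the paper.

```python
import cv2

def count_sift_matches(query_path, db_path, ratio=0.75):
    """Count SIFT key-point matches that pass Lowe's ratio test."""
    query = cv2.imread(query_path, cv2.IMREAD_GRAYSCALE)
    candidate = cv2.imread(db_path, cv2.IMREAD_GRAYSCALE)

    sift = cv2.SIFT_create()                      # SIFT detector/descriptor
    _, query_desc = sift.detectAndCompute(query, None)
    _, cand_desc = sift.detectAndCompute(candidate, None)

    # Brute-force matching with the two nearest neighbors per descriptor;
    # a match counts only if it clearly beats the runner-up.
    pairs = cv2.BFMatcher().knnMatch(query_desc, cand_desc, k=2)
    return sum(1 for p in pairs
               if len(p) == 2 and p[0].distance < ratio * p[1].distance)

# Hypothetical database: one stored reference image per sign.
database = {"A": "signs/a.png", "B": "signs/b.png"}
best = max(database, key=lambda s: count_sift_matches("input.png", database[s]))
print("Recognized sign:", best)
```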