Free English article: A survey on deep learning for big data – Elsevier 2018


 

Article details
Year of publication: 2018
Number of pages (English article): 12 pages
Download cost: the English article is free to download
Published by: Elsevier
Article type: ISI
English title: A survey on deep learning for big data
Format of English article: PDF
Related fields: Computer Engineering and Information Technology
Related specializations: Information Systems Management, Computer Networks
Journal: Information Fusion
University: University of Electronic Science and Technology of China – China
Keywords: Deep learning, Big data, Stacked auto-encoders, Deep belief networks, Convolutional neural networks, Recurrent neural networks
DOI: http://dx.doi.org/10.1016/j.inffus.2017.10.006
Product code: E8251
Translation status: a prepared Persian translation of this article is not available; a translation can be ordered.

 

Excerpt from the article:
1. Introduction

Recently, cyber-physical-social systems, together with sensor networks and communication technologies, have made great progress, enabling the collection of big data [1,2]. Big data can be defined by its four characteristics, i.e., large volume, large variety, large velocity and large veracity, usually called the 4V's model [3–5]. The most remarkable characteristic of big data is large volume, which implies an explosion in the amount of data. For example, Flickr generates about 3.6 TB of data and Google processes about 20,000 TB of data every day. The National Security Agency reports that approximately 1.8 PB of data is gathered on the Internet every day. Another distinctive characteristic of big data is large variety, which refers to the many different data formats, including text, images, videos, graphics, and so on. Most traditional data is structured and easily stored in two-dimensional tables. However, more than 75% of big data is unstructured; typical unstructured data is multimedia data collected from the Internet and mobile devices [6]. Large velocity means that big data is generated quickly and needs to be processed in real time. Real-time analysis of big data is crucial for e-commerce to provide online services. Another important characteristic of big data is large veracity, which refers to the existence of a huge number of noisy, incomplete, inaccurate, imprecise and redundant objects [7].

The size of big data continues to grow at an unprecedented rate and will reach 35 ZB by 2020. However, merely having massive data is inadequate. For most applications, such as industry and healthcare, the key is to find and extract valuable knowledge from big data to support prediction services. Take, for example, physical devices in industrial manufacturing that occasionally suffer mechanical malfunctions. If we can effectively analyze the parameters collected from the devices before they break down, we can take immediate action to avoid catastrophe. While big data provides great opportunities for a broad range of areas including e-commerce, industrial control and smart healthcare, it poses many challenging issues for data mining and information processing. In fact, it is difficult for traditional methods to analyze and process big data effectively and efficiently due to its large variety and large veracity.

Deep learning plays an important role in big data solutions since it can harvest valuable knowledge from complex systems [8]. In particular, deep learning has become one of the most active research topics in the machine learning community since it was presented in 2006 [9–11]. In fact, deep learning can be traced back to the 1940s. However, traditional training strategies for multi-layer neural networks often result in a locally optimal solution or cannot guarantee convergence. Therefore, multi-layer neural networks did not find wide application, even though it was recognized that they could achieve better performance for feature and representation learning. In 2006, Hinton et al. [12] proposed a two-stage strategy, pre-training and fine-tuning, for training deep networks effectively, leading to the first breakthrough of deep learning.
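The two-stage strategy mentioned above (layer-wise pre-training followed by fine-tuning) can be illustrated with a stacked auto-encoder, one of the architectures listed in the keywords. The following is a minimal sketch, assuming PyTorch; the paper does not prescribe any framework, and the layer sizes, activations, optimizer and hyperparameters here are illustrative assumptions rather than the authors' method.

```python
import torch
import torch.nn as nn

def pretrain_layer(encoder, batches, epochs=5, lr=1e-3):
    # Stage 1 (pre-training): train one encoder layer as an auto-encoder,
    # i.e. learn to reconstruct its own input through a temporary decoder.
    decoder = nn.Linear(encoder.out_features, encoder.in_features)
    opt = torch.optim.Adam(list(encoder.parameters()) + list(decoder.parameters()), lr=lr)
    loss_fn = nn.MSELoss()
    for _ in range(epochs):
        for x in batches:
            opt.zero_grad()
            h = torch.sigmoid(encoder(x))   # hidden representation
            loss = loss_fn(decoder(h), x)   # reconstruction error
            loss.backward()
            opt.step()
    return encoder

def pretrain_stack(layer_sizes, batches):
    # Greedy layer-wise pre-training: each new layer learns to encode the
    # output of the previously trained layers.
    encoders, current = [], batches
    for d_in, d_out in zip(layer_sizes[:-1], layer_sizes[1:]):
        enc = pretrain_layer(nn.Linear(d_in, d_out), current)
        current = [torch.sigmoid(enc(x)).detach() for x in current]
        encoders.append(enc)
    return encoders

def fine_tune(encoders, n_classes, labelled_batches, epochs=5, lr=1e-4):
    # Stage 2 (fine-tuning): stack the pre-trained encoders, add a
    # task-specific output layer, and train end to end with labels.
    layers = []
    for enc in encoders:
        layers += [enc, nn.Sigmoid()]
    model = nn.Sequential(*layers, nn.Linear(encoders[-1].out_features, n_classes))
    opt = torch.optim.Adam(model.parameters(), lr=lr)
    loss_fn = nn.CrossEntropyLoss()
    for _ in range(epochs):
        for x, y in labelled_batches:
            opt.zero_grad()
            loss = loss_fn(model(x), y)
            loss.backward()
            opt.step()
    return model

# Illustrative usage with random data (784-dim inputs, two hidden layers, 10 classes):
if __name__ == "__main__":
    unlabelled = [torch.randn(32, 784) for _ in range(10)]
    labelled = [(torch.randn(32, 784), torch.randint(0, 10, (32,))) for _ in range(10)]
    encoders = pretrain_stack([784, 256, 64], unlabelled)
    model = fine_tune(encoders, n_classes=10, labelled_batches=labelled)
```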
