Article details |
Article title | A study of big data and its challenges
Publication year | 2018
English article length | 6 pages
Database | Springer
Article type | Research article
Base article | This is not a base article
English article format |
Related disciplines | Computer engineering
Related specializations | Cloud computing, data science
Presentation type | Journal
Journal / Conference | International Journal of Information Technology
University | Department of Computer Science and Engineering, Jamia Hamdard, India
Keywords | Big data, V's, Big data challenges, Characteristics
DOI | https://doi.org/10.1007/s41870-018-0185-1
Article table of contents:
Abstract
1 Introduction
2 Big data: 'How big? Why big?'
3 Issues associated with big data
4 Conclusion
References
Excerpt from the article:
Abstract
Big data is an emerging torrent. We are immersed in a lake of data whose intensity is continuously increasing. With the fast growth of promising applications such as social media, the web, mobile services, and other applications across various organizations, data is growing rapidly. Thus arises the notion of ''Big Data''. Data analysis, querying, storage, retrieval, organization and modeling are the fundamental challenges associated with it. These challenges arise because big data is complex in nature. In this paper, we address the above issues and more in order to identify the bottlenecks. We believe that appropriate research in big data will lead to a new wave of advances that will revolutionize the market, as well as future analysis platforms, services and products, and will tackle these challenges.

Introduction
The data revolution has just begun. The term 'Big Data' is relatively new. It started gaining momentum a decade ago, which has led to a tremendous increase in data set sizes. It has begun revolutionising commerce, science, medicine, finance and everyday life. Data creation is taking place at an extraordinary rate due to advances in more or less every field of science [1]. The IoT, now also referred to as the Internet of Everything, connects new devices and new sources that generate mounds of data with every passing second [2]. More than 90% of the data in the world has been produced in the past 2 years alone [3]. According to the statistics [4-6], every day we create 2.5 quintillion (2.5 x 10^18) bytes of data. However, the word 'Big' in big data does not refer only to size. Had it referred to size alone, the way out would have been quite simple. Instead, big data is a broad term that describes data sets so enormous and complex that it becomes difficult to handle, process, analyse, manage, store and retrieve them within a specified time frame using traditional methods.
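The daily data-creation figure cited above can be sanity-checked with a short back-of-the-envelope calculation. The sketch below assumes the short-scale reading of 'quintillion' (10^18); the variable names are illustrative, not from the paper:

```python
# Back-of-the-envelope check of the daily data-creation figure.
# Assumption: 'quintillion' in the short scale, i.e. 10**18.
DAILY_BYTES = 2.5 * 10**18   # 2.5 quintillion bytes created per day
EXABYTE = 10**18             # SI prefix exa-
ZETTABYTE = 10**21           # SI prefix zetta-

daily_exabytes = DAILY_BYTES / EXABYTE              # daily volume in exabytes
yearly_zettabytes = DAILY_BYTES * 365 / ZETTABYTE   # rough yearly volume

print(daily_exabytes, "EB/day")
print(yearly_zettabytes, "ZB/year")
```

At this rate, roughly 2.5 exabytes per day accumulate to just under one zettabyte per year, which gives a sense of why petabyte-scale tooling is the baseline for big data.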
Thus big data may be defined as data that is 'so big', 'so fast' and 'so hard' for present tools to process. Here 'so big' states that organisations must be able to deal with data at the petabyte scale, 'so fast' indicates that the data needs to be processed very quickly, and 'so hard' indicates that the data cannot fit into the existing set of tools for processing and management. Basically, we can say that big data differs from traditional data in three ways: the amount of data (size), the rate of data creation and transmission (velocity), and the heterogeneity of the data, which may be structured, unstructured or semi-structured (variety).
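The three-way distinction above (size, velocity, variety) can be sketched as a simple data structure. This is a minimal illustration only; the class name, field names and thresholds are assumptions, not definitions from the paper:

```python
from dataclasses import dataclass

# Hypothetical profile of a data set along the three dimensions the
# text distinguishes: size (volume), velocity, and variety.
@dataclass
class DatasetProfile:
    size_bytes: int        # size: total amount of data
    ingest_rate_bps: int   # velocity: bytes created/transmitted per second
    variety: str           # 'structured', 'semi-structured' or 'unstructured'

    def is_big(self,
               size_threshold: int = 10**15,      # petabyte scale, per the text
               velocity_threshold: int = 10**9) -> bool:
        """Crude check: any one dimension outgrowing traditional tools."""
        return (self.size_bytes >= size_threshold
                or self.ingest_rate_bps >= velocity_threshold
                or self.variety != 'structured')

# Example: a semi-structured log archive at petabyte scale.
logs = DatasetProfile(size_bytes=3 * 10**15,
                      ingest_rate_bps=5 * 10**8,
                      variety='semi-structured')
print(logs.is_big())  # size and variety both push this past the thresholds
```

The point of the sketch is that a single dimension crossing its threshold is enough; real systems would of course need far richer criteria than these three fields.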