Article details
Article title | Distributed Deep Learning-based Offloading for Mobile Edge Computing Networks
Publication year | 2018
English article length | 8 pages
Cost | The English article is free to download.
Database | Springer
Base article | This is not a base article.
Index | Scopus – Master Journal List – JCR
Article type | ISI
English article format |
Impact Factor (IF) | 2.850 (2018)
H-index | 79 (2019)
SJR | 0.426 (2018)
ISSN | 1572-8153
Quartile | Q2 (2018)
Related fields | Information Technology Engineering, Computer Engineering
Related specializations | Cloud Computing, Internet and Wide Area Networks, Networked Systems, Algorithms and Computation Engineering
Presentation type | Journal
Journal | Mobile Networks and Applications
University | College of Information Engineering, Zhejiang University of Technology, Hangzhou, China
Keywords | Mobile edge computing, Offloading, Deep learning, Distributed learning
DOI | https://doi.org/10.1007/s11036-018-1177-x
Product code | E11265
Translation status | No prepared translation of this article is available.
Article table of contents:
Abstract
1- Introduction
2- System model and problem formulation
3- DDLO algorithm
4- Performance evaluation
5- Conclusion
References
Excerpt from the article:
Abstract

This paper studies mobile edge computing (MEC) networks in which multiple wireless devices (WDs) choose whether to offload their computation tasks to an edge server. To conserve energy and maintain quality of service for the WDs, the joint optimization of offloading decisions and bandwidth allocation is formulated as a mixed integer programming problem. However, this problem suffers from the curse of dimensionality and cannot be solved effectively and efficiently by general-purpose optimization tools, especially when the number of WDs is large. In this paper, we propose a distributed deep learning-based offloading (DDLO) algorithm for MEC networks, in which multiple parallel DNNs are used to generate offloading decisions. We adopt a shared replay memory to store newly generated offloading decisions, which are then used to train and improve all of the DNNs. Extensive numerical results show that the proposed DDLO algorithm can generate near-optimal offloading decisions in less than one second.

Introduction

With the development of wireless communication technology, it has become possible to transmit massive computation tasks from wireless devices to nearby access points or base stations, which enables meaningful cloud-computing applications, e.g., online gaming, virtual/augmented reality, and real-time media streaming. By deploying computation servers at the user side and avoiding backhauling application traffic to a remote data center, mobile edge computing (MEC) [1–3] provides an efficient bridge between users and edge servers. It reduces the delay in executing computation tasks and saves energy for delay-sensitive cloud-computing applications.

The decision of whether a WD offloads its computation task to an MEC server should be made carefully. If computation tasks are offloaded to the edge server too aggressively, severe congestion occurs on the uplink wireless channels, leading to significant delays in executing the computation tasks. Therefore, to exploit computation offloading, we need joint management of offloading decisions and the associated radio bandwidth allocation. However, because offloading decisions are binary, directly enumerating all possible solutions is computationally prohibitive.

Various low-complexity algorithms have been proposed in the literature to solve the binary computation offloading problem [4–12]. A distributed algorithm based on game theory is proposed for MEC systems in [4], which requires multiple rounds of communication between the edge server and the WDs. Another iterative approach to joint task offloading and resource allocation in MEC networks is to iteratively update the binary offloading decision [5, 6], where the conventional resource allocation problem is solved for a given binary offloading decision. By relaxing the binary constraints to real-valued variables, [7] proposes the eDors algorithm for MEC systems.
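The abstract outlines DDLO's core loop: several parallel DNNs each propose a candidate binary offloading decision for the same input, the best candidate is kept, stored in a shared replay memory, and all DNNs are trained on batches sampled from that memory. The sketch below illustrates that loop; the layer sizes, the placeholder cost() function (standing in for the energy/delay objective obtained after allocating bandwidth for a fixed decision), the hyperparameters, and the use of PyTorch are illustrative assumptions, not details taken from the paper.

# Minimal sketch of a DDLO-style loop: parallel DNNs + shared replay memory.
# All sizes, the cost() placeholder, and hyperparameters are illustrative only.
import random
from collections import deque

import torch
import torch.nn as nn

N_WDS = 10          # number of wireless devices (illustrative)
N_DNNS = 4          # number of parallel DNNs
MEMORY_SIZE = 1024  # shared replay memory capacity
BATCH_SIZE = 64

def make_dnn():
    # Each DNN maps the WDs' task sizes to per-WD offloading probabilities.
    return nn.Sequential(
        nn.Linear(N_WDS, 120), nn.ReLU(),
        nn.Linear(120, 80), nn.ReLU(),
        nn.Linear(80, N_WDS), nn.Sigmoid(),
    )

dnns = [make_dnn() for _ in range(N_DNNS)]
optimizers = [torch.optim.Adam(d.parameters(), lr=1e-3) for d in dnns]
criterion = nn.BCELoss()
memory = deque(maxlen=MEMORY_SIZE)  # shared replay memory

def cost(tasks, decision):
    # Hypothetical stand-in for the weighted energy/delay objective that would
    # result from solving bandwidth allocation for a fixed binary decision.
    return float(((decision * tasks) ** 2).sum() + ((1 - decision) * tasks).sum())

for step in range(1000):
    tasks = torch.rand(N_WDS)  # random normalized task sizes (illustrative input)

    # 1) Each parallel DNN proposes a candidate binary offloading decision.
    with torch.no_grad():
        candidates = [(d(tasks) > 0.5).float() for d in dnns]

    # 2) Keep the lowest-cost candidate and store it in the shared memory.
    best = min(candidates, key=lambda c: cost(tasks, c))
    memory.append((tasks, best))

    # 3) Train every DNN on minibatches sampled from the shared replay memory.
    if len(memory) >= BATCH_SIZE:
        for d, opt in zip(dnns, optimizers):
            batch = random.sample(memory, BATCH_SIZE)
            x = torch.stack([b[0] for b in batch])
            y = torch.stack([b[1] for b in batch])
            opt.zero_grad()
            loss = criterion(d(x), y)
            loss.backward()
            opt.step()

In this sketch the parallel DNNs interact only through the shared replay memory, which matches the abstract's description of newly generated decisions being stored and reused to train and improve all DNNs.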