Free English article on logistics cart transportation with residual reinforcement learning – Taylor & Francis 2022


Article details
Translated title: Residual reinforcement learning for logistics cart transportation
English title: Residual reinforcement learning for logistics cart transportation
Year of publication: 2022
Pages (English article): 18
Cost: the English article is free to download.
Database: Taylor & Francis
Type of writing: Research article
Base article: yes (suitable for use as a base article)
Indexed in: JCR – Master Journal List – Scopus
Article type: ISI
Format (English article): PDF
Impact factor (IF): 2.335 (2020)
H-index: 61 (2022)
SJR: 0.736 (2020)
ISSN: 1568-5535
Quartile: Q2 (2020)
Hypotheses: no
Conceptual model: yes
Questionnaire: no
Variables: yes
References: yes
Related disciplines: Computer Engineering – Industrial Engineering
Related specializations: Artificial Intelligence – Software Engineering – Logistics and Supply Chain
Presentation type: Journal
Journal/Conference: Advanced Robotics
Affiliation: Department of Aeronautics and Astronautics, The University of Tokyo, Japan
Keywords: Reinforcement learning – logistics
English keywords: Reinforcement learning – logistics
DOI: https://doi.org/10.1080/01691864.2022.2046504
Product code: e16644
Translation status: no ready translation of this article is available; a translation can be ordered.

 

Table of contents:

Abstract

1. Introduction

2. Related works

3. Methodology

4. Experiments

5. Discussion

6. Conclusion

Disclosure statement

Notes on contributors

References

 

Excerpt from the article:

Abstract

     Autonomous logistics cart transportation is a challenging problem because of the complicated dynamics of the logistics cart. In this paper, we tackle the problem by using a two-robot system with reinforcement learning. We formulate it as the problem of making a logistics cart track an arc trajectory. Our reinforcement learning (RL) controller consists of a feedback controller and residual reinforcement learning. The feedback controller regards the logistics cart as a virtual leader and the robots as followers, and the robots’ positions and velocities are controlled to maintain the formation between the logistics cart and the robots. Residual reinforcement learning is used to modify the feedback controller’s output. Simulation results showed that the residual reinforcement learning controller trained in a physical simulation environment performed better than other methods, especially under conditions with large trajectory curvature. Moreover, the residual reinforcement learning controller can be transferred to a real-world robot without additional learning in the real-world environment.
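The controller architecture the abstract describes — a fixed feedback term plus a learned residual correction added on top — can be sketched roughly as follows. This is a minimal illustration only, not the paper's implementation: the proportional feedback law, the observation layout, and the `ResidualPolicy` placeholder are all assumptions standing in for the leader-follower formation controller and the trained RL policy.

```python
import numpy as np

def feedback_control(state, gain=1.0):
    """Simple proportional feedback toward the reference position.
    A stand-in for the paper's leader-follower formation controller."""
    error = state["reference"] - state["position"]
    return gain * error

class ResidualPolicy:
    """Placeholder for the learned residual policy. In practice this
    would be a trained neural network; here it is a fixed linear map."""
    def __init__(self, weights):
        self.weights = np.asarray(weights, dtype=float)

    def __call__(self, observation):
        return self.weights @ observation

def residual_controller(state, policy):
    """Residual RL composition: final command = base command + residual."""
    base = feedback_control(state)
    observation = np.concatenate([state["position"], state["reference"]])
    residual = policy(observation)
    return base + residual

state = {"position": np.array([0.0, 0.0]),
         "reference": np.array([1.0, 2.0])}

# A zero-weight residual leaves the base feedback command unchanged,
# which is why residual RL can start from the baseline's performance.
zero_policy = ResidualPolicy(np.zeros((2, 4)))
print(residual_controller(state, zero_policy))  # → [1. 2.]
```

The design point is that learning only a correction on top of a working baseline is typically more sample-efficient than learning the whole control law from scratch, which matches the comparison reported in the paper.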

Introduction

     Object transportation is increasingly being automated through the use of automated guided vehicles (AGVs) in large warehouses. However, this is less common in smaller warehouses, where objects are typically conveyed by human workers with logistics carts, because existing AGV-based automation systems do not support transporting the logistics carts already in use. For example, the space below a logistics cart is too small for an AGV to move under and lift it. To address this, an automated object transportation system for these warehouses [1] was proposed. In this system, the robots’ positions are estimated using images from a camera on the ceiling, and two robots grasp a logistics cart and transport it, as shown in Figure 1. The strategy of having two robots hold a logistics cart makes it possible to automate transportation without additional equipment. However, control for transporting a logistics cart remains a difficult problem because the robots need to keep holding the cart. There is currently no method for making a logistics cart track a trajectory.

Conclusion

     We proposed a system for logistics cart transportation with a residual reinforcement learning controller. The proposed controller is more sample-efficient than a reinforcement learning controller trained from scratch and achieves higher performance than the feedback controller. We showed that using simulation reduces the cost of gathering experience, and the results of real-world experiments suggest that the residual reinforcement learning controller learned in a simulation environment can be transferred to real-world control.

     As future work, we will investigate how to make the controller outperform the feedback controller in all reachable states and how to reduce the gap between simulation and the real world.
