Article Details | |
Publication year | 2018 |
English article length | 38 pages |
Publisher | Elsevier |
Article type | Research article |
Indexing | ISI |
English title | Uncertain Data Envelopment Analysis |
Persian title | تحلیل پوششی داده‌های نامعلوم |
English article format | |
Related discipline | Industrial Engineering |
Related specialization | Systems Planning and Analysis |
Journal | European Journal of Operational Research |
Affiliation | Department of Management Science – Lancaster University – UK |
Keywords | Data Envelopment Analysis; Uncertain Data; Robust Optimization; Uncertain DEA Problem; Radiotherapy Design |
DOI | https://doi.org/10.1016/j.ejor.2018.01.005 |
Excerpt from the article:
1. Introduction and Motivation
Data envelopment analysis (DEA) is a well-established optimization framework for conducting relative performance measurements among a group of decision making units (DMUs). There are numerous reviews of DEA, see, e.g., Cooper et al. (2007); Emrouznejad et al. (2008); Liu et al. (2013); Zhu (2014), and Hwang et al. (2016); and the concept has found a wide audience in both research and application. The principal idea is to solve an optimization problem for each DMU to identify its efficiency score relative to the other DMUs. A score of 1 equates with efficiency, and if a DMU’s efficiency score is less than 1, then that DMU is outperformed no matter how it is assessed against its competitive cohort. A DEA model is only as good as its data because DMUs are compared against each other through their assessed inputs and outputs. Accurate data is thus critical in establishing a DMU’s performance. However, data is often imperfect, and knowledge about the extent of uncertainty can be vague, if not obscure, as errors commonly have several compounding sources. This fact raises the question of whether an inefficient DMU might have been so classified because of some realization of inscrutable data, and if so, then there is a reasonable argument against its perceived under-performance. The question we consider is: what is the minimum amount of uncertainty required of the data that could render a DMU efficient? We address uncertainty through the lens of robust optimization, a field of study designed to account for uncertainty in optimization problems. The preeminent theme of robust modeling is to permit a deleterious effect on the objective to better hedge against the uncertain cases that are typically ignored.
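The per-DMU optimization described above can be sketched as a small linear program. The following is a minimal illustration of the classical input-oriented CCR envelopment model (a standard DEA formulation chosen for illustration; this excerpt does not reproduce the paper's own formulations), solved with `scipy`:

```python
import numpy as np
from scipy.optimize import linprog

def dea_ccr_efficiency(X, Y, o):
    """Input-oriented CCR efficiency score of DMU o.

    X: (m, n) array of inputs, Y: (s, n) array of outputs; columns are DMUs.
    Solves: min theta  s.t.  X @ lam <= theta * X[:, o],  Y @ lam >= Y[:, o],  lam >= 0.
    """
    m, n = X.shape
    s = Y.shape[0]
    # Decision variables: [theta, lambda_1, ..., lambda_n]
    c = np.r_[1.0, np.zeros(n)]                 # minimise theta
    # Input constraints:  X @ lam - theta * X[:, o] <= 0
    A_in = np.hstack([-X[:, [o]], X])
    b_in = np.zeros(m)
    # Output constraints: -Y @ lam <= -Y[:, o]
    A_out = np.hstack([np.zeros((s, 1)), -Y])
    b_out = -Y[:, o]
    res = linprog(c, A_ub=np.vstack([A_in, A_out]),
                  b_ub=np.r_[b_in, b_out],
                  bounds=[(0, None)] * (n + 1))
    return res.fun

# Toy data (hypothetical): one input, one output, three DMUs.
X = np.array([[2.0, 4.0, 8.0]])
Y = np.array([[2.0, 2.0, 4.0]])
scores = [dea_ccr_efficiency(X, Y, o) for o in range(3)]
print([round(t, 3) for t in scores])  # → [1.0, 0.5, 0.5]; only DMU 0 is efficient
```

Here DMUs 1 and 2 score below 1 because DMU 0 produces the same output per unit of input at half or a quarter of their input levels.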
Indeed, the concern of “over-optimizing” is regularly used to galvanize the use of a robust model that gives a best solution against all reasonable possibilities instead of a non-robust solution that inappropriately exaggerates the weaknesses of estimated or sampled uncertainty. Examples of this sentiment are found in antenna design (Ben-Tal and Nemirovski, 2002), inventory control (Bertsimas and Thiele, 2006), and radiotherapy design (Bertsimas et al., 2010; Chu et al., 2005). References for robust optimization are Ben-Tal et al. (2009) and Bertsimas et al. (2011). Our perspective is counter to the orthodoxy that motivates robust models. The diminishing effect on the objective induced by uncertainty is inverted into a beneficial consideration in DEA due to the way efficiency scores are regularly calculated, as in (1) and (3); see Proposition 1. In particular, a DMU’s efficiency score is non-decreasing as uncertainty increases. This observation suggests a keen interest in uncertainty by an inefficient DMU, as it may have a legitimate claim to efficiency modulo the imperfections of the data. As such, uncertainty might be leveraged to assert improved, if not efficient, performance within the confines of reasonable data imperfections.
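The non-decreasing behaviour of efficiency scores under growing uncertainty can be seen in a toy example (our own illustration; the paper's formulations (1) and (3) and Proposition 1 are not reproduced in this excerpt). With a single input and single output, the CCR score reduces to a ratio comparison, and letting the evaluated DMU take its most favourable realisation inside an interval of radius delta can only raise its score:

```python
# Hypothetical data: single input x, single output y, three DMUs.
# In this special case, eff_j = (y_j / x_j) / max_k (y_k / x_k).
x = [2.0, 4.0, 8.0]
y = [2.0, 2.0, 4.0]

def efficiency(xs, ys, o):
    """CCR efficiency of DMU o for single-input, single-output data."""
    ratios = [yj / xj for xj, yj in zip(xs, ys)]
    return ratios[o] / max(ratios)

def best_case_efficiency(o, delta):
    """Efficiency of DMU o under its most favourable realisation within
    an interval of radius delta: less input, more output for DMU o."""
    xp, yp = list(x), list(y)
    xp[o] -= delta
    yp[o] += delta
    return efficiency(xp, yp, o)

# DMU 1's score is non-decreasing as the uncertainty radius grows.
print([round(best_case_efficiency(1, d), 3) for d in (0.0, 0.5, 1.0)])
# → [0.5, 0.714, 1.0]
```

At delta = 1 the nominally inefficient DMU 1 attains a score of 1, illustrating why an inefficient DMU may appeal to data uncertainty to claim efficiency.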