Free English article on evaluating the inferential capacity of fuzzy reasoning and expert systems – Elsevier 2020
Article specifications
Article title | An empirical evaluation of the inferential capacity of defeasible argumentation, non-monotonic fuzzy reasoning and expert systems
Publication year | 2020
Number of pages (English article) | 91
Cost | The English article can be downloaded free of charge.
Database | Elsevier
Article type | Research Article
Base article | Yes, this is a base article
Indexing | Scopus – Master Journals List – JCR
Article category | ISI
English article format |
Impact Factor (IF) | 5.891 in 2019
H-index | 162 in 2020
SJR | 1.190 in 2019
ISSN | 0957-4174
Quartile | Q1 in 2019
Conceptual model | Yes
Questionnaire | No
Variables | No
References | Yes
Related disciplines | Computer Science
Related specializations | Artificial Intelligence, Software Engineering, Computer Programming
Presentation type | Journal
Journal | Expert Systems With Applications
University | School of Computer Science, Technological University Dublin, Kevin Street, Dublin, Ireland
Keywords | Defeasible argumentation, Argumentation theory, Explainable artificial intelligence, Non-monotonic reasoning, Fuzzy logic, Expert systems, Mental workload
DOI | https://doi.org/10.1016/j.eswa.2020.113220
Product code | E14691
Translation status | A ready-made Persian translation of this article is not available; it can be ordered via the button below.
Free article download | Download the English article for free
Order a translation | Order a translation of this article
Article table of contents:
Abstract
1. Introduction
2. Literature and related work
3. Design and methodology
4. Results and discussion
5. Conclusion and future work
References
Excerpt from the article:
Abstract

Several non-monotonic formalisms exist in the field of Artificial Intelligence for reasoning under uncertainty. Many of these are deductive and knowledge-driven, and also employ procedural and semi-declarative techniques for inferential purposes. Nonetheless, limited work exists on the comparison of distinct techniques and, in particular, on the examination of their inferential capacity. Thus, this paper focuses on a comparison of three knowledge-driven approaches employed for non-monotonic reasoning, namely expert systems, fuzzy reasoning and defeasible argumentation. A knowledge-representation and reasoning problem has been selected: modelling and assessing mental workload. This is an ill-defined construct, and its formalisation can be seen as a reasoning activity under uncertainty. Experimental work was performed by exploiting three deductive knowledge bases produced with the aid of experts in the field. These were coded into models by employing the selected techniques and were subsequently elicited with data gathered from humans. The inferences produced by these models were in turn analysed according to common evaluation metrics in the field of mental workload, specifically validity and sensitivity. Findings suggest that the variance of the inferences of the expert-system and fuzzy-reasoning models was higher, highlighting poor stability. By contrast, the variance of the argument-based models was lower, showing superior stability of their inferences across knowledge bases and under different system configurations. The originality of this research lies in the quantification of the impact of defeasible argumentation. It contributes to the field of logic and non-monotonic reasoning by situating defeasible argumentation among similar approaches to non-monotonic reasoning under uncertainty through a novel empirical comparison.

Introduction

Uncertainty associated with incomplete, imprecise or unreliable knowledge is inevitable in daily reasoning and in many real-world contexts. Within Artificial Intelligence (AI), many approaches have been proposed for the development of inferential models capable of addressing such uncertainty. Among them, non-monotonic reasoning emerged from the area of logical AI as an alternative to deductive inferences in logical systems, which were perceived as inadequate for decision making in realistic situations (Bochman, 2007). Hence, reasoning is non-monotonic, or defeasible, when a conclusion can be withdrawn in the light of new information (Reiter, 1988; McCarthy, 1980; Kowalski & Sadri, 1991; Longo, 2015; Brewka, 1991). A number of approaches for dealing with quantitative reasoning under uncertainty exist (Parsons & Hunter, 1998), including computational argumentation (also referred to as defeasible argumentation) (Prakken & Vreeswijk, 2001), fuzzy reasoning (Zadeh et al., 1965) and expert systems (Durkin & Durkin, 1998). These approaches have led to the development of non-monotonic reasoning models based upon knowledge bases often provided by human experts. Intuitively, since these models have been developed with human-in-the-loop intervention, their reasoning processes and their inferences have an intrinsically higher degree of interpretability and transparency when compared to data-driven approaches for inference.
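The article itself does not include source code. To make the notion of non-monotonic (defeasible) inference described in the excerpt more concrete, the following is a minimal, purely illustrative Python sketch: a tentative conclusion about mental workload is withdrawn when new information activates a defeater. The rules, attribute names (effort, automation_support) and thresholds are hypothetical and invented for this example; they do not reproduce the authors' knowledge bases or models.

```python
# Toy illustration of defeasible (non-monotonic) inference.
# All rules, attribute names and thresholds are hypothetical; they only
# illustrate how a conclusion can be withdrawn when new evidence arrives.

def infer_workload(evidence):
    """Return a set of workload conclusions that may be defeated by new facts."""
    conclusions = set()

    # Default rule: high reported effort suggests high mental workload.
    if evidence.get("effort", 0) > 0.7:
        conclusions.add("high_workload")

    # Defeater: strong automation support undercuts the default conclusion,
    # so the previously drawn inference is retracted (non-monotonicity).
    if evidence.get("automation_support", 0) > 0.8:
        conclusions.discard("high_workload")
        conclusions.add("moderate_workload")

    return conclusions or {"undetermined"}


if __name__ == "__main__":
    # With only effort observed, the default conclusion holds.
    print(infer_workload({"effort": 0.9}))                      # {'high_workload'}
    # Adding new information (automation support) defeats it.
    print(infer_workload({"effort": 0.9,
                          "automation_support": 0.9}))          # {'moderate_workload'}
```

The point of the sketch is only the retraction step: adding a fact does not merely extend the set of conclusions, as it would in monotonic deduction, but can remove a previously drawn one, which is the behaviour the paper compares across expert systems, fuzzy reasoning and defeasible argumentation.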