Article details |
Article title (translated) | Quality control optimization part I: Metrics for evaluating the predictive performance of quality control
Article title (English) | Quality control optimization part I: Metrics for evaluating predictive performance of quality control
Publication year | 2019
Number of pages (English article) | 11 pages
Cost | The English article is free to download.
Database | Elsevier
Article type | Research Article
Base article | This is not a base article
Indexing | MedLine – Scopus – Master Journals List – JCR
Article category | ISI
English article format |
Impact factor (IF) | 2.762 (2018)
H-index | 127 (2019)
SJR | 1.027 (2018)
ISSN | 0009-8981
Quartile | Q1 (2018)
Conceptual model | None
Questionnaire | None
Variables | Yes
References | Yes
Related disciplines | Industrial Engineering
Related specializations | Systems Optimization, Systems Planning and Analysis
Presentation type | Journal
Journal | Clinica Chimica Acta
University | The Department of Pathology, University of Utah, Salt Lake City, UT, United States of America
Keywords | Quality control, Statistics, Error, Positive predictive value, Negative predictive value
DOI | https://doi.org/10.1016/j.cca.2019.04.053
Product code | E12646
Translation status | A prepared translation of this article is not available. You can order one using the button below.
Free article download | Download the English article for free
Order a translation | Order a translation of this article
Table of contents:
Abstract
1- Introduction
2- Theoretical development
3- Methods
4- Results
5- Example calculations
6- Discussion
References
Excerpt from the article:
Abstract

Background: Quality control (QC) policies are usually designed using power curves. This type of analysis reasons from a cause (a shift in the assay results) to an effect (a signal from the QC monitoring process). End users face a different problem: they must reason from an effect (QC signal) to a cause. It would be helpful to have metrics that evaluate QC policies from an end-user perspective.

Introduction

Laboratories are under increasing pressure to improve performance. Quality control (QC) ensures the reliability of results and is therefore a key component of laboratory performance. Laboratories direct considerable resources to QC and assay improvement and, given the importance of QC, it would be useful to have metrics to evaluate the performance of a QC plan. Unfortunately, few metrics are available. The performance of a QC plan is generally analyzed in terms of the number of events before false rejection and the number of events before error detection [1]. These quantities are also known as the average run length (ARL) and time to signal (TTS) [2].

Run lengths are determined by the statistical power of a QC plan. Statistical power is the probability that a QC plan will produce a signal (i.e., rule violation) when a change in the process occurs (e.g., a shift in the mean) [2–4]. QC plans with greater statistical power are considered superior. Such analyses fail to consider the magnitude of the error, and all errors are treated as equal. In reality, this assumption is unlikely to be true because larger errors may have more potential for harm (and may be costlier) than smaller errors. A more accurate model might place more weight on larger errors.

In particular, power curve analysis only considers an event a false rejection when the QC monitoring system produces a signal and there has been no shift in the mean. This practice overstates the specificity of the QC plan because there may be inconsequential events (i.e., small shifts) that can be safely ignored. Responding to such events wastes resources. A more realistic model might classify errors into categories (e.g., important/unimportant) and use this information to evaluate the performance of a QC plan.

The design of QC plans is rarely considered from a user perspective. The typical design perspective is, "Given a shift of a given size, what is the probability of detecting the change if I use a particular QC plan?" The reasoning is from cause to effect. The end-user perspective is different. The end user is confronted with a QC result and asks, "Given this signal, what is the probability that a significant problem has occurred? Is it worth the time to troubleshoot?" Conversely, "Given no signal, what is the probability that no change has occurred?" End users need to reason from an effect (a signal from QC monitoring) to a cause.
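To make the cause-to-effect versus effect-to-cause distinction concrete, the sketch below works through the standard run-length and Bayesian predictive-value calculations for a hypothetical single-rule QC plan. It is not code from the paper; the per-event false-rejection probability (`alpha`), detection power (`power`), and prior probability that a QC event coincides with a real shift (`prior_shift`) are illustrative assumptions.

```python
# Minimal sketch (not from the paper): run-length and predictive-value
# metrics for a hypothetical single-rule QC plan, assuming independent
# QC events and illustrative probabilities.

def average_run_length(p_signal: float) -> float:
    """Expected number of QC events until a signal, given a constant
    per-event signal probability (mean of a geometric distribution)."""
    return 1.0 / p_signal

def predictive_values(power: float, alpha: float, prior_shift: float):
    """Bayes' rule applied to a QC signal.

    power       -- P(signal | real shift), the plan's statistical power
    alpha       -- P(signal | no shift), the false-rejection probability
    prior_shift -- P(real shift) at any given QC event (assumed)
    """
    p_no_shift = 1.0 - prior_shift
    ppv = (power * prior_shift) / (power * prior_shift + alpha * p_no_shift)
    npv = ((1.0 - alpha) * p_no_shift) / (
        (1.0 - alpha) * p_no_shift + (1.0 - power) * prior_shift
    )
    return ppv, npv

if __name__ == "__main__":
    alpha, power, prior_shift = 0.01, 0.90, 0.02  # illustrative values only

    # Design-perspective metrics: cause (shift) -> effect (signal).
    print(f"In-control ARL (events to false rejection): {average_run_length(alpha):.0f}")
    print(f"Out-of-control ARL (events to detection):   {average_run_length(power):.2f}")

    # End-user-perspective metrics: effect (signal) -> cause (shift).
    ppv, npv = predictive_values(power, alpha, prior_shift)
    print(f"P(real shift | signal)    = {ppv:.3f}")
    print(f"P(no shift   | no signal) = {npv:.4f}")
```

Under these assumed numbers, even a plan with 90% power yields a post-signal probability of a real shift of only about 0.65, which illustrates why the article argues for end-user-oriented metrics such as positive and negative predictive value alongside traditional power-curve analysis.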