Free English article on quality control optimization and metrics for evaluating predictive performance – Elsevier 2019


 

Article specifications
Article title: Quality control optimization part I: Metrics for evaluating predictive performance of quality control
Publication year: 2019
Length: 11 pages (English)
Download cost: free
Publisher database: Elsevier
Article type: Research Article (ISI)
Base article: no
Indexed in: MedLine – Scopus – Master Journals List – JCR
File format: PDF
Impact Factor (IF): 2.762 (2018)
H-index: 127 (2019)
SJR: 1.027 (2018)
ISSN: 0009-8981
Quartile: Q1 (2018)
Conceptual model: none
Questionnaire: none
Variables: yes
References: yes
Related fields: Industrial Engineering
Related specializations: Systems Optimization, Systems Planning and Analysis
Presentation type: Journal
Journal: Clinica Chimica Acta
University: The Department of Pathology, University of Utah, Salt Lake City, UT, United States of America
Keywords: Quality control, Statistics, Error, Positive predictive value, Negative predictive value
DOI: https://doi.org/10.1016/j.cca.2019.04.053
Product code: E12646

 

Article contents:
Abstract

1- Introduction

2- Theoretical development

3- Methods

4- Results

5- Example calculations

6- Discussion

References

 

Excerpt from the article:

Abstract

Background: Quality control (QC) policies are usually designed using power curves. This type of analysis reasons from a cause (a shift in the assay results) to an effect (a signal from the QC monitoring process). End users face a different problem: they must reason from an effect (QC signal) to a cause. It would be helpful to have metrics that evaluated QC policies from an end-user perspective.
Methods: We developed a simple dichotomous model based on classification of assay errors. Errors are classified as important or unimportant based on a critical shift size, defined as Sc. Using this scheme, we show how QC policies can be analyzed using common accuracy metrics such as sensitivity, specificity, positive predictive value (PPV), and negative predictive value (NPV). We explore the impact of design choices (QC limits, number of repeats) on these performance measures in a number of different contexts.
Results: PPV varies widely (1% to 100%) depending on context. NPV also varies (40% to 100%) but is less sensitive to context than PPV. There are many contexts in which QC policies have low predictive values. In such cases, performance (PPV, NPV) can be improved by adjusting the QC limits or the number of repeats at each QC event.
Conclusion: The effectiveness of QC can be improved by considering the context in which the QC policy will be applied. Using simple assumptions, common accuracy metrics can be used to evaluate QC policy performance.
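The dichotomous scheme described in the Methods can be sketched numerically. The following Python snippet is an illustrative Monte Carlo sketch, not code from the paper: the shift mixture, the critical shift Sc = 1 SD, and the ±2 SD single rule are all assumed values chosen for demonstration. It classifies simulated shifts as important or unimportant and tallies the four accuracy metrics:

```python
import random

def qc_signal(z: float, limit: float = 2.0) -> bool:
    """Single-rule QC check: signal when the control value exceeds +/- limit SDs."""
    return abs(z) > limit

def evaluate_policy(shifts, s_c=1.0, limit=2.0, n_trials=20000, seed=0):
    """Monte Carlo estimate of sensitivity, specificity, PPV and NPV for a
    simple QC policy, treating shifts of at least s_c (in SD units) as 'important'."""
    rng = random.Random(seed)
    tp = fp = tn = fn = 0
    for _ in range(n_trials):
        shift = rng.choice(shifts)      # true state of the assay at this QC event
        z = rng.gauss(shift, 1.0)       # observed control value, in SD units
        signal = qc_signal(z, limit)
        important = abs(shift) >= s_c
        if signal and important:
            tp += 1
        elif signal:
            fp += 1
        elif important:
            fn += 1
        else:
            tn += 1
    return {
        "sensitivity": tp / (tp + fn),
        "specificity": tn / (tn + fp),
        "PPV": tp / (tp + fp),
        "NPV": tn / (tn + fn),
    }

# Assumed context: mostly in-control events, occasional shifts of 0.5 or 2.0 SD.
metrics = evaluate_policy(shifts=[0.0] * 18 + [0.5, 2.0], s_c=1.0, limit=2.0)
```

Rerunning `evaluate_policy` with a different `limit` or shift mixture shows how strongly the predictive values depend on context, which is the abstract's central point.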

Introduction

Laboratories are under increasing pressure to improve performance. Quality control (QC) ensures the reliability of results and is therefore a key component of laboratory performance. Laboratories direct considerable resources to QC and assay improvement and, given the importance of QC, it would be useful to have metrics to evaluate the performance of a QC plan. Unfortunately, few metrics are available. The performance of a QC plan is generally analyzed in terms of the number of events before false rejection and the number of events before error detection [1]. These quantities are also known as the average run length (ARL) and time to signal (TTS) [2]. Run lengths are determined by the statistical power of a QC plan. Statistical power is the probability that a QC plan will produce a signal (i.e., rule violation) when a change in the process occurs (e.g., a shift in the mean) [2–4]. QC plans with greater statistical power are considered superior.
Such analyses fail to consider the magnitude of the error and all errors are considered equal. In reality, this assumption is unlikely to be true because larger errors may have more potential for harm (and may be costlier) than smaller errors. A more accurate model might place more weight on larger errors. In particular, power curve analysis only considers an event a false rejection when the QC monitoring system produces a signal and there has been no shift in the mean. This practice overstates the specificity of the QC plan because there may be inconsequential events (i.e., small shifts), which can be safely ignored. Responding to such events wastes resources. A more realistic model might classify errors into categories (e.g., important/unimportant) and use this information to evaluate the performance of a QC plan. The design of QC plans is rarely considered from a user perspective.
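For a single-observation rule, the per-event signal probability (statistical power) and the resulting run length have a closed form. The sketch below is illustrative, not from the paper; it assumes Gaussian control values expressed in SD units and a 1-3s style ± limit rule:

```python
from math import erf, sqrt

def phi(x: float) -> float:
    """Standard normal cumulative distribution function."""
    return 0.5 * (1.0 + erf(x / sqrt(2.0)))

def signal_probability(shift: float, limit: float = 3.0) -> float:
    """Probability that one control observation violates a +/- limit rule
    when the assay mean has shifted by `shift` SDs (per-event power)."""
    return 1.0 - (phi(limit - shift) - phi(-limit - shift))

def average_run_length(shift: float, limit: float = 3.0) -> float:
    """Expected number of QC events until a signal: ARL = 1 / per-event power."""
    return 1.0 / signal_probability(shift, limit)

# In control (shift = 0), a 1-3s rule signals falsely with probability ~0.0027,
# so false rejections occur about once every ~370 QC events.
p_false = signal_probability(0.0)
arl_in_control = average_run_length(0.0)
```

Note what this calculation does not express: it treats every shift the same way regardless of size, which is exactly the limitation the authors raise.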
The typical design perspective is, “Given a shift of a given size, what is the probability of detecting the change if I use a particular QC plan?” The reasoning is from cause to effect. The end-user perspective is different. The end user is confronted with a QC result and asks, “Given this signal, what is the probability that a significant problem has occurred? Is it worth the time to troubleshoot?” Conversely, “Given no signal, what is the probability that no change has occurred?” End users need to reason from an effect (a signal from QC monitoring) to a cause.
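The end-user question is a Bayesian one: given a signal, the probability of a real problem depends not only on the rule's sensitivity and specificity but on how often important errors actually occur. A minimal sketch of that reversal (the 90% sensitivity, 95% specificity, and 1% prior are illustrative assumptions, not values from the paper):

```python
def predictive_values(sensitivity: float, specificity: float, prior: float):
    """Turn the designer's view (signal probability given a shift) into the
    end user's view (problem probability given a signal, or given none).
    `prior` is the fraction of QC events at which an important error is present."""
    ppv = (sensitivity * prior) / (
        sensitivity * prior + (1 - specificity) * (1 - prior))
    npv = (specificity * (1 - prior)) / (
        specificity * (1 - prior) + (1 - sensitivity) * prior)
    return ppv, npv

# A powerful rule applied where important errors are rare still yields a low
# PPV: most signals are false alarms, though the NPV stays near 1.
ppv, npv = predictive_values(0.90, 0.95, 0.01)  # PPV ~ 0.15, NPV ~ 0.999
```

This is the sense in which the same QC policy can perform very differently in different contexts: the prior, not the rule, dominates the PPV when important errors are rare.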
