Once the quality requirement for a test has been determined, evaluate the test's bias and imprecision to quantify its total error (TE). UnityWeb™ calculates the laboratory TE at the 99% (p < 0.01) confidence level using the following formula:
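With bias and CV expressed as percentages, the calculation takes the standard form for a 99% two-sided limit, where 2.58 is the z-value corresponding to p < 0.01:

$$\mathrm{TE} = \lvert \text{bias} \rvert + 2.58 \times \mathrm{CV}$$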
This formula shows that a test can tolerate a higher bias as long as its CV (imprecision) is low, and vice versa. The objective is to limit the total error in patient test results.
You can evaluate the bias and imprecision of a test using the standard deviation index (SDI) and the coefficient of variation ratio (CVR), both of which appear on several Unity™ Interlaboratory Reports. The SDI expresses bias as the difference between the laboratory mean and the peer-group mean in units of the peer-group standard deviation; the CVR expresses imprecision as the ratio of the laboratory CV to the peer-group CV.
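As a rough illustration, the Python sketch below computes SDI and CVR from laboratory and peer-group statistics. The numeric values are hypothetical, and the formulas are the standard peer-comparison definitions rather than UnityWeb™'s exact implementation.

```python
# Sketch: standard peer-comparison definitions of SDI and CVR.
# All numbers are hypothetical; they are not from an actual Unity report.

lab_mean = 102.0      # laboratory mean for the analyte
lab_sd = 2.1          # laboratory standard deviation
group_mean = 100.0    # peer-group (consensus) mean
group_sd = 2.5        # peer-group standard deviation

lab_cv = lab_sd / lab_mean * 100        # laboratory CV (%)
group_cv = group_sd / group_mean * 100  # peer-group CV (%)

# SDI: bias relative to the peer group, in peer-group SD units.
sdi = (lab_mean - group_mean) / group_sd

# CVR: imprecision relative to the peer group (>1 means more imprecise).
cvr = lab_cv / group_cv

print(f"SDI = {sdi:.2f}, CVR = {cvr:.2f}")
```

An SDI near zero and a CVR near (or below) one indicate performance comparable to the peer group on bias and imprecision, respectively.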
As scientists, laboratorians should concern themselves with which component (bias or imprecision) is contributing error, in what amount, and how that component's performance can be improved. However, as long as the total error in patient test results does not exceed the total allowable error (TEa), the reliability of those results is not in question. When analytical error does exist, the question becomes whether it is critical error, and that depends on the quality requirements chosen for the test.
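To make the decision rule concrete, this sketch combines the pieces: it computes TE from bias and CV and flags the result only when TE exceeds the chosen TEa. The example values and the threshold message are illustrative assumptions, not prescribed limits.

```python
# Sketch: compare calculated total error against the chosen quality
# requirement (TEa). The 2.58 multiplier is the two-sided z-value for
# p < 0.01; all example values are hypothetical.

bias_pct = 2.0    # observed bias (%)
cv_pct = 1.5      # observed imprecision (%)
tea_pct = 10.0    # total allowable error (%) chosen for the test

te_pct = abs(bias_pct) + 2.58 * cv_pct  # TE at the 99% level

if te_pct > tea_pct:
    print(f"TE {te_pct:.1f}% exceeds TEa {tea_pct:.1f}%: critical error")
else:
    print(f"TE {te_pct:.1f}% is within TEa {tea_pct:.1f}%: results reliable")
```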