
Here are the key points in bullet format for a PowerPoint presentation (one slide each) to explain the estimation of LoB (Limit of Blank), LoD (Limit of Detection), and LoQ (Limit of Quantitation) for assessing analytical sensitivity in clinical biochemistry laboratories, drawing on information from the provided CLSI EP17-A2 document:

Slide 1: Limit of Blank (LoB)

 Definition: The highest measurement result that is likely to be observed (with a
stated probability α, typically 0.05) when a blank sample (no measurand) is tested. It
is also referred to as the "critical value".

 Principle: Establishes a threshold to differentiate between measurements from a
sample containing no analyte and those that potentially do. A result exceeding the
LoB suggests the possible presence of the measurand, with a probability of a false
positive (Type I error) of α.

 Basic Formulae:

o Nonparametric: The value at the 100(1−α)th percentile of the sorted blank
sample measurement results (e.g., the 95th percentile for α = 0.05).

o Parametric: LoB = M_B + c_p × SD_B, where M_B is the mean of blank results,
SD_B is the standard deviation of blank results, and c_p is a multiplier based
on α and the number of blank results.

 Brief Estimation Method (Classical Approach): Perform replicate measurements
(minimum 60 total) on blank samples using at least two reagent lots across at least
three days on a single instrument. Analyze the distribution of all blank results to
determine whether a parametric or nonparametric approach is suitable. Calculate the
LoB for each reagent lot (if 2–3 lots) and report the maximum, or calculate it from the
combined data (if ≥4 lots).

 Practical Considerations:

o Use multiple, independent blank samples to account for matrix variability.

o The choice between parametric and nonparametric analysis depends on the
distribution of blank sample results.

o For highly sensitive assays (e.g., molecular), LoB may be set to zero and
confirmed by testing negative samples.
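The two LoB formulae above can be sketched in code. This is a minimal illustration only, not the EP17-A2 reference algorithm: the function names are hypothetical, and the interpolation convention for non-integer ranks is one reasonable choice.

```python
import statistics as st

def lob_nonparametric(blank_results, alpha=0.05):
    """Nonparametric LoB: the value at rank position 0.5 + B*(1 - alpha)
    in the sorted blank results (the 95th percentile for alpha = 0.05),
    with linear interpolation for non-integer ranks."""
    x = sorted(blank_results)
    B = len(x)
    rank = 0.5 + B * (1.0 - alpha)       # e.g. 0.5 + 0.95*B
    lo = max(min(int(rank), B), 1)       # bracketing 1-based positions
    hi = min(lo + 1, B)
    frac = rank - int(rank)
    return x[lo - 1] + frac * (x[hi - 1] - x[lo - 1])

def lob_parametric(blank_results, n_blank_samples, z=1.645):
    """Parametric LoB = M_B + c_p * SD_B, using the small-sample
    correction c_p = z / (1 - 1/(4*(B - K))), where B is the total
    number of blank results and K the number of blank samples."""
    B, K = len(blank_results), n_blank_samples
    m_b = st.mean(blank_results)
    sd_b = st.stdev(blank_results)
    cp = z / (1.0 - 1.0 / (4.0 * (B - K)))
    return m_b + cp * sd_b
```

For 2–3 reagent lots, each lot's results would be run through one of these functions separately and the maximum LoB reported.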

Slide 2: Limit of Detection (LoD)

 Definition: The lowest measured quantity value that can be distinguished from the
blank with a stated probability (1-β, typically 0.95), where β is the probability of a
false negative (Type II error). It represents the level at which the presence of the
measurand can be reliably detected.
 Principle: Accounts for both the risk of falsely detecting the measurand in a blank
sample (α) and the risk of not detecting the measurand when it is truly present (β).
LoD is always greater than or equal to LoB.

 Basic Formulae:

o Classical (Parametric): LoD = LoB + c_p × SD_L, where SD_L is the pooled
standard deviation of low-level sample results, and c_p is a multiplier based
on β and the number of low-level results.

o Precision Profile: Iteratively determined as the measurand concentration
where the predicted within-laboratory precision (SD_WL) from the precision
profile model, when used in the formula LoD = LoB + c_p × SD_WL, yields an
LoD value equal to that measurand concentration.

o Probit: The measurand concentration corresponding to a predefined hit rate
(e.g., 0.95) derived from a probit regression model of the proportion of
positive results (detection) versus the concentration of serial dilutions of
low-level samples.

 Brief Estimation Methods:

o Classical: Perform replicate measurements on low-level samples (around the
assumed LoD) along with blank samples, using a similar experimental design
as for the LoB. Calculate the pooled SD of the low-level samples and use the
LoB to calculate the LoD.

o Precision Profile: Conduct precision studies (following protocols such as CLSI
EP05) using samples spanning the assumed LoD. Generate a precision profile
(SD or %CV vs. concentration) and fit a model. Estimate the LoD iteratively
using the LoB and the precision profile model.

o Probit: Prepare serial dilutions of samples containing the measurand. Test
each dilution in replicate and record whether the result is "detected" or "not
detected". Perform probit regression to estimate the concentration
corresponding to a high probability of detection (e.g., 95%).

 Practical Considerations:

o Low-level samples should have measurand concentrations in the approximate
region of the assumed LoD (e.g., one to five times the estimated LoB).

o For molecular assays with LoB = zero, LoD is often defined as the
concentration at which a specified percentage (usually 95%) of results are
positive.
o Consider variability across reagent lots; report the maximum LoD across lots
(for 2-3 lots) or LoD from combined data (for ≥4 lots).

o Include representative samples of all relevant genotypes for molecular assays.
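The classical LoD calculation (pooled SD of the low-level samples plus the LoB) can be sketched as follows. This is an illustrative helper with hypothetical function names; EP17-A2 defines the exact multiplier and pooling rules, and here the same small-sample correction as for the parametric LoB is assumed.

```python
import math
import statistics as st

def pooled_sd(groups):
    """Pooled SD across several low-level samples:
    sqrt( sum((n_i - 1) * s_i^2) / (N - J) ), with N total results
    and J groups."""
    N = sum(len(g) for g in groups)
    J = len(groups)
    ss = sum((len(g) - 1) * st.variance(g) for g in groups)
    return math.sqrt(ss / (N - J))

def lod_classical(lob, low_level_groups, z=1.645):
    """Classical LoD = LoB + c_p * SD_L (sketch). SD_L is the pooled SD
    of replicate results from the low-level samples; c_p applies a
    small-sample correction based on the residual degrees of freedom."""
    N = sum(len(g) for g in low_level_groups)
    J = len(low_level_groups)
    cp = z / (1.0 - 1.0 / (4.0 * (N - J)))
    return lob + cp * pooled_sd(low_level_groups)
```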

Slide 3: Limit of Quantitation (LoQ)

 Definition: The lowest amount of a measurand in a material that can be
quantitatively determined with stated accuracy (as total error or as independent
requirements for bias and precision) under stated experimental conditions. It
represents the lower limit for reliable quantification.

 Principle: Reflects the measurement procedure's performance against predefined
accuracy goals, making it a more subjective value dependent on the intended use
and acceptable error. LoQ is always greater than or equal to LoD.

 Basic Formulae: Depends on the defined accuracy goals. Common approaches
involve evaluating whether the total error (TE) meets a specific criterion. Examples
of TE models include:

o Westgard model: TE = |Bias| + 2s

o RMS model: TE² = Bias² + s²

 Brief Estimation Method: Select a trial LoQ concentration and perform replicate
measurements on multiple low level samples at that concentration using at least
two reagent lots on a single instrument over multiple days. Determine the bias
(using samples with known assigned values) and precision at this concentration.
Calculate the TE using the chosen model and compare it to the predefined accuracy
goal. The LoQ is the lowest concentration at which the accuracy goal is met. If no
concentration meets the goal, higher concentrations should be tested.

 Practical Considerations:

o Clearly define the accuracy goals for the measurement procedure at low
measurand concentrations, preferably in terms of total error or independent
goals for bias and precision. These goals should be relevant to the clinical
application.

o Use low level samples with known assigned values (obtained from a
reference method or certified reference materials) to allow for bias
estimation.

o Precision estimates for LoQ should reflect both repeatability and day-to-day
variability (within-laboratory precision).

o The reported LoQ must always be accompanied by a statement of the
underlying accuracy goals.
o A variant approach allows for LoQ evaluation as part of an LoD evaluation
using the precision profile approach, provided the low level samples have
known measurand concentrations to assess bias.

Here is a mind map outlining the steps to estimate the Limit of Blank (LoB) using the classical
approach as described in CLSI EP17-A2.

**Classical Approach for LoB Estimation**

**1. Experimental Design & Data Collection**

* **Blank Samples:** At least four independent blank samples should be used. A blank
sample does not contain the analyte of interest, or has a concentration at least an order of
magnitude less than the lowest level of interest.

* **Replicate Measurements:** Perform a minimum of two replicates per sample per day.

* **Test Days:** Conduct testing for at least three days.

* **Instrument:** Use a single instrument system throughout the study.

* **Reagent Lots:** Employ at least two different reagent lots.

* **Total Blank Replicates:** Aim for a minimum of 60 total blank replicates per
reagent lot across all samples and days.

**2. Data Analysis**

**a) Nonparametric Option** *(no distributional assumptions)*

* **Combine Data:** Combine all blank sample results. If using two or three reagent
lots, perform the analysis on each lot separately. For four or more lots, combine all data.

* **Sort Data:** Sort all combined blank sample results from lowest to highest.

* **Determine Rank Position:** Calculate the rank position for the LoB using the
formula (for a 95% probability, α = 0.05):

**Rank Position = 0.5 + B × 0.95**

where B is the total number of blank results.

* **Find LoB Value:** The LoB is the measurement result at the calculated rank
position in the sorted list. If the rank is a non-integer, interpolate between the bracketing
values.

**b) Parametric Option** *(assumes a normal distribution)*

* **Combine Data:** As for the nonparametric option, analyze each lot separately if
using two or three reagent lots; combine all data for four or more lots.

* **Calculate Mean (M_B):** Determine the mean of all blank results in the dataset.

* **Calculate Standard Deviation (SD_B):** Determine the standard deviation of all
blank results in the dataset.

* **Calculate c_p Multiplier:** Use the formula:

**c_p = 1.645 / (1 − 1/(4 × (B − K)))**

where K is the number of blank samples, B is the total number of blank results, and
1.645 is the 95th percentile of the standard normal distribution.

* **Calculate LoB:** Use the formula:

**LoB = M_B + c_p × SD_B**

**3. Final LoB Determination**

* If two or three reagent lots were used, the final LoB is the **highest LoB value**
calculated for any of the individual lots.

* If four or more reagent lots were used, the final LoB is the value calculated from the
**combined dataset**.

**4. Error Risk**

* The LoB represents the highest measurement result likely to be observed for a blank
sample with a stated probability (α).

* The typical and recommended α value is **0.05**, indicating a 5% risk of a
false-positive result.

* The LoB should be reported with the associated α error risk.
This mind map breaks down the classical approach to LoB estimation into its main
components, providing a structured overview of the process as outlined in the CLSI EP17-A2
guideline. Remember to consult the guideline for complete details and specific
considerations.

Based on the sources, here are the steps and sample requirements for evaluating the Limit
of Quantitation (LoQ) of a clinical laboratory measurement procedure, primarily drawing
from Section 6 of the document titled "Evaluation of Detection Capability for Clinical
Laboratory Measurement Procedures; Approved Guideline—Second Edition (EP17-A2)".

The LoQ represents the lowest measurand concentration that can be quantitatively
determined with stated accuracy under specific experimental conditions. It is applicable only
to quantitative measurement procedures. The definition of LoQ requires specifying the
underlying accuracy goals, which can be defined in terms of a total error (TE) goal or
separate goals for bias and precision. Because the LoQ is tied to these predefined accuracy
goals, it is considered a more subjective value and may vary among different users or
applications depending on the chosen goals. By definition, the LoQ must be greater than or
equal to the LoD.

General Protocol for Evaluation of the Limit of Quantitation

The document describes a general protocol based on the minimal design requirements for
an LoD evaluation using the classical approach. The core idea is to select a target
concentration as a trial LoQ, prepare multiple low level samples at that concentration, and
process them in replicate to assess if they meet the predefined accuracy goals.

Minimal Experimental Design Requirements

The minimal experimental design requirements for the general LoQ evaluation protocol are:

 Two reagent lots.

 One instrument system.

 Three days.

 Four independent low level samples of known measurand concentration.
Measurements should be acquired from multiple, independent low-level samples or
pools of samples to account for matrix variability, with at least four samples used in
the study. These samples should ideally be commutable with native patient samples.

 Three replicates per sample (for each reagent lot, instrument system, and day
combination).
 A minimum of 36 total low level sample replicates per reagent lot (across all low
level samples, instrument systems, and days).

It is noted that the developer may wish to augment the number of factors, their levels, or
the number of replicates to increase the rigor of the resulting LoQ estimates. For definitions
incorporating a bias component, an assigned value must be known for each sample, ideally
traceable to reference materials or procedures of acceptable accuracy.

Experimental Steps

The experimental steps for the general LoQ evaluation protocol are as follows:

1. Decide on the experimental design factors and number of levels, as well as the
processing plan for the specific measurement procedure.

2. Specify the LoQ definition and associated accuracy goals, and select the trial LoQ
measurand level.

3. Obtain low level samples targeted at the trial LoQ. Ensure each sample has a known
value (e.g., from a reference measurement procedure or calculation from a known
starting concentration) for bias estimation. Prepare sufficient aliquots, plus extra for
potential issues.

4. Process the designated number of replicate tests for each sample according to the
processing plan each testing day.

5. Review measurement results daily to check for processing errors or missing results.
Identify potential outliers and assign causes. Outliers from assignable causes (other
than analytical errors) may be retested and substituted. Document any retests and
original results. If more than five such outliers are identified across all low level
sample results from any one reagent lot due to assignable causes, the study for that
lot should be rejected and repeated.

6. Ensure sufficient measurement results are available at the end of testing to begin
data analysis. A minimum of 36 total low level sample results at the trial LoQ
measurand level are required per reagent lot.

Data Analysis

The data analysis process determines the final LoQ estimate. The specific steps depend on
the chosen LoQ definition (e.g., Westgard TE model or RMS model), but the basic approach
is similar. The following steps use the Westgard TE model as an illustration:

1. Analyze the data independently for each reagent lot if there are two or three lots, or
use the combined dataset if four or more lots were used.

2. Calculate the average value and standard deviation for each low level sample across
all replicates for the given reagent lot.
3. Calculate the bias for each low level sample by subtracting its assigned value (R)
from the observed average value (x).

4. Calculate the Total Error (TE) for each sample using the chosen model (e.g.,
Westgard: TE = |Bias| + 2s). Convert to %TE if needed, relative to the sample's
assigned value.

5. Repeat Steps 2 to 4 (calculate average/SD, bias, TE) for all other reagent lots if
applicable.

6. Review the observed TE estimates for each reagent lot against the predefined
accuracy goal. For each reagent lot, the sample with the lowest concentration that
met the accuracy specifications is taken as the LoQ for the lot.

7. The greatest LoQ across all lots (if two or three lots were used) or the LoQ from the
combined dataset (if four or more reagent lots were used) is taken as the LoQ for the
measurement procedure.

If all samples for one or more reagent lots fail to meet the accuracy goal, the study must be
repeated with new low level samples targeted to a greater measurand concentration.
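The per-lot screening in the data analysis steps above can be sketched as a short function. This is a simplified illustration for a single reagent lot with hypothetical names, assuming the Westgard model TE = |bias| + 2s; the guideline's full procedure also covers %TE and multi-lot reporting.

```python
import statistics as st

def loq_westgard(samples, te_goal):
    """Screen low-level samples against a Westgard total-error goal for
    one reagent lot. `samples` maps each sample's assigned value R to
    its list of replicate results. Returns the lowest assigned
    concentration whose observed TE = |mean - R| + 2*SD meets the goal,
    or None if no sample qualifies (study must be repeated higher)."""
    passing = []
    for assigned, reps in samples.items():
        bias = st.mean(reps) - assigned
        te = abs(bias) + 2.0 * st.stdev(reps)
        if te <= te_goal:
            passing.append(assigned)
    return min(passing) if passing else None
```

With 2–3 lots, the greatest per-lot LoQ would then be reported as the procedure's LoQ.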

Variant Approach: Combined LoD and LoQ Evaluation

A variant approach allows for evaluation of the LoQ as part of an LoD evaluation using the
precision profile approach. This is particularly suitable if the LoQ is defined solely by a
precision requirement. The significant change is that the low level samples used in the
precision profile study must have known measurand concentrations to allow for bias
calculation. The experimental design and steps follow those for the precision profile
approach (Section 5.4.1–5.4.2). After data collection, TE (or other metric) is calculated for
each sample/reagent lot, plotted against the measurand concentration (creating a TE
profile), and fitted with a model. The measurand concentration corresponding to the LoQ
accuracy goal is then determined from this profile or model and reported as the LoQ.
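Under this variant approach, reading the LoQ off the TE profile can be sketched as below. For simplicity, linear interpolation between tested levels stands in for a fitted profile model, TE is assumed to decrease with concentration, and the names are hypothetical.

```python
def loq_from_te_profile(concs, tes, te_goal):
    """Find the lowest concentration at which the TE profile first
    meets the accuracy goal. `concs` and `tes` are the per-sample
    concentrations and observed TE values; interpolates linearly
    within the segment where the goal is crossed."""
    pts = sorted(zip(concs, tes))
    if pts[0][1] <= te_goal:
        return pts[0][0]            # already within goal at lowest level
    for (c0, t0), (c1, t1) in zip(pts, pts[1:]):
        if t1 <= te_goal < t0:
            # linear interpolation inside the crossing segment
            return c0 + (c1 - c0) * (t0 - te_goal) / (t0 - t1)
    return None                     # goal never met over tested range
```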

Based on the sources, evaluating the Limit of Quantitation (LoQ) involves a specific protocol,
and the experimental design for this protocol is based on the minimal requirements for a
Limit of Detection (LoD) evaluation using the classical approach. The LoQ is defined as the
lowest measurand concentration that can be quantitatively determined with stated accuracy
under specific experimental conditions.

Here are simplified steps for the LoQ evaluation protocol, drawing from the experimental
design based on the classical approach and the experimental steps described in the sources:

Minimal Sample and Experimental Design Requirements (Based on Classical LoD Design):
 Two reagent lots.

 One instrument system.

 Three days.

 Four independent low level samples of known measurand concentration. These


samples should ideally be commutable with native patient samples. Each sample
must have a known value (R) for bias estimation, ideally traceable to reference
materials or procedures of acceptable accuracy.

 Three replicates per sample for each combination of reagent lot, instrument system,
and day.

 A minimum of 36 total low level sample replicates per reagent lot collected across
all low level samples, instrument systems, and days.

(Note: While the design starts with these minimums, developers may increase factors or
replicates for more rigor).

Experimental Steps for LoQ Evaluation:

1. Plan the Experiment: Decide on the factors (like reagent lots, days, instrument
systems) and how many of each you will use, based on the minimal requirements
and your needs. Plan the schedule for testing.

2. Define LoQ & Target Level: Clearly state how you will define the LoQ (what accuracy
goals, e.g., total error model) and pick a specific measurand concentration level you
will test as your "trial LoQ".

3. Obtain and Prepare Samples: Get or prepare low level samples specifically targeted
at your chosen trial LoQ concentration. You need at least four independent samples.
Crucially, you must know the true or assigned value (Ri) of the measurand in each
of these samples. Prepare enough aliquots of these samples for all planned testing,
plus some extra.

4. Perform Testing: On each testing day, run the required number of replicate tests for
each sample according to your plan.

5. Review Results Daily: Check the measurement results each day for any errors or
missing data. Look for unusual results (outliers) and try to figure out why they
happened. If an outlier has a clear cause (like a processing mistake, not just random
error), you can retest and use the new result, but document everything. If too many
results from one reagent lot have issues from assignable causes, you might need to
repeat the study for that lot.
6. Ensure Enough Data: Make sure you have collected enough measurement results by
the end of the testing (at least 36 total low level sample results at the trial LoQ level
for each reagent lot you are evaluating).

After completing these experimental steps, you would proceed to data analysis (Section 6.5)
to calculate metrics like bias, precision, and total error for each sample and reagent lot. You
would then compare these results against your predefined accuracy goals to determine if
the trial LoQ concentration meets the criteria. If it does for all lots (or the combined data if
using four or more lots), the lowest concentration tested that met the goals is the LoQ for
that lot, and the highest LoQ across lots (or the combined data LoQ) is the LoQ for the
procedure. If the samples at the trial concentration do not meet the goals, you would need
to repeat the study with samples at a higher concentration.

ACCURACY VERIFICATION

Based on the sources and our conversation history, verifying the accuracy of a quantitative
testing method, specifically focusing on bias estimation between two measurement
procedures using patient samples, follows a structured "classical approach" outlined in CLSI
document EP09-A3. Bias is an estimate of systematic measurement error and relates to
measurement trueness, which is a component of accuracy.

Here are simplified steps for conducting a measurement procedure comparison study for
bias estimation, particularly from the perspective of a clinical laboratory introducing a new
procedure, which is a common scenario:

Steps to Verify Accuracy (Bias Estimation) Using Patient Samples:

1. Determine the Purpose: Clearly define why you are comparing the two
measurement procedures (e.g., verifying manufacturer claims or independently
quantifying bias).

2. Select Measurement Procedures: Identify the candidate procedure you are
evaluating and the comparative procedure you will use.
3. Familiarize Personnel: Ensure the individuals performing the tests are familiar with
both measurement procedures and instrument systems involved.

4. Plan the Study:

o Decide on the number of patient samples needed.

o Consider influential factors like calibration, run, day, reagent lot, calibrator lot,
and instrument, although a clinical laboratory typically uses one candidate
system [52, 62, 6.4].

o Plan the sample sequence and duration of the study [6.5, 6.6].

5. Obtain and Handle Patient Samples: Collect patient samples spanning the measuring
interval of the procedures [52, 57, 60, 6.1]. Handle all samples according to standard
precautions, treating them as potentially infectious.

6. Perform Testing: Measure the selected patient samples using both the candidate and
comparative measurement procedures according to your plan. Clinical laboratories
typically use single, nonreplicated sample measurements for each procedure,
although averaging replicates can decrease uncertainty if feasible.

7. Review Data During Collection: Inspect results daily for errors or issues [48, 6.7].
Document any rejected data [6.9].

8. Visually Inspect Data: Once data collection is complete, create visual plots like
scatter plots and difference plots to characterize the relationship between the two
procedures and identify potential issues or patterns.

9. Perform Quantitative Analysis:

o Estimate bias from difference plots [49, 9.1].

o Use regression analysis techniques (such as Ordinary Linear Regression, Deming,
or Passing-Bablok, depending on the data characteristics and assumptions) to
quantify the relationship between the procedures, including bias estimates at
selected concentrations [49, 9.2, 9]. Formulas for bias (e.g., x̄ − Ri; note that Ri
refers to a reference value not typically available in a comparison with an
equivalent method, where a difference is estimated instead of bias relative to a
"true" value) and related quantitative metrics are applied here.

o Calculate confidence intervals for bias and regression parameters [38, 49,
9.3].

10. Interpret Results and Compare to Performance Criteria: Compare the estimated bias
values (or other accuracy metrics) against predefined acceptance criteria or
performance goals.

Number of Samples Needed:


 For a clinical laboratory introducing a measurement procedure (verification), it is
recommended to attempt to measure at least 40 patient samples that span the
measuring interval of the measurement procedures.

 For manufacturers establishing or verifying claims, a larger number of samples
(≥100) is recommended.

Data Analysis:

 Data analysis begins with visual inspection using scatter plots and difference plots to
understand the relationship between the two procedures.

 Quantitative analysis involves estimating bias, often at selected concentrations or as
an average over the measuring interval.

 Regression techniques are frequently used to characterize the relationship more
robustly, especially by manufacturers, but clinical laboratories may also choose to
use them. Various regression methods (e.g., Ordinary Linear Regression, Deming
Regression, Passing-Bablok Regression) are described for this purpose [30, 38, 9.2].

 Confidence intervals for bias and regression parameters are computed [38, 49, 9.3].
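A difference-plot summary of average bias can be sketched as follows. This is an illustrative helper with hypothetical names, using a z-based interval; EP09-A3 details the exact interval constructions (including t-based intervals for small n).

```python
import math
import statistics as st

def mean_difference_bias(candidate, comparative, z=1.96):
    """Summarize paired patient-sample results as a mean difference
    (candidate minus comparative) with an approximate 95% confidence
    interval: mean_d +/- z * SD_d / sqrt(n)."""
    d = [c - r for c, r in zip(candidate, comparative)]
    mean_d = st.mean(d)
    half = z * st.stdev(d) / math.sqrt(len(d))
    return mean_d, (mean_d - half, mean_d + half)
```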

Interpretation:

 The interpretation involves comparing the calculated bias estimates and their
confidence intervals to predetermined performance or acceptance criteria.

 For clinical laboratories verifying a manufacturer's claim, the goal is to confirm that
the estimated bias is within the manufacturer's stated performance claim or within
the laboratory's own acceptable limits [59, 11.2].

 If there is no manufacturer claim, or the goal is to independently quantify the bias,
the laboratory compares the calculated bias to its defined acceptable limits.

 The results of the bias estimation help determine if the new procedure is acceptable
for use or if adjustments (like changes to reference intervals or medical decision
values) are needed to account for the observed difference.

This process, as described in EP09-A3, focuses on evaluating the systematic error (bias)
component of a measurement procedure's accuracy by comparing it to another procedure
using patient samples across the relevant measuring range. It is a distinct but related
classical validation step compared to evaluating performance at the very low end (like
LoQ/LoD), which focuses more on precision and bias relative to low-value samples.
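For the regression side of such a comparison, a Deming fit (which allows measurement error in both procedures) can be sketched as below. This is an illustration under a stated error-variance-ratio assumption, with hypothetical names, not the guideline's prescribed computation.

```python
import math

def deming(x, y, lam=1.0):
    """Deming regression of candidate results y on comparative results
    x, assuming the ratio of error variances lam = var_y / var_x.
    Returns (slope, intercept)."""
    n = len(x)
    mx = sum(x) / n
    my = sum(y) / n
    sxx = sum((xi - mx) ** 2 for xi in x)
    syy = sum((yi - my) ** 2 for yi in y)
    sxy = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
    # closed-form Deming slope for error ratio lam
    slope = ((syy - lam * sxx)
             + math.sqrt((syy - lam * sxx) ** 2 + 4.0 * lam * sxy ** 2)) / (2.0 * sxy)
    return slope, my - slope * mx
```

A slope near 1 and intercept near 0 indicate close agreement; bias at a medical decision level can be read from the fitted line.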
PRECISION TESTING

Based on the excerpts provided, specifically from CLSI EP15-A3 which focuses on User
Verification of Precision and Estimation of Bias:

Here are the simple steps for precision verification as outlined or implied by the document,
along with the requested details:

1. Familiarization Period: Before starting the study, the laboratory should go through a
familiarization period [11, 2.1].

2. Experimental Design: Follow the specific experimental procedure design provided in
the guideline [2, 11, 2.3]. The document describes a uniform experimental design.

3. Data Collection: Collect data according to the design.

o How many samples required? The sources don't specify a total number of
patient samples like 40 (that number relates to EP09 for bias estimation using
patient samples, discussed previously). For EP15's precision verification and
bias estimation (using materials with known concentrations), the design
focuses on collecting a specific number of measurements on specific
materials. While the number of materials (e.g., control levels) isn't explicitly
stated as a fixed number like 2 or 3, the protocol involves testing materials
with known concentrations. You will need enough material to perform the
required measurements over the study period.

o How many replicates? The committee chose to use five replicates per day for
the experiment.

o Over how many days? The protocol is designed to be completed within as
few as five days. The committee specifically kept the number of days at five.

4. Data Analysis: Analyze the collected data. The guideline replaces complicated
calculations with tables where possible. It is recommended that the user have access
to computer and statistical software for calculations, such as CLSI's StatisPro software
or generic spreadsheet software.

5. Interpretation: Interpret the results by comparing the estimated precision against
predefined performance standards or manufacturer's claims.

Regarding intra-assay and inter-assay precision verification:

 The experimental design of testing five replicates per day over five days is
specifically chosen to obtain reliable estimates of both repeatability and within-
laboratory imprecision.
 Repeatability (often related to intra-assay precision) refers to the variation within a
short period, typically within the same run or day. The multiple replicates run on the
same day help estimate this component.

 Within-laboratory imprecision (often related to inter-assay precision or total
precision) includes sources of variation that occur over time, such as day-to-day
differences. Running the study over five days helps capture this variability, allowing
for the estimation of this component as well.

In summary, for precision verification using EP15-A3, the core experimental design involves
testing selected materials with five replicates per day over five days to estimate both
within-day (repeatability) and day-to-day (within-laboratory) components of precision.
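The repeatability and within-laboratory components from that 5 × 5 design can be estimated with a one-way (by-day) ANOVA decomposition, sketched below. EP15-A3 itself provides tables and software (e.g., StatisPro) for this, so the helper below is only an illustrative calculation with hypothetical names, assuming a balanced design.

```python
import math
import statistics as st

def precision_components(daily_results):
    """One-way (day) variance-component analysis for a balanced design
    (e.g., 5 replicates/day over 5 days). `daily_results` is a list of
    per-day replicate lists. Returns (repeatability SD, within-lab SD)."""
    k = len(daily_results)                # number of days
    n = len(daily_results[0])             # replicates per day (balanced)
    day_means = [st.mean(d) for d in daily_results]
    grand = st.mean(day_means)
    ms_within = st.mean([st.variance(d) for d in daily_results])
    ms_between = n * sum((m - grand) ** 2 for m in day_means) / (k - 1)
    s_r = math.sqrt(ms_within)                          # repeatability SD
    var_day = max((ms_between - ms_within) / n, 0.0)    # day-to-day component
    s_wl = math.sqrt(s_r ** 2 + var_day)                # within-laboratory SD
    return s_r, s_wl
```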

LINEARITY VERIFICATION

Based on the provided excerpts from CLSI document EP06-A, here are the steps for linearity
verification, including details on samples, replicates, days, data analysis, and interpretation:

CLSI EP06-A is intended to provide a statistical approach for establishing or verifying the
linear range of quantitative measurement procedures for both manufacturers and users.
Linearity is important because a straight-line relationship between observed values and true
concentrations allows for simple and easy interpolation of results. The document uses a
polynomial method to evaluate linearity.

Here are the steps for linearity verification as per EP06-A:

1. Design the Study and Set Goals

 Design the experiment according to the guideline's protocol.

 Establish goals for allowable error due to nonlinearity and repeatability [38, 40, 6.3].
These goals should be derived from goals for bias and should be less than or equal to
bias goals, which in turn should be less than or equal to measurement error goals.
The goals should be based on the laboratory's needs and understanding of the
method's capabilities.

2. Prepare Samples/Materials

 Use samples or materials with varying concentrations that are known relative to one
another (e.g., by dilution ratios or formulation). They do not need to be equally
spaced or dilution-based, but the relationship between concentrations should be
known.
 A hierarchy of acceptable matrices is provided, with patient-sample pools being ideal
[4.3, 20].

 Ensure enough volume of each sample/material is prepared for the entire
experiment.

3. Conduct Measurements

 Assay the prepared samples according to the experimental design [4.8, 38].

 The analytical sequence should be random [4.5, 17].

 The entire data gathering protocol should take place in as short a time interval as
possible, ideally on the same day.

4. Data Collection

 Record the results, preferably in a computer spreadsheet [4.8, 25].

5. Preliminary Data Examination and Outlier Test

 Examine the recorded data for obvious excessive differences (errors). If an analytical
problem is found, correct it and repeat the experiment.

 Visually examine the plotted data for potential outliers at each concentration. Plot
individual results or means (Y-axis) against sample concentration or relative
concentration (X-axis). Look for gross deviations, misplaced points, transcription
errors, or instrument failures. Visual evaluation is the most sensitive test for outliers.

 Arrange data in chronological order to evaluate for drift or trends.

 Look at the differences between responses at each level for indications of
nonlinearity.

 If a single replicate value (yi) appears too far removed from others for a given
concentration, visually evaluate it as an outlier [28, 5.2]. Outliers are typically due to
mistakes.

 Apply an outlier criterion and eliminate the result if met. A single outlier can be
removed and does not need replacement.

 Two or more unexplained outliers cast doubt on the system's performance and
suggest excessive imprecision; such values should usually be treated as representative of
typical operation, and the system should be troubleshot. Do not search for "good" data by
repeated testing without first correcting the cause.

6. Data Analysis (Polynomial Evaluation of Linearity) [5.3, 29, 38]


 Perform polynomial regression analysis for first-, second-, and third-order
polynomials using the collected data. This can often be done with statistical or
spreadsheet software.

 Test for statistical significance of the nonlinear coefficients (b2 from the second-order
model, b2 and b3 from the third-order model) using a t-test. Degrees of freedom are
calculated based on the number of levels and replicates.

 If none of the nonlinear coefficients are statistically significant (p > 0.05), the dataset
is considered linear by this protocol. Proceed to the repeatability check.

 If any nonlinear coefficient is statistically significant (p < 0.05), nonlinearity has been
detected. This indicates statistical significance, but not necessarily clinical
importance.
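The significance test in step 6 can be sketched in Python. This is an illustrative implementation using ordinary least squares (the data set, the 0.05 threshold, and the helper name `polyfit_tstats` are assumptions for the example, not the worked example from EP06-A):

```python
import numpy as np
from scipy import stats

def polyfit_tstats(x, y, order):
    """Least-squares polynomial fit with two-sided p-values for every
    coefficient (returned highest order first, as np.vander lays them out)."""
    X = np.vander(x, order + 1)                 # columns: x^order, ..., x, 1
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    dof = len(y) - (order + 1)                  # levels*replicates - coefficients
    resid = y - X @ beta
    s2 = resid @ resid / dof                    # residual variance
    se = np.sqrt(np.diag(s2 * np.linalg.inv(X.T @ X)))
    p = 2 * stats.t.sf(np.abs(beta / se), dof)  # two-sided t-test per coefficient
    return beta, p, dof

# Hypothetical data set: five levels, duplicate measurements
x = np.repeat([1.0, 2.0, 3.0, 4.0, 5.0], 2)
y = np.array([1.0, 1.1, 2.0, 2.1, 3.1, 2.9, 4.0, 4.2, 5.1, 4.9])

beta2, p2, dof2 = polyfit_tstats(x, y, order=2)  # p2[0] tests b2
beta3, p3, dof3 = polyfit_tstats(x, y, order=3)  # p3[0] tests b3, p3[1] tests b2
linear = bool(p2[0] > 0.05 and p3[0] > 0.05)     # linear by this protocol
```

With duplicates at five levels, the second-order fit has 10 - 3 = 7 degrees of freedom, matching the rule that degrees of freedom follow from the number of levels and replicates.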

7. Determine the Degree of Nonlinearity (If Statistically Significant) [5.3.3, 38, 39, 40]

 Pick the nonlinear polynomial (second or third order) that provides the best fit,
indicated by the lowest standard error of regression (Sy.x).

 Calculate the deviation from linearity (DLi) at each concentration level tested. This is
the difference between the value predicted by the best-fitting nonlinear polynomial
and the value predicted by the linear (first-order) polynomial at that concentration.

 The DLi values represent the magnitude of nonlinearity at each tested level.
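The DLi calculation can be sketched as follows; the data are a hypothetical, mildly curved set (exactly y = 0.1x² + x) chosen so the arithmetic is easy to follow, and the 0.3 allowable-error goal is an assumption:

```python
import numpy as np

# Hypothetical duplicates at five levels following y = 0.1*x**2 + x exactly
x = np.repeat([1.0, 2.0, 3.0, 4.0, 5.0], 2)
y = 0.1 * x**2 + x

best_nonlinear = np.polyfit(x, y, 2)   # assume the 2nd-order model fit best (lowest Sy.x)
first_order    = np.polyfit(x, y, 1)

levels = np.unique(x)
# DLi = nonlinear prediction minus linear prediction at each tested level
DL = np.polyval(best_nonlinear, levels) - np.polyval(first_order, levels)
# DL -> [0.2, -0.1, -0.2, -0.1, 0.2]: nonlinearity is largest at the range ends

allowable = 0.3                        # hypothetical goal from step 1
acceptably_linear = bool(np.all(np.abs(DL) < allowable))
```

Note the sign pattern: for smoothly curved data the deviation is largest at the ends of the range, which is why removing an end point (step 9) can restore an acceptably linear, narrower range.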

8. Evaluate Repeatability [5.4, 38, 40]

 Estimate the repeatability of the method (SDr or CVr) using the differences between
replicate measurements at each level, preferably pooling the variance across levels if
appropriate.

 Compare the estimated repeatability (SDr or CVr) with the goal for repeatability
established in Step 1.

 If SDr is larger than the goal, precision may be inadequate for a reliable
determination of linearity. You may need to investigate the cause, correct the
problem, or increase the number of replicates.
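For duplicate measurements, the pooled repeatability SD can be estimated from the within-pair differences as sketched below (the data and the 0.10 goal are hypothetical):

```python
import numpy as np

# Hypothetical duplicate results at five levels
rep1 = np.array([1.00, 2.05, 3.10, 4.00, 5.05])
rep2 = np.array([1.04, 1.97, 3.02, 4.08, 4.97])

d = rep1 - rep2
sd_r = np.sqrt(np.sum(d**2) / (2 * len(d)))      # pooled repeatability SD from duplicates
cv_r = 100 * sd_r / np.mean((rep1 + rep2) / 2)   # expressed as a percent CV

goal_sd = 0.10                                   # hypothetical goal from step 1
precision_ok = bool(sd_r <= goal_sd)
```

If `precision_ok` is false, the precision may be inadequate for a reliable linearity determination, and the cause should be investigated or the replicate count increased.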

9. Interpretation and Decision [5.3.3, 38, 39, 40]

 Compare the calculated deviation from linearity (DLi) at each level with the
allowable error criterion for nonlinearity established in Step 1.

 If every DLi is less than the criterion, then the method is considered acceptably
linear over the tested range, even if statistically significant nonlinearity was detected.

 If any DLi exceeds the criterion, there is a problem with nonlinearity at that level or
within that range. You can try to find and correct the cause, or, if the nonlinearity is
at an end of the tested range, you can remove that point and re-run the analysis to
determine if the remaining, narrower range is acceptably linear.

 A visual plot of the data (and optionally, the fitted models and allowable error
boundaries) should also be examined. While insufficient alone for acceptance, it
helps understand the data and guide analysis.

 Based on the analysis and interpretation against performance criteria, determine the
linear range – the range over which the testing system's results are acceptably linear
(where nonlinear error is less than the criterion).

Number of Samples/Levels, Replicates, and Days:

 Samples/Concentration Levels:

o The basic data collection requires measurements from five to nine samples
with varying concentrations.

o At least five solutions of different concentrations are required to use the
polynomial method. This is the minimum number to reliably describe the
linear range.

o More points (concentrations) provide a more exact description of linearity
and allow for a wider linear range.

o Specific recommendations for the number of levels depend on the purpose:

 To confirm that the linear range is valid in a laboratory (user
verification): 5 to 7 levels.

 To validate claims for an "in-house" or modified method: 7 to 9 levels.

 To establish the linear range (manufacturers/developers): 9 to 11
levels. Use a range 20% to 30% wider than anticipated [18, 4.6].

 Replicates:

o Solutions must be run at least in duplicate (two replicates).

o Specific recommendations for the number of replicates per level depend on
the purpose:

 To confirm validity in a laboratory (user verification): 2 replicates at
each level.

 To validate claims or establish the linear range
(developers/manufacturers): 2 to 4 replicates at each level, depending
on expected imprecision.
o The number of replicates should be sufficient to produce a reliable estimate
of the concentration at each level; for some analytes/concentrations, this may
require 3 to 5 replicates. Users should use their best judgment. Using
different numbers of replicates at different levels is acceptable.

 Days:

o The entire data gathering protocol should take place in as short a time
interval as possible.

o Ideally, all results for a single analyte should be obtained on the same day.
However, this may not be practical for all analytes.

o Samples should be assayed randomly during a single run or closely grouped
analytical runs. The EP06-A protocol does not specify running the experiment
over multiple days like EP05-A2 (20 days) or EP15-A3 (5 days); its focus is on
assessing linearity, isolated as much as possible from day-to-day variation.
The duration is primarily about getting all measurements for the linearity
assessment done efficiently.

 Total Replicates Tested:

o The total number of replicates tested depends on the number of levels and
the number of replicates per level chosen for the study purpose.

o For a user verification study (5-7 levels, 2 replicates per level): The total
replicates would be (5 to 7 levels) * 2 replicates/level = 10 to 14 replicates.

o For a method validation study (7-9 levels, 2-3 replicates per level): The total
replicates could be (7 levels * 2 reps) to (9 levels * 3 reps) = 14 to 27
replicates.

o For a manufacturer's establishment study (9-11 levels, 2-4 replicates per
level): The total replicates could be (9 levels * 2 reps) to (11 levels * 4 reps) =
18 to 44 replicates.

o Examples provided show 5 levels * 2 replicates/level = 10 total replicates and
6 levels * 2 replicates/level = 12 total replicates.

Based on the provided excerpts from CLSI EP17-A2:

The sources indicate that detection capability is a fundamental performance characteristic
of clinical laboratory measurement procedures, often marking the low-end boundary of the
measuring interval. This capability is characterized by the Limit of Blank (LoB), Limit of
Detection (LoD), and Limit of Quantitation (LoQ), which reflect increasing certainty from
distinguishing a blank sample (LoB), to simply detecting the measurand (LoD), to reliably
quantifying it within defined accuracy goals (LoQ).

The Lower Limit of the Measuring Interval (LLMI) is defined as the lowest measurand
concentration at which all defined performance characteristics of the measurement
procedure are met (e.g., acceptable bias, imprecision, and linearity). The LoB is always less
than the LoD, which is less than or equal to the LoQ.

However, the sources state that "In some special cases, the concepts of LoB, LoD, and LoQ
may not be meaningful". In these specific situations, "the LLMI is set with respect to other
criteria".

The source provides examples of such special cases: hemostasis screening tests like
prothrombin time and activated partial thromboplastin time. The reason given for these
examples is that they reflect complex interactions of many components, with no clear
unique measurand. For these tests, there isn't a "detection limit" in the traditional sense.
Instead, the measuring interval (and thus the LLMI) is typically set based on instrument
processing constraints and clinical utility, such as the time limit for the measurement.

While your query specifically mentions Troponin I and Procalcitonin, the provided sources do
not use Troponin I or Procalcitonin as examples of analytes where LoB, LoD, and LoQ may
not be meaningful in the way described for hemostasis tests.

However, the sources do mention Troponin I in relation to the Limit of Quantitation (LoQ)
and the deprecated term "functional sensitivity". The LoQ, unlike LoB and LoD which are
statistical constructs based on precision and error probabilities, is based on predefined
accuracy goals. These accuracy goals are set by the developer or user based on the clinical
applications of the measurement procedure and may include requirements for bias,
precision, or total error. For analytes like Troponin, where low levels are clinically critical and
require high precision, functional sensitivity (a form of LoQ based solely on a precision
requirement like 10% CV) has been used. This highlights how the clinical need dictates the
required accuracy goals, which in turn defines the LoQ.

Therefore, while the source confirms that LoB, LoD, and LoQ can be meaningless in certain
special cases due to the nature of the test (like hemostasis assays), and that LLMI is set by
other criteria in such cases, it does not specifically apply this reasoning to Troponin I or
Procalcitonin. It does show that for analytes like Troponin, the LoQ is critically dependent on
meeting accuracy goals that are often defined by clinical criteria. Since the LLMI requires
all performance characteristics to be met, the point at which clinical accuracy goals are met
(the LoQ) can significantly influence where the LLMI is set for these analytes.

ANALYTICAL SPECIFICITY
Based on the provided sources, checking analytical specificity involves evaluating the
susceptibility of a measurement procedure to interference from other substances in the
sample. Analytical specificity is the ability of a measurement procedure to measure solely
the measurand. Interference is defined as the influence of a quantity that is not the
measurand but affects the measurement result.

CLSI document EP07 describes experimental procedures for evaluating interference


characteristics. Two basic approaches are outlined, which can be used by manufacturers to
characterize procedures and by laboratories to verify interference claims or investigate
discrepant results:

1. Evaluating the effect of potentially interfering substances added to the sample of
interest (often using spiked samples). This includes:

o Interference Screen (Paired-Difference Testing): A primary approach to
identify potential interferents.

o Characterization of Interference Effects (Dose-Response Testing): Used to
determine the relationship between interferent concentration and the
magnitude of interference.

2. Evaluating the bias of individual, representative patient specimens in comparison to
a highly specific comparative measurement procedure.

Here are the simple steps derived from these approaches for checking analytical specificity,
focusing on the common methods:

1. Paired-Difference Testing (Interference Screen)

This method involves comparing results from a sample pool with the potential interferent
added ("test pool") to the same pool without the interferent ("control pool").

 Number of Samples: This refers to the number of substance types tested. For each
substance, you need two pools: a test pool and a control pool.

 Number of Replicates: You need an adequate number of replicate measurements for
each pool (test and control) within a single analytical run. The number of replicates
(n) per pool depends on:

o The magnitude of the clinically significant difference you want to detect.

o The desired confidence level (e.g., 95%) and power (e.g., 95%).

o The repeatability (within-run precision) of the measurement procedure.

o A table is provided showing required replicates for 95% confidence and power
based on the ratio of the maximum acceptable interference (dmax) to the
repeatability standard deviation (s). For example, if dmax/s = 1.5, you need 12
replicates per pool. If dmax/s = 2.0, you need 7 replicates per pool. The
number of replicates is always rounded up.
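The tabulated replicate counts are consistent with the standard two-sample normal approximation for sample size. The sketch below uses that textbook formula (a two-sided test at α = 0.05 with 95% power); treat it as a plausible reconstruction of the table's arithmetic, not the normative EP07 formula:

```python
import math
from statistics import NormalDist

def replicates_per_pool(dmax_over_s, alpha=0.05, power=0.95):
    """Replicates per pool (test and control) to detect a mean difference of
    dmax with the stated confidence and power, given repeatability SD s.
    Uses the two-sample normal-approximation formula (assumption)."""
    z_a = NormalDist().inv_cdf(1 - alpha / 2)   # 1.96 for a two-sided 5% test
    z_b = NormalDist().inv_cdf(power)           # 1.645 for 95% power
    n = 2 * ((z_a + z_b) / dmax_over_s) ** 2
    return math.ceil(n)                         # always round up

n_15 = replicates_per_pool(1.5)   # dmax/s = 1.5 -> 12, matching the table
n_20 = replicates_per_pool(2.0)   # dmax/s = 2.0 -> 7, matching the table
```

Both worked values reproduce the table cited in the text, which supports (but does not prove) that this is the underlying calculation.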

 Procedure:

1. Determine the analyte concentration(s) to test (often medical decision
concentrations).

2. Establish the criterion for a "clinically significant" difference (dmax).

3. Determine the required number of replicates (n) for each pool based on dmax
and repeatability.

4. Prepare a base pool (ideally from healthy individuals, simulating the typical
specimen matrix).

5. Prepare a stock solution of the potential interfering substance.

6. Prepare the "test" pool by adding a specified volume fraction of the stock
solution to the base pool to achieve the desired interferent concentration
(usually a high, "worst-case" concentration). Avoid introducing other
substances.

7. Prepare the "control" pool by adding the same volume of the solvent used for
the stock solution to the base pool.

8. Prepare n aliquots of both the test and control pools.

9. Analyze the test (T) and control (C) samples in alternating order (e.g.,
C1T1C2T2...CnTn) within a single analytical run. Random order is also
acceptable. Include extra samples to assess carryover if needed.

10. Record the results.

 Data Analysis:

1. Compute the mean result for the test pool (x̄test) and the control pool
(x̄control).

2. Calculate the observed interference effect (dobs) as the difference between
the means: dobs = x̄test - x̄control.

3. Compare dobs to a calculated cut-off value (dc). The cut-off is based on the
statistical test (one-sided or two-sided) and the repeatability.

 Interpretation:

o If dobs is less than or equal to dc, conclude the bias caused by the substance
is less than the defined clinically significant difference (dmax).
o If dobs is greater than dc, accept the alternative hypothesis that the
substance interferes.

o Remember that observed interference may differ from the true effect due to
sampling error. Also, consider that the artificial nature of spiked samples
might introduce artifacts.
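The data analysis and interpretation steps above can be sketched as follows. The replicate values, the repeatability SD, and in particular the form of the cut-off (a one-sided normal cut-off on the difference of means) are assumptions for illustration; EP07 defines the normative cut-off calculation:

```python
import math
from statistics import mean, NormalDist

# Hypothetical replicate results (n = 7 per pool) from one analytical run
test    = [5.32, 5.41, 5.38, 5.45, 5.36, 5.40, 5.39]   # pool with interferent added
control = [5.10, 5.15, 5.08, 5.12, 5.14, 5.09, 5.11]   # pool with solvent only

d_obs = mean(test) - mean(control)     # observed interference effect

s = 0.05                               # repeatability SD (assumed known)
n = len(test)
z = NormalDist().inv_cdf(0.95)         # one-sided test, 95% confidence (assumption)
d_c = z * s * math.sqrt(2 / n)         # cut-off: difference expected from noise alone

interferes = d_obs > d_c               # accept the alternative hypothesis if True
```

With these made-up numbers the observed difference (about 0.27) far exceeds the cut-off, so the substance would be flagged as an interferent; whether that bias is clinically significant is still judged against dmax.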

2. Dose-Response Testing (Characterization)

If interference is detected, this method helps understand the relationship between the
interferent concentration and the magnitude of the effect.

 Number of Samples/Levels: Prepare a series of test samples with systematically
varying concentrations of the interferent. Five concentrations are generally
sufficient to determine a linear dose-response.

 Number of Replicates: Generally triplicate measurements are sufficient at each test
concentration. Replicates from all samples are used to pool repeatability information,
which can reduce the number needed compared to paired testing.

 Procedure:

1. Determine the highest and lowest interferent concentrations to test.

2. Define the clinically significant difference (dmax).

3. Determine the number of replicates (n) per concentration.

4. Prepare high and low pools containing the interferent at the highest and
lowest concentrations.

5. Prepare intermediate concentration pools by mixing the high and low pools
(e.g., mid-pool, 25% pool, 75% pool).

6. Prepare n aliquots of each pool.

7. Analyze the series of pools within the same analytical run. Analyze in
ascending/descending order or random order to minimize drift effects.

8. Record results and calculate net results by subtracting the average
concentration of the low pool.

9. (Optional) Verify interferent concentrations if a method is available.

 Data Analysis:

1. Plot the observed effect (bias) against the interferent concentration.

2. Examine the plot for the shape of the relationship.


3. If the relationship appears linear, perform linear regression analysis to
determine the slope, intercept, and residual error.

 Interpretation: The regression analysis provides the relationship between interferent
concentration and bias, allowing estimation of the effect at any concentration within
the tested range. The magnitude of interference can be determined.
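The dose-response regression can be sketched as below, using five interferent levels in triplicate as the text suggests (the concentrations, bias values, and units are hypothetical):

```python
import numpy as np

# Hypothetical triplicate bias measurements at five interferent levels (mg/dL)
conc = np.repeat([0.0, 25.0, 50.0, 75.0, 100.0], 3)
bias = np.array([0.0, 0.1, -0.1,   0.5, 0.6, 0.4,
                 1.0, 1.1,  0.9,   1.6, 1.4, 1.5,
                 2.0, 2.1,  1.9])

slope, intercept = np.polyfit(conc, bias, 1)          # linear dose-response fit
resid = bias - (slope * conc + intercept)
s_resid = np.sqrt(resid @ resid / (len(bias) - 2))    # residual standard error

# Estimated interference at any concentration inside the tested range, e.g. 60 mg/dL:
bias_at_60 = slope * 60 + intercept
```

The slope (here 0.02 bias units per mg/dL of interferent) summarizes the dose-response; interpolation is only valid within the tested interferent range.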

3. Evaluating Interference Using Patient Specimens

This method analyzes patient samples that naturally contain the potential interferent and
compares the results to a highly specific comparative measurement procedure.

 Number of Samples: Select a test group of patient specimens known to contain the
suspected interferent and a control group of specimens without the interferent,
spanning a similar analyte concentration range. If the effect is large and precision is
good, 10 to 20 samples in each group may be sufficient. More samples may be
needed for smaller biases masked by imprecision.

 Procedure:

1. Select test and control sample groups.

2. Select a reference or qualified comparative measurement procedure.

3. Analyze each sample in duplicate by both the procedure being evaluated
and the comparative procedure. Perform analyses in as short a time span as
possible. Spread runs over several days and alternate/randomize sample
order to minimize effects.

4. Record results and average duplicates for each sample.

5. (Optional) Measure the concentration of the potential interferent in the
samples.

 Data Analysis:

1. Calculate the average bias for each sample (evaluated procedure result minus
comparative procedure result).

2. Visually plot the results with bias on the y-axis and the comparative
procedure concentration on the x-axis, using different symbols for test and
control groups.

3. Visually assess if there is systematic bias in the test group results compared
to the control group.

4. If suspected interferent concentration is known, plot bias vs. interferent
concentration to see if they correlate.

 Interpretation:
o Review the plots for systematic bias.

o Compare the range of observed differences in the test group to your
interference criteria.

o Be aware that simply observing bias in patient samples and correlating it with
a substance does not definitively prove a cause-effect relationship; another
substance present with the suspected one could be the actual interferent.
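The bias calculation for the patient-specimen comparison can be sketched as below. Only four specimens per group are shown to keep the example short (the text recommends 10 to 20), and all values are hypothetical:

```python
from statistics import mean

# (evaluated, comparative) duplicate-averaged results per specimen
test_group    = [(4.9, 4.5), (6.2, 5.7), (8.1, 7.5), (3.8, 3.3)]   # with interferent
control_group = [(4.6, 4.5), (5.8, 5.7), (7.6, 7.5), (3.4, 3.3)]   # without interferent

# Per-sample bias: evaluated procedure result minus comparative procedure result
bias_test    = [e - c for e, c in test_group]
bias_control = [e - c for e, c in control_group]

# Systematic bias associated with the interferent-containing group
mean_shift = mean(bias_test) - mean(bias_control)
```

Here the test group shows roughly 0.4 units more bias than the control group, which would be compared against the interference criteria; as the text cautions, this correlation alone does not prove the suspected substance is the cause.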

These procedures help establish or verify analytical specificity claims regarding potential
interfering substances.

CARRYOVER

Based on the provided CLSI document EP10-A3-AMD, here are the simple steps to assess
sample carry-over using its preliminary evaluation procedure:

Purpose: The procedure described in CLSI EP10-A3 is intended for the preliminary
evaluation of quantitative clinical laboratory measurement procedures. It is a quick check
designed to detect major problems, including sample carry-over. The goal is to determine if
a device has problems requiring further evaluation or referral to the manufacturer.

Sample Type: You need three stable pools of measurand that span the claimed or medically
relevant interval for the measurement procedure.

 These pools should represent low, mid, and high concentrations.

 They can be obtained commercially (e.g., control materials) or made from patient
sample pools.

 The midlevel pool concentration must be exactly halfway between the high- and
low-level concentrations. An efficient way to achieve this is by mixing equal parts of
the high and low pools.

 The sample matrix must be appropriate for the measurement procedure as specified
by the manufacturer.

 Sufficient material must be prepared to last for the entire evaluation, ensuring
stability over time.

Number of Samples, Runs, and Days:


 Number of Samples per Run: Each run consists of analyzing ten samples in a specific
sequence.

 Number of Samples Used for Calculation: Only the results from the last nine
samples in the sequence are used for data analysis; the first sample is a prime and is
not used in calculations.

 Number of Runs/Days for Clinical Laboratories: Perform at least one run per day for
at least five days.

 Number of Runs for Manufacturers: Manufacturers are suggested to perform 20 or
more runs.

 The same number of runs should be performed each day to simplify data analysis.

Procedure:

1. Obtain the three stable pools (low, mid, high).

2. Ensure the measurement procedure device is properly calibrated.

3. Prepare the samples for analysis according to the manufacturer's instructions and
standard precautions.

4. For each run, analyze the ten samples in the exact sequence specified: Mid, High,
Low, Mid, Mid, Low, Low, High, High, Mid.

5. It is critical that the analyzer can process samples in this strict order. This sequence
was specifically designed to allow for the estimation of sample carry-over and other
effects using multiple regression. Note that this protocol was originally designed for
continuous flow analyzers, and random access analyzers may give invalid results if
the strict sequence cannot be followed.

6. If any of the last nine samples in a run is rejected, lost, or not reported, the entire
run must be repeated.

7. Repeat steps 3-6 for the required number of days/runs (at least five days for clinical
labs).

8. Record the observed value (Y) for each of the ten samples in every run. Note the
level (Low, Mid, High) and assign coded values (-1 for low, 0 for mid, +1 for high). Use
Data Summary Sheet #1 (Appendix A) or a similar method to record the data. The
result for the first sample (the prime) is recorded but not used in the subsequent
analysis steps.

Data Analysis: Carry-over is assessed using multiple linear regression analysis.

1. For each run, perform the multiple linear regression calculation using the nine
analyzed sample results and their corresponding coded variables. The model includes
terms for intercept (B0), slope (B1), sample carry-over (B2), nonlinearity (B3), and
linear drift (B4). Data Summary Sheet #4 (Appendix A) illustrates the calculations
using predefined multipliers based on the specific EP10 sample sequence.

2. For each run, calculate the t-statistic for the regression coefficient B2 (sample carry-
over). Data Summary Sheet #5 (Appendix A) provides the formulas and steps for
calculating the standard error and the t-statistic for each regression parameter. The t-
statistic tests if the estimated coefficient is significantly different from zero for that
run.

3. Calculate the percent carry-over for each run using the formula: Percent carry-over =
(B2 / B1) * 100.

4. Summarize the calculated regression coefficients (B0adj, B1adj, B2adj, B3adj, B4) and
their t-statistics from each run using Data Summary Sheet #6 (Appendix A).

5. Examine the t-values for sample carry-over (B2adj) across the runs. The document
suggests a one-sample sign test to determine if a parameter estimate (like carry-
over) is different from zero across all runs.
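The per-run regression can be sketched as below. The EP10 sample sequence and coded levels come from the text; the construction of the design matrix (carry-over as the preceding sample's coded level, a simple centered quadratic term for nonlinearity, a centered position index for drift) and the result values are assumptions for illustration, since the document's data summary sheets use predefined multipliers instead:

```python
import numpy as np

# Coded levels for the EP10 sequence: Mid,High,Low,Mid,Mid,Low,Low,High,High,Mid
seq = np.array([0, 1, -1, 0, 0, -1, -1, 1, 1, 0], dtype=float)

# Hypothetical observed values for one run (first result is the prime)
y = np.array([50.2, 99.8, 1.5, 50.5, 50.1, 0.9, 1.1, 100.2, 100.5, 50.3])

c     = seq[1:]                      # coded level of each analyzed sample (prime dropped)
prev  = seq[:-1]                     # coded level of the PRECEDING sample -> carry-over
nonl  = c**2 - np.mean(c**2)         # simple centered nonlinearity term (assumption)
drift = np.arange(9) - 4.0           # centered run position -> linear drift (assumption)

# Multiple regression: y = B0 + B1*level + B2*carryover + B3*nonlinearity + B4*drift
X = np.column_stack([np.ones(9), c, prev, nonl, drift])
B, *_ = np.linalg.lstsq(X, y[1:], rcond=None)

pct_carryover = 100 * B[2] / B[1]    # percent carry-over = (B2 / B1) * 100
```

With nine analyzed samples and five parameters, each run leaves 9 - 5 = 4 degrees of freedom, matching the t-test threshold quoted in the interpretation section.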

Interpretation:

1. Review the t-statistics for the sample carry-over coefficient (B2) from each run
summary (Data Summary Sheet #6).

2. A significant t-statistic (e.g., |t| > 4.6 for p < 0.01 with 4 degrees of freedom for a
single run) suggests that sample carry-over is statistically detectable in that run.

3. Determine if the significant carry-over problem recurs in other runs. The summary
on Data Summary Sheet #6 and potentially plotting the parameter estimates versus
the day can help.

4. If sample carry-over is found to be significant (either in multiple single runs or based
on the overall summary), it should motivate the evaluator to investigate the origin
of this error. The system/measurement procedure should be evaluated more
extensively to determine the actual source of the problem, or the manufacturer
should be contacted.

5. It's important to note that a statistically significant result may not be of practical
importance. Users must decide how much error to allow for each error source,
including sample carry-over. A problem limited to only one run may often be ignored.

6. If significant nonlinearity is also detected, the interpretation of slope and intercept
parameters as measures of proportional and constant error may not be meaningful.

This procedure provides a preliminary assessment using a specific experimental design and
statistical model. More rigorous and comprehensive interference testing, which may include
different approaches to assessing carry-over, is typically covered in documents like CLSI
EP07.
