A Practical Approach to Quantitative Validation of Patient-Reported Outcomes: A Simulation-Based Guide Using SAS, 1st Edition

This document is a comprehensive guide on the quantitative validation of patient-reported outcomes (PRO) using SAS, authored by Andrew G. Bushmakin and Joseph C. Cappelleri. It covers methodologies for assessing measurement properties of clinical outcome assessments, including psychometric validation, reliability, and construct validity, with a strong emphasis on simulation-based examples. The book is intended for educational purposes and is published by John Wiley & Sons, Inc.



This edition first published 2023
© 2023 John Wiley & Sons, Inc.

All rights reserved. No part of this publication may be reproduced, stored in a retrieval system, or transmitted, in any form or by any means, electronic, mechanical, photocopying, recording or otherwise, except as permitted by law. Advice on how to obtain permission to reuse material from this title is available at http://www.wiley.com/go/permissions.

The right of Andrew G. Bushmakin and Joseph C. Cappelleri to be identified as the authors of this work has
been asserted in accordance with law.

Registered Office
John Wiley & Sons, Inc., 111 River Street, Hoboken, NJ 07030, USA

For details of our global editorial offices, customer services, and more information about Wiley products visit
us at www.wiley.com.

Wiley also publishes its books in a variety of electronic formats and by print-on-demand. Some content that appears in standard print versions of this book may not be available in other formats.

Trademarks: Wiley and the Wiley logo are trademarks or registered trademarks of John Wiley & Sons, Inc. and/
or its affiliates in the United States and other countries and may not be used without written permission. All
other trademarks are the property of their respective owners. John Wiley & Sons, Inc. is not associated with any
product or vendor mentioned in this book.

Limit of Liability/Disclaimer of Warranty


In view of ongoing research, equipment modifications, changes in governmental regulations, and the
constant flow of information relating to the use of experimental reagents, equipment, and devices, the reader
is urged to review and evaluate the information provided in the package insert or instructions for each
chemical, piece of equipment, reagent, or device for, among other things, any changes in the instructions or
indication of usage and for added warnings and precautions. While the publisher and authors have used their
best efforts in preparing this work, they make no representations or warranties with respect to the accuracy
or completeness of the contents of this work and specifically disclaim all warranties, including without
limitation any implied warranties of merchantability or fitness for a particular purpose. No warranty may be
created or extended by sales representatives, written sales materials or promotional statements for this work.
The fact that an organization, website, or product is referred to in this work as a citation and/or potential
source of further information does not mean that the publisher and authors endorse the information or
services the organization, website, or product may provide or recommendations it may make. This work is
sold with the understanding that the publisher is not engaged in rendering professional services. The advice
and strategies contained herein may not be suitable for your situation. You should consult with a specialist
where appropriate. Further, readers should be aware that websites listed in this work may have changed or
disappeared between when this work was written and when it is read. Neither the publisher nor authors
shall be liable for any loss of profit or any other commercial damages, including but not limited to special,
incidental, consequential, or other damages.

Library of Congress Cataloging-in-Publication Data


Names: Bushmakin, Andrew G., author. | Cappelleri, Joseph C., author.
Title: A practical approach to quantitative validation of patient-reported
outcomes : a simulation-based guide using SAS / Andrew G. Bushmakin and
Joseph C. Cappelleri.
Description: Hoboken, NJ : Wiley, 2023. | Includes bibliographical
references and index.
Identifiers: LCCN 2022024236 (print) | LCCN 2022024237 (ebook) | ISBN
9781119376378 (cloth) | ISBN 9781119376316 (adobe pdf) | ISBN
9781119376309 (epub)
Subjects: MESH: Patient Reported Outcome Measures | Patient Outcome
Assessment | Reproducibility of Results | Computer Simulation | Clinical Outcome Assessment
Classification: LCC R853.Q34 (print) | LCC R853.Q34 (ebook) | NLM W 84.41
| DDC 610.72/1–dc23/eng/20220706
LC record available at https://lccn.loc.gov/2022024236
LC ebook record available at https://lccn.loc.gov/2022024237

Cover Design: Wiley


Cover Image: Courtesy of Andrew Bushmakin

Set in 9.5/12.5pt STIXTwoText by Straive, Pondicherry, India


Disclosure

Andrew G. Bushmakin and Joseph C. Cappelleri are employees of Pfizer Inc.


This book is written for educational and instructional purposes, with emphasis on the methodology of quantitative validation of patient-reported outcomes. Views and opinions expressed in this book are the authors' own and do not necessarily reflect those of Pfizer Inc.

Contents

Preface xi
About the Authors xv

1 Introduction 1
1.1 What Is a PRO Measure? 1
1.2 Development of a PRO Measure 4
1.2.1 Concept Identification 4
1.2.1.1 Literature and Instrument Review 5
1.2.1.2 Patient-Centered Input 6
1.2.2 Item Development 9
1.2.3 Cognitive Interviews 11
1.2.4 Additional Considerations 12
1.2.5 Documentation of Development Process with Conceptual Framework 13
1.3 Psychometric Validation 15
1.3.1 Psychometric Evaluation Data 16
1.3.2 Psychometric Properties 17
1.3.2.1 Distributional Characteristics 19
1.3.2.2 Measurement Model Structure 20
1.3.2.3 Reliability 22
1.3.2.4 Construct Validity 23
1.3.2.5 Ability to Detect Change 24
1.3.2.6 Interpretation 25
1.4 Learning Through Simulations 26
1.5 Summary 27
References 28

2 Validation Workflow 35
2.1 Clinical Trials as a Data Source for Validation 35
2.2 Validation Workflow for Single-Item Scales 39
2.3 Confirmatory Validation Workflow for Multi-item Multi-domain Scales 43
2.4 Validation Flow for a New Multi-item Multi-domain Scale 45
2.4.1 New Scale with Known Conceptual Framework 45
2.4.2 New Scale with Unknown Measurement Structure 47
2.5 Cross-Sectional Studies and Field Tests 48
2.6 Summary 49
References 49

3 An Assessment of Classical Test Theory and Item Response Theory 51
3.1 Overview of Classical Test Theory 52
3.1.1 Basics 52
3.1.2 Illustration 52
3.1.3 Another Look 53
3.2 Person-Item Maps 55
3.2.1 CTT Revisited 55
3.2.2 Note on IRT 56
3.2.3 Implementation of Person-Item Maps 58
3.2.4 CTT-Based Scoring vs. IRT-Based Scoring 69
3.3 Summary 78
References 80

4 Reliability 83
4.1 Reproducibility/Test–Retest 85
4.1.1 Measurement Error Model 85
4.1.2 Two Time Points 87
4.1.3 Random-Effects Model for ICC Estimation 90
4.1.4 Test–Retest Reliability Assessment in the Context of Clinical Studies 95
4.1.4.1 Pre-Treatment/Pre-Baseline Data 95
4.1.4.2 Post-Baseline Data 97
4.1.4.3 Time Period Between Observations 101
4.1.5 Spearman–Brown Prophecy Formula 104
4.1.6 Domain Score Test–Retest vs. Item Test–Retest 109
4.1.7 Observer-Based and Interviewer-Based Scales 111
4.1.8 Uncovering True Relationship Between Measurements 113
4.1.8.1 Accounting for Measurement Error 113
4.1.8.2 Measurement Error Model with Two Observations 122
4.2 Cronbach's Alpha 129
4.2.1 Likert-Type Scales 129
4.2.2 Dichotomous Items 139
4.3 Summary 148
References 148

5 Construct Validity and Criterion Validity 151
5.1 Exploratory Factor Analyses 153
5.1.1 Modeling Assumptions 153
5.1.2 Exploratory Factor Analysis Implementation 159
5.1.3 Evaluating the Number of Factors and Factor Loadings 165
5.1.3.1 Scree Plot 165
5.1.3.2 Correlated Latent Factors 168
5.1.3.3 Parallel Analysis with Reduced Correlation Matrix 171
5.1.3.4 Factor Loadings 175
5.2 Confirmatory Factor Analyses 179
5.2.1 Confirmatory Factor Analysis Model 179
5.2.2 Confirmatory Factor Analysis Model Implementation 183
5.2.3 Confirmatory Factor Analysis with Domains Represented by a Single Item 192
5.2.4 Second-Order Confirmatory Factor Analysis 204
5.2.4.1 Implementation of the Model with at Least Three First-Order Latent Domains 204
5.2.4.2 Implementation of the Model with Two First-Order Latent Domains 207
5.2.5 Formative vs. Reflective Model 213
5.2.6 Bifactor Model 219
5.2.7 Confirmatory Factor Analysis Using Polychoric Correlations 227
5.3 Convergent and Discriminant Validity 231
5.3.1 Convergent and Discriminant Validity Assessment 231
5.3.2 Convergent and Discriminant Validity Evaluation in a Clinical Study 232
5.4 Known-Groups Validity 237
5.5 Criterion Validity 242
5.6 Summary 247
References 248

6 Responsiveness and Sensitivity 251
6.1 Ability to Detect Change 252
6.1.1 Definitions and Concepts 252
6.1.2 Ability to Detect Change Analysis Implementation 255
6.1.3 Correlation Analysis to Support Ability to Detect Change 263
6.1.4 Deconstructing Correlation Between Changes 268
6.2 Sensitivity to Treatment 270
6.2.1 What Is the Sensitivity to Treatment? 270
6.2.2 Concurrent Estimation of the Treatment Effects for a Multi-Domain Scale 273
6.2.2.1 Assessment of the Treatment Effect for a Single Domain 273
6.2.2.2 Assessment of the Treatment Effects for a Multi-Domain Scale 279
6.3 Summary 292
References 293

7 Interpretation of Patient-Reported Outcome Findings 295
7.1 Meaningful Within-Patient Change 296
7.1.1 Definitions and Concepts 296
7.1.2 Anchor-Based Method to Assess Meaningful Within-Patient Change 298
7.1.3 Cumulative Distribution Functions to Supplement Anchor-Based Methods 310
7.2 Clinically Important Difference 315
7.2.1 Meaningful Within-Patient Change Versus Between-Group Difference 315
7.2.2 Anchor-Based Method to Assess Clinically Important Difference 316
7.3 Responder Analyses and Cumulative Distribution Functions 320
7.3.1 Treatment Effect Model 320
7.3.2 MWPC Application: A Responder Analysis 323
7.3.3 Using CDFs for Interpretation of Results 325
7.4 Summary 331
References 332

Index 335

Preface

This book is organized as one volume with interconnected chapters, with each chapter devoted to the methodology of assessments of specific measurement properties of clinical outcome assessments (COAs), which include patient-reported outcomes (PRO), clinician-reported outcomes (ClinRO), observer-reported outcomes (ObsRO), and performance outcome assessments (PerfO). In covering the topics, we made a considerable effort to illustrate the methodology with an extensive number of simulated examples, motivated by and grounded in our experience with practical applications, covering all key topics of the quantitative validation of a COA scale. All simulations are conducted in SAS, the primary software used in the pharmaceutical industry.
Chapter 1 discusses qualitative research, including concept identification, item development, cognitive interviews, and other steps in the instrument development process. It is assumed that content validity for the COA instrument of interest has been achieved and, in doing so, the chapter covers the important concepts of the unobservable or latent attribute under study that the instrument purports to measure. Hence, the content of the PRO (or COA) instrument is taken as an adequate reflection of the construct to be measured. Given this, subsequent chapters focus on the quantitative validation of PRO measures in particular and, when applicable, on COAs in general.
Chapter 2 describes quantitative validation workflows that should be applicable for most realistic scenarios and study designs. The chapter elucidates the distinctive opportunities and challenges when using clinical trials data as the source to psychometrically validate a scale. There is, however, always a possibility that some new scale will need some adjustments to the workflows highlighted and discussed in this chapter.
Chapter 3 provides an overview of classical test theory (CTT) and compares CTT assumptions with the item response theory (IRT) model. A different paradigm on how we think about items in CTT vs. IRT is discussed and illustrated, as are the relationships between CTT-based scoring and IRT-based scoring. Based on several examples, this chapter illustrates that both theories are, in general, comparable in terms of the produced scores (which is the ultimate purpose of a measurement scale).
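
As a toy illustration of that comparability claim (not taken from the book, whose examples are all in SAS; the data here are simulated under an assumed one-parameter logistic model), a plain CTT sum score closely tracks the latent trait that an IRT model targets:

```python
import math
import random

random.seed(7)

# Simulate dichotomous item responses under a Rasch-type (1-PL) model:
# P(endorse) = logistic(theta - difficulty). Difficulties are invented.
difficulties = [-1.5, -1.0, -0.5, 0.0, 0.0, 0.5, 0.5, 1.0, 1.5, 2.0]
thetas = [random.gauss(0.0, 1.0) for _ in range(500)]  # latent trait values

def sum_score(theta):
    """CTT-style score: plain sum of simulated 0/1 item responses."""
    return sum(
        1 if random.random() < 1 / (1 + math.exp(-(theta - b))) else 0
        for b in difficulties
    )

scores = [sum_score(t) for t in thetas]

def pearson(x, y):
    """Pearson correlation, computed from scratch with the stdlib."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

# The simple sum score is strongly correlated with the latent trait
# an IRT model would estimate (typically well above 0.7 here).
print(round(pearson(thetas, scores), 2))
```

The sketch only conveys why the two scoring paradigms tend to rank respondents similarly; it is not a substitute for fitting an actual IRT model.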
Chapter 4 covers test–retest reliability and internal reliability. Test–retest reliability is introduced based on the basic conventional measurement error model and is discussed in the context of a clinical study. The Spearman–Brown prophecy formula is used to contrast the reliability of a single measurement with the reliability of an average score. The other major section investigates the methodology behind Cronbach's alpha for Likert-type scales and also includes applications with dichotomous items.
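
To make these two quantities concrete, here is a minimal illustrative sketch (the book's own examples use SAS; this Python stand-in uses only textbook definitions) of the Spearman–Brown prophecy formula and Cronbach's alpha:

```python
from statistics import pvariance

def spearman_brown(r, k):
    """Reliability of the average of k parallel measurements,
    given reliability r of a single measurement."""
    return k * r / (1 + (k - 1) * r)

def cronbach_alpha(items):
    """Cronbach's alpha; `items` is a list of item-score columns,
    each inner list holding one item's scores across subjects."""
    k = len(items)
    totals = [sum(subject) for subject in zip(*items)]  # per-subject totals
    item_var = sum(pvariance(col) for col in items)
    return k / (k - 1) * (1 - item_var / pvariance(totals))

# Averaging two parallel measurements raises a single-measurement
# reliability of 0.60 to 0.75 for the averaged score.
print(round(spearman_brown(0.60, 2), 2))  # 0.75
```

The same two formulas drive the chapter's simulations; the functions above merely restate them for readers who want a quick numerical check.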
Chapter 5 centers on construct validity. Exploratory factor analysis is examined as a method to determine the factor structure of a scale, and confirmatory factor analysis as a way to test whether a measurement model of a scale fits the data. Methodological issues associated with both approaches are discussed at length. The chapter also describes important properties such as convergent and discriminant validity. The longitudinal model for known-groups validity assessment is introduced and detailed, and a model using all available data from a clinical study for criterion validity is emphasized.
Chapter 6 centers on the ability-to-detect-change property. An analytic model-based implementation of the ability to detect change is presented, which makes it possible to quantify the relationship between changes in the target PRO (or COA) scale and changes in the anchor (external) scale. Correlational analysis to support an instrument's ability to detect change is investigated. It is shown that correlations between score changes on a pair of variables may provide only adjunct evidence on the ability to detect change on a target scale. The second theme of this chapter relates to an instrument's sensitivity to treatment effects. A framework and an implementation of one unified multi-domain longitudinal model, intended for a scale with multiple domains assessed over time, are discussed in detail.
Chapter 7 discusses the methods and challenges of assessing meaningful within-patient change (MWPC) and clinically important difference (CID) for a measurement scale. As with the other chapters, this chapter contains methods rooted in the current regulatory documents (especially from the US Food and Drug Administration) and in the existing and more recent literature. Applications of the MWPC and CID for interpretation of the results of treatment effect analyses are highlighted.
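
As a simplified illustration of the anchor-based idea (the book develops it in SAS with full longitudinal models; this standalone Python sketch uses invented numbers and only conveys the core computation), an MWPC threshold is often anchored to the mean change among patients who report a minimal improvement on an external anchor item:

```python
# Hypothetical (invented) data: each record is a patient's change on a
# 0-100 PRO score and their answer to a global impression-of-change
# anchor item ("worse", "no change", "a little better", "much better").
records = [
    (-4.0, "worse"), (1.0, "no change"), (2.5, "no change"),
    (8.0, "a little better"), (10.0, "a little better"),
    (9.0, "a little better"), (21.0, "much better"), (18.0, "much better"),
]

def anchor_mean_change(data, category):
    """Mean PRO change among patients in one anchor category."""
    changes = [chg for chg, cat in data if cat == category]
    return sum(changes) / len(changes)

# The MWPC estimate is the mean change in the minimally improved group.
mwpc = anchor_mean_change(records, "a little better")
print(mwpc)  # 9.0
```

In practice the estimate comes from a model fit to all longitudinal data rather than a raw group mean, but the anchoring logic is the same.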

As authors, we sought to develop this book to be viewed as a comprehensive guide for all key steps that need to be undertaken in practice during the quantitative validation of a measurement scale. And, for this purpose, it is our aspiration that this monograph can serve as a single, thorough reference that will benefit readers in their research and understanding of the material.

Andrew G. Bushmakin
Joseph C. Cappelleri

About the Authors

Andrew G. Bushmakin earned his MS in applied mathematics and physics from the National Research Nuclear University (former Moscow Engineering Physics Institute, Moscow, Russia). He has more than 20 years of experience in mathematical modeling and data analysis. He is a director of biostatistics in the Statistical Research and Data Science Center at Pfizer Inc. He has co-authored numerous articles and presentations on topics ranging from mathematical modeling of neutron physics processes to patient-reported outcomes, as well as several monographs.

Joseph C. Cappelleri earned his MS in statistics from the City University of New York, PhD in psychometrics from Cornell University, and MPH in epidemiology from Harvard University. He is an executive director of biostatistics in the Statistical Research and Data Science Center at Pfizer Inc. As an adjunct professor, he has served on the faculties of Brown University, University of Connecticut, and Tufts Medical Center. He has delivered numerous conference presentations and has published extensively on clinical and methodological topics. He is a fellow of the American Statistical Association and recipient of the ISPOR Avedis Donabedian Outcomes Research Lifetime Achievement Award.
1 Introduction

1.1 What Is a PRO Measure?

The US Food and Drug Administration defines a patient-reported outcome (PRO) as follows: “Any report of the status of a patient’s health condition that comes directly from the patient, without interpretation of the patient’s response by a clinician or anyone else. The outcome can be measured in absolute terms (e.g., severity of a symptom, sign, or state of a disease) or as a change from a previous measure. In clinical trials, a PRO instrument can be used to measure the effect of a medical intervention on one or more concepts (i.e., the thing being measured, such as a symptom or group of symptoms, effects on a particular function or group of functions, or a group of symptoms or functions shown to measure the severity of a health condition)” (FDA 2009).
Similarly, the European Medicines Agency, in its reflection paper on the “Regulatory Guidance for the Use of Health-related Quality of Life Measures in the Evaluation of Medicinal Products,” defines a patient-reported outcome as “any outcome directly evaluated by the patient and based on patient’s perception of a disease and its treatment(s)” (EMA 2005). A PRO measure that will be used to collect self-reported information from patients may be as simple as a single item or as complex as a multidimensional instrument. (In this book, the terms “instrument,” “measure,” “questionnaire,” and “scale” are used interchangeably; moreover, the terms “item” and “question” are used interchangeably.)
A PRO measure (sometimes referred to as PROM) is an umbrella term that includes a whole host of subjective outcomes such as pain, fatigue, depression, aspects of well-being (e.g., physical, functional, psychological), treatment
