RELIABILITY OF STRUCTURES

Andrzej S. Nowak, University of Michigan
Kevin R. Collins, University of Michigan

McGraw-Hill Higher Education
A Division of The McGraw-Hill Companies
Boston  Burr Ridge, IL  Dubuque, IA  Madison, WI  New York  San Francisco  St. Louis  Bangkok  Bogotá  Caracas  Lisbon  London  Madrid  Mexico City  Milan  New Delhi  Seoul  Singapore  Sydney  Taipei  Toronto

RELIABILITY OF STRUCTURES
International Edition 2000
Exclusive rights by McGraw-Hill Book Co., Singapore, for manufacture and export. This book cannot be re-exported from the country to which it is sold by McGraw-Hill.

Copyright © 2000 by The McGraw-Hill Companies, Inc. All rights reserved. Except as permitted under the United States Copyright Act of 1976, no part of this publication may be reproduced or distributed in any form or by any means, or stored in a data base or retrieval system, without the prior written permission of the publisher. Some ancillaries, including electronic and print components, may not be available to customers outside the United States.

Library of Congress Cataloging-in-Publication Data
Nowak, Andrzej S.
  Reliability of structures / Andrzej Nowak, Kevin R. Collins.
  p. cm.
  Includes bibliographical references and index.
  ISBN 0-07-048163-6
  1. Structural analysis (Engineering). 2. Reliability (Engineering). I. Collins, Kevin R. II. Title.
  TA645.N64 2000
  624.1'71-dc21  99-026732

When ordering this title, use ISBN 0-07-116354-9
Printed in Singapore

ABOUT THE AUTHORS

ANDRZEJ S. NOWAK is a professor of civil and environmental engineering at the University of Michigan. He received his M.S. (1970) and Ph.D. (1975) from Politechnika Warszawska in Poland. Prior to joining the faculty at the University of Michigan in 1979, he worked at the University of Waterloo in Canada (1976-78) and the State University of New York at Buffalo (1978-79).
Professor Nowak's research has led to the development of a probabilistic basis for the new generation of design codes for highway bridges, including load and resistance factors for the LRFD AASHTO Code and the Ontario Highway Bridge Design Code, and fatigue evaluation criteria for BS-5400 (United Kingdom). He has authored or coauthored over 250 publications, including books, journal papers, and articles in conference proceedings. Professor Nowak is also an active member of national and international professional organizations. He chairs TRB Committee A2C00 on Structures, the ASCE Committee on Structural Safety and Reliability, IABSE WC 1 on Structural Performance, Safety and Analysis, and IFIP WG 7.5 on Reliability and Optimization of Structural Systems. He is a past chair of ACI Committee 348 on Structural Safety and TRB Committee A2C05 on Dynamics and Field Testing of Bridges. He is a Fellow of ASCE, a Fellow of ACI, an Honorary Professor of Politechnika Warszawska, and a recipient of the ASCE Moisseiff Award for the paper entitled "Calibration of LRFD Bridge Code."

KEVIN R. COLLINS is an assistant professor of civil and environmental engineering at the University of Michigan. He received his bachelor of civil engineering (BCE) degree from the University of Delaware in May 1988, his master of science (MS) degree from Virginia Polytechnic Institute and State University in December 1989, and his doctor of philosophy (Ph.D.) degree from the University of Illinois in October 1995. Between his M.S. and Ph.D. degrees he worked for MPR Associates, Inc., in Washington, D.C., for 2 1/2 years. Dr. Collins' research interests are in the areas of earthquake engineering, structural dynamics, and structural reliability. Dr.
Collins is an associate member of the American Society of Civil Engineers (ASCE), a member of the Earthquake Engineering Research Institute (EERI), a member of the American Society for Engineering Education (ASEE), an affiliate member of the Seismological Society of America (SSA), and an affiliate member of the Structural Engineers Association of Michigan (SEAMi). He belongs to the honor societies of Chi Epsilon, Tau Beta Pi, and Phi Kappa Phi.

PREFACE

THE OBJECTIVE OF this book is to provide the reader with a practical tool for reliability analysis of structures. The material is intended to serve as a textbook for a one-semester course for undergraduate seniors or graduate students with a background in structural engineering and structural mechanics. Previous exposure to probability and statistics is helpful but not required; the most important aspects of probability and statistics are reviewed early in the text. Many books on reliability are written for researchers, often approaching the subject from a mathematical and theoretical perspective. The focus of this book is on practical applications of structural reliability theory. The basic concepts, interpretations, and equations are presented, and their use is then demonstrated in examples. The book should be helpful to both students and practicing structural engineers and should broaden their perspective by considering reliability as an important dimension of structural design. In particular, the methodology discussed here is applicable in the development of design codes, the development of more reliable designs, optimization, and the rational evaluation of existing structures.

ORGANIZATION OF THE BOOK

Chapter 1 introduces structural reliability analysis. The objectives of the study of reliability of structures and the sources of uncertainty inherent in structural design are discussed. Chapter 2 briefly reviews the theory of probability and statistics.
The emphasis is placed on the definitions and formulas needed for the derivation of reliability analysis procedures. The random variable is defined and its parameters, such as the mean, median, standard deviation, coefficient of variation, cumulative distribution function, probability density function, and probability mass function, are considered. The probability distributions commonly used in structural reliability applications are reviewed; these include the normal, lognormal, extreme type I, II, and III, uniform, Poisson, and gamma distributions. A brief discussion of Bayesian methods is also included.

In Chapter 3, functions of random variables are considered. Concepts and parameters such as covariance, coefficient of correlation, and covariance matrix are described. Formulas are derived for parameters of a function of random variables. Special cases considered in this chapter are the sum of uncorrelated normal random variables and the product of uncorrelated lognormal random variables.

Chapter 4 presents some simulation techniques that can be used to solve structural reliability problems. The Monte Carlo simulation technique is the focus of this chapter. Two other methods are also discussed: the Latin hypercube sampling method and Rosenblueth's point estimate method.

The concepts of limit states and limit state functions are defined in Chapter 5. Reliability and probability of failure are considered as functions of load and resistance. The fundamental structural reliability problem is formulated. The reliability analysis methods are also presented in Chapter 5. The simple second-moment mean value formulas are derived. Then, the Hasofer-Lind reliability index is defined. An iterative procedure is shown for variables with full distributions available.

Load models are presented in Chapter 6. The considered load components include dead load, live load for buildings and bridges, and environmental loads (such as wind, snow, and earthquake).
Some techniques for combining loads together in reliability analyses are also presented.

Resistance models are discussed in Chapter 7. Statistical parameters are presented for steel beams, columns, tension members, and connections. Noncomposite and composite sections are considered. For reinforced concrete members and prestressed concrete members, the parameters are given for flexural capacity and shear. The results are based on the available test data and simulations.

The development of a reliability-based design code is outlined in Chapter 8. The basic steps for finding load and resistance factors and a calibration procedure used in several recent research projects are presented.

Chapter 9 deals with the important topic of system reliability. Useful formulas are presented for a series system, a parallel system, and mixed systems. The effect of correlation between structural components on the reliability of a system is evaluated. The approach to system reliability analysis is demonstrated using simple practical examples.

Models of human error in structural design and construction are reviewed in Chapter 10. Errors are classified with regard to mechanism of occurrence, cause, and consequences. Error survey results are discussed. A strategy to deal with errors is considered. Special focus is placed on the sensitivity analysis. Sensitivity functions are presented for typical structural components.

ACKNOWLEDGMENTS

Work on this book required frequent discussions and consultation with many experts in theoretical and practical aspects of structural reliability. Therefore, we would like to acknowledge the support and inspiration we received over many years from our colleagues and teachers, in particular Niels C. Lind, Palle Thoft-Christensen, Dan M. Frangopol, Mircea D. Grigoriu, Rudiger Rackwitz, Giuliano Augusti, Robert Melchers, Michel Ghosn, Fred Moses, James T. P. Yao, Ted V. Galambos, M. K. Ravindra, Brent W.
Hall, Robert Sexsmith, Yozo Fujino, Hitoshi Furuta, Gerhard Schueller, Y. K. Wen, Wilson Tang, Alfredo Ang, C. Allin Cornell, Bruce Ellingwood, Janusz Murzewski, John M. Kulicki, Dennis Mertz, Jozef Kwiatkowski, and Tadeusz Nawrot.

We are grateful to many former and current doctoral students, particularly Rajeh Al-Zaid, Hassan Tantawi, Abdulrahim Arafah, Juan A. Megarejo, Jianhua Zhou, Jack R. Kayser, Shuenn Chern Ting, Sami W. Tabsh, Eui-Seung Hwang, Young-Kyun Hong, Naji Arwashan, Ahmed S. Yamani, Hani H. Nassif, Jeffrey A. Laman, Hassan H. El-Hor, Sangjin Kim, Vijay Saraf, and Chan-Hee Park. We also thank Dr. Maria Szerszen, Kathleen Seavers, Tadeusz Alberski, Ahmet Sanli, Junsik Eom, Chamgshiou Way, and Gustavo Parra-Montesinos, who helped with the preparation of some of the text, figures, and examples. A special thanks is in order for the four external reviewers who read the manuscript and made valuable suggestions and comments for improvement. Finally, we would like to thank our wives, Jolanta and Karen, for their patience and support.

Andrzej S. Nowak
Kevin R. Collins

CONTENTS

1 Introduction
  1.1 Overview
  1.2 Objectives of the Book
  1.3 Possible Applications
  1.4 Historical Perspective
  1.5 Uncertainties in the Building Process

2 Random Variables
  2.1 Basic Definitions
      2.1.1 Sample Space and Event / 2.1.2 Axioms of Probability / 2.1.3 Random Variables / 2.1.4 Basic Functions
  2.2 Properties of Probability Functions (CDF, PDF, and PMF)
  2.3 Parameters of a Random Variable
      2.3.1 Basic Parameters / 2.3.2 Sample Parameters / 2.3.3 Standard Form
  2.4 Common Random Variables
      2.4.1 Uniform Random Variables / 2.4.2 Normal Random Variables / 2.4.3 Lognormal Random Variables / 2.4.4 Gamma Distribution / 2.4.5
Extreme Type I (Gumbel Distribution, Fisher-Tippett Type I) / 2.4.6 Extreme Type II / 2.4.7 Extreme Type III (Weibull Distribution) / 2.4.8 Poisson Distribution
  2.5 Probability Paper
  2.6 Interpretation of Test Data Using Statistics
  2.7 Conditional Probability
  2.8 Random Vectors
  2.9 Correlation
      2.9.1 Basic Definitions / 2.9.2 Statistical Estimate of the Correlation Coefficient
  2.10 Bayesian Updating
      2.10.1 Bayes' Theorem / 2.10.2 Applications of Bayes' Theorem / 2.10.3 Continuous Case
  Problems

3 Functions of Random Variables
  3.1 Linear Functions of Random Variables
  3.2 Linear Functions of Normal Variables
  3.3 Product of Lognormal Random Variables
  3.4 Nonlinear Function of Random Variables
  3.5 Central Limit Theorem
      3.5.1 Sum of Random Variables / 3.5.2 Product of Random Variables
  Problems

4 Simulation Techniques
  4.1 Monte Carlo Methods
      4.1.1 Basic Concept / 4.1.2 Generation of Uniformly Distributed Random Numbers / 4.1.3 Generation of Standard Normal Random Numbers / 4.1.4 Generation of Normal Random Numbers / 4.1.5 Generation of Lognormal Random Numbers / 4.1.6 General Procedure for Generating Random Numbers from an Arbitrary Distribution / 4.1.7 Accuracy of Probability Estimates / 4.1.8 Simulation of Correlated Normal Random Variables
  4.2 Latin Hypercube Sampling
  4.3 Rosenblueth's 2K+1 Point Estimate Method
  Problems

5 Structural Safety Analysis
  5.1 Limit States
      5.1.1 Definition of Failure / 5.1.2 Limit State Functions (Performance Functions)
  5.2 Fundamental Case
      5.2.1 Probability of Failure / 5.2.2 Space of State Variables
  5.3 Reliability Index
      5.3.1 Reduced Variables / 5.3.2 General Definition of the Reliability Index / 5.3.3 First-Order Second-Moment Reliability Index / 5.3.4 Comments on the First-Order Second-Moment Mean Value Index / 5.3.5 Hasofer-Lind Reliability Index
  5.4 Rackwitz-Fiessler Procedure
      5.4.1 Modified Matrix Procedure / 5.4.2 Graphical Procedure / 5.4.3 Correlated Random Variables
  5.5 Reliability Analysis Using Simulation
  Problems

6 Structural Load Models
  6.1
Types of Load
  6.2 General Load Models
  6.3 Dead Load
  6.4 Live Load in Buildings
      6.4.1 Design (Nominal) Live Load / 6.4.2 Sustained (Arbitrary Point-in-Time) Live Load / 6.4.3 Transient Live Load / 6.4.4 Maximum Live Load
  6.5 Live Load for Bridges
  6.6 Environmental Loads
      6.6.1 Wind Load / 6.6.2 Snow Load / 6.6.3 Earthquake
  6.7 Load Combinations
      6.7.1 Time Variation / 6.7.2 Borges Model for Load Combination / 6.7.3 Turkstra's Rule / 6.7.4 Load Coincidence Method
  Problems

7 Models of Resistance
  7.1 Parameters of Resistance
  7.2 Steel Components
      7.2.1 Hot-Rolled Steel Beams (Noncomposite Behavior) / 7.2.2 Composite Steel Girders / 7.2.3 Shear Capacity of Steel Beams / 7.2.4 Steel Columns / 7.2.5 Cold-Formed Members
  7.3 Aluminum Structures
  7.4 Reinforced and Prestressed Concrete Components
      7.4.1 Concrete Elements in Buildings / 7.4.2 Concrete Elements in Bridges / 7.4.3 Resistance of Components with High-Strength Prestressing Bars
  7.5 Wood Components

8 Design Codes
  8.1 Overview
  8.2 Role of a Code in the Building Process
  8.3 Code Levels
  8.4 Code Development Procedure
      8.4.1 Scope of the Code / 8.4.2 Code Objective / 8.4.3 Demand Function and Frequency of Demand / 8.4.4 Closeness to the Target (Space Metric) / 8.4.5 Code Format
  8.5 Calibration of Partial Safety Factors for a Level I Code
  8.6 Development of a Bridge Design Code
      8.6.1 Scope / 8.6.2 Objectives / 8.6.3 Frequency of Demand / 8.6.4 Target Reliability Level / 8.6.5 Load and Resistance Factors
  8.7 Conclusions
  Problems

9 System Reliability
  9.1 Elements and Systems
  9.2
Series and Parallel Systems
      9.2.1 Series Systems / 9.2.2 Parallel Systems / 9.2.3 Hybrid (Combined) Systems
  9.3 Reliability Bounds for Structural Systems
      9.3.1 Boolean Variables / 9.3.2 Series Systems with Positive Correlation / 9.3.3 Parallel Systems with Positive Correlation / 9.3.4 Ditlevsen Bounds for a Series System
  9.4 Systems with Equally Correlated Elements
      9.4.1 Series Systems with Equally Correlated Elements / 9.4.2 Parallel Systems with Equally Correlated Ductile Elements
  9.5 Systems with Unequally Correlated Elements
      9.5.1 Parallel System with Ductile Elements / 9.5.2 Series System
  9.6 Summary
  Problems

10 Uncertainties in the Building Process
  10.1 Overview
      10.1.1 Human Error / 10.1.2 Categories of Uncertainty / 10.1.3 Theoretical and Actual Failure Rates / 10.1.4 Previous Research
  10.2 Classification of Errors
  10.3 Error Surveys
  10.4 Approach to Errors
  10.5 Sensitivity Analysis
      10.5.1 Procedure / 10.5.2 Bridge Slab / 10.5.3 Beam-to-Column Connection / 10.5.4 Timber Bridge Deck / 10.5.5 Partially Rigid Frame Structure / 10.5.6 Rigid Frame Structure / 10.5.7 Noncomposite Steel Bridge Girder / 10.5.8 Composite Steel Bridge Girder / 10.5.9 Reinforced Concrete T-Beam / 10.5.10 Prestressed Concrete Bridge Girder / 10.5.11 Composite Steel Bridge System
  10.6 Other Approaches
  10.7 Conclusions

Bibliography
Appendix A  Acronyms
Appendix B  Values of the CDF Φ(z) for the Standard Normal Probability Distribution
Appendix C  Values of the Gamma Function Γ(k) for 1 ≤ k ≤ 2
Index

1
INTRODUCTION

1.1 OVERVIEW

Many sources of uncertainty are inherent in structural design. Despite what we often think, the parameters of the loading and the load-carrying capacities of structural members are not deterministic quantities (i.e., quantities which are perfectly known).
They are random variables, and thus absolute safety (or zero probability of failure) cannot be achieved. Consequently, structures must be designed to serve their function with a finite probability of failure.

To illustrate the distinction between deterministic and random quantities, consider the loads imposed on a bridge by car and truck traffic. The load on the bridge at any time depends on many factors, such as the number of vehicles on the bridge and the weights of the vehicles. As we all know from daily experience, cars and trucks come in many shapes and sizes. Furthermore, the number of vehicles that pass over a bridge fluctuates, depending on the time of day. Since we don't know the specific details about each vehicle that passes over the bridge or the number of vehicles on the bridge at any time, there is some uncertainty about the total load on the bridge. Hence the load is a random variable.

Society expects buildings and bridges to be designed with a reasonable safety level. In practice, these expectations are achieved by following code requirements specifying design values for minimum strength, maximum allowable deflection, and so on. Code requirements have evolved to include design criteria that take into account some of the sources of uncertainty in design. Such criteria are often referred to as reliability-based design criteria. The objective of this book is to provide the background needed to understand how these criteria were developed and to provide a basic tool for structural engineers interested in applying this new approach to other situations.

The reliability of a structure is its ability to fulfill its design purpose for some specified design lifetime. Reliability is often understood to equal the probability that a structure will not fail to perform its intended function. The term "failure" does not necessarily mean catastrophic failure but is used to indicate that the structure does not perform as desired.
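The idea that load and resistance are random variables can be made concrete with a short simulation. The sketch below is illustrative only (the distributions and their parameters are assumed, not taken from the book): it treats the load effect Q and the resistance R as normal random variables and estimates the probability of failure P(R < Q) by simple Monte Carlo sampling, the method reviewed in Chapter 4.

```python
# Sketch (assumed distributions, not the book's data): failure occurs when
# resistance R is less than load effect Q; the probability of this event is
# estimated by counting failures over many random trials.
import random

random.seed(0)

def estimate_pf(n=200_000):
    failures = 0
    for _ in range(n):
        r = random.gauss(mu=5.0, sigma=0.5)   # resistance (assumed normal)
        q = random.gauss(mu=3.0, sigma=0.7)   # load effect (assumed normal)
        if r < q:
            failures += 1
    return failures / n

pf = estimate_pf()
print(f"estimated probability of failure: {pf:.4f}")  # small but nonzero
```

Even with a large mean safety margin, the estimated probability of failure is nonzero, which is exactly the point made above: absolute safety cannot be achieved.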
1.2 OBJECTIVES OF THE BOOK

This book attempts to answer the following questions:

How can we measure the safety of structures? Safety can be measured in terms of reliability or the probability of uninterrupted operation. The complement to reliability is the probability of failure. As we discuss in later chapters, it is often convenient to measure safety in terms of a reliability index instead of probability.

How safe is safe enough? As mentioned earlier, it is impossible to have an absolutely safe structure. Every structure has a certain nonzero probability of failure. Conceptually, we can design the structure to reduce the probability of failure, but increasing the safety (or reducing the probability of failure) beyond a certain optimum level is not always economical. This optimum safety level has to be determined.

How does a designer implement the optimum safety level? Once the optimum safety level is determined, appropriate design provisions must be established so that structures will be designed accordingly. Implementation of the target reliability can be accomplished through the development of probability-based design codes.

1.3 POSSIBLE APPLICATIONS

Structural reliability concepts can be applied to the design of new structures and the evaluation of existing ones. A new generation of design codes is based on probabilistic models of loads and resistances. Examples include the American Institute of Steel Construction Load and Resistance Factor Design (LRFD) code for steel buildings (AISC, 1986, 1994), the Ontario Highway Bridge Design Code for bridges (OHBDC, 1979, 1983, 1991), the American Association of State Highway and Transportation Officials LRFD code (AASHTO, 1994, 1998), the Canadian Highway Bridge Design Code (1998), and many European codes (e.g., CEC, 1984).¹ In general, reliability-based design codes are efficient because they make it easier to achieve either of the following goals:

For a given cost, design a more reliable structure.
For a given reliability, design a more economical structure.

¹Many acronyms are used in structural engineering and structural reliability. Appendix A lists acronyms used in this book.

The reliability of a structure can be considered as a rational evaluation criterion. It provides a good basis for decisions about repair, rehabilitation, or replacement. A structure can be condemned when the nominal value of load exceeds the nominal load-carrying capacity. But in most cases a structure is a system of components, and failure of one component does not necessarily mean failure of the structural system. When a component reaches its ultimate capacity, it may continue to resist the load while loads are redistributed to other components. System reliability provides a methodology to establish the relationship between the reliability of an element and the reliability of the system.

1.4 HISTORICAL PERSPECTIVE

Many of the current approaches to achieving structural safety evolved over many centuries. Even ancient societies attempted to protect the interests of their citizens through regulations. The minimum safety requirements were enforced by specifying severe penalties for builders of structures that did not perform adequately. The earliest known building code was used in Mesopotamia. It was issued by Hammurabi, the king of Babylonia, who died about 1750 B.C. The "code provisions" were carved in stone, and these stone carvings are preserved in the Louvre in Paris, France. (Figure 1.1 is a picture of this "document.") The responsibilities were defined depending on the consequences of failure. If a building collapsed killing a son of the owner, then the builder's son would be put to death; if the owner's slave was killed, then the builder's slave was executed; and so on.

For centuries, the knowledge of design and construction was passed from one generation of builders to the next. A master builder often tried to copy a successful structure.
Heavy stone arches often had a considerable safety reserve. Attempts to increase the height or span were based on intuition. The procedure was essentially trial and error. If a failure occurred, that particular design was abandoned or modified. As time passed, the laws of nature became better understood; mathematical theories of material and structural behavior evolved, providing a more rational basis for structural design. In turn, these theories furnished the necessary framework in which probabilistic methods could be applied to quantify structural safety and reliability.

The first mathematical formulation of the structural safety problem can be attributed to Mayer (1926), Streletzki (1947), and Wierzbicki (1936). They recognized that load and resistance parameters are random variables and therefore, for each structure, there is a finite probability of failure. Their concepts were further developed by Freudenthal in the 1950s (e.g., Freudenthal, 1956). The formulations involved convolution functions that were too difficult to evaluate by hand. The practical applications of reliability analysis were not possible until the pioneering work of Cornell and Lind in the late 1960s and early 1970s. Cornell proposed a second-moment reliability index in 1969. Hasofer and Lind formulated a definition of a format-invariant reliability index in 1974. An efficient numerical procedure was formulated for calculation of the reliability index by Rackwitz and Fiessler (1978). Other important contributions have been made by Ang, Veneziano, Rosenblueth,

FIGURE 1.1 The Code of Hammurabi. The engraved image at the top shows King Hammurabi receiving the Code from the Sun God. The code itself is inscribed on the sides of the stone below the image. (Photograph reproduced with permission of the Musée du Louvre and the Réunion des Musées Nationaux Agence Photographique.)
Esteva, Turkstra, Moses, Grigoriu, Der Kiureghian, Ellingwood, Corotis, Frangopol, Fujino, Furuta, Yao, Brown, Ayyub, Blockley, Stubbs, Mathieu, Melchers, Augusti, Shinozuka, and Wen. By the end of the 1970s, the reliability methods had reached a degree of maturity, and now they are readily available for applications. They are used primarily in the development of new design codes.

The theoretical work developed has been presented in books by Thoft-Christensen and Baker (1982), Augusti, Barrata, and Casciati (1984), Madsen, Krenk, and Lind (1985), Ang and Tang (1984), Melchers (1987), Thoft-Christensen and Murotsu (1986), and Ayyub and McCuen (1997), to name just a few. Other books available in the area of structural reliability include Murzewski (1989) and Marek, Gustar, and Anagnos (1996).

It is important to note that most reliability-based codes in current use apply reliability concepts to the design of structural members, not structural systems. In the coming years, one can expect a further acceleration in the development of analytical methods used to model the behavior of structural systems. It is expected that this focus on system behavior will lead to additional applications of reliability theory at the system level.

1.5 UNCERTAINTIES IN THE BUILDING PROCESS

The building process includes planning, design, construction, operation/use, and demolition. All components of the process involve various uncertainties. These uncertainties can be put into two major categories with regard to causes: natural and human. Natural causes of uncertainty result from the unpredictability of loads such as wind, earthquake, snow, ice, water pressure, or live load. Another source of uncertainty attributable to natural causes is the mechanical behavior of the materials used to construct the building. For example, material properties of concrete can vary from batch to batch and also within a particular batch.
Human causes include intended and unintended departures from an optimum design. Examples of these uncertainties during the design phase include approximations, calculation errors, communication problems, omissions, lack of knowledge, and greed. Similarly, during the construction phase, uncertainties arise due to the use of inadequate materials, methods of construction, bad connections, or changes without analysis. During operation/use, the structure can be subjected to overloading, inadequate maintenance, misuse, or even an act of sabotage.

Because of these uncertainties, loads and resistances (i.e., load-carrying capacities of structural elements) are random variables. It is convenient to consider a random parameter (load or resistance) as a function of three factors:

Physical variation factor. This factor represents the variation of load and resistance that is inherent in the quantity being considered. Examples include natural variation of wind pressure, earthquake, live load, and material properties.

Statistical variation factor. This factor represents uncertainty arising from estimating parameters based on a limited sample size. In most situations, the natural variation (physical variation factor) is unknown and it is quantified by examining limited sample data. Therefore, the larger the sample size, the smaller the uncertainty described by the statistical variation factor.

Model variation factor. This factor represents the uncertainty due to simplifying assumptions, unknown boundary conditions, and unknown effects of other variables. It can be considered as a ratio of the actual strength (test result) and the strength predicted using the model.

How these three factors come into a reliability analysis is discussed in later chapters.

2
RANDOM VARIABLES

THE PURPOSE OF this chapter is to review aspects of the theory of probability and statistics needed for reliability analysis of structures.
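As a small numerical preview of the quantities reviewed in this chapter, the sketch below (with made-up cylinder-strength data, not the book's) computes the sample mean, sample standard deviation, and coefficient of variation for a set of test results of the kind described in Example 2.1:

```python
# Sketch with hypothetical cylinder strengths in ksi; computes the sample
# mean, the sample standard deviation (n - 1 denominator), and the
# coefficient of variation, COV = standard deviation / mean.
import statistics

strengths = [4.2, 3.8, 4.5, 4.0, 4.3, 3.9, 4.1, 4.4]  # hypothetical f'c data

mean = statistics.mean(strengths)
std = statistics.stdev(strengths)   # sample standard deviation
cov = std / mean

print(f"mean = {mean:.3f} ksi, std = {std:.3f} ksi, COV = {cov:.3f}")
```

The coefficient of variation is the dimensionless measure of scatter used throughout the book to characterize load and resistance parameters.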
2.1 BASIC DEFINITIONS

2.1.1 Sample Space and Event

The concepts of sample space and event can best be demonstrated by considering an experiment. For example, the experiment might test material strength, measure the depth of a beam, or determine the occurrence (or nonoccurrence) of a truck on a particular bridge during a specified period of time. In these experiments, the outcomes are unpredictable. All possible outcomes of an experiment comprise a sample space. Combinations of one or more of the possible outcomes or ranges of outcomes can be defined as events. To further illustrate these concepts, consider the following two examples.

EXAMPLE 2.1. Consider an experiment in which some number (n) of standard concrete cylinders is tested to determine their compressive strength, f'c, as shown in Figure 2.1. Assume that the test results are

x1, x2, x3, ..., xn

where xi is the outcome (i.e., the experimental value of f'c) of the i-th cylinder. For this experiment, the sample space is an interval including all positive numbers because the compressive strength can be any positive value. The defined sample space for concrete cylinder tests is called a continuous sample space. Theoretically, even f'c = 0 is possible (but unlikely) when the mix is made without any cement. The actual compressive strength varies randomly, and n test results supply only a limited amount of information about its variation.

FIGURE 2.1 Concrete cylinder test considered in Example 2.1: a standard cylinder is loaded to failure and its stress-strain curve is recorded.

Events E1, E2, ..., Em can be defined as ranges of values (or intervals) of compressive strength. For example, E1 could be defined as the event when the compressive strength is between 0 kips per square inch (ksi) and 1 ksi. Similarly, E2 could be defined as the event when the strength is between 1 ksi and 2 ksi.

EXAMPLE 2.2. Consider another experiment.
A reinforced concrete beam is tested to determine one of the two possible modes of failure:

Mode 1: failure occurs by crushing of concrete.
Mode 2: failure occurs by yielding of steel.

In this case, the sample space consists of two discrete failure modes: mode 1 and mode 2. This sample space has a finite number of elements and it is called a discrete sample space. Each mode of failure can be considered an event.

Two special types of events should be mentioned. A certain event is defined as consisting of the entire sample space. The implication of this definition is that a certain event will definitely occur. In Example 2.1, a certain event would be when the compressive strength data are greater than or equal to zero. An impossible event is defined as an outcome that cannot occur. Again, in the context of Example 2.1, an impossible event would be when the compressive strength is less than zero.

2.1.2 Axioms of Probability

The following axioms of classical probability theory are included only as a quick reference. A more comprehensive discussion of probability can be found in any introductory-level probability textbook (e.g., Miller, Freund, and Johnson, 1990; Milton and Arnold, 1995; Montgomery and Runger, 1998; Ross, 1998). Let E represent an event, and let Ω represent a sample space. The notation P( ) is used to denote a probability function defined on events in the sample space.

AXIOM 1. For any event E,

0 ≤ P(E) ≤ 1     (2.1)

where P(E) = probability of event E. In words, the probability of any event must be between 0 and 1 inclusive.

AXIOM 2.

P(Ω) = 1     (2.2)

In words, this axiom states that the probability of occurrence of an event corresponding to the entire sample space (i.e., a certain event) is equal to 1.

AXIOM 3. Consider n mutually exclusive events E1, E2, ..., En. Then

P(E1 ∪ E2 ∪ ... ∪ En) = Σ (i=1 to n) P(Ei)     (2.3)

where P(E1 ∪ E2 ∪ ... ∪ En) represents the probability of the union of all events E1, E2, ..., En. In other words, it represents the probability of occurrence of E1 or E2 or . .
. or Ey Mutually exclusive events exist when the occurrence of any one event excludes the occurrence of the others. Two or more mutually exclusive events cannot occur simultaneously. For example, returning to Example 2.1, if we denote compressive strength by ff, an example of mutually exclusive events would be as follows: E; = {0 < f{ < 1000 psi} (24a) Ey = {1000 psi < f% < 2000 psi} (2.4b) Es = {2000 < f, < 3000 psi} (2.4c) Ex = {ff > 3000 psi} (2.44) For these four events, the union of all events is the sample space defined earlier: 4 UE = {0 3000 psi In this case, the random variable can assume only four discrete integer values. 2.1.4 Basic Functions The probability mass function (PMP) is defined for discrete random variables as follows: px (x) = probability that a discrete random variable X is equal to a specific value x where x is a real number. Note that the random variable (with an uncertain value) is denoted by a capital letter, whereas a specific value or realization of the variable is denoted by a lowercase letter. Mathematically, Px(x) = P(X = x) (2.8) For example, if X is a discrete random variable describing conerete strength (f!) as defined in Eq. 2.7, then the values of the PMF function would be px(1) =P(X = 1) (2.9a) Px(2) = P(X = 2) (2.9) 2.1 Basic Definitions Mt px(3) = P(X = 3) (2.9) px(4) = P(X = 4) (2.94) Equations 2.9 are represented graphically in Figure 2.3 for a hypothetical set of values of the PMF function. The cumulative distribution function (CDF) is defined for both discrete and continuous random variables as follows: Fx(x) = the total sum (or integral) of all probability functions (continuous and discrete) corresponding to values less than or equal to x. Mathematically, Fx (x) = P(X < x) (2.10) Consider the f! intervals previously defined by Eqs. 2.9. 
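A discrete CDF is just a running sum of the PMF, per Eq. 2.10. The sketch below is a minimal illustration using a hypothetical PMF (the same four values used in the example that follows); the probabilities are illustrative, not data from the text:

```python
import itertools

# Hypothetical PMF for the four-valued discrete variable X (anticipates Eq. 2.11)
pmf = {1: 0.05, 2: 0.20, 3: 0.65, 4: 0.10}

# Eq. 2.10: F_X(x) = sum of p_X(x_i) over all x_i <= x, so the CDF is the
# cumulative sum of the PMF values taken in increasing order of x.
xs = sorted(pmf)
cdf = dict(zip(xs, itertools.accumulate(pmf[x] for x in xs)))
```

By Axioms 2 and 3, the last CDF value must equal 1, since the four events exhaust the sample space.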
Let X be a discrete random variable and assume the values of the probability mass function are as follows:

  p_X(1) = 0.05    (2.11a)
  p_X(2) = 0.20    (2.11b)
  p_X(3) = 0.65    (2.11c)
  p_X(4) = 0.10    (2.11d)

The corresponding CDF is shown in Figure 2.4. Note that the CDF is always a nondecreasing function of x.

FIGURE 2.3 A probability mass function.

FIGURE 2.4 A cumulative distribution function for a discrete random variable.

For continuous random variables, the probability density function (PDF) is defined as the first derivative of the cumulative distribution function. The PDF [f_X(x)] and the CDF [F_X(x)] for continuous random variables are related as follows:

  f_X(x) = dF_X(x)/dx    (2.12)

  F_X(x) = ∫₋∞ˣ f_X(ξ) dξ    (2.13)

To illustrate these relationships, consider a continuous random variable X. The PDF and CDF might look like those shown in Figures 2.5 and 2.6, respectively. Equation 2.13 represents the shaded area under the PDF as shown in Figure 2.7 for the case x = a.

FIGURE 2.5 Example of a PDF.

FIGURE 2.6 Example of a CDF.

FIGURE 2.7 Relationship between CDF and PDF described by Eq. 2.13.

2.2 PROPERTIES OF PROBABILITY FUNCTIONS (CDF, PDF, AND PMF)

Several important properties of the cumulative distribution function are enumerated below. Any function which satisfies these six conditions can be considered a CDF.

1. The definition of a CDF is the same for both discrete and continuous random variables.
2. The CDF is a positive, nondecreasing function whose value is between 0 and 1:

  0 ≤ F_X(x) ≤ 1

2.3 PARAMETERS OF A RANDOM VARIABLE

The variance of a discrete random variable X is

  σ²_X = Σᵢ (xᵢ − μ_X)² p_X(xᵢ)  (discrete random variable)    (2.21b)

An important relationship exists among the mean, variance, and second moment of a random variable X:

  σ²_X = E(X²) − μ²_X    (2.22)

The standard deviation of X is defined as the positive square root of the variance:

  σ_X = √(σ²_X)    (2.23)

The nondimensional coefficient of variation, V_X, is defined as the standard deviation divided by the mean:

  V_X = σ_X / μ_X    (2.24)

This parameter is always taken to be positive by convention even though the mean may be negative.

2.3.2 Sample Parameters

The parameters defined in Section 2.3.1 are the theoretical properties of the random variable because they are all calculated based on knowledge of the probability distributions of the variable. In many practical applications, we do not know the true distribution, and we need to estimate parameters using test data. If a set of n observations {x₁, x₂, ..., xₙ} is obtained for a particular random variable X, then the true mean μ_X can be approximated by the sample mean X̄, and the true standard deviation σ_X can be approximated by the sample standard deviation s_X. The sample mean is calculated as

  X̄ = (1/n) Σᵢ₌₁ⁿ xᵢ    (2.25)

The sample standard deviation is calculated as

  s_X = √[ (1/(n − 1)) Σᵢ₌₁ⁿ (xᵢ − X̄)² ]    (2.26)

2.3.3 Standard Form

Let X be a random variable. The standard form of X, denoted by Z, is defined as

  Z = (X − μ_X) / σ_X    (2.27)

The mean of Z is calculated as follows. We note that the mathematical expectation (mean value) of an arbitrary function, g(X), of the random variable X is defined as

  μ_g(X) = E[g(X)] = ∫₋∞^∞ g(x) f_X(x) dx    (2.28)

Using this definition with Z = g(X), we can show that

  μ_Z = (1/σ_X)[E(X) − E(μ_X)] = (1/σ_X)(μ_X − μ_X) = 0    (2.29)

and

  σ²_Z = E(Z²) − μ²_Z = E[((X − μ_X)/σ_X)²] = (1/σ²_X) E[(X − μ_X)²] = σ²_X / σ²_X = 1

Thus the mean of the standard form of a random variable is 0 and its variance is 1.

2.4 COMMON RANDOM VARIABLES

Any random variable is defined by its cumulative distribution function (CDF), F_X(x).
The probability density function, f_X(x), of a continuous random variable is the first derivative of F_X(x). The most important variables used in structural reliability analysis are as follows: uniform, normal, lognormal, gamma, extreme Type I, extreme Type II, extreme Type III, and Poisson. Each of these is briefly described in the following sections.

2.4.1 Uniform Random Variables

For a uniform random variable, the PDF has a constant value for all possible values of the random variable within a range [a, b]. This means that all values in the range are equally likely to occur. Mathematically, the PDF is defined as follows:

  f_X(x) = 1/(b − a)  for a ≤ x ≤ b;  f_X(x) = 0 otherwise

The mean and variance of a uniform random variable are

  μ_X = (a + b)/2

  σ²_X = (b − a)²/12    (2.33)

2.4.2 Normal Random Variables

The normal random variable is probably the most important distribution in structural reliability theory. The PDF for a normal random variable X is

  f_X(x) = (1/(σ_X √(2π))) exp[ −(1/2) ((x − μ_X)/σ_X)² ]    (2.34)

where μ_X and σ_X are the mean and standard deviation, respectively. Note that the term in parentheses in Eq. 2.34 is in standard form as presented in Eq. 2.27. Figure 2.10 shows the general shape of both the PDF and CDF of a normal random variable.

FIGURE 2.10 PDF and CDF of a normal random variable.

There is no closed-form solution for the CDF of a normal random variable. However, tables have been developed to provide values of the CDF for the special case in which μ_X = 0 and σ_X = 1. If we substitute these values in Eq. 2.34, we get the PDF for the standard normal variable Z, which is often denoted by φ(z):

  φ(z) = (1/√(2π)) exp(−z²/2) = f_Z(z)    (2.35)

The CDF of the standard normal variable is typically denoted by Φ(z). Many popular mathematics and spreadsheet programs have a standard normal CDF function built in. Values of Φ(z) are listed in Appendix B for values of z ranging from 0 to −8.9.
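Appendix B and built-in spreadsheet functions aside, Φ(z) can also be computed from the error function, which most computing environments provide. The short sketch below defines φ(z) and Φ(z) this way; it is a minimal illustration, not the tabulation method used in the text:

```python
import math

def phi(z):
    """Standard normal PDF, Eq. 2.35."""
    return math.exp(-z * z / 2.0) / math.sqrt(2.0 * math.pi)

def Phi(z):
    """Standard normal CDF, expressed through the error function erf."""
    return 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))
```

For example, Phi(-1.0) is about 0.1587 and phi(0.0) about 0.3989, matching the tabulated values 0.159 and 0.399 used in the worked examples below.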
Values of Φ(z) for z > 0 can also be obtained from Appendix B by applying the symmetry property of the normal distribution:

  Φ(z) = 1 − Φ(−z)    (2.36)

Figures 2.11 and 2.12 show the shapes of φ(z) and Φ(z).

The probability information for the standard normal random variable can be used to obtain the CDF and PDF values for an arbitrary normal random variable by performing a simple coordinate transformation. Let X be a general normal random variable, and let Z be the standard form of X. Then by rearranging Eq. 2.27 we can show that

  X = μ_X + Z σ_X    (2.37)

By the definition of the CDF, we can write

  F_X(x) = P(X ≤ x) = P(μ_X + Z σ_X ≤ x) = P(Z ≤ (x − μ_X)/σ_X)    (2.38)

and thus

  F_X(x) = Φ((x − μ_X)/σ_X)    (2.39)

Similarly, for the PDF,

  f_X(x) = (1/σ_X) φ((x − μ_X)/σ_X)    (2.40)

The symmetry of the normal PDF about the mean also implies that, for any amount a,

  F_X(μ_X + a) = 1 − F_X(μ_X − a)    (2.42)

EXAMPLE 2.4. Let X be a normal random variable with mean μ_X = 1500 and standard deviation σ_X = 200. Calculate (a) F_X(1300), (b) F_X(1900), (c) F_X(1700), (d) f_X(1300), and (e) f_X(1500).

Solution
(a) From Eq. 2.39,

  F_X(1300) = Φ((1300 − 1500)/200) = Φ(−1)

From Appendix B, Φ(−1) = 0.159.
(b) From Eq. 2.39,

  F_X(1900) = Φ((1900 − 1500)/200) = Φ(+2)

From Eq. 2.36, Φ(2) = 1 − Φ(−2). From Appendix B, Φ(−2) = 0.228 × 10⁻¹. Therefore, Φ(2) = 1 − (0.228 × 10⁻¹) = 0.977.
(c) Observe that x = 1700 is 200 units away from the mean value of 1500. Using Eq. 2.42 we can write

  F_X(1500 + 200) = 1 − F_X(1500 − 200) = 1 − 0.159 (obtained in part a above) = 0.841

(d) From Eq. 2.40,

  f_X(1300) = (1/200) φ((1300 − 1500)/200) = (1/200) φ(−1)

Using Eq. 2.35, φ(−1) = 0.242. Therefore, f_X(1300) = 0.00121.
(e) Using Eq. 2.40,

  f_X(1500) = (1/200) φ((1500 − 1500)/200) = (1/200) φ(0)

Using Eq. 2.35, φ(0) = 0.399. Therefore, f_X(1500) = 0.00199.

FIGURE 2.15 PDF of normal random variable in Example 2.4.

As we will see later, it is often necessary to calculate the inverse of the CDF of the standard normal distribution, Φ(z). Although the inverse does not exist in closed form, an approximate formula for the inverse does exist, and it gives reasonable results over a wide range of probability values. Let p = Φ(z). The inverse problem is to find z = Φ⁻¹(p). The following formula can be used if p is less than or equal to 0.5:

  z = Φ⁻¹(p) = −t + (c₀ + c₁t + c₂t²)/(1 + d₁t + d₂t² + d₃t³)  for p ≤ 0.5    (2.43)

where

  c₀ = 2.515517  c₁ = 0.802853  c₂ = 0.010328
  d₁ = 1.432788  d₂ = 0.189269  d₃ = 0.001308

and

  t = √(−ln(p²))    (2.44)

For p > 0.5, Φ⁻¹ is calculated for p* = (1 − p), and then we use the following relationship:

  z = Φ⁻¹(p) = −Φ⁻¹(p*)    (2.45)

2.4.3 Lognormal Random Variables

The random variable X is a lognormal random variable if Y = ln(X) is normally distributed. A lognormal random variable is defined for positive values only (x > 0). The PDF and CDF can be calculated using the functions φ(z) and Φ(z) for the standard normal random variable Z as follows:

  F_X(x) = P(X ≤ x) = P(ln X ≤ ln x) = P(Y ≤ y) = F_Y(y)    (2.46)

Since Y is normally distributed, we can use the standard normal functions as discussed in Section 2.4.2. Specifically,

  F_X(x) = F_Y(y) = Φ((y − μ_Y)/σ_Y)    (2.47)

where y = ln(x), μ_Y = μ_ln(X) = mean value of ln(X), and σ_Y = σ_ln(X) = standard deviation of ln(X). These quantities can be expressed as functions of μ_X, σ_X, and V_X using the following formulas:

  σ²_ln(X) = ln(V²_X + 1)    (2.48)

  μ_ln(X) = ln(μ_X) − (1/2) σ²_ln(X)    (2.49)

If V_X is less than 0.2, the following approximations can be used to find σ²_ln(X) and μ_ln(X):

  σ_ln(X) ≈ V_X    (2.50)

  μ_ln(X) ≈ ln(μ_X)    (2.51)

For the PDF, using Eq. 2.12, we can show that

  f_X(x) = (d/dx) F_X(x) = (1/(x σ_ln(X))) φ((ln(x) − μ_ln(X))/σ_ln(X))    (2.52)

The general shape of the PDF for a lognormal variable is shown in Figure 2.16.

FIGURE 2.16 PDF of a lognormal random variable.

The lognormal distribution is widely used in structural reliability analyses. The following example illustrates its use.

EXAMPLE 2.5. Let X be a lognormal random variable with a mean value of 250 and a standard deviation of 30.
Calculate F_X(200) and f_X(200).

Solution

  V_X = σ_X/μ_X = 30/250 = 0.12

  σ²_ln(X) = ln(V²_X + 1) = 0.0143;  σ_ln(X) = 0.1196

  μ_ln(X) = ln(μ_X) − (1/2) σ²_ln(X) = ln(250) − 0.5(0.0143) = 5.51

  F_X(200) = Φ((ln(200) − μ_ln(X))/σ_ln(X)) = Φ((ln(200) − 5.51)/0.1196) = Φ(−1.77) = 0.0384

  f_X(200) = (1/(200 × 0.1196)) φ((ln(200) − μ_ln(X))/σ_ln(X)) = (1/23.9) φ(−1.77) = 0.0833/23.9 = 0.00348

2.4.4 Gamma Distribution

The PDF of a gamma random variable is useful for modeling sustained live load, such as in buildings. It is defined by

  f_X(x) = λ(λx)^(k−1) e^(−λx) / Γ(k)  for x ≥ 0    (2.53)

where λ and k are distribution parameters. The function Γ(k) is the gamma function, which is defined as

  Γ(k) = ∫₀^∞ e^(−u) u^(k−1) du    (2.54)

For integer values of k,

  Γ(k) = (k − 1)(k − 2)···(2)(1) = (k − 1)!    (2.55a)

  Γ(k + 1) = Γ(k) k    (2.55b)

Values of Γ(k) for 1 ≤ k ≤ 2 are tabulated in Appendix C. Some PDFs for various values of k are shown in Figure 2.17. The mean and variance can be calculated as follows:

  μ_X = k/λ    (2.56)

  σ²_X = k/λ²    (2.57)

FIGURE 2.17 PDFs of gamma random variables.

2.4.5 Extreme Type I (Gumbel Distribution, Fisher-Tippett Type I)

Extreme value distributions, as the name implies, are useful to characterize the probabilistic nature of the extreme values (largest or smallest values) of some phenomenon over time. For example, consider n time intervals. Each interval might be one year. During each year, there will be a maximum value of some phenomenon (such as wind speed). Suppose we want to determine the probability distribution for those largest annual wind speeds. Let W₁, ..., Wₙ be the largest wind speeds in years 1 through n. Then X = max(W₁, W₂, ..., Wₙ) might be characterized as an extreme Type I random variable. The CDF and PDF for this random variable are

  F_X(x) = exp[−e^(−α(x−u))]  for −∞ < x < ∞

  f_X(x) = α e^(−α(x−u)) exp[−e^(−α(x−u))]  for −∞ < x < ∞

where u and α are distribution parameters.

2.4.6 Extreme Type II Distribution

The CDF of an extreme Type II random variable (largest values) is

  F_X(x) = exp[−(u/x)^k]  for x ≥ 0    (2.65)

where u and k are distribution parameters. The PDF for an extreme Type II variable has the general shape shown in Figure 2.19.
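The gamma function Γ appears both in the gamma-distribution moments above and in the extreme-value moment formulas that follow; Python's standard library exposes it as math.gamma. The sketch below checks the identities of Eqs. 2.55 and the moments of Eqs. 2.56-2.57 numerically (the parameter values λ = 2, k = 3 are illustrative, not from the text):

```python
import math

# Eq. 2.55a: for integer k, Gamma(k) = (k - 1)!
g5 = math.gamma(5)                  # should equal 4! = 24

# Eq. 2.55b: Gamma(k + 1) = Gamma(k) * k, checked at k = 1.5
lhs = math.gamma(2.5)
rhs = math.gamma(1.5) * 1.5

# Eqs. 2.56-2.57: mean and variance of a gamma variable (illustrative lam, k)
lam, k = 2.0, 3.0
mean = k / lam
var = k / lam ** 2
```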
The mean and standard deviation can be calculated as follows:

  μ_X = u Γ(1 − 1/k)  for k > 1    (2.66)

  σ²_X = u² [Γ(1 − 2/k) − Γ²(1 − 1/k)]  for k > 2    (2.67)

Note that the coefficient of variation, V_X, is a function of k only. Graphs exist to calculate V_X for any k. (See, for example, Ang and Tang, 1984.)

FIGURE 2.19 PDF for an extreme Type II random variable.

2.4.7 Extreme Type III (Weibull Distribution)

The extreme Type III distribution is defined by three parameters. There is a different function for the largest and the smallest values.

For the largest values, the CDF is defined by

  F_X(x) = exp[−((w − x)/(w − u))^k]  for x ≤ w    (2.68)

where w, u, and k are parameters. The mean and variance are

  μ_X = w − (w − u) Γ(1 + 1/k)    (2.69a)

  σ²_X = (w − u)² [Γ(1 + 2/k) − Γ²(1 + 1/k)]    (2.69b)

For the smallest values, the CDF is defined by

  F_X(x) = 1 − exp[−((x − ε)/(u − ε))^k]  for x ≥ ε    (2.70)

where u, ε, and k are the parameters. For the smallest value case, the mean and variance can be calculated using the following formulas:

  μ_X = ε + (u − ε) Γ(1 + 1/k)    (2.71a)

  σ²_X = (u − ε)² [Γ(1 + 2/k) − Γ²(1 + 1/k)]    (2.71b)

2.4.8 Poisson Distribution

The Poisson distribution is a discrete probability distribution that can be used to calculate the PMF for the number of occurrences of a particular event in a time or space interval (0, t). For example, the Poisson distribution can be used to represent the number of earthquakes that occur within a certain time interval or the number of defects in a certain length of rod. The following important assumptions underlying the Poisson distribution must be considered before it is used in a probabilistic analysis:

- The occurrences of events are independent of each other. In other words, the occurrence or nonoccurrence of events in a prior time interval has no effect on the occurrence of events in the time interval being considered.
- Two or more events cannot occur simultaneously.

Let N be a discrete random variable representing the number of occurrences of an event within a prescribed time (or space) interval (0, t).
Let ν represent the mean occurrence rate of the event. This is usually obtained from statistical data. The Poisson PMF is defined as

  P(N = n in time t) = ((νt)ⁿ/n!) e^(−νt)  for n = 0, 1, 2, ..., ∞    (2.72)

The mean and standard deviation of the random variable N are

  μ_N = νt;  σ_N = √(νt)    (2.73)

An alternate parameter that is often used with the Poisson distribution is the return period (or interval) τ. The return period is simply the reciprocal of the mean occurrence rate ν:

  τ = 1/ν    (2.74)

The return period is a deterministic number representing the average time interval between occurrences of events. The actual time interval between events is a random variable itself.

EXAMPLE 2.6. Suppose that the average occurrence rate of earthquakes (with magnitudes between 5 and 8) in a region surrounding Los Angeles, California, has been determined to be 2.14 earthquakes/year. Determine
(a) The return period for earthquakes in this magnitude range.
(b) The probability of exactly three earthquakes (magnitude between 5 and 8) in the next year.
(c) The annual probability of an earthquake with magnitude between 5 and 8.

Solution
(a) The return period is calculated using Eq. 2.74:

  τ = 1/ν = 1/2.14 = 0.47 year

In other words, on average, there is one earthquake (in the defined magnitude range) about every six months.
(b) The probability of exactly three earthquakes in the next year is determined using Eq. 2.72 with t = 1 and n = 3:

  P(N = 3 in 1 year) = (((2.14)(1))³/3!) e^(−(2.14)(1)) = 0.192

(c) To find the annual probability of an earthquake, it is helpful to interpret the question as follows. The annual probability of an earthquake is the annual probability of at least one earthquake. Therefore,

  P(at least one earthquake) = 1 − P(no earthquakes)

  P(N ≥ 1) = 1 − P(N = 0)

Therefore,

  P(N ≥ 1) = 1 − (((2.14)⁰/0!) e^(−2.14)) = 1 − e^(−2.14) = 0.88

2.5 PROBABILITY PAPER

Probability paper can be used to graphically determine whether a set of experimental data follows a particular probability distribution.
Probability paper for the normal distribution is the most common, and it is commercially available. However, it is possible to construct probability paper for many common distributions using ordinary graph paper. In this section, we discuss how to construct and use normal probability paper.

All cumulative distribution functions are nondecreasing functions of the random variable. For example, the CDF of the normal distribution has an "S-shape" as shown in Figure 2.13 and Figure 2.20. The basic idea behind normal probability paper is to redefine the vertical scale so that the normal CDF will plot as a straight line. Conversely, if a set of data plotted on normal probability paper plots as a straight line, then it is reasonable to model the data using a normal CDF. The slope and y intercept of the graph can be used to determine the mean and standard deviation of the distribution.

Consider a normal random variable X with mean value μ_X and standard deviation σ_X. Now imagine a transformation in which the S-shaped CDF is "straightened" as shown in Figure 2.20. The transformation is such that each point on the original CDF can only move vertically up or down. In commercial normal probability paper, this transformation is accomplished by altering the scale of the vertical axis as shown in Figure 2.21. Observe that the values on the left vertical axis are not evenly spaced. If the coordinate pairs [x, F_X(x)] for a normal random variable X are plotted on normal probability paper, the graph will be a straight line.

Today, with the availability of spreadsheet programs and computers, it is very easy to achieve the same effect as commercial normal probability paper by performing a simple mathematical transformation and plotting a standard linear (x-y) graph. Recall that the standardized form Z of a normal random variable X is

  Z = (X − μ_X)/σ_X = (1/σ_X) X + (−μ_X/σ_X)    (2.75)

FIGURE 2.20 The S-shaped CDF for a normal random variable.
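The "straightening" can be demonstrated numerically: take CDF values of a normal variable, map each probability p back through the inverse standard normal CDF, and the result is linear in x. The sketch below uses Python's statistics.NormalDist (available since Python 3.8) and, for illustration, the normal variable of Example 2.4 (μ = 1500, σ = 200):

```python
from statistics import NormalDist

mu, sigma = 1500.0, 200.0          # illustrative: the variable of Example 2.4
X = NormalDist(mu, sigma)

xs = [1100.0, 1300.0, 1500.0, 1700.0, 1900.0]
ps = [X.cdf(x) for x in xs]                   # S-shaped CDF values
zs = [NormalDist().inv_cdf(p) for p in ps]    # "straightened" values

# Per Eq. 2.75, each z equals (x - mu)/sigma, so z plotted against x is a
# straight line with slope 1/sigma and intercept -mu/sigma.
```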
FIGURE 2.21 Example of normal probability paper. (The left vertical axis shows probability values from 0.0001 to 0.9999 on an uneven scale; the right vertical axis shows the corresponding standard normal variable on an even scale.)

For any realization x of the normal random variable X, the corresponding standardized value is

  z = (x − μ_X)/σ_X = (1/σ_X) x + (−μ_X/σ_X)    (2.76)

The corresponding probability based on the normal CDF would be

  F_X(x) = p = Φ((x − μ_X)/σ_X)    (2.77)

If we take the inverse of Eq. 2.77, we get

  Φ⁻¹(p) = z = (1/σ_X) x + (−μ_X/σ_X)    (2.78)

Equation 2.78 represents a linear relationship between z = Φ⁻¹(p) and x, and this provides the rationale behind normal probability paper. The vertical axis on the right side of Figure 2.21 was obtained by transforming the probability values on the left scale using Eq. 2.78. Observe that the values on this scale are evenly spaced. If Φ⁻¹(p) versus x is plotted on standard (linear) graph paper, a straight-line plot will result.

The relationship expressed in Eq. 2.78 is further illustrated in Figure 2.22. In this figure, the uneven probability scale and the corresponding linear scale are both shown on the left side of the plot. Data points from a general normal distribution are plotted, and a straight line is obtained. Observe that the value of x corresponding to F_X(x) = 0.5 [or z = Φ⁻¹(0.5) = 0] is the mean value μ_X. From Eq. 2.78, we note that the slope of the straight line is the inverse of the standard deviation. If we move away from the mean value by an amount nσ_X, where n is an integer and σ_X is the standard deviation, the corresponding value of z is equal to n.

FIGURE 2.22 Interpretation of a straight-line plot on normal probability paper in terms of the mean and standard deviation of the normal random variable.
This is shown by the dotted lines in Figure 2.22.

Now consider the practical application of normal probability paper to evaluate experimental data. Consider an experiment or test in which N values of some random variable X are obtained. These values will be denoted {x}. To be able to use normal probability paper (commercial or computed), it is necessary to associate a probability value with each x value. The procedure is as follows:

1. Arrange the data values {x} in increasing order. Once ordered, the first (lowest) value of x will be denoted x₁, the next value x₂, and so on, up to the last (largest) value x_N. Do not discard repeated values.
2. Associate with each xᵢ a cumulative probability pᵢ equal to (Gumbel, 1954)

  pᵢ = i/(N + 1)    (2.79)

3. If commercial normal probability paper is being used, then plot the pairs (xᵢ, pᵢ) and go to Step 6. Otherwise, go to Step 4.
4. For each pᵢ, determine zᵢ = Φ⁻¹(pᵢ). Equation 2.43 can be useful in this step.
5. Plot the coordinates (xᵢ, zᵢ) on standard linear graph paper by hand or using a computer.
6. If the plot appears to follow a straight line, then it is reasonable to conclude that the data can be modeled using a normal distribution. Sketch a "best-fit" line for the data. The slope of the line will be equal to 1/σ_X, and the value of x at which the probability is 0.5 (or z = 0) will be equal to μ_X. (Alternatively, you can plot a reference line using the sample mean X̄ and sample standard deviation s_X obtained using Eqs. 2.25 and 2.26.) If the data do not appear to follow a straight line, then a normal distribution is probably not appropriate. However, the plot can still provide some useful information, as discussed in later chapters.

EXAMPLE 2.7. Consider the following set of 9 data points: {x} = {6.5, 5.3, 5.5, 5.9, 6.5, 6.8, 7.2, 5.9, 6.4}. Plot the data on normal probability paper.

Solution.
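The six-step procedure above is straightforward to automate. The sketch below applies Steps 1, 2, and 4 to the data of Example 2.7, implementing Φ⁻¹ with the rational approximation of Eqs. 2.43-2.45, and also computes the sample statistics (Eqs. 2.25-2.26) needed for the reference line in Step 6; Python's statistics.mean and statistics.stdev use exactly those formulas:

```python
import math
import statistics

C = (2.515517, 0.802853, 0.010328)
D = (1.432788, 0.189269, 0.001308)

def inv_Phi(p):
    """Approximate inverse standard normal CDF, Eqs. 2.43-2.45 (0 < p < 1)."""
    if p > 0.5:
        return -inv_Phi(1.0 - p)                     # Eq. 2.45
    t = math.sqrt(-math.log(p * p))                  # Eq. 2.44
    num = C[0] + C[1] * t + C[2] * t * t
    den = 1.0 + D[0] * t + D[1] * t * t + D[2] * t ** 3
    return -t + num / den                            # Eq. 2.43

data = [6.5, 5.3, 5.5, 5.9, 6.5, 6.8, 7.2, 5.9, 6.4]   # Example 2.7

xs = sorted(data)                                    # Step 1
N = len(xs)
ps = [i / (N + 1) for i in range(1, N + 1)]          # Step 2, Eq. 2.79
zs = [inv_Phi(p) for p in ps]                        # Step 4

# Sample statistics for the Step 6 reference line (Eqs. 2.25-2.26)
xbar = statistics.mean(data)
s = statistics.stdev(data)

# Step 5 would plot the pairs list(zip(xs, zs)) on linear graph paper.
```

The approximation of Eq. 2.43 is accurate to about 4.5 × 10⁻⁴, so the computed z values match the tabulated ±1.282, ±0.842, ... to three decimals.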
It is convenient to carry out Steps 1 and 2 by setting up a table as seen in Table 2.1. The values of (xᵢ, pᵢ) are plotted on probability paper in Figure 2.23. We would obtain the same graph if we plotted (xᵢ, zᵢ) and used the linear scale shown on the right side of Figure 2.23.

TABLE 2.1 Data table for Example 2.7

  Index value, i | xᵢ (in increasing order) | Probability, pᵢ = i/(N + 1) | zᵢ = Φ⁻¹(pᵢ)
  1 | 5.3 | 0.1 | −1.282
  2 | 5.5 | 0.2 | −0.842
  3 | 5.9 | 0.3 | −0.524
  4 | 5.9 | 0.4 | −0.253
  5 | 6.4 | 0.5 | 0
  6 | 6.5 | 0.6 | 0.253
  7 | 6.5 | 0.7 | 0.524
  8 | 6.8 | 0.8 | 0.842
  9 | 7.2 | 0.9 | 1.282

FIGURE 2.23 Data from Example 2.7 plotted on normal probability paper.

The data plotted in Figure 2.23 appear to follow (at least approximately) a straight line, and thus we might conclude that the data follow a normal distribution. For comparison, a "reference" straight line is plotted based on the sample statistics X̄ = 6.2 and s_X = 0.62.

EXAMPLE 2.8. Consider the results of a truck weight survey. The recorded values of gross vehicle weight (GVW) are presented in Table 2.2. Evaluate the data using normal probability paper.

Solution. First the data are entered into a spreadsheet table. Then the data are sorted and ranked in increasing order. Each value of GVW is assigned a probability using Eq. 2.79.
