Reliability Analysis of Structures Under Seismic Loading
Rammerstorfer, J. Eberhardsteiner
Key words: reliability analysis, seismic loading conditions, Monte Carlo Simulation, space frames.
Abstract The objective of this paper is to perform the reliability analysis of space frames under seismic loading. In order to produce the dynamic loads, a number of artificial accelerograms is generated from the elastic design response spectrum of the region. The structural reliability analysis is performed using the Monte Carlo Simulation (MCS) method. The results obtained for a realistic structural problem indicate the applicability of the proposed procedure.
1 Introduction
Reliability analysis of large-scale structures, such as multi-storey space frames, is a computationally intensive task, and the computational cost increases dramatically when dynamic loading is involved. On the other hand, reliability analysis is an essential ingredient of a successful structural design, since it assists the engineer in taking into account all possible structural uncertainties by calculating the probability of (local or global) failure during the design, construction and life span of a structure. Structural reliability analysis can be performed either with simulation methods, such as the Monte Carlo Simulation (MCS), or with approximation methods [1]. First and second order approximation methods (FORM and SORM) lead to formulations that require prior knowledge of the means and variances of the component random variables and the definition of a differentiable failure function. MCS methods, on the other hand, require that the probability density functions of all random variables be known prior to the reliability analysis. For small-scale problems FORM and SORM implementations have proved very efficient, but when the number of random variables increases and the structural problems become more complex, MCS-based methods have proven more reliable. In order to reduce the computational effort of MCS, elaborate variance reduction techniques, such as importance sampling, have been implemented [2].
Despite the improvement in the efficiency of reliability analysis techniques, they still require a disproportionate computational effort for treating practical reliability analysis problems. This is the reason why, in most cases, numerical investigations in the field of structural reliability analysis are restricted to linear analysis of small-scale plane frames and trusses under static loads. Since various dynamic loading conditions, such as seismic excitations, are encountered in practice, it is extremely important to incorporate such loading cases into the structural reliability analysis process. These problems are highly computationally intensive, since in order to predict the inelastic structural behavior under seismic loads a time integration scheme needs to be employed for the dynamic analysis of the structure. Therefore, in each MCS simulation a full non-linear dynamic analysis of the structure needs to be performed in order to calculate its probability of failure.
Due to the uncertain nature of earthquake loading, structural designs are often based on design response spectra of the region and on some simplified assumptions of the structural behavior under earthquake. In the case of a direct consideration of the earthquake loading, the reliability analysis of structural systems requires the solution of the dynamic equations of motion, which can be orders of magnitude more computationally intensive than the static case. In the context of this rigorous approach, a number of artificial accelerograms is produced from the design response spectrum of the region for elastic structural response. The elastic design response spectrum can be seen as an envelope of the response spectra, for a specific damping ratio, of the different earthquakes most likely to occur in the region of the structure. Most often the reliability-based plastic design of frames is focused on static loading cases, either for 2-D [3] or 3-D [4] structures.
In a recent study the reliability-based optimization of large-scale space frames under static loads was investigated, where the probability of failure was calculated using MCS and limit elasto-plastic analysis [5]. A limited number of studies has been performed on the solution of reliability analysis problems under dynamic loading conditions, and they are restricted to small-scale 2-D frames [6,7]. In the present study the reliability analysis of multi-storey 3-D frames under seismic loading conditions is investigated. Randomness of the seismic loading conditions and of the material properties is taken into consideration. The MCS is implemented in order to determine the structural probability of failure using dynamic limit elasto-plastic analysis of the structure. The results obtained for a characteristic test example indicate the applicability of the proposed methodology and its potential for treating realistic problems.
2 Reliability analysis
The reliability of a structure, or its probability of failure, is an important factor in the design procedure since it quantifies the probability that the structure will fulfill its design requirements. Structural reliability analysis is a tool that assists the design engineer in taking into account all possible uncertainties during the design, construction and life of a structure in order to calculate its probability of failure p_f, i.e. to estimate the level of risk against a local or a global structural failure. Considering the simple case of two basic random variables R and S, where R denotes the structure's bearing capacity and S the external loads, time-invariant reliability analysis produces the following relationship
p_f = P[R < S] = \int_{-\infty}^{+\infty} F_R(t)\, f_S(t)\, dt = 1 - \int_{-\infty}^{+\infty} F_S(t)\, f_R(t)\, dt    (1)
The randomness of R and S can be described on the basis of known probability density functions f_R(t) and f_S(t), with F_R(t) = P[R < t] and F_S(t) = P[S < t] being the cumulative distribution functions of R and S, respectively. Most often a limit state function is defined as G(R,S) = R - S, failure corresponding to G(R,S) <= 0, and the probability of structural failure is given by
p_f = P[G(R,S) \le 0] = \iint_{G \le 0} f_R(R)\, f_S(S)\, dR\, dS    (2)
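For this simple two-variable case the integral of eq. (1) can be evaluated directly. The following sketch (a minimal illustration assuming independent normally distributed R and S, with hypothetical means and standard deviations not taken from the paper) compares the numerical evaluation of eq. (1) with the closed-form result p_f = Phi(-beta), beta = (mu_R - mu_S)/(sigma_R^2 + sigma_S^2)^{1/2}, valid for independent normal variables:

```python
# Minimal sketch of eq. (1) for two independent normal random variables R and S.
# All numerical values are illustrative only.
import numpy as np
from scipy import stats
from scipy.integrate import quad

mu_R, sig_R = 30.0, 3.0   # bearing capacity R (hypothetical units)
mu_S, sig_S = 20.0, 4.0   # load effect S (hypothetical units)

R = stats.norm(mu_R, sig_R)
S = stats.norm(mu_S, sig_S)

# p_f = integral of F_R(t) f_S(t) dt over the whole real line (eq. 1)
pf_numerical, _ = quad(lambda t: R.cdf(t) * S.pdf(t), -np.inf, np.inf)

# Closed form for independent normals: p_f = Phi(-beta)
beta = (mu_R - mu_S) / np.hypot(sig_R, sig_S)
pf_closed_form = stats.norm.cdf(-beta)

print(pf_numerical, pf_closed_form)  # the two values should agree closely
```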
For complex and/or large-scale structures it is practically impossible to evaluate this integral analytically. In such cases the integral of eq. (2) can only be calculated approximately, using either simulation methods, such as the Monte Carlo Simulation (MCS), or approximation methods. For small-scale problems FORM and SORM implementations have proved very efficient, but when the number of random variables increases and the problems become more complex, MCS-based methods have proven more reliable. In large-scale complex problems, in order to reduce the computational cost of direct simulation methods, the Response Surface Method (RSM) can be used [8]. The RSM approximates the implicit limit state surface by using an adequate number of points that describe this particular surface within an acceptable level of accuracy.
2.1 Monte Carlo Simulation
In reliability analysis the MCS method is often employed when the analytical solution is not attainable and the failure domain cannot be expressed or approximated by an analytical form. This is mainly the case in problems of complex nature with a large number of basic variables, where all other reliability analysis methods are not applicable. Although the mathematical formulation of the MCS is relatively simple and the method has the capability of handling practically every possible case regardless of its complexity, the computational effort involved in conventional MCS is excessive. With efficient computational techniques and/or parallel processing the computational load can be reduced significantly [1,5]. One basic advantage of the MCS over the other reliability analysis methods, for the particular type of problems investigated in the present study, is that its efficiency is not affected by the additional complexities introduced by the non-linear analysis and the dynamic loads. The computational cost of MCS grows rapidly when the number of random variables is large and/or the magnitude of p_f is small, since both cases require a huge sample size. For this reason various sampling techniques, also called variance reduction techniques, have been developed in order to improve the computational efficiency of the method by reducing the statistical error that is inherent in MCS and keeping the sample size to the minimum possible. Expressing the failure condition in terms of a limit state function as G(x) <= 0, where x = (x1, x2, ..., xM) is the vector of the random variables, eq. (2) can be written as
p_f = \int_{G(x) \le 0} f_x(x)\, dx    (3)
where f_x(x) denotes the joint probability density function of all random variables. Since MCS is based on the law of large numbers (N -> infinity), an unbiased estimator of the probability of failure is given by
p_f = \frac{1}{N} \sum_{j=1}^{N} I(x_j)    (4)

where the indicator function I(x_j) is defined as

I(x_j) = 1 \ \text{if} \ G(x_j) \le 0, \qquad I(x_j) = 0 \ \text{otherwise}    (5)
In order to estimate p_f an adequate number of N independent random samples is produced using a specific, usually uniform, probability density function of the vector x. The value of the failure function is computed for each random sample x_j and the Monte Carlo estimate of p_f is given in terms of the sample mean by

p_f \approx \frac{N_H}{N}    (6)
where N_H is the number of unsuccessful simulations.
2.1.1 Importance Sampling
In order to improve the efficiency of the MCS method without affecting its accuracy, various sample size reduction techniques have been proposed [1,6]. Among them, Importance Sampling (IS) is generally recognized as one of the most efficient variance reduction techniques [2,9,10]. The effectiveness of MCS-IS depends on the number of random variables and the complexity of the problem. Moreover, the selection of an appropriate importance sampling density function g_x(x) is of critical importance for both the efficiency and the accuracy of the MCS. A successful choice of g_x(x) yields reliable results and reduces significantly the number of simulations, while a misleading choice produces inaccurate results. The key idea of every such technique, including MCS-IS, is to obtain a non-negative sampling density function located in the neighborhood of the most probable failure point, since this region contributes the most to the probability of structural failure. However, in the case of dynamic reliability analysis of large-scale structures an additional difficulty arises in the choice of an efficient sampling density function. This is due to the fact that in these cases importance sampling is time variant and it is more complicated to determine the failure region in each time step of the dynamic analysis process. Thus, in order to overcome this difficulty, the choice of the sampling density function has to be based on energy criteria [6]. Using MCS-IS, eq. (3) can be expressed as
p_f = \int_{G(x) \le 0} \frac{f_x(x)}{g_x(x)}\, g_x(x)\, dx    (7)

where g_x(x) is the importance sampling density function. In this case eq. (4) gives

p_f = \frac{1}{N} \sum_{j=1}^{N} I(x_j)\, \frac{f_x(x_j)}{g_x(x_j)}    (8)
Therefore, in MCS-IS each random sample x_j is generated from the sampling density g_x(x) and the Monte Carlo estimate of p_f is

p_f \approx \frac{1}{N} \sum_{j=1}^{N} S_j    (9)

where

S_j = I(x_j)\, \frac{f_x(x_j)}{g_x(x_j)}    (10)
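As an illustration of the estimators of eqs. (4)-(6) and (8)-(10), the following sketch contrasts crude MCS with MCS-IS on a deliberately simple, hypothetical limit state. The limit state function, the sampling densities and all numerical values are assumptions made for the example only; in the present study every evaluation of G(x) corresponds to a full non-linear dynamic analysis of the frame:

```python
# Minimal sketch of the crude MCS estimator (eqs. 4-6) and the importance
# sampling estimator (eqs. 8-10). The limit state below is a placeholder.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

def G(x):
    """Hypothetical limit state: failure when G(x) <= 0."""
    return 4.0 - x.sum(axis=-1)

M, N = 2, 100_000                                        # number of random variables, sample size
f_x = stats.multivariate_normal(mean=np.zeros(M))        # true joint density f_x(x)
g_x = stats.multivariate_normal(mean=np.full(M, 2.0))    # sampling density g_x(x) centred near the failure region

# Crude Monte Carlo: p_f ~ N_H / N  (eq. 6)
x = f_x.rvs(N, random_state=rng)
pf_mcs = np.mean(G(x) <= 0.0)

# Importance sampling: p_f ~ (1/N) sum_j I(x_j) f_x(x_j) / g_x(x_j)  (eqs. 8-10)
x_is = g_x.rvs(N, random_state=rng)
S_j = (G(x_is) <= 0.0) * f_x.pdf(x_is) / g_x.pdf(x_is)
pf_is = S_j.mean()

print(pf_mcs, pf_is)  # both estimate the same p_f; the IS estimate has a much smaller variance
```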
3 Dynamic analysis under seismic loading
Structural analysis under seismic loading with direct integration methods, such as the Newmark method, requires a great amount of computations, and the computational cost increases dramatically when material and/or geometric nonlinearities are involved. The resulting equilibrium equations for a finite element system in motion can be written in the standard form

M(s_i)\, \ddot{u}_t + C(s_i)\, \dot{u}_t + K(s_i)\, u_t = R_t    (11)

where M(s_i), C(s_i) and K(s_i) are the mass, damping and stiffness matrices for the i-th sampling vector s_i; R_t is the external load vector, while u_t, \dot{u}_t and \ddot{u}_t are the displacement, velocity and acceleration vectors of the finite element assemblage, respectively. The solution methods of direct integration of the equations of motion and of response spectrum modal analysis, which is based on the mode superposition approach, are considered in the following sections.
3.1 Newmark integration scheme
The Newmark integration scheme is adopted in the present study to perform the direct time integration of the equations of motion. Under this scheme the velocity and displacement at time t + Δt are given by

\dot{u}_{t+\Delta t} = \dot{u}_t + \left[ (1-\delta)\, \ddot{u}_t + \delta\, \ddot{u}_{t+\Delta t} \right] \Delta t    (12)

u_{t+\Delta t} = u_t + \dot{u}_t\, \Delta t + \left[ \left( \tfrac{1}{2} - \alpha \right) \ddot{u}_t + \alpha\, \ddot{u}_{t+\Delta t} \right] \Delta t^2    (13)

where δ and α are parameters that control the accuracy and stability of the Newmark integration. When δ = 1/2 and α = 1/6, relations (12) and (13) correspond to the linear acceleration method. In addition to (12) and (13), for finding the displacements, velocities and accelerations at time t + Δt, the equilibrium equations (11) at time t + Δt are also considered:

M(s_i)\, \ddot{u}_{t+\Delta t} + C(s_i)\, \dot{u}_{t+\Delta t} + K(s_i)\, u_{t+\Delta t} = R_{t+\Delta t}    (14)

Solving (13) for \ddot{u}_{t+\Delta t} in terms of u_{t+\Delta t} and then substituting into (12), we obtain expressions for \ddot{u}_{t+\Delta t} and \dot{u}_{t+\Delta t}, each in terms of the unknown displacements u_{t+\Delta t} only. These two relations are substituted into eq. (14), which is then solved for u_{t+\Delta t}. Subsequently, using (12) and (13), \dot{u}_{t+\Delta t} and \ddot{u}_{t+\Delta t} can also be calculated.
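The algebra described above can be written compactly for the linear case with constant matrices. The sketch below (a simplified illustration, not the analysis code used in this study) performs one Newmark step; the coefficients a0 to a5 result from the substitution of eqs. (12)-(13) into eq. (14):

```python
# Minimal sketch of one linear Newmark time step (eqs. 12-14) with constant
# M, C, K. In the paper each sample s_i has its own M(s_i), C(s_i), K(s_i) and
# the response is inelastic; both aspects are omitted here for brevity.
import numpy as np

def newmark_step(M, C, K, u, v, a, R_next, dt, delta=0.5, alpha=0.25):
    """Advance (u_t, velocity, acceleration) = (u, v, a) to time t + dt.

    delta, alpha are the Newmark parameters of eqs. (12)-(13):
    delta = 1/2, alpha = 1/4 gives the average acceleration scheme and
    delta = 1/2, alpha = 1/6 the linear acceleration scheme.
    """
    a0 = 1.0 / (alpha * dt**2)
    a1 = delta / (alpha * dt)
    a2 = 1.0 / (alpha * dt)
    a3 = 1.0 / (2.0 * alpha) - 1.0
    a4 = delta / alpha - 1.0
    a5 = 0.5 * dt * (delta / alpha - 2.0)

    # Effective stiffness and effective load obtained by eliminating the
    # accelerations and velocities at t + dt from eq. (14)
    K_eff = K + a0 * M + a1 * C
    R_eff = R_next + M @ (a0 * u + a2 * v + a3 * a) + C @ (a1 * u + a4 * v + a5 * a)

    u_next = np.linalg.solve(K_eff, R_eff)
    a_next = a0 * (u_next - u) - a2 * v - a3 * a             # from eq. (13)
    v_next = v + dt * ((1.0 - delta) * a + delta * a_next)   # eq. (12)
    return u_next, v_next, a_next
```

For the inelastic analyses of the present study, K would be the current tangent stiffness and equilibrium iterations would be required within each time step.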
3.2 Artificial accelerograms
The selection of the proper external loading R_t for design purposes is not an easy task, due to the uncertainties involved in the seismic loading. For this reason, a rigorous treatment of the seismic loading is to assume that the structure is subjected to a set of earthquakes that are most likely to occur in the region where it is located. These seismic excitations are produced as a series of artificial accelerograms. In order for the artificial accelerograms that load the structure to be representative, they have to satisfy certain requirements of the seismic codes. The most demanding one is that the accelerograms have to be compatible with the elastic design response spectrum of the region where the structure is located. It is well known that each accelerogram corresponds to a single response spectrum for a given damping ratio, which can be computed relatively easily; on the other hand, to each response spectrum there corresponds an infinite number of accelerograms.
Figure 1. Elastic design response spectrum of the region and response spectrum of the first artificial accelerogram (ξ = 2.5%)

The methodology for the generation of artificial accelerograms that correspond to a specific response spectrum was introduced by Gasparini & Vanmarke [11]. In this work the implementation published by Taylor [12] for the generation of statistically independent artificial acceleration time histories is adopted. This method is based on the fact that any periodic function can be expanded into a series of sinusoidal waves. The resulting virtual ground motion is stationary in frequency content, with peak acceleration close to the target peak acceleration. In this study a trapezoidal intensity envelope function is adopted. The generated peak acceleration is artificially modified to match the target peak acceleration, which corresponds to the chosen elastic design response spectrum. An iterative procedure is implemented to smooth the computed spectrum and improve the matching. The elastic design response spectrum considered in the current study is depicted in Figure 1 for a damping ratio ξ = 2.5%. The number of artificial uncorrelated accelerograms required for the MCS is produced by the previously described procedure; the response spectrum of one such artificial accelerogram is also depicted in Figure 1.
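To make the preceding description concrete, the sketch below illustrates only the basic superposition of sinusoids with random phases, a trapezoidal intensity envelope and scaling to a target peak ground acceleration. The harmonic amplitudes are placeholders and the iterative response spectrum matching of refs. [11,12] is omitted; all function names and numerical values are assumptions made for this illustration:

```python
# Minimal sketch of an artificial accelerogram built as a superposition of
# sinusoids with random phases, multiplied by a trapezoidal intensity envelope
# and scaled to a target peak acceleration. The amplitude of each harmonic is
# a placeholder; in refs. [11,12] it is derived from the target spectrum and
# corrected iteratively.
import numpy as np

def trapezoidal_envelope(t, t_rise=2.0, t_decay_start=12.0, duration=20.0):
    """Trapezoidal intensity envelope: linear rise, flat part, linear decay."""
    up = t / t_rise
    down = (duration - t) / (duration - t_decay_start)
    return np.clip(np.minimum(up, down), 0.0, 1.0)

def artificial_accelerogram(duration=20.0, dt=0.01, pga_target=0.25 * 9.81,
                            f_min=0.2, f_max=25.0, n_freq=200, seed=None):
    rng = np.random.default_rng(seed)
    t = np.arange(0.0, duration, dt)
    freqs = np.linspace(f_min, f_max, n_freq)                 # Hz
    phases = rng.uniform(0.0, 2.0 * np.pi, n_freq)            # independent random phases
    amps = np.ones(n_freq)                                    # placeholder amplitudes
    a = (amps[:, None] * np.sin(2.0 * np.pi * freqs[:, None] * t + phases[:, None])).sum(axis=0)
    a *= trapezoidal_envelope(t, duration=duration)
    return t, a * (pga_target / np.abs(a).max())              # scale to the target peak acceleration
```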
4 Elasto-plastic analysis
In order to perform dynamic analysis considering inelastic behavior, a more detailed simulation of the structure is needed in those areas where plastic zones are expected to form. In the case of linear behavior the simulation of each beam with one element is adequate, even if geometric nonlinearities are taken into consideration. In the case of inelastic behavior, however, the plastic node approach has its limitations under cyclic dynamic loading conditions. For this reason each member is discretized with a number of cubic elasto-plastic 3D beam-column elements. For the evaluation of the element forces, numerical integration is performed at the two Gauss points of each element. For this purpose the section at each Gauss point is divided into a number of monitoring points (monitoring areas), whose stress-strain relations are taken into account during the integration. For single-material sections 100 monitoring points are usually sufficient; for more complicated sections this number should be increased to about 200 or more. In the present case 200 monitoring points have been considered. For accurate inelastic modelling it is advisable to use more than one cubic element per member.
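The section integration described above can be sketched as follows; the bilinear stress-strain law, the plane-section strain distribution and all names and values are standard assumptions introduced for the illustration and are not taken from the analysis code used in this study:

```python
# Minimal sketch of the monitoring-point (fibre) integration of a cross section
# at a Gauss point: the section resultants are obtained by summing the uniaxial
# stresses over the monitoring areas.
import numpy as np

def bilinear_stress(strain, E=200e9, fy=250e6, b=0.05):
    """Uniaxial bilinear stress-strain law with strain hardening ratio b."""
    eps_y = fy / E
    return np.where(np.abs(strain) <= eps_y,
                    E * strain,
                    np.sign(strain) * (fy + b * E * (np.abs(strain) - eps_y)))

def section_forces(eps0, kappa_y, kappa_z, y, z, dA):
    """Axial force N and bending moments My, Mz from the monitoring points.

    Plane sections are assumed to remain plane:
        eps(y, z) = eps0 + z * kappa_y - y * kappa_z
    y, z, dA are the coordinates and tributary areas of the monitoring points.
    """
    eps = eps0 + z * kappa_y - y * kappa_z
    sigma = bilinear_stress(eps)
    N = np.sum(sigma * dA)
    My = np.sum(sigma * z * dA)
    Mz = -np.sum(sigma * y * dA)
    return N, My, Mz
```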
5 Numerical results
In the present study a benchmark test example of a six-storey space frame has been considered in order to illustrate the applicability of the proposed methodology. The probability of failure is estimated using the basic MCS. Failure is assumed to occur when the total top displacement exceeds 35 cm or the inter-storey drift exceeds 6 cm. The modulus of elasticity is 200 GPa, the strain hardening parameter is equal to 0.05 and the yield stress is 250 MPa. For the case of inelastic behavior the space frame is discretized with 413 elements and 2094 degrees of freedom, as shown in Figure 3, while elastic behavior would require 63 elements with 180 degrees of freedom. The beams have length L1 = 7.32 m and the columns L2 = 3.66 m. The structure is loaded with a 19.16 kPa gravity load on all floor levels and a static lateral load of 109 kN applied at each node in the front elevation along the z direction.
Figure 3. Six-storey space frame

The yield stress and the seismic loading conditions are considered to be random variables. The type of probability density function, the mean value and the standard deviation of the yield stress are presented in Table 1.

Table 1. Characteristics of the random yield stress
Random variable | PDF | Mean value | Standard deviation
σy | N (normal) | 25.0 kN/cm² | 0.10 of the mean
The probabilities of failure, as well as the CPU time required for different numbers of simulations, are shown in Table 2 for the following design: beams W14x145, columns W14x176.
Table 2. Values of p_f for different numbers of simulations

It has to be mentioned that the CPU time required for one dynamic inelastic analysis is 100 sec, while for the corresponding elastic analysis the required time is 520 sec.
6 Conclusions
In the present study the structural reliability analysis of space frames under seismic loading is performed, taking into consideration the randomness of the seismic loading conditions and of the material properties. The reliability analysis is performed using the Monte Carlo Simulation method, which exhibits robust performance for realistic structures. The computational effort for performing the reliability analysis of this type of structures becomes excessive, and for this reason efficient computational methods for treating the resulting equations are required.
References
[1] G.I. Schueller, Structural reliability: recent advances, 7th International Conference on Structural Safety and Reliability (ICOSSAR 97), Kyoto, Japan (1997).
[2] C.G. Bucher, Adaptive sampling - an iterative fast Monte Carlo procedure, Structural Safety, 5, (1988), 119-126.
[3] D.M. Frangopol, Computer-automated sensitivity analysis in reliability-based plastic design, Comp. & Struct., 22(1), (1986), 63-75.
[4] M. Papadrakakis, V. Papadopoulos, A computationally efficient method for the limit elasto-plastic analysis of space frames, Comp. Mech. J., 16(2), (1995), 132-141.
[5] Y. Tsompanakis, M. Papadrakakis, Reliability based structural optimization using advanced computational techniques, Struct. Opt., to appear (2002).
[6] H.J. Pradlwarter, G.I. Schueller, On advanced Monte Carlo simulation in stochastic structural dynamics, Int. J. Non-Linear Mech., 32(4), (1997), 735-744.
[7] J. Huh, A. Haldar, Seismic reliability of non-linear frames with PR connections using systematic RSM, Prob. Eng. Mech., 17, (2002), 177-190.
[8] M. Gasser, G.I. Schueller, Reliability based optimization of structural systems, Math. Meth. of Oper. Res., 46, (1997), 287-307.
[9] E. Pulido, T.L. Jacobs, E.C. Prates De Lima, Structural reliability using Monte-Carlo simulation with variance reduction techniques on elastic-plastic structures, Comp. & Struct., 43, (1992), 419-430.
[10] J.E. Hurtado, A.H. Barbat, Simulation methods in stochastic mechanics, in J. Marczyk (ed.), Computational Stochastic Mechanics in a Meta-Computing Perspective, CIMNE, Barcelona, (1997), 93-116.
[11] D.A. Gasparini, E.H. Vanmarke, Simulated earthquake motions compatible with prescribed response spectra, Dept. of Civil Eng. Publication No. R76-4, MIT, USA (1976).
[12] C.A. Taylor, EQSIM: a program for generating spectrum compatible earthquake ground acceleration time histories, Reference Manual, Bristol Earthquake Engineering Data Acquisition and Processing System, UK, (1989).