
Practical Papers, Articles and Application Notes

Robert G. Olsen, Technical Editor

In this issue you will find three practical papers that should be of interest to members of the EMC community. The first is entitled "Verification and Validation of Computational Electromagnetics Software," by Edmund K. Miller. This is on the very important topic of how to determine the accuracy of numerical results and is complementary to the paper published in the Winter 2006 Newsletter by Andy Drozd. If you enjoyed that paper, you will enjoy this one as well. The second paper is entitled "Transmission Line Fault Analysis Using a Matlab-Based Virtual Time Domain Reflectometer Tool," by Levent Sevgi. In it the author discusses the problem of estimating complex transmission line loads based on the analysis of transmission line signals in the time, frequency and Laplace domains. A real plus is the availability of a downloadable Matlab-based virtual TDRMeter tool that can be used to teach, understand, and visualize the time-domain pulse characteristics and echoes from various discontinuities/terminations. In addition, fault types and locations can be predicted. The third paper is entitled "The Sandia Lightning Simulator: Recommissioning and Upgrades," by Michele Caldwell and Leonard E. Martinez. This facility can produce a maximum peak current of 200 kA for a single stroke, 100 kA for a subsequent stroke, and several hundred amperes of continuing current for hundreds of milliseconds. The paper was first presented at the 2005 IEEE EMC Symposium in Chicago and has been reprinted here by permission of the Symposium Committee.

The purpose of this section is to disseminate practical information to the EMC community. In some cases, the material is entirely original. In others, the material is not new but has been made either more understandable or accessible to the community. In others, the material has been previously presented at a conference but has been deemed especially worthy of wider dissemination. Readers wishing to share such information with colleagues in the EMC community are encouraged to submit papers or application notes for this section of the Newsletter. While all material will be reviewed prior to acceptance, the criteria are different from those of Transactions papers. Specifically, while it is not necessary that the paper be archival, it is necessary that the paper be useful and of interest to readers of the Newsletter. I have been editing this section of the newsletter for six years and have decided to retire from this position. I will, however, be ably succeeded by Prof. Flavio Canavero of the Politecnico di Torino in Italy. Paper submissions, comments, or letters to the Technical Editor can be sent to him at [email protected]

Verification and Validation of Computational Electromagnetics Software


Edmund K. Miller, Los Alamos National Laboratory (retired) 597 Rustic Ranch Lane, Lincoln, CA 95648, [email protected]
Abstract--One of the most time-consuming tasks associated with developing and using computer models in electromagnetics is that of verifying software performance and validating the model results. Even now, relatively few available modeling packages offer the user substantial on-line assistance concerning verification and validation. This paper discusses the kinds of errors that most commonly occur in modeling, the need for quantitative error measures, and various validation tests such as convergence behavior and boundary-condition checks. Use of model-based parameter estimation to develop error estimates or to control uncertainty in an observable is also demonstrated. The article concludes by recommending that the Computational ElectroMagnetics community adopt a policy of requiring some minimal standards concerning the accuracy of numerical results accepted for journal articles and meeting presentations.

I. INTRODUCTION AND OVERVIEW


Kleinman [1] observed that "an appropriate degree of skepticism must be brought to validation and/or error estimation," and cited statements like the following that are often heard in Computational ElectroMagnetics (CEM):
1. If it works for the sphere, it works for everything.
2. Confirming experiments are always valid.
3. If different methods give the same results then they are correct.
4. Independent investigators never make the same mistake.
5. Small changes imply convergence.
These are cogent observations that will be touched on in one way or the other in the following. First discussed is code (or software; the two terms are used interchangeably here) development and the general issue of validation. Then considered is the central question: why verify and validate? One aspect of answering this question is the development of error measures as a way of determining quantitatively the accuracy, or alternatively, the uncertainty, of computed results. The principal kinds of modeling errors are considered next, followed by a discussion of various checks that can be used to assess such errors. Included here are convergence and boundary-condition checks, and the use of model-based parameter estimation (MBPE) to estimate data uncertainty, or otherwise control it, together with some illustrative examples. The article concludes with what should be a logical consequence of verification and validation: the inclusion by authors of quantitative and appropriate statements about the validity of the results being presented at meetings and in published material.


The discussion here might be considered complementary to, and an extension of, that presented in an excellent, recent article in the EMC Newsletter by Drozd [2]. The present discussion was originally prepared for an invited presentation given at EMB 2001, Electromagnetic Computations, Methods and Applications, Uppsala University, Sweden [3].

II. CODE DEVELOPMENT AND VALIDATION


The major steps involved in developing a computer model can be summarized generically as follows:
Conceptualization: Encapsulating analysis and observation in terms of elementary physical principles and their mathematical description.
Formulation: Fleshing out the elementary description into a more complete, formally solved, mathematical representation.
Approximation: Simplifying the analytical description to one suitable for numerical treatment.
Implementation: Transforming the formulation and approximations into a computer algorithm using various numerical techniques.
Computation: Obtaining quantitative results.
Validation: Determining the numerical and physical credibility of the computed results.
While approximation is listed separately, each of the other steps above can also involve its own approximations, the implications of which need to be considered relative to the application intended for a particular model. Included in the conceptualization step are high-frequency methods such as physical optics and the geometrical theory of diffraction, or the compensation theorem and the Born-Rytov and Rayleigh approximations. In the formulation step are included issues such as the surface-impedance and thin-wire approximations. Two major approximations are involved in the numerical-implementation step: the numerical representation of differential operators via finite differences and finite elements, and the evaluation of integrals via numerical quadrature, together with the selection of basis functions for unknowns and testing functions for equations. The most critical aspect of the computation step is the development of the numerical model, a process that involves the representation of geometrical boundaries together with how densely the unknown(s) and the boundary condition(s) are to be sampled. Even the validation step itself will inevitably encounter approximations, not least of which is the fact that answers of arbitrary accuracy are available for very few problems.
Of the three principal attributes of any computer model--accuracy, efficiency and utility [4]--accuracy must be considered foremost. Model accuracy is self-evidently important for a variety of reasons. Efficiency is obviously important, but efficiently obtained inaccurate results have no value, i.e., getting wrong results fast is not generally useful. Results of unknown accuracy are also not as useful as they could be. In the best of all worlds, accuracy would be a dialable quantity, with tradeoffs between accuracy and computation cost, or efficiency, explicitly allowed to be made by the modeler. Even better would be the capability of allocating an error budget throughout the various steps in the model computation in such a way that the cost of obtaining final results to a specified uncertainty is minimized.

Figure 1. The input impedance (a), admittance (b), and input and radiated powers (c) as a function of the number of segments (unknowns) for a center-fed dipole antenna 2 wavelengths long. Results for a one-segment source appear especially non-converged for Z but are better for Y. For the constant-width source region, the results exhibit a much-reduced dependence on N.


Figure 2. The magnitude of the finely sampled induced tangential electric field along the axis (the current is on the surface) of a 2.5-wavelength, 50-segment wire 10^-3 wavelengths in radius, modeled using NEC. For the antenna case (the solid line) the two 20-V/m source segments are obvious, as are the other 48 match points (the solid circles) whose values are generally on the order of 10^-13 or less. For the scattering problem, the scattered E-field (the dashed line) is graphically indistinguishable from the incident 1 V/m excitation except near the wire ends. The IEMF and far-field powers for the antenna are 1.257x10^-2 W and 1.2547x10^-2 W, respectively. For the scattering problem, the corresponding powers are 5.35x10^-4 and 5.31x10^-4 W.

III. WHY VERIFY AND VALIDATE?


It should be obvious that validation of their results is absolutely mandatory if computer models are to be used with any confidence and reliability. Validation may be attempted in a variety of ways, including via use of analytical, numerical and/or experimental tools and results. Analytical possibilities include checks such as reciprocity and energy conservation, and examining how well the boundary conditions are satisfied. Among numerical checks are assessing the results for what appears to be non-physical behavior, whether the solution converges as the number of unknowns (N) is increased, and whether small changes in the numerical model produce small changes in the results. These might all be characterized as internal checks. Other checks that use independent experimental, numerical and/or analytical results might comparably be called external checks. Both categories of checks will be discussed in more detail below. At this point, it's worthwhile reviewing the dictionary definition of validation [5]: to validate is to--1) declare or make legally valid; 2) mark with an indication of official sanction; 3) substantiate or verify--of which the latter is most relevant to our discussion. Note that verify is included in this definition as one aspect of validation. For computer-software purposes, however, the terms verify and validate have come to take on complementary, but different, meanings. Verification is usually considered the process of determining that a computer model or code produces results consistent with its design. Validation, on the other hand, is concerned with establishing how well the results from a given model or code conform to the physical reality of the intended applications. Note that verification is a necessary, but not sufficient, condition for acceptable code performance, while validation determines how reliably a code can be applied to physically meaningful problems.

For brevity in the following, and because the primary focus will be on validation as defined above, the term validation will be used to cover both issues. Of the code-development steps listed above, validation probably consumes the largest collective effort as computer models and problems become increasingly complex. Validation is important in general, of course, but how to achieve it may not be readily apparent nor easily accomplished. However, some specific situations can be identified where validation is not only rather straightforward but particularly useful. For example, when moving codes between computers or installing one on a new computer, it's essential that the code work in the new environment in the same way, and produce the same results, as it did in the old. It's also important to confirm continued valid operation of the code over time on a given computer. Among the ways this might be done is the use of standard test cases that can be routinely redone for comparison with solutions stored from previously running them. Providing guidance to the user concerning the estimated validity of the computed results would also be extremely valuable, a somewhat more demanding aspect of validation. These validation requirements suggest the need for a standardized procedure for estimating modeling errors using various internal and external checks on the model results, and the development of standardized error measures. Ideally, this could lead to giving the user a quantitative error estimate generated by the code itself concerning the reliability of the numerical results it produces.

IV. THE NEED FOR ERROR MEASURES


Error measures are therefore needed, whatever the specific approach, to quantify a model with respect to its numerical accuracy and correlation with Maxwell's equations, confirm its compatibility with physical reality, and, last but certainly not least, to demonstrate its reliability in achieving an electromagnetic design subject to realistic specification(s). The basic idea is to devise error measures, scalar or vector, to routinely test all numerical results using whatever data and tests are relevant to the numerical results of interest. Several different kinds of error measures can be defined. They can be internal, i.e., within the code itself, or external, where independent results are used for this purpose. The error measure can be numerical, to check convergence and boundary conditions, or physical, where measurement and/or physical principles are invoked. Error measures can involve fields, both near and far, and source behavior, and range from being local or pointwise, such as input impedance and backscatter cross section, to integral or global, in terms of quantities such as the gain and total cross section. The primary purpose for which error measure(s) are intended is to increase CEM modeling reliability for the typical user. Equally important, however, is the generation of an error, or uncertainty, statement that would accompany results made available to others through reports, meeting presentations and journal articles, a topic that is elaborated below. Concerning the user specifically, an error analysis should be produced as a routine component of any modeling exercise. It would be desirable, when this analysis shows that the results have excessive uncertainty, that the modeling code indicate what might be the cause of the uncertainty and, even better, suggest possible ways for correcting the problem.


The code should also have a built-in calibration capability that permits convenient reconfirmation that the results produced today are the same as those obtained earlier for a given set of reference problems, to therefore provide periodic reassurance that code operation has not changed. These test cases should be provided as part of the software package, including the input data for each check model, as well as intermediate and final results, so that any discrepancies in the computation chain can be identified more easily. On-line documentation should also be a built-in component of a modeling code to assist a user in real time, especially when the model input is being developed. Finally, it would be invaluable were all modeling codes to come with a GUI (Graphical User Interface) and be designed to accept input in a variety of formats to make model sharing and results comparison more convenient. To summarize, routine implementation of appropriate error measures should reduce uncertainty and build confidence in all areas of CEM applications. The development of quantitative, reliable error measures would ideally also enable explicit cost-accuracy tradeoffs to be made, so computer models could be used for design to specification. Adaptive allocation of an error budget throughout a problem would lead to more cost-effective modeling by permitting attention to be focused on the most critical areas, so that optimum use is made of the computer resources.
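As a concrete illustration of such a built-in calibration capability, the following MATLAB fragment sketches one way a package might routinely re-run stored reference cases and flag deviations. It is only a sketch: the file names, the run_model function, the Zin field, and the tolerance are hypothetical placeholders, not part of any existing package.

% Hypothetical calibration/regression check: re-run archived reference cases
% and compare today's results against the stored ones. All names are placeholders.
tol = 1e-3;                                   % acceptable relative deviation
refCases = {'dipole_2wl', 'loop_1wl'};        % assumed reference-problem names
for k = 1:numel(refCases)
    ref = load([refCases{k} '_reference.mat']);   % archived input and stored results
    out = run_model(ref.input);                   % assumed call into the modeling code
    relErr = max(abs(out.Zin - ref.Zin) ./ abs(ref.Zin));   % e.g., input impedance
    if relErr > tol
        warning('%s: relative deviation %.2e exceeds %.1e', refCases{k}, relErr, tol);
    end
end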

Figure 3. Results for the imaginary component of current on the lit side of an infinite PEC cylinder illuminated by a TE, normally incident plane wave. The noisy data (the solid circles) comes from adding uniformly distributed random noise varying between +/-0.1 to accurate current samples (the small open circles), and can be seen to be distributed about the average of the five MBPE fitting models used (the continuous line), whose parameters are computed from the noisy data samples shown by the large, open circles.

V. THE PRINCIPAL MODELING ERRORS


A. Numerical and Physical Modeling Errors
The modeling process is driven by two primary sources of modeling error, which are:
The Physical Modeling Error (ep), which comes from replacing the real physical problem of interest by an idealized mathematical representation/approximation.
The Numerical Modeling Error (en), which comes from obtaining only an approximate solution to the idealized representation, and which itself has two components: 1) Solution error--the difference that can exist between the computed results and an exact solution, even were the linear system of equations to be solved exactly, due to using a finite number of unknowns, given by esol(s) = Isol(s) - Itrue(s); and 2) Equation (or residual) error--the equation mismatch that can occur in the numerical solution because of roundoff due to finite-precision computations or, when using an iterative technique, because of limited solution convergence, eeq(s) = L(s,s')Isol(s') - Etan(s).
For most numerical solutions, it will be true that eeq ~ ero, with ero the roundoff error, so that en ~ esol and the solution error will be of primary concern. However, in cases where the condition number (CN) of the relevant matrix (assuming a first-principles integral- or differential-equation-based model is being used) becomes large enough, the roundoff error can dominate the solution accuracy. In essence, the CN amplifies errors in the matrix coefficients and the right-hand-side vector of a linear system during the solution process. It increases the effect of roundoff because a limited number of bits/operation are available in the computation, the effect of which also increases with the total number of operations needed to obtain a solution. The relationship among these quantities can be expressed approximately as log(A) ~ log(P) - log(CN) - 3log(N), where the accuracy A, the precision P, the condition number CN, and the number of equations N are all expressed in digits.
Both experimental and independent numerical data are essential to assess overall solution accuracy or uncertainty. Experimental data provides essentially the only way of evaluating ep. On the other hand, en can be studied in several ways, one being experimental data when the physical modeling error can be made zero. Also useful is comparison with other analytical or numerical results, and evaluating the solution relative to analytical requirements such as boundary error, reciprocity, energy conservation, etc. Observing the solution with increasing N (a convergence test) is useful to establish modeling guidelines in terms of the required sampling density. The physical modeling error most often arises because of geometrical approximations that are made, for example, where curved surfaces are represented by flat facets or even cubes, or where a different structure entirely is employed, as when a wire mesh is used for a solid surface. Electrical approximations can also be made, as when a surface impedance is used as a boundary condition for a penetrable object, or an antenna source is represented by a point-sampled, tangential electric field.
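To make the distinction between the solution error and the equation (residual) error concrete, the following MATLAB sketch applies the two definitions above to a generic, deliberately ill-conditioned linear system (a Hilbert matrix standing in for an impedance matrix; this is an illustrative example, not an electromagnetic model from the paper):

for N = [4 8 12]
    Z = hilb(N);                 % deliberately ill-conditioned stand-in "impedance" matrix
    Itrue = ones(N,1);           % assumed exact solution
    E = Z * Itrue;               % consistent excitation (right-hand side)
    Isol = Z \ E;                % finite-precision numerical solution
    esol = norm(Isol - Itrue)/norm(Itrue);   % solution error
    eeq  = norm(Z*Isol - E)/norm(E);         % equation (residual) error
    fprintf('N=%2d  cond=%8.1e  esol=%8.1e  eeq=%8.1e\n', N, cond(Z), esol, eeq);
end

As the condition number grows, the residual stays near roundoff while the solution error grows roughly in proportion to CN, which is the behavior the digit-count relation above tries to capture.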


B. User Modeling Errors
It may seem unusual to cite the user/modeler as being a significant source of modeling errors, but the fact is that there are many ways by which the modeler can cause errors in using even a well-validated and user-friendly code. Aside from the most obvious errors arising from mistakes in preparing input data or otherwise violating code requirements, a user may reject correct results through unwarranted skepticism or erroneous expectations. Another common problem is insufficient exploration of the relevant parameter space, thus missing fine, but important, details in the computed observable--a radiation pattern or frequency response, for example. Or, the user may not recognize that the physical modeling error is the controlling factor in a given application, and instead assumes that a problem in the modeling code is causing results that don't agree acceptably with measurements. Alternatively, a user may unquestioningly accept the output produced by a computer model. Kleinman's admonition that appropriate skepticism be employed is highly recommended. Until validated, all results must be questioned. Furthermore, accurate numerical results don't guarantee physical relevancy nor fidelity. Also to be kept in mind is that model results might exhibit more accurate relative dependencies when conducting parameter studies than do their absolute values. Frequency, angle and other shifts are fairly common in computed observables when compared with experimental results. Convergence tests must be used with care, as the results don't always converge to the right answer as the number of unknowns is increased, and they can also be sensitive to factors other than the model accuracy, as discussed below.

VI. SOME VALIDATION POSSIBILITIES


A. Global and Local Error Measures
There are two distinct kinds of error measures that can be used to quantify the accuracy or uncertainty of any kind of data. A global measure is an integrated quantity that is based on quantities such as total radiated power or radar cross section, maximum antenna directivity, the correlation between a computed and reference pattern, etc. As such, it measures the quantity of interest over a range of the observation variable(s). It would typically be stated as a single number, thus being a scalar quantity. A local measure, on the other hand, is a pointwise quantity that leads to a vector indicator or sequence of values, such as major pattern cuts and null locations, frequency transfer functions, and resonance and anti-resonance locations. A global measure typically might be derived from some sort of weighted sum or integral of a local measure. The most appropriate choice will often be application-dependent. Either kind of error measure can be used for internal and external checks, as illustrated below. Internal checks provide a measure of the model's self-consistency with respect to Maxwell's equations. External checks provide independent confirmation concerning the validity of code results.
B. Internal Checks
Some of the specific kinds of questions that internal checks are designed to answer include confirmation that the input data is consistent with the code requirements and assumptions, and that the modeling guidelines have not been violated. Internal checks can also include determining the condition number of the impedance matrix, and evaluation of the equation error and solution error, the latter through a convergence test. Other internal checks might utilize energy and/or power conservation, i.e., is Prad = I^2*Rin satisfied, where Prad is the radiated power in the far field and Rin is an antenna's input resistance. The reciprocity required of receiving and radiation patterns is another example. Perhaps the most convincing, at least in terms of the numerical modeling error, is a boundary-condition check, which for a perfect electrical conductor should produce (integral of |Etan| ds) << (integral of |Einc| ds) over the surface.
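A minimal sketch of two such internal checks, using the Prad = I^2*Rin power balance quoted above and an integrated boundary-field measure, might look as follows in MATLAB (the numerical values are invented placeholders for outputs a modeling code would supply):

Prad = 1.23e-2;                    % W, power integrated over the far-field pattern
Iin  = 1.30e-2;  Rin = 73.1;       % A (rms) and ohms at the feed point
Pin  = Iin^2 * Rin;                % input power, using the Prad = I^2*Rin form above
powerErr = abs(Prad - Pin)/Pin;    % global, scalar power-conservation measure

s    = linspace(0, 2.5, 500);      % normalized position along a 2.5-wavelength wire
Etan = 1e-10*ones(size(s));        % assumed computed |Etan| between match points, V/m
Einc = ones(size(s));              % 1 V/m excitation
bcErr = trapz(s, Etan)/trapz(s, Einc);   % integrated relative boundary-condition error
fprintf('power-balance error %.2e, boundary error %.2e\n', powerErr, bcErr);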

B1. Convergence Tests--Convergence tests are a widely used kind of internal check, because they can be fairly easy to implement and can provide appropriate reassurance that a given model has been sampled finely enough. However, convergence to the correct answer cannot be guaranteed, and furthermore the associated matrix can become more and more ill-conditioned with increasing N. In the case of the thin-wire approximation, for example, nonphysical current and charge oscillations can occur if the segment length is made less than the wire diameter. Misleading results like those shown in Fig. 1 (these and other model results presented here come from NEC [6] unless otherwise indicated) can also be obtained, due to the fact that the conductance is a more-stable quantity than is the resistance, as well as the fact that the source model can introduce an additional effect having nothing to do with solution convergence per se. It's worth pointing out that the trends seen in Fig. 1 can themselves be modeled, either from a curve-fitting viewpoint, which is less desirable although possibly beneficial, or based on the underlying physics, the latter using a general approach called model-based parameter estimation (MBPE) [7]. For example, the dependence of the susceptance on N, when the excitation is confined to a single segment, can be approximated by B(N) ~ X + Y/ln(L/N), a model which is based on knowledge of the feed-point (or gap-width) dependence. The parameters X and Y are estimated from samples of B for two (or more) values of N. For the results shown in Fig. 1b, X = 1.025x10^-4 and Y = 2.0x10^-3 (a short MATLAB sketch of this two-sample fit is given below, following B2). No comparable model is needed for the conductance because it is relatively insensitive to the source-region details, stabilizing here at about N ~ 10. If the source region is maintained at a constant width, the conductance is effectively independent of the number of unknowns, while the susceptance is nearly so beyond N ~ 20 or so.
B2. Boundary-Condition Checks--Errors in the continuity of tangential fields at a material surface (boundary-condition error) probably provide the most rigorous internal check that can be made of a computer model. For a perfect conductor this might be most simply done by examining the induced or total tangential electric field on the object's surface, as in Fig. 2. The total field, for example, provides the relative error in the boundary field when compared with the magnitude of the excitation field. Alternatively, the normal power flow to the surface (due to a nonzero electric field) yields an error measure that can be compared with the power supplied by the exciting field. This kind of measure can be used in a local sense for adaptive modeling by indicating where the power-flow error is larger than some acceptable value, thus indicating that additional sampling of the boundary current and tangential field is needed there. When integrated over the entire boundary, a global measure of the power-flow error is obtained. If the boundary-field error is examined more finely over the entire surface of an object being modeled than is used for the model itself, this can require more computation than that needed to fill the impedance matrix. Thus, using it may be reserved for those situations where a new problem is being modeled or where it's suspected that the numerical results are unreliable. Fortunately, the boundary error wouldn't usually need to be examined over the entire object, but can, instead, be restricted to those areas where it's thought to be most required--the source region of an antenna, for example.
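The convergence model quoted in B1 can be exercised with only a few lines of MATLAB. The sketch below generates synthetic susceptance samples from the B(N) ~ X + Y/ln(L/N) form, using the X and Y values quoted for Fig. 1b, and then recovers the parameters from two samples; here L is read as the antenna length in wavelengths, so that L/N is the segment length (an interpretation assumed from the text, not stated explicitly there):

L  = 2.0;                             % antenna length in wavelengths (Fig. 1 dipole)
Xt = 1.025e-4;  Yt = 2.0e-3;          % values quoted in the text for Fig. 1b
Bmodel = @(N) Xt + Yt ./ log(L ./ N); % assumed form of the convergence model
Ns = [10; 40];                        % two segment counts (arbitrary choice)
Bs = Bmodel(Ns);                      % stand-ins for computed susceptance samples
A  = [ones(2,1), 1 ./ log(L ./ Ns)];  % two equations in the unknowns X and Y
p  = A \ Bs;                          % recovers [X; Y] from the two samples
fprintf('X = %.4e S, Y = %.4e S\n', p(1), p(2));
fprintf('model-based extrapolation, B(200) = %.4e S\n', p(1) + p(2)/log(L/200));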


B3. Model-Based Parameter Estimation--Another application of MBPE to CEM is to develop an estimate for the uncertainty of a computed observable by checking the consistency of data for that observable with Maxwell's equations [8]. The approach may be compared to using linear regression to assess the accuracy of data that should fall on a straight line. In the case of an electromagnetic observable, however, a straight line is an inappropriate fitting model. Rather, the fitting model should represent the behavior expected on physical grounds. For example, an EM frequency response is well-approximated by a pole series, or more generally, a rational function. The MBPE data-uncertainty check also requires that the data is over-sampled with respect to its rank. One approach involves computing the parameters of the fitting model from a subset of the data, and then using the difference between the fitting model and the remaining data to develop an uncertainty estimate. Another involves using all of the data and obtaining a least-squares solution for the fitting-model parameters, with the difference between the fitting model and the entire data set providing the uncertainty estimate (a simplified sketch of the first approach is given below). An example of the former approach is illustrated in Fig. 3, where the observable is the circumferential current on the front side of an infinite PEC cylinder illuminated by a normally incident plane wave. Five overlapping rational functions of frequency were used to obtain the result shown, having numerator-polynomial orders of 3, 4, 3, 4, and 3, respectively, with corresponding denominator-polynomial orders of 4, 5, 4, 5, and 4, requiring a total of 16 data samples (shown by the large open circles). The data samples used by each fitting model are 1-8, 1-10, 3-12, 5-14, and 5-16, having additive random noise relative to the maximum of 10%. Aside from the region ka ~ 1, the unused data is seen to be rather randomly distributed about the average fitting model, with the overall average absolute excursion being 0.083. Numerous other computer experiments on various kinds of data [7,8] exhibit similar results, indicating that MBPE can be used to estimate the uncertainty of CEM data with reasonable reliability. Note also that the MBPE approach not only provides an estimate for the data accuracy, but yields an estimate for the actual response from the noisy data.
Model-Based Parameter Estimation (MBPE) can also be used to obtain a continuous estimate of an observable to a specified uncertainty when performing computations or making measurements. An example of using an adaptive version of MBPE to estimate the radiation pattern of a uniform aperture is shown in Fig. 4. In this particular example, the specified acceptable uncertainty in the estimated pattern was increased from 0.1 dB to a maximum of 3 dB as the pattern level reached -30 dB, but other schemes could also be used. The fitting model used here to implement MBPE consists of a small number (usually less than 10) of discrete sources whose strengths are obtained from samples of the far field over 13 overlapping observation windows. It can be seen that the estimated pattern lies between the upper- and lower-bound estimates shown by the open and closed circles, respectively.
C. External Checks
External checks are those that employ data not generated within a modeling code itself, using the analytical properties of Maxwell's equations and solutions thereof, numerical results from other computer models, and experimental measurements.
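Referring back to the MBPE data-uncertainty check of B3, the following MATLAB sketch shows the first approach in simplified form: a single rational function is fitted by linearized least squares to a subset of synthetic frequency-response samples, and the held-out samples measure the uncertainty. The single-resonance response, the noise level, and the low fitting orders are assumptions made for brevity; the actual examples in the text use five overlapping fitting windows and measured or computed CEM data.

rng(0);
f    = linspace(0.5, 2.0, 40).';                     % normalized frequency samples
resp = 1 ./ (1 - (f/1.3).^2 + 1i*0.05*f);            % assumed single-resonance response
resp = resp + 0.02*(randn(size(f)) + 1i*randn(size(f)));   % additive noise
fitIdx = 1:2:numel(f);                               % samples used to build the model
chkIdx = setdiff(1:numel(f), fitIdx);                % held-out samples
ff = f(fitIdx);  rr = resp(fitIdx);
% Linearized least squares for resp ~ N(f)/D(f), with D's constant term fixed to 1:
% N(fi) - resp_i*(D(fi) - 1) = resp_i
A = [ones(size(ff)), ff, ff.^2, -rr.*ff, -rr.*ff.^2];
c = A \ rr;
num = [c(3) c(2) c(1)];  den = [c(5) c(4) 1];        % descending-power coefficients
fit = polyval(num, f) ./ polyval(den, f);            % fitting-model estimate everywhere
uncert = mean(abs(fit(chkIdx) - resp(chkIdx)));      % uncertainty from held-out data
fprintf('estimated data uncertainty: %.3g\n', uncert);

With these settings the reported uncertainty should come out on the order of the injected noise level, which is the kind of consistency the MBPE check looks for.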

Figure 4. MBPE is used here to develop an estimate of the radiation pattern of a uniform aperture to a specified uncertainty. The data samples used are shown by the large open circles, the solid line is the average of the 13 fitting models employed, and the small solid and open circles show upper- and lower-bound estimates for the average fitting model.
C1. Using Analytical Results--Since there are relatively few closed-form solutions available for which results can be obtained to essentially arbitrary accuracy, analytical checks do not provide a rich source of external checks for validating model accuracy. Perhaps the most important role of analytical results in validating computer models is in such areas as energy conservation, reciprocity, etc., included in this discussion as a part of the internal checks available for validating a code.
C2. Using Numerical Results--Many analytical and numerical choices must be made in developing a computer model, one being whether a time-domain or frequency-domain formulation will be used. Next, the field propagator must be chosen, among the possibilities being a Green's function, the Maxwell curl equations, a modal description, or geometrical ray theory and diffraction. As part of the numerical treatment, the basis and weight functions must be selected, as well as the numerical implementation--point sampling vs. a Galerkin procedure, for example--followed by how the resulting equations are to be solved. For these and other reasons, model intercomparability is not always straightforward. But, if two, or more, computer models are to be compared quantitatively, ways of comparing their results are needed. Since CEM models can vary in any of the ways outlined above, it's often necessary that the results from one be transformed into the framework of the other, or perhaps, both transformed into a third reference system. Once comparability issues are solved, literally any quantity computed by a modeling code can serve as an external check for another code.
C3. Using Experimental Data--Finding that computed results agree with experimental data to within an acceptable error band is probably the most satisfying kind of check for a modeler, and the one most convincing to others as well. But, experiments are themselves subject to uncertainty, not only with respect to the actual measurement, but in terms of whether the object under test is itself fabricated correctly.


It's highly advisable that the modeler be acquainted with the details of the experiment. A number of personal examples could be cited, one of which involved the radar cross section of a long, thin wire. Initial comparison of the measured and computed results showed the two sets of data to be qualitatively similar, but with a systematic shift in angle between them. Further inquiry disclosed that the test wire was so thin as to be threadlike, and thus was taped on a Styrofoam rod to maintain its shape. Upon repeating the computation at a higher frequency, shifted by the square root of the rod's relative permittivity, the difference between the two sets of results was substantially reduced. An application of MBPE [8] for estimating the accuracy of experimental data, to determine its suitability for validating computed results, is illustrated in Fig. 5. Here the RCS of a metal cube provides the test data [9], where the fitting model is again a rational function of frequency. The maximum estimated error can be seen to be about 5% of the maximum RCS, indicating that this data could be used to validate computed results to this order of uncertainty.

Figure 5. MBPE is used here to estimate the accuracy of measured data. The average of five fitting models is shown by the solid line, as obtained from data samples indicated by the large, open circles. The unused data is shown by the solid points, with its deviation about the average fitting model multiplied by 3 to show the difference more clearly.

VII. ERROR STATEMENTS


It's appropriate to ask, in considering the problem of determining the accuracy of a modeling code, of what value are numerical results of unknown accuracy or uncertainty? Numerical results of unknown accuracy are likely to be mistrusted or misused. It's risky to make expensive design decisions based on results whose accuracy is uncertain, while on the other hand it's equally risky to accept unproven results as being correct. But, expecting error statements to be included with numerical results shouldn't represent an undue burden to the modeler. Observe that the sought-for numerical accuracy needn't greatly exceed the resolution or dynamic range of real-life, practical applications. For example, if the application needs only 3 dB accuracy, is it worthwhile to do modeling computations to 0.1 dB accuracy? Or, if a 2-deg null location isn't measurable, is it appropriate to demand this resolution from a computer model? Or, if impedance variations < 10% aren't observable, is it necessary to seek 1% accuracy in the model results?
Of course, were cost not an issue, then it would be reasonable to perform every model computation to as high an accuracy, or to the least uncertainty, as possible. But this situation is rarely, if ever, the case. The point to be made is that the accuracy sought in the model results should be commensurate with the intended use to be made of them.
A. Specifying CEM Results Accuracy--The Way Things Are Now
Accuracy and validation statements made in oral presentations and in written publications are inadequate when they say things like "the results are in good agreement with . . . ," "the model is highly accurate," "excellent results are obtained," etc., as is now almost universally the case, because they are not quantitative. What the author means by such subjective statements and what the reader might conclude in looking at the same data can be very different. It's fair to ask, if the author won't/can't at least estimate what the uncertainty is in the results being presented, why should the material be published in the first place?
B. Specifying CEM Results Accuracy--The Way Things Ought to Be
More useful, objective, and quantitative statements might instead say "the error in the peak gain is less than 0.5 dB," "nulls in the scattering pattern are located to within 2 degrees," "the RMS difference between the computed input impedance and experimental measurement is less than 3 dB over a 2:1 frequency range," etc. These statements should also be accompanied by an explanation about how these conclusions were reached, if that is not obvious from the statement itself. Finally, some commentary should be included about what kind of sampling density is needed in the model to achieve the estimated accuracy, what kind of operation count and frequency-scaling law is associated with getting these results, the associated storage required and its frequency dependence, and the anticipated increase in the cost to decrease the error further. In this context, it's also relevant to observe that a computer model can only be regarded as validated for problems already solved. Even small changes in parameters and problem type might result in large changes in the code's performance. The tolerability to parameter changes may be sensitive to the particular application, but can also depend on the user's perception, experience and expectations. Therefore, validation and reporting estimated accuracy are integral, ongoing aspects of using any modeling code, no matter how extensively used it is. One way of reporting accuracy or uncertainty is the use of error bars or their equivalent. Error bars, or a range where the correct result is estimated to lie within some specified confidence level, have traditionally been used for experimental data. The error bar provides an indication of the experimentalist's confidence in the results, gives a quantitative idea of their anticipated reproducibility, and indicates the degree to which the results are thought to be reliable. What has proven to work well in the experimental world should at least be considered for its possible adaptability to the computational world. How this might be accomplished may need considerable work, but it's high time that a start be made to deal with the problem. It's also preferable that error measures be related to the intended use of the results.


For example, the consequences of making design decisions based on less-accurate data should be balanced against the cost of increasing the model accuracy. The variational dependence of far fields on boundary sources, as opposed to an antenna's input impedance being sensitive to the source distribution near the feed point, can reduce the accuracy needed when only a radiation pattern is of interest. Advantage should be taken of the possibility that at times order-of-magnitude estimates may be acceptable in the early stages of developing a design, whereas the final cut may require tenths-of-dB confidence levels. A fact that CEM modelers should always keep in mind is that cost-uncertainty tradeoffs are inherent in numerical modeling. Aside from the applications relevance of establishing the uncertainty of numerical results, it is important when such data is intended for distribution to others through reports, papers and articles. A community-wide policy of requiring quantitative accuracy statements is the only responsible course in the long run. It would force CEMers to confront an issue now mostly avoided. It would level the playing field, by not disadvantaging those few who now do report the uncertainty in their models. It would reassure the sponsors and customers of CEM software that the developers are tackling the problem and not leaving it to inexperienced or uninformed users alone to handle. Finally, it would return to the ethics of traditional science, where quantitative results were considered unacceptable unless accompanied by error estimates.
C. A Recommended Policy for Describing Model Accuracy
These observations suggest the need for a new policy in the EM community that explicitly addresses accuracy and uncertainty in a flexible, uniform, consistent and fair manner. It should be flexible enough to permit a variety of acceptable choices to accommodate the varied resources available to those presenting results. It should be uniformly implemented so that equivalent information will accompany all the forms in which CEM results are communicated, to ease both the reviewer's and reader's evaluation of the material and to ensure consistency in the results presented. Lastly, it should be imposed fairly, in that a good-faith effort at compliance is acceptable while not requiring all authors to address the issue in the same way. Repeating a proposal originally made to ACES and the AP-S [10], and subsequently adopted by the AP-S Magazine, I suggest that some sort of formal validation policy be implemented in CEM. As a requirement for publication, any material including computed results must address the question of their validity/accuracy/uncertainty by including the following two statements:
1) "The results presented here are estimated to be accurate to _____," where quantitative statements such as "the error in peak gain is 0.5 dB," "nulls in the scattering pattern are located to within 2 deg," "input impedance is obtained to within 5 ohms," "the RMS difference between the two sets of data is 1 dB," etc. are made.
2) "This estimate is based on using the following kind(s) of validation exercise(s) _____," where whatever experimental, analytical and/or computational validation has been used is summarized.
or their equivalent, where point 2) could use anything the author wishes, including citing personal experience as a justification for the accuracy claimed. Information also should be required for the modeling parameters that are used, since these are intimately related to accuracy, using statements such as: "Nominal sampling densities required to achieve these estimated accuracies are _____," where wavelength-dependent and/or geometry-dependent values are given; "The dependence of the operation count on frequency, f, needed to exercise the model reported here is estimated nominally to be A*f^x," or some equivalent statement, where numerical values for A and x are given. Giving computer running times is useful, but that alone is not enough, because there is such a variation in computer architectures that model-to-model comparison based on running time is not very informative. "The variable storage needed to exercise this model is estimated nominally to be B*f^y," where again numerical values for B and y are given. Finally, articles presenting numerical results should not limit such data entirely to a graphical format. While obviously useful, graphs are not very convenient for making quantitative comparisons with other data. It should be required that the data in at least one of the graphs also be presented in tabular form.

VIII. CONCLUDING COMMENTS


For various reasons, computer modeling remains more art than science. Approximations and/or limitations are intrinsic in the process, from the conceptualization and formulation steps through the computation and validation. A numerical model usually only approximates the physical reality of interest, e.g., by replacing a curvilinear structure with a piecewise linear boundary, which results in a physical modeling error, while its subsequent solution is also approximate since only a finite number of unknowns (N) can be used, resulting in a numerical modeling error. A number of practical questions thus arise in connection with employing such models. Among them are how does the matrix condition number vary with N, and how does the solution accuracy depend on the condition number, the computation precision and N? How small can N be made while achieving acceptable accuracy, how few computer operations will be required, and how are these factors affected by the formulation and numerical treatment? Of the three principal attributes of a computer model--accuracy, efficiency, and utility--accuracy remains the most important with respect to obtaining results that are useful for practical applications. The accuracy or uncertainty of computed results can be tested using internal checks within a modeling code and/or external checks using data from independent sources, with either employing local and global error measures and near- and/or far-field quantities. These error measures can utilize analytical, numerical and experimental data, the goal being to demonstrate that the numerical results exhibit behavior consistent with Maxwell's equations to a degree of accuracy commensurate with the intended application. Such measures would evaluate the degree to which reciprocity, energy conservation, boundary conditions, etc. are satisfied, ideally as built-in user options in the modeling software. Finally, authors publishing results obtained from computer modeling should quantitatively address validation to let the reader know how accurate the modeler believes the results are and what computer resources are needed to get them.


REFERENCES
1. R. E. Kleinman, National Radio Science Meeting, Boulder, CO, 1993.
2. Andrew L. Drozd, "Selected methods for validating computational electromagnetic modeling techniques," IEEE EMC Society Newsletter, Issue 208, Winter 2006, pp. 73-78.
3. E. K. Miller, "Verification and Validation of Computational Electromagnetics Software," invited paper in Conference Proceedings, EMB 01, Electromagnetic Computations, Methods and Applications, Uppsala University, Sweden, November 14-15, 2001, pp. 7-18.
4. E. K. Miller, "Characterization, Comparison, and Validation of Electromagnetic Modeling Software," ACES Journal, special issue on EM Computer Code Validation, pp. 8-24, 1989.
5. The American College Dictionary, Harper & Brothers Publishers, 1953.
6. G. J. Burke, Numerical Electromagnetics Code: NEC-4 Method of Moments, Lawrence Livermore National Laboratory, UCRL-MA-109338, 1992.

7. E. K. Miller, "Model-Based Parameter Estimation in Electromagnetics: Part I. Background and Theoretical Development," IEEE Antennas and Propagation Magazine, Vol. 40, No. 1, pp. 42-52; "Part II. Applications to EM Observables," Vol. 40, No. 2, pp. 51-64; "Part III. Applications to EM Integral Equations," Vol. 40, No. 3, pp. 49-66, 1998.
8. E. K. Miller, "Using Model-Based Parameter Estimation to Estimate the Accuracy of Numerical Models," in Proceedings of the 12th Annual Review of Progress in Applied Computational Electromagnetics, Naval Postgraduate School, Monterey, CA, pp. 588-595, 1996.
9. S. Mishra, David Florida Laboratory, Ottawa, Ontario, Canada, private communication, 1996.
10. E. K. Miller, "Requiring Quantitative Accuracy Statements in EM Data," in Proceedings of the Eleventh Annual Review of Progress in Applied Computational Electromagnetics, Naval Postgraduate School, Monterey, CA, March 20-24, pp. 1202-1210, 1995.

Edmund K. Miller Biography


Since earning his Ph.D. in Electrical Engineering at the University of Michigan, E. K. Miller has held a variety of government, academic and industrial positions. These include 15 years at Lawrence Livermore National Laboratory, where he spent 7 years as a Division Leader, and 4+ years at Los Alamos National Laboratory, from which he retired as a Group Leader in 1993. His academic experience includes holding a position as Regents-Distinguished Professor at Kansas University and as Stocker Visiting Professor at Ohio University. Dr. Miller has served as an AP-S Distinguished Lecturer, and wrote the column "PCs for AP and Other EM Reflections" from 1984 to 2000. He received (with others) a Certificate of Achievement from the IEEE Electromagnetic Compatibility Society for Contributions to Development of NEC (Numerical Electromagnetics Code) and was a recipient (with others) in 1989 of the best paper award given by the Education Society for "Computer Movies for Education." He served as Editor or Associate Editor of IEEE Potentials Magazine from 1985 to 2005, for which he wrote a regular column "On the Job," and in connection with which he was a member of the IEEE Technical Activities Advisory Committee of the Education Activities Board and a member of the IEEE Student Activities Committee. As a member of the TPC for the MTT Symposium in Albuquerque, NM, he was Guest Editor of the Special Symposium Issue of the IEEE MTT Society Transactions for that meeting. He was involved in the beginning of the IEEE magazine Computing in Science and Engineering (originally called Computational Science and Engineering), for which he has served as Area Editor or Editor-at-Large. Dr. Miller has lectured at numerous short courses in various venues, such as ACES, AP-S, MTT-S and local IEEE chapter/section meetings, and at NATO Lecture Series and Advanced Study Institutes. Dr. Miller edited the book Time-Domain Measurements in Electromagnetics, Van Nostrand Reinhold, New York, NY, 1986, and was co-editor of the IEEE Press book Computational Electromagnetics: Frequency-Domain Moment Methods, 1991. He was the organizer and first President of the Applied Computational Electromagnetics Society, for which he also served two terms on the Board of Directors. He served a term as Chairman of Commission A of US URSI, is or has been a member of Commissions B, C, and F, has been on the TPC for the URSI Electromagnetic Theory Symposia in 1992 and 2001, and was elected as a member of the US delegation to several URSI General Assemblies. He is a Life Fellow of IEEE, from which he received the IEEE Third Millennium Medal in 2000. His research interests include scientific visualization, model-based parameter estimation, the physics of electromagnetic radiation, validation of computational software, and numerical modeling. He is listed in Who's Who in the West, Who's Who in Technology, American Men and Women of Science and Who's Who in America.


Transmission Line Fault Analysis Using a Matlab-Based Virtual Time Domain Reflectometer Tool
Levent Sevgi, Doğuş University, Electronics and Communication Engineering Department, Zeamet Sok. No. 21, Acıbadem / Kadıköy, 34722 Istanbul, Turkey
Abstract
Fault detection and identification along a finite-length transmission line, using a Matlab-based virtual time domain reflectometer that was recently introduced, is discussed. Estimation of complex loads based on the analysis of transmission line signals in the time, frequency and Laplace domains is presented.

Keywords--Transmission lines, time domain reflectometer, FDTD, MATLAB, simulation, visualization, fault detection, Laplace transform

1. Introduction

Investigation of the time domain (TD) responses of signals along transmission lines (TLs) requires solving the well-known TL equations, derived either from Maxwell's equations or by using TL circuit models (a typical two-wire TL, and a circuit model in terms of the primary parameters R [Ω/m], L [H/m], C [F/m] and G [S/m] per unit length, are sketched in Fig. 1). Such an approach should take into account the physical parameters of the TL, the excitation, and the termination. The corresponding test/measurement method and instrument are TD reflectometry and the TD reflectometer (TDR), respectively [1]. TDRs are used to locate and identify faults along all types of cables, such as broken conductors, water damage, cuts, smashed cables, short circuits (SC), or open circuits (OC). The TDR principle is simple: a generator injects a pulse down a TL, and reflections from discontinuities and/or terminations are recorded. The distance between the generator and a fault is measured from the time delay between the incident pulse and the echo. Also, detailed analysis of the echo signal can reveal additional details of the faults or reflecting objects. Parallel to the developments in computer technology and the experience gained in programming, and also because of the sharp increases in the costs of electronic devices/systems, virtual labs have become very attractive in electrical engineering (EE) and education. A set of virtual electromagnetic (EM) tools has been introduced [2-6] over the last couple of years to assist engineers, educators, as well as students, from propagators to antenna solvers, radar cross section (RCS) predictors to EM compatibility (EMC) simulators, etc. The most recent virtual tool is the TDRMeter simulator [7], which solves the TD TL equations

∂v(x,t)/∂x + L ∂i(x,t)/∂t + R i(x,t) = 0    (1a)
∂i(x,t)/∂x + C ∂v(x,t)/∂t + G v(x,t) = 0    (1b)

(v(x,t) and i(x,t) are the space- and time-dependent voltage and current, respectively), by using the finite-difference time-domain (FDTD) approach as

v^{n+1/2}(k) = [(C/Δt - G/2)/(C/Δt + G/2)] v^{n-1/2}(k) - [1/(C/Δt + G/2)] [i^n(k) - i^n(k-1)]/Δx    (2a)

i^n(k) = [(L/Δt - R/2)/(L/Δt + R/2)] i^{n-1}(k) - [1/(L/Δt + R/2)] [v^{n-1/2}(k) - v^{n-1/2}(k-1)]/Δx    (2b)

The integers k and n, respectively, represent the spatial (x) and time (t) indices, so that physical space and time values are specified via v(x, t + Δt/2) = v^{n+1/2}(k) and i(x, t) = i^n(k), using x_k = kΔx and t_n = nΔt (the Δt/2 delay between voltages and currents arises from the leap-frog scheme [2]). The FDTD derivation of the TL equations, the discretizations of the source and load nodes under different termination conditions with the help of node-voltage and/or loop-current methods, and the design principles and details of the Matlab-based virtual TDRMeter were discussed in [7]. Here, fault detection/identification along a TL and the prediction of various complex terminations by using the virtual TDRMeter tool are presented. First, the TDRMeter is reviewed in Sec. 2. TL echo analysis based on both Fourier and Laplace transformations is summarized in Sec. 3, together with characteristic examples and fault detection/identification tests. Finally, the conclusions are outlined in Sec. 4 (the reader is strongly advised to review basic TL theory before proceeding to further discussions; see, for example, [2, 8, 9] and their references).

Figure 1: (a) Finite-length transmission line, with the time domain voltage source vs(t) and source resistor Rs, terminated by a complex load ZL; (b) its loss-free equivalent circuit. L [H/m] and C [F/m] are the unit-length inductance and capacitance, respectively.
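Although the source and load discretizations of the TDRMeter are only referenced here (they are detailed in [7]), the interior leap-frog updates (2a)-(2b) are easy to sketch. The MATLAB fragment below does so for a loss-free line (R = G = 0) using the unit-length parameters of the example discussed in Sec. 2 (250 nH/m, 100 pF/m, 0.5 m, 100 cells); the semi-implicit resistive source and load node updates, the neglect of the half-cell end capacitance, and the purely resistive load are simplifying assumptions of this sketch, not the TDRMeter implementation.

Lu = 250e-9;  Cu = 100e-12;          % unit-length inductance and capacitance
len = 0.5;  Nx = 100;                % line length (m) and number of cells
dx = len/Nx;  vp = 1/sqrt(Lu*Cu);    % cell size and wave speed (2e8 m/s)
dt = dx/vp;                          % "magic" time step, 25 ps
Rs = 100;  RL = 10;                  % assumed source and load resistances, ohms
bs = dt/(Cu*dx*Rs);  bL = dt/(Cu*dx*RL);
V = zeros(1, Nx+1);  I = zeros(1, Nx);
tau = 400e-12;  Nt = 400;            % rectangular pulse width and number of steps
Vobs = zeros(1, Nt);                 % signal vs. time at the observation node
for n = 1:Nt
    vs = double(n*dt <= tau);                                    % 1 V rectangular pulse
    V(2:Nx) = V(2:Nx) - (dt/(Cu*dx))*(I(2:Nx) - I(1:Nx-1));      % Eq. (2a), G = 0
    V(1)    = ((1-bs/2)*V(1)    + (dt/(Cu*dx))*(vs/Rs - I(1)))/(1+bs/2);   % source node
    V(Nx+1) = ((1-bL/2)*V(Nx+1) + (dt/(Cu*dx))*I(Nx))/(1+bL/2);            % load node
    I = I - (dt/(Lu*dx))*(V(2:Nx+1) - V(1:Nx));                  % Eq. (2b), R = 0
    Vobs(n) = V(21);                                             % node at x = 0.1 m
end
plot((1:Nt)*dt*1e9, Vobs), xlabel('time (ns)'), ylabel('voltage at 0.1 m (V)')

Running the sketch shows the incident pulse passing the 0.1 m observation point followed, after the round-trip delay, by the negative echo from the mismatched 10 Ω load, which is the kind of record the TDRMeter plots as "signal vs. time."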


Figure 2: The front panel of the TDRMeter package, parallel RC termination, a rectangular pulse traveling towards the load (a 50 Ω transmission line, Rs=100 Ω, pulse width=400 ps, RL=10 Ω, CL=5 pF).

Figure 3: (a) Signal vs. time at the 0.1 m observation point on the SC-terminated, 50 Ω TL with a G-type fault at 0.3 m (Gf=4 S/m), Gaussian pulse (Rs=50 Ω), (b) The sketch of the problem.

2. The Matlab-Based Virtual TDRMeter Tool¹


The TDRMeter virtual tool [7] is a multi-purpose numerical package designed with Matlab 6.5. The front panel of the TDRMeter is given in Fig. 2. There are four input data blocks on top of the panel. The user supplies the unit-length TL parameters on the left (from which the characteristic impedance is automatically displayed). The mid-left block is reserved for the generator parameters. The user may select one of three source types (Gaussian, rectangular, or trapezoidal pulse) from the pop-up menu, and supply the pulse duration and the rise/fall times (if the source is trapezoidal). The internal source resistor is also supplied inside this block. The pulse length of the Gaussian voltage source is automatically selected according to the user-specified line length and the other discretization parameters (as explained in [7]). The user may choose long pulse durations (i.e., at least longer than the total TL travel time) and thereby simulate a step voltage source. The third block, at mid-right, is used for the specification of the load; the selection in the pop-up menu of this block includes a resistive load, a parallel resistor/capacitor combination, a serial resistor/inductor combination, and serial and parallel resonance terminations. Based on the selection, the user is asked to supply the RLC elements of the load in the activated data boxes. The last block at the right is used for the fault specification. The user is allowed to change the unit-length admittance and/or conductance (the user can define and add other types of faults by modifying the M-file). The TL length, the observation point, and the number of simulation steps are supplied at the top right of the front panel together with the runtime buttons.

The output of the TDRMeter is given via two different plots. The incident voltage pulse and the reflected echoes (if they exist) along the TL at any instant are automatically displayed in the reserved window as movie frames (see Fig. 2). Signal vs. time at the specified observation point may be displayed in another plot, which becomes visible by pressing the Plot Sig. vs. Time button after the TD simulations. Pressing this button automatically saves the signal vs. time data into a file named SigvsTime.dat for further off-line signal analysis. The counter at top right displays the number of time steps left during the simulation.

The pop-up menu at the top right contains a key selection: the TDRMeter can be used either as a TL simulator or as a TD reflectometer. By default, the selection is the TL simulator. The user specifies all input parameters and runs the tool to visualize time domain TL effects and/or analyze the recorded signals. The TDRMeter option may be used for fault detection/identification purposes. It is designed in such a way that the user specifies only the TL and generator parameters. Once the TDRMeter option is selected, the termination and fault blocks disappear. The load is selected automatically and randomly, and a fault is introduced at an arbitrary point along the TL, so the user can find out its type and/or numerical values (if possible) only by observing/analyzing the output plots, e.g., signal vs. time at a selected TL point. The user may do blind tests and observe problems along the TL after specifying the TL and source parameters and starting the simulations by pressing the Run button. The parameters of these blind tests (i.e., the randomly generated values) re-appear when the user re-selects the TR Line option at the end of the simulations. In this way, the user can check his results against the data displayed on the front panel of the TDRMeter.

The plot in Fig. 2 belongs to a 50 Ω, 0.5 m loss- and fault-free homogeneous (uniform) TL excited with a rectangular pulse having 400 ps pulse duration and a 100 Ω internal source resistor, and terminated by a parallel RC load (RL = 10 Ω, CL = 5 pF). The unit-length inductance and capacitance values are 250 nH/m and 100 pF/m, respectively (the corresponding characteristic impedance is Z0 = √(L/C) = 50 Ω), and the speed of the voltage and current waves along the line is v = 1/√(LC) = 2×10^8 m/s. The TL is divided into 100 nodes, therefore Δx = 5 mm. The time step is calculated to be Δt = Δx/v = 25 ps. With this choice, the voltage (or current) pulse propagates one node at a time; therefore, it will take 100 Δt for the injected pulse to reach the load. A total of 400 Δt will result in two reflections from the load and two reflections from the source end.

Fig. 3a shows the signal vs. time plot (at 0.1 m) of a typical scenario where a 50 Ω, 0.5 m loss-free SC TL, excited by a Gaussian pulse generator with a 50 Ω internal resistor (i.e., a matched source), has a G-type fault at 0.3 m (Gf = 4 S/m). In this case, the incident voltage pulse reflects not only at the load, but also from the fault point in either direction. The sketch of this scenario is drawn in Fig. 3b with the identification of the first three echoes (identification of the other three echoes is left to the reader).

¹ Visit http://www3.dogus.edu.tr/lsevgi to download the TDRMeter package (click on EMC Virtual Tools to download TR_LINE_GUI). Requires Matlab software to use.
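As a quick check of the discretization quoted above, the characteristic impedance, wave speed and time step follow directly from the per-unit-length values. The short MATLAB fragment below reproduces them; it is a sketch for illustration only, with variable names chosen here, not taken from the TDRMeter M-file:

```matlab
% Per-unit-length parameters of the example line in Fig. 2
L   = 250e-9;         % H/m
C   = 100e-12;        % F/m
len = 0.5;            % line length, m
N   = 100;            % number of FDTD nodes

Z0 = sqrt(L/C);       % characteristic impedance -> 50 ohm
v  = 1/sqrt(L*C);     % wave speed               -> 2e8 m/s
dx = len/N;           % spatial step             -> 5 mm
dt = dx/v;            % "magic" time step        -> 25 ps
tround = 2*len/v;     % round-trip time          -> 5 ns (200 time steps)
fprintf('Z0 = %.1f ohm, v = %.2e m/s, dx = %.1f mm, dt = %.1f ps\n', ...
        Z0, v, dx*1e3, dt*1e12);
```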


Figure 4: Signal vs. time for the resistive termination showing the incident rectangular pulse and the echo (reflected pulse) along a 50 Ω, 0.5 m TL (pulse length=400 ps, Rs=50 Ω, RL=350 Ω).

Figure 5: Signal vs. time at the mid point of a 50 Ω, 0.5 m TL under (a) serial RL termination (RL=50 Ω, LL=250 nH), (b) parallel RC termination (RL=50 Ω, CL=5 pF). Matched termination is used at the source (Rs=50 Ω). Pulse length=400 ps.

3. The TL Echo Analysis


The TDRMeter tool can be used to teach, understand and visualize the TD pulse characteristics and the echoes from various discontinuities/terminations, and fault types and locations can be predicted. The simplest method is to predict the length of the TL (for the case of resistive discontinuities/terminations) by measuring the transit time between the incident pulse and the echo (if they are separated in time). Distinguishing the end- and fault-reflected pulses in time, measuring the delays among them, and marking the maximum amplitudes are enough for this purpose. An example is given in Fig. 4. A 50 Ω, 0.5 m loss- and fault-free uniform TL, with a matched generator at the left end and an RL = 350 Ω resistive load at the right end, is excited with a rectangular pulse having 400 ps pulse duration. The figure shows signal vs. time recorded at the mid point of the TL. The time delay between the two pulses gives the distance from the observation point to the end of the TL. The transit time to the first pulse gives the distance from the generator to the observation point. The ratio of the pulse amplitudes is 0.75 and corresponds to the modulus of the voltage reflection coefficient, which can easily be verified from

ΓL = (ZL − Z0)/(ZL + Z0) = (350 − 50)/(350 + 50) = 0.75   (3)
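As a numerical illustration of this resistive-load reading, the short MATLAB fragment below converts a measured delay and amplitude ratio into a distance and a load impedance by inverting (3). The arrival times used here are illustrative values consistent with the Fig. 4 setup (mid-point observation), not numbers read off the actual plot:

```matlab
% Resistive-termination reading from a TDR trace (illustrative values)
v      = 2e8;          % wave speed on the line, m/s
t_inc  = 1.25e-9;      % assumed arrival time of the incident pulse at the observation point
t_echo = 3.75e-9;      % assumed arrival time of the load echo at the same point
d_load = v*(t_echo - t_inc)/2;          % distance from observation point to the load -> 0.25 m
GammaL = 0.75;                          % measured echo-to-incident amplitude ratio
ZL     = 50*(1 + GammaL)/(1 - GammaL);  % invert (3): ZL = Z0*(1+Gamma)/(1-Gamma) -> 350 ohm
fprintf('distance to load = %.2f m, ZL = %.0f ohm\n', d_load, ZL);
```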

Unfortunately, this method does not work for complex faults and/or terminations, as shown in Figs. 5a and 5b, which give the signal vs. time variations for serial RL (RL = 50 Ω, LL = 250 nH) and parallel RC (RL = 50 Ω, CL = 5 pF) complex loads, respectively (the other parameters are Rs = Z0 = 50 Ω, TL length = 0.5 m, rectangular pulse duration = 700 ps). Although the shape of the echoes gives clues about the type of termination (e.g., an inductive load mainly affects the DC portion of the echo pulse, while the rise and fall times are affected by a capacitive load), the amplitudes certainly cannot be used to calculate/measure the reflection coefficient.

3.1 Analysis in the Fourier (frequency) Domain

The standard Fourier transform (FFT) procedure for complex loads is as follows:
- Discriminate the incident and reflected pulses in time,
- Move to the frequency domain by applying the fast Fourier transformation (FFT),
- Ratio the reflected pulse to the incident pulse within the source frequency band.

An example of the FFT procedure is given in Fig. 6. Here, a fault-free, 0.5 m-long 50 Ω TL is connected to a matched generator at one end and to a parallel RC load (RL = 50 Ω, CL = 10 pF) at the other. The plot in Fig. 6a shows signal vs. time recorded at the mid point of the TL, where one can easily discriminate the incident and load-reflected pulses. Applying the FFT procedure yields the voltage reflection coefficient vs. frequency as given in Fig. 6b. The solid and dashed lines in the figure represent the results of the FFT procedure and the analytical exact solution obtained from (3), respectively. The excellent agreement shows the power of the procedure. As expected, the capacitor is an OC at DC and low frequencies and the reflection coefficient is zero (since RL = Z0 = 50 Ω). At high frequencies, on the other hand, the capacitor acts as a SC and the modulus of the reflection coefficient approaches 1 (actually, taking the real part after the FFT yields Γ = −1).
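The three-step FFT procedure above can be prototyped in a few lines of MATLAB. The sketch below is illustrative only: the file format, the window indices and the variable names are assumptions, not part of the TDRMeter package. It gates the incident and reflected pulses out of a recorded signal-vs-time trace and ratios their spectra:

```matlab
% Load a recorded trace (time and voltage columns assumed)
data = load('SigvsTime.dat');          % file written by the Plot Sig. vs. Time button
t = data(:,1);  s = data(:,2);
dt = t(2) - t(1);

% Gate the incident and reflected pulses with simple rectangular windows
% (index ranges must be chosen by inspecting the trace -- assumed here)
inc = zeros(size(s));  ref = zeros(size(s));
inc(1:200)   = s(1:200);               % incident pulse window
ref(201:400) = s(201:400);             % load-reflected pulse window

% Move to the frequency domain and ratio within the source band
Nfft  = 2^nextpow2(numel(s));
f     = (0:Nfft-1)/(Nfft*dt);          % frequency axis in Hz
Gamma = fft(ref, Nfft) ./ fft(inc, Nfft);

band = f < 2e9;                        % keep only the band where the source has energy
plot(f(band)/1e9, abs(Gamma(band)));
xlabel('Frequency (GHz)'); ylabel('|\Gamma|');
```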

Figure 6: (a) Signal vs. time at the mid point of a 50 Ω, 0.5 m TL under parallel RC termination (RL=50 Ω, CL=10 pF). Matched termination is used at the source (Rs=50 Ω). Pulse length=400 ps. (b) Voltage reflection coefficient vs. frequency obtained with the FFT procedure; Solid: TD simulation result, Dashed: Analytical exact solution.

Figure 7: (a) Signal vs. time at the mid point of a 50 Ω, 0.5 m TL under parallel RLC termination (RL=50 Ω, CL=5 pF, LL=10 nH). Matched termination is used at the source (Rs=50 Ω). Pulse length=400 ps. (b) Voltage reflection coefficient vs. frequency obtained with the FFT procedure; Solid: TD simulation result, Dashed: Analytical exact solution.

Another example is given in Fig. 7. The same fault-free, 0.5 m-long, 50 Ω TL is connected to a matched generator at one end and to a parallel resonance circuit at the other (RL = 50 Ω, CL = 5 pF, LL = 10 nH). Fig. 7a shows signal vs. time recorded at the mid point of the TL. The voltage reflection coefficient vs. frequency obtained after the procedure explained above is shown in Fig. 7b. The solid and dashed lines in the figure represent the off-line FFT results and the analytical exact solution, respectively. As expected, the reflection is minimal at the resonance frequency of about 711 MHz calculated from fr = 1/(2π√(LL CL)).

It should be noted that one can still apply the FFT procedure even when the pulses cannot be discriminated in time. In this case, the TD simulations should be repeated twice: first along the TL with the unknown termination, then along the TL with a matched termination. The incident plus reflected pulses are recorded in the first simulation, while only the incident pulse exists and is recorded in the second run. Subtracting the second from the first will yield the reflected-only pulse, and then the FFT procedure may be applied.

3.2 Analysis in the Laplace Domain

The shape of the echo may be used to predict the nature of the mismatch along the TL. One method is to use the Laplace transformation to obtain the time variation of the echo sr(t) analytically (see [9] for details). The procedure is as follows:
- Derive the Laplace transform of the generator (e.g., Vi/s for the step voltage Vi).
- Write down the voltage reflection coefficient in the s-domain (for example, Γ(s) = (RL + sLL − Z0)/(RL + sLL + Z0) for a serial RL termination).
- Multiply these two, take the inverse Laplace transform of the product, and derive sr(t) analytically.

For example, sr(t) for the serial RL termination is derived as

sr(t) = Vi [1 + (RL − Z0)/(RL + Z0) + (1 − (RL − Z0)/(RL + Z0)) e^(−t/τ)],   τ = LL/(RL + Z0).   (4)

Here, τ is the time constant. Similarly, sr(t) for the parallel RC termination is

sr(t) = Vi [1 + (RL − Z0)/(RL + Z0) − (1 + (RL − Z0)/(RL + Z0)) e^(−t/τ)],   τ = CL RL Z0/(RL + Z0).   (5)
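For readers who want to verify an expression such as (4), the inverse transform can be checked symbolically. The fragment below is a sketch only; it assumes the MATLAB Symbolic Math Toolbox is available and is not part of the TDRMeter package:

```matlab
% Symbolic check of (4) for the serial RL termination (Symbolic Math Toolbox assumed)
syms s t
syms RL LL Z0 Vi positive
Gamma_s = (RL + s*LL - Z0)/(RL + s*LL + Z0);   % reflection coefficient in the s-domain
Vr_s    = Gamma_s * Vi/s;                      % reflected wave for a step excitation Vi/s
vr_t    = simplify(ilaplace(Vr_s));            % reflected voltage in the time domain
sr_t    = simplify(Vi + vr_t)                  % total (incident + reflected) voltage at the load
% sr_t reduces to Vi*(1 + (RL-Z0)/(RL+Z0) + (1 - (RL-Z0)/(RL+Z0))*exp(-t*(RL+Z0)/LL)),
% i.e., equation (4) with tau = LL/(RL+Z0).
```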

Figure 8: (a) The sketch of the Laplace procedure; prediction of the inductive load from the time signature of the echo by measuring the time constant τ, (b) Signal vs. time (step response) at the mid point of a 50 Ω, 0.5 m TL under serial RL termination (RL=50 Ω, LL=27 nH). Matched termination is used at the source (Rs=50 Ω). The value of the inductor calculated from the plot is 26.2 nH.

Figure 9: (a) The sketch of the Laplace procedure; prediction of the capacitive load from the time signature of the echo by measuring the time constant τ, (b) Signal vs. time (step response) at the mid point of a 50 Ω, 0.5 m TL under parallel RC termination (RL=50 Ω, CL=50 pF). Matched termination is used at the source (Rs=50 Ω). The value of the capacitor calculated from the plot is 49.4 pF.

Another method, without resorting to the Laplace transform, is to observe the echo signal in time at the time limits, i.e., at t = 0 and as t → ∞ (here, t = 0 corresponds to the instant at which the incident pulse hits the load). For example, the inductance of an RL load combination initially acts as an infinite impedance (OC) and full reflection occurs at t = 0. Its current builds up exponentially with time, and the load acts as a SC as t → ∞. Fig. 8a sketches this scenario. A step voltage with amplitude Vi becomes 2Vi when it hits the serial RL pair connected in parallel to the TL (marked as t = 0 in the sketch). Then, the voltage decays exponentially and approaches Vi [1 + (RL − Z0)/(RL + Z0)] as t → ∞. An example obtained with the TDRMeter is given in Fig. 8b. Here, the signal vs. time of a fault-free, uniform, 0.5 m-long 50 Ω TL, connected to an ideal generator (Rs = 0 Ω) at the left end and to a serial RL load at the other end with RL = 50 Ω and LL = 27 nH, is shown. The response to the unit-amplitude step voltage at the mid point of the TL is recorded and plotted in the figure. Since RL = Z0 = 50 Ω, the final voltage value is also 1 V. The time delay between 2 V (at 3.8 ns) and 1.368 V (at 4.062 ns) is read from the SigvsTime.dat file and found to be τ = 0.262 ns. Using the time-constant equation given in the inset of Fig. 8a, the inductance value is calculated to be LL = 26.2 nH.

The capacitor value of a parallel RC combination may also be predicted from the time signature of the echo. A parallel capacitor load initially acts as a SC termination, so full reflection with 180° phase difference occurs at t = 0. The capacitor voltage builds up exponentially with time and the load acts as an OC as t → ∞. Fig. 9a sketches this scenario. A step voltage with amplitude Vi drops to 0 V when it hits the parallel RC pair (marked as t = 0 in the sketch) (note that Vi = Vs Z0/(Rs + Z0)). The voltage then increases exponentially and approaches Vi [1 + (RL − Z0)/(RL + Z0)] as t → ∞. Another TDRMeter result is given in Fig. 9b. Here, the signal vs. time of a fault-free, uniform, 0.5 m-long 50 Ω TL, connected to an Rs = 50 Ω generator and a parallel RC load (RL = 50 Ω, CL = 50 pF), is shown at the mid point. Since RL = Z0 = 50 Ω, the initial and final voltages are both 0.5 V. The time instants marked with t = 0 and t = τ in the figure are extracted from the SigvsTime.dat file as 3.775 ns and 5.01 ns, respectively. This results in a time constant of τ = 1.235 ns. Using the equation in the inset of Fig. 9a, the value of the capacitor is calculated to be CL = 49.4 pF.

Very often, TLs suffer from more than individual inductive or capacitive termination effects, and resonances occur. Figs. 10 and 11 illustrate the signal vs. time variations of terminations with serial and parallel resonance circuits, respectively. For these kinds of terminations, the Laplace transformation procedure may also be applied and analytical expressions of the echo signal may be derived. Although the values of the LC elements may not be predicted directly from the exponential variations, as was done for the serial RL and parallel RC terminations above, they may still be predicted by the application of different types of curve-fitting methods and/or novel algorithms such as artificial intelligence, genetic algorithms, etc.
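The graphical time-constant reading described above is easy to automate. The following MATLAB sketch is illustrative only; the file format, the threshold logic and the variable names are assumptions, not TDRMeter code. It estimates τ from the exponentially decaying echo of a serial RL load and converts it to an inductance:

```matlab
% Estimate the time constant of a serial RL echo and the inductance LL
Z0 = 50;  RL = 50;                       % line and load resistances in ohm
data = load('SigvsTime.dat');            % time (s) and voltage (V) columns assumed
t = data(:,1);  s = data(:,2);

[vpk, k0] = max(s);                      % echo peak (2*Vi at t = 0 of the echo)
vinf = s(end);                           % settled value, Vi*(1 + (RL-Z0)/(RL+Z0))
vtau = vinf + (vpk - vinf)*exp(-1);      % level reached one time constant later
k1   = k0 - 1 + find(s(k0:end) <= vtau, 1);   % first sample at or below that level
tau  = t(k1) - t(k0);                    % measured time constant
LL   = tau*(RL + Z0);                    % serial RL relation: tau = LL/(RL + Z0)
fprintf('tau = %.3g ns, LL = %.3g nH\n', tau*1e9, LL*1e9);
```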


Figure 10: Signal vs. time at the mid point of a 50 Ω, 0.5 m TL for a step voltage source under serial resonance termination (ZL=50 Ω, LL=5 nH, CL=5 pF, Rs=50 Ω).

Figure 11: Signal vs. time at the mid point of a 50 Ω, 0.5 m TL for a step voltage source under parallel resonance termination (ZL=50 Ω, LL=5 nH, CL=5 pF, Rs=50 Ω).

4. Conclusions
Fault detection/identification along TLs using the time domain reflectometer method has been reviewed. The recently introduced Matlab-based virtual TDRMeter tool is used for characteristic illustrations. The TDRMeter virtual tool can be used to teach, understand, test and visualize the TD TL characteristics and the echoes from various discontinuities and terminations. Inversely, the types and locations of faults can be predicted from the recorded data. Any electronic device and/or circuit includes TLs (i.e., coaxial cables, two-wire cables, parallel-plate lines, microstrip lines, etc.); therefore they have transient EM characteristics. A wide range of EMC problems may be simulated with the TDRMeter virtual tool, such as conducted emissions, common- and differential-mode effects, etc., both in the time and frequency domains. Also, reflections due to various types of transmission structures, impedance effects of vias, signal and power line couplings, etc., on printed circuits with smaller and smaller dimensions make signal integrity one of the most important and complex EMC problems. Therefore, the virtual tool may be particularly helpful for a student or circuit designer in the evaluation of signal integrity.

References
[1] H. Fellner-Feldegg, The measurement of dielectrics in the time domain, J. Phys. Chem., Vol. 73, pp. 613-623, 1969.
[2] L. Sevgi, Complex Electromagnetic Problems and Numerical Simulation Approaches, IEEE Press / John Wiley and Sons, NY, 2003.
[3] L. Sevgi, A Ray Shooting Visualization Matlab Package for 2D Groundwave Propagation Simulations, IEEE Antennas and Propagation Magazine, Vol. 46, No. 4, pp. 140-145, Aug 2004.
[4] L. B. Felsen, F. Akleman, L. Sevgi, Wave Propagation Inside a Two-dimensional Perfectly Conducting Parallel Plate Waveguide: Hybrid Ray-Mode Techniques and Their Visualisations, IEEE Antennas and Propagation Magazine, Vol. 46, No. 6, pp. 69-89, Dec 2004.
[5] L. Sevgi, Ç. Uluışık, F. Akleman, A Matlab-based Two-dimensional Parabolic Equation Radiowave Propagation Package, IEEE Antennas and Propagation Magazine, Vol. 47, No. 4, pp. 164-175, Aug 2005.
[6] L. Sevgi, Ç. Uluışık, A Matlab-based Visualization Package for Planar Arrays of Isotropic Radiators, IEEE Antennas and Propagation Magazine, Vol. 47, No. 1, pp. 156-163, Feb 2005.
[7] L. Sevgi, Ç. Uluışık, A Matlab-based Transmission Line Virtual Tool: Finite-Difference Time-Domain Reflectometer, IEEE Antennas and Propagation Magazine, Vol. 48, No. 1, pp. 141-145, Feb 2006.
[8] S. Gedney, EE699 FDTD Solution of the TR Line Equations, Lecture Notes (visit http://www.engr.uky.edu/~gedney).
[9] Agilent Technologies, Time Domain Reflectometry Theory, Application Note 1304-2 (http://www.agilent.com).

Author's Biography
Levent Sevgi (BS 1982, MS 1984, PhD 1991) is with the Electronics and Communication Department of the Engineering Faculty at Doğuş University in Istanbul. He was a visiting scientist at the Weber Research Institute, Polytechnic University, New York, between 1988 and 1990. He has been involved in defense system development projects for nearly 15 years, including the design and installation of the Vessel Traffic System along the Turkish Straits. He worked with Raytheon Systems Canada in 1999 on HF radar-based Integrated Maritime Surveillance System trials. He served as the Chair of the Electronic Systems Department of TUBITAK in 2000. His research has focused on propagation in complex environments, analytical and numerical methods in electromagnetics, EMC/EMI/BEM modeling and measurements, antenna and RCS modeling, and novel radar systems. He is the author or co-author of many books and more than 100 journal and international conference papers.


The Sandia Lightning Simulator


Recommissioning and Upgrades
Michele Caldwell and Leonard E. Martinez
Applied Accelerator and Electromagnetic Technologies
Sandia National Laboratories
Albuquerque, New Mexico, USA
[email protected], [email protected]
Abstract: The Sandia Lightning Simulator at Sandia National Laboratories can provide up to 200 kA for a simulated single lightning stroke, 100 kA for a subsequent stroke, and hundreds of Amperes of continuing current. It has recently been re-commissioned after a decade of inactivity and the single-stroke capability demonstrated. The simulator capabilities, basic design components, upgrades, and diagnostic capabilities are discussed in this paper.

Keywords: lightning, full-scale lightning testing

TABLE I. SLS OPERATING PARAMETERS

I. INTRODUCTION
The Sandia Lightning Simulator (SLS) at Sandia National Laboratories simulates severe lightning strikes. It can produce a maximum peak current of 200 kA for a single stroke, 100 kA for a subsequent stroke, and several hundred Amperes of continuing current for hundreds of milliseconds. The SLS is currently being re-commissioned and refurbished after a decade of non-use. The single-stroke capability has been demonstrated up to 200 kA. The double-stroke and continuing current capabilities have been refurbished but not demonstrated at this time. This paper explains the capabilities and basic design of the SLS, its upgrades, and its diagnostic capabilities.

TABLE II. TYPICAL STS REQUIREMENTS AND KNOWN LIGHTNING PARAMETERS [1,2]

II. THE SANDIA LIGHTNING SIMULATOR CAPABILITIES


The SLS can be operated in the single-stroke or double-stroke mode, with or without continuing current. The operating parameters are listed in Table 1, and an SLS single stroke is shown in Fig. 1. Test environments include direct-attachment lightning (where the simulator is connected to, or arcs to, the test object), burn-through (which incorporates continuing current), and nearby magnetic fields due to the strokes. Since the SLS was built largely to qualify nuclear weapon safety components and systems, the operating parameters were chosen to satisfy the more severe nuclear weapon requirements when practical. The parameters were based on a compilation of various nuclear weapon Stockpile-to-Target Sequence (STS) specified lightning environments, which are listed in Table 2. Typical lightning environments known at the time of the original design, also shown in Table 2, were also considered. Because lightning parameters are statistical in nature, the most severe and average values are shown in Table 2.
1 Values are for first strokes; subsequent strokes have lower peak currents.


Figure 1. Typical Sandia Lightning Simulator single-stroke output.

Figure 2. The Sandia Lightning Simulator.

III. THE SANDIA LIGHTNING SIMULATOR DESIGN

Fig. 2 shows the major components of the SLS. The left oil tank contains the 200 kA Marx bank and the right oil tank contains the 100 kA Marx bank. Each tank holds approximately 16,000 gallons of transformer oil for high voltage insulation. For a single-stroke shot, the 200 kA bank is fired into the center section and through the output terminal into a test object. For a double stroke, the 100 kA bank is fired at some predetermined time after the 200 kA bank into the center section and output terminal as well. Each tank uses two Marx capacitor banks in parallel. The 200 kA tank has an erected capacitance of 325 nF. The 100 kA tank can be configured in several different ways depending on the amount of current desired. For the maximum peak current of 100 kA, the erected capacitance is 163 nF. When fired, both banks typically erect to approximately 1 MV. At 1 MV, the stored energy is 176 kJ in the 200 kA bank and 88 kJ in the 100 kA bank. The actual energy delivered to a test object depends on the timing of the crowbar switch firing and on the load characteristics. For continuing current, a motor/generator set is spun up and released, generating hundreds of Amperes for hundreds of milliseconds.

The simulator outputs a unipolar, overdamped waveform to replicate a real lightning stroke. Marx banks are used to generate the high peak current of a lightning stroke. To achieve an overdamped waveform into what is essentially a short circuit load, a crowbar switch is used to short out the erected capacitance of each tank at approximately the time of peak current, separating the capacitance of the Marx banks from the load circuit. This creates a decaying (instead of oscillating) waveform into the load (or test object), assuming the load is inductive and resistive. The pulse width of the load current is largely determined by the inductance and resistance of the load. The energy delivered to the load depends on the load resistance versus the total resistance of the output circuit, including the crowbar switch. The basic simulator circuit for each tank can be seen in Fig. 3.

In reality, some resistance, capacitance, and inductance are associated with the crowbar switch and its connections. In addition, the crowbar switch is not closed exactly at the peak current, but at some time before. Before the crowbar switch closes, the simulator acts as an underdamped circuit (assuming the load resistance is relatively small) and the Marx voltage is approximately 90° out of phase with the Marx current. At peak current, the output voltages across the Marx banks are zero, or close to it. Some voltage needs to be present across the crowbar switch electrodes to trigger the switch. Therefore, the crowbar switch is typically triggered at a time corresponding to approximately 80% of peak current. This allows sufficient voltage to be present across the crowbar switch to trigger it reliably. Unfortunately, this also causes some energy to be retained in the Marx bank and not delivered to the load. The ringing that can be seen at the current peak in Fig. 1 is due to the discontinuity introduced by the crowbar switching.

Figure 3. Basic simulator circuit per tank.
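To illustrate the effect of the crowbar switch described above, the following MATLAB sketch integrates a highly simplified series RLC model of one tank and shorts out the Marx capacitance when the rising current reaches about 80% of its estimated peak. Only the 325 nF erected capacitance and the 1 MV charge voltage are quoted in the text; the loop inductance, resistance and trigger logic are illustrative assumptions, not measured SLS parameters:

```matlab
% Simplified single-tank model: Marx capacitance C0 charged to V0 discharges
% through an assumed total loop inductance Lt and resistance Rt; a crowbar
% switch bypasses the capacitance near peak current, leaving an L-R decay.
C0 = 325e-9;  V0 = 1e6;          % erected capacitance and voltage (from the text)
Lt = 4e-6;    Rt = 0.5;          % assumed total loop inductance / resistance
dt = 1e-9;    Nt = 100e3;        % 1 ns steps, 100 us window
Ipk = V0/sqrt(Lt/C0);            % lossless peak-current estimate
I = 0;  Vc = V0;  crow = false;
Ilog = zeros(1, Nt);
for n = 1:Nt
    if ~crow && I >= 0.8*Ipk     % trigger the crowbar on the rising edge
        crow = true;
    end
    if crow
        dI = (-Rt*I)/Lt;         % capacitor bypassed: pure L-R decay into the load
    else
        dI = (Vc - Rt*I)/Lt;     % underdamped series RLC discharge before crowbar
        Vc = Vc - (I/C0)*dt;
    end
    I = I + dI*dt;
    Ilog(n) = I;
end
plot((1:Nt)*dt*1e6, Ilog/1e3); xlabel('Time (\mus)'); ylabel('Current (kA)');
```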


IV. RECENT UPGRADES


There are two crowbar switches, one in each oil tank, and they are triggered with lasers. Previously, a large Krypton-Fluoride ultraviolet laser was used to fire both crowbar switches. The laser light was split and routed to each tank with mirrors. This laser was replaced with two much smaller, less hazardous YAG lasers. The previous laser took up a small room, required handling and venting of toxic gas, and routing of exposed high energy laser light. Now, each oil tank has its own laser contained in an electromagnetically shielded box on the side of each tank, without exposed laser light or toxic gas. The original low-voltage trigger system was replaced with up-to-date trigger generators that are remotely set and adjusted through a custom Labview program. The low-voltage trigger system initiates the firing of the high-voltage Marx banks, the lasers for the crowbar switches, and the continuing current generator. The data acquisition system was modernized to include Tektronix TDS 7054 oscilloscopes that have multi-frame capability for the double-pulse mode and a custom Labview program to set the scopes and retrieve data. The building that houses the simulator was updated to meet current environmental and safety regulations. Upgrades are planned for the extensive gas system which supplies high-voltage insulating gas to the many switches in the simulator and for automating the high-voltage control console. The high-voltage control console sets and monitors the gas system pressures, the high voltage power and trigger supplies, the continuing current generator operating parameters, and interfaces to building safety interlocks. Future upgrades include replacing the continuing current generator and installing an electromagnetically shielded video system to monitor test objects during testing.

Figure 4. Simulator output and Instrumentation Barrel.

V. DIAGNOSTICS

Current and voltage measurements are taken for each shot to monitor and diagnose the lightning simulator. These signals are sent back to a screen room via shielded coaxial cables. Three current viewing resistors in line with the simulator's return path measure the total current for each shot. The current inside each tank is measured with current viewing resistors. The high-voltage trigger generator signals are monitored with a combination of current transformers and current viewing resistors. The crowbar voltage is measured with a resistive divider, and the trigger signal to the crowbar switch lasers is monitored. It is important to monitor these signals because the timing between the peak Marx bank currents and the triggering of the crowbar switches is critical. If the crowbar switch fires too early into the rising edge of the Marx bank current, less current than desired is transmitted downstream to the test object, resulting in an undertest. If the crowbar switch fires too late or not at all, the Marx bank capacitors may be damaged due to large oscillations in the current pulse.

To keep the electromagnetic noise generated during a shot from interfering with test object diagnostic data, diagnostics are shielded and fed through a fiber optic system back to the screen room. Typically, the diagnostics are shielded in a metal instrumentation barrel. Within the barrel, fiber optic transmitters convert the analog diagnostic signals into optical signals, which are sent to the screen room via optical fibers. Pictures of the instrumentation barrel, diagnostics, and fiber optic transmitters can be seen in Figs. 4-6. In the screen room, the optical signal is converted back to an electrical signal and fed to an oscilloscope channel. Typical diagnostics are current viewing resistors and transformers, Rogowski coils (for current derivative measurements), and common and differential mode voltage dividers. Other compatible diagnostics are pressure transducers, temperature sensors, and electric and magnetic field sensors (D-Dot and B-Dot, respectively), which may be desired when exposing a test object to indirect lightning electromagnetic fields.

Figure 5. Test object and diagnostics inside Instrumentation Barrel.

Figure 6. Fiber optic transmitters inside the Instrumentation Barrel.


VI. TYPICAL TEST OBJECTS


The Sandia Lightning Simulator can be used to certify or evaluate hardware or to perform research. Historically, it has mostly been used to perform safety qualification testing of nuclear weapon components and weapon systems. However, it has also been used for basic research, such as burn-through studies of different materials [3]. The main limitations of the facility are that it is not portable and that it is designed to operate essentially into a short circuit. Operating the simulator into an open circuit forces all the current that would be delivered to the test object to oscillate in the Marx banks, risking damage to the simulator. Therefore, if test items are insulating or have a large inductance, thereby producing a large inductive voltage drop, care must be exercised in operating the simulator. The next suite of tests at the Sandia Lightning Simulator includes evaluating Lightning Arrestor Connectors, which are safety components used in nuclear weapons. It is also planned in the near future to test hazardous waste containers in a burn-through environment, to evaluate a lightning detection system, and to conduct concrete and rebar behavior research.

VII. CONCLUSIONS

In conclusion, the Sandia Lightning Simulator has been refurbished and upgraded after almost a decade of non-use. Currently, the ability to simulate a severe single lightning stroke has been demonstrated. The double-stroke and continuing current capabilities have been refurbished but not yet verified. Test objects can be subjected to direct lightning attachment, burn-through, or coupling of magnetic fields due to nearby strikes. A variety of diagnostics can be used in conjunction with fiber optic transmitting systems, the use of which minimizes electromagnetic noise coupling from firing the simulator.

ACKNOWLEDGMENTS

The authors would like to thank and acknowledge the contributions of Parris Holmes and Matthew Kiesling in refurbishing, maintaining, and operating the Sandia Lightning Simulator. Their efforts and dedication have been key elements in the success of this endeavor. We would also like to thank Mike Dinallo for his technical assistance and knowledge.

REFERENCES

[1] F. W. Neilson, An Extreme-Lightning Test Facility, internal report, Sandia National Laboratories.
[2] N. Cianos, E. T. Pierce, A Ground Lightning Environment for Engineering Usage, Technical Report 1, Contract L.S. 2817A3, SRI Project 1834, for McDonnell-Douglas Astronautics Corp., Stanford Research Institute, Menlo Park, California, August 1972.
[3] G. H. Schnetzer, R. J. Fisher, and M. A. Dinallo, Measured Responses of Internal Enclosures and Cables Due to Burnthrough Penetration of Weapon Cases by Lightning, SAND94-0312, Sandia National Laboratories, August 1994.

Author biographies
Michele Caldwell
Michele is currently the manager of the Electromagnetic Qualification and Engineering Department at Sandia National Laboratories. She has worked in the department for the last four years on a variety of programs, including the recommissioning of the Lightning Simulator and of a large TEM cell for pulsed testing. Other jobs at Sandia have been as a safety analyst and systems engineer.

Leonard Martinez
Leonard is currently the operator of the Sandia Lightning Simulator and the Electromagnetic Environments Simulator for the Electromagnetic Qualification and Engineering Department at Sandia National Laboratories. As the operator, Leonard is responsible for the maintenance and operation of the facilities, as well as for coordinating tests. He has over 20 years of experience with designing, building, and operating a variety of pulsed power systems.

