Practical
In this issue you will find three practical papers that should be of interest to members of the EMC community. The first is entitled "Verification and Validation of Computational Electromagnetics Software," by Edmund K. Miller. It addresses the very important topic of how to determine the accuracy of numerical results and is complementary to the paper published in the Winter 2006 Newsletter by Andy Drozd. If you enjoyed that paper, you will enjoy this one as well. The second paper is entitled "Transmission Line Fault Analysis Using a Matlab-Based Virtual Time Domain Reflectometer Tool," by Levent Sevgi. In it the author discusses the problem of estimating complex transmission line loads based on the analysis of transmission-line signals in the time, frequency, and Laplace domains. A real plus is the availability of a downloadable Matlab-based virtual TDRMeter tool that can be used to teach, understand, and visualize the time-domain pulse characteristics and echoes from various discontinuities/terminations. In addition, fault types and locations can be predicted. The third paper is entitled "The Sandia Lightning Simulator: Recommissioning and Upgrades," by Michele Caldwell and Leonard E. Martinez. This facility can produce a maximum peak current of 200 kA for a single stroke, 100 kA for a subsequent stroke, and several hundred amperes of continuing current for hundreds of
milliseconds. The paper was first presented at the 2005 IEEE EMC Symposium in Chicago and has been reprinted here by permission of the Symposium Committee. The purpose of this section is to disseminate practical information to the EMC community. In some cases, the material is entirely original. In others, the material is not new but has been made either more understandable or accessible to the community. In others, the material has been previously presented at a conference but has been deemed especially worthy of wider dissemination. Readers wishing to share such information with colleagues in the EMC community are encouraged to submit papers or application notes for this section of the Newsletter. While all material will be reviewed prior to acceptance, the criteria are different from those of Transactions papers. Specifically, while it is not necessary that the paper be archival, it is necessary that the paper be useful and of interest to readers of the Newsletter. I have been editing this section of the newsletter for six years and have decided to retire from this position. I will, however, be ably succeeded by Prof. Flavio Canavero of the Politecnico di Torino in Italy. Paper submissions, comments, or letters to the Technical Editor can be sent to him at [email protected]
2006 IEEE
the validity of the results being presented at meetings and in published material. The discussion here might be considered complementary to, and an extension of, that presented in an excellent recent article in the EMC Newsletter by Drozd [2]. The present discussion was originally prepared for an invited presentation given at EMB 2001, Electromagnetic Computations, Methods and Applications, Uppsala University, Sweden [3].
Figure 1. The input impedance (a), admittance (b), and input and radiated powers (c) as a function of the number of segments (unknowns) for a center-fed dipole antenna 2 wavelengths long. Results for a one-segment source appear especially non-converged for Z but are better for Y. For the constant-width source region, the results exhibit a much-reduced dependence on N.
Figure 2. The magnitude of the finely sampled induced tangential electric field along the axis (the current is on the surface) of a 2.5-wavelength, 50-segment wire 10⁻³ wavelengths in radius, modeled using NEC. For the antenna case (the solid line) the two 20-V/m source segments are obvious, as are the other 48 match points (the solid circles), whose values are generally on the order of 10⁻¹³ or less. For the scattering problem, the scattered E-field (the dashed line) is graphically indistinguishable from the incident 1-V/m excitation except near the wire ends. The IEMF and far-field powers for the antenna are 1.257×10⁻² W and 1.2547×10⁻² W, respectively. For the scattering problem, the corresponding powers are 5.35×10⁻⁴ W and 5.31×10⁻⁴ W.
be applied to physically meaningful problems. For brevity in the following, and because the primary focus will be on validation as defined above, the term validation will be used to cover both issues. Of the code-development steps listed above, validation probably consumes the largest collective effort as computer models and problems become increasingly complex. Validation is important in general, of course, but how to achieve it may not be readily apparent nor easily accomplished. However, some specific situations can be identified where validation is not only rather straightforward but particularly useful. For example, when moving codes between computers or installing one on a new computer, it's essential that the code work in the new environment in the same way, and produce the same results, as it did in the old. It's also important to confirm continued valid operation of the code over time on a given computer. Among the ways this might be done is the use of standard test cases that can be routinely rerun for comparison with solutions stored from previous runs. Providing guidance to the user concerning the estimated validity of the computed results would also be extremely valuable, a somewhat more demanding aspect of validation. These validation requirements suggest the need for a standardized procedure for estimating modeling errors using various internal and external checks on the model results, and the development of standardized error measures. Ideally, this could lead to giving the user a quantitative error estimate, generated by the code itself, concerning the reliability of the numerical results it produces.
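One simple realization of such a stored-reference calibration check is a comparison against archived results with an explicit tolerance; the helper name, tolerance, and reference values below are illustrative, not part of any particular code package:

```python
import numpy as np

def regression_check(reference, current, rel_tol=1e-6):
    """Compare a new run of a code against stored reference results.

    Returns (passed, worst_relative_deviation).
    """
    reference = np.asarray(reference)
    current = np.asarray(current)
    # Guard against division by zero when a reference value is exactly 0.
    scale = np.maximum(np.abs(reference), np.finfo(float).tiny)
    dev = np.abs(current - reference) / scale
    return bool(np.all(dev <= rel_tol)), float(dev.max())

# Hypothetical stored input impedances for a standard test dipole,
# re-computed after moving the code to a new machine.
ref = np.array([73.1 + 42.5j, 85.0 - 12.3j])
new = ref * (1 + 1e-9)            # tiny platform-dependent deviation
ok, worst = regression_check(ref, new)
```

Storing intermediate results in the same way, as suggested above, lets the first failing stage of the computation chain be isolated rather than just the final answer.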
recting the problem. The code should also have a built-in calibration capability that permits convenient reconfirmation that the results produced today are the same as those obtained earlier for a given set of reference problems, thereby providing periodic reassurance that code operation has not changed. These test cases should be provided as part of the software package, including the input data for each check model, as well as intermediate and final results, so that any discrepancies in the computation chain can be identified more easily. On-line documentation should also be a built-in component of a modeling code to assist a user in real time, especially when the model input is being developed. Finally, it would be invaluable were all modeling codes to come with a GUI (graphical user interface) and be designed to accept input in a variety of formats, to make model sharing and results comparison more convenient. To summarize, routine implementation of appropriate error measures should reduce uncertainty and build confidence in all areas of CEM applications. The development of quantitative, reliable error measures would ideally also enable explicit cost-accuracy tradeoffs to be made, so computer models could be used for design to specification. Adaptive allocation of an error budget throughout a problem would lead to more cost-effective modeling by permitting attention to be focused on the most critical areas, so that optimum use is made of the computer resources.
Figure 3. Results for the imaginary component of current on the lit side of an infinite PEC cylinder illuminated by a TE, normally incident plane wave. The noisy data (the solid circles) comes from adding uniformly distributed random noise varying between ±0.1 to accurate current samples (the small open circles), and can be seen to be distributed about the average of the five MBPE fitting models used (the continuous line), whose parameters are computed from noisy data samples shown by the large, open circles.

The accuracy A, the precision P, the condition number CN, and the number of equations N are all expressed in digits. Both experimental and independent numerical data are essential to assess overall solution accuracy or uncertainty. Experimental data provides essentially the only way of evaluating e_p. On the other hand, e_n can be studied in several ways, one being experimental data when the physical modeling error can be made zero. Also useful are comparison with other analytical or numerical results, and evaluation of the solution relative to analytical requirements such as boundary error, reciprocity, energy conservation, etc. Observing the solution with increasing N (a convergence test) is useful for establishing modeling guidelines in terms of the required sampling density. The physical modeling error most often arises because of geometrical approximations that are made, for example, where curved surfaces are represented by flat facets or even cubes, or where a different structure entirely is employed, as when a wire mesh is used for a solid surface. Electrical approximations can also be made, as when a surface impedance is used as a boundary condition for a penetrable object, or an antenna source is represented by a point-sampled, tangential electric field.

B. User Modeling Errors--It may seem unusual to cite the user/modeler as a significant source of modeling errors, but the fact is that there are many ways in which the modeler can cause errors in using even a well-validated and user-friendly code. Aside from the most obvious errors arising from mistakes in preparing input data or otherwise violating code requirements, a user may reject correct results through unwarranted skepticism or erroneous expectations. Another common problem is insufficient exploration of the relevant parameter space, thus missing fine, but important, details in the computed observable, a radiation pattern or frequency response, for example. Or, the user may not recognize that the physical modeling error is the controlling factor in a given application, and instead assume that a problem in the modeling code is causing results that don't agree
acceptably with measurements. Alternatively, a user may uncritically accept the output produced by a computer model. Kleinman's admonition that appropriate skepticism be employed is highly recommended. Until validated, all results must be questioned. Furthermore, accurate numerical results don't guarantee physical relevance or fidelity. Also to be kept in mind is that model results might exhibit more accurate relative dependencies when conducting parameter studies than do their absolute values. Frequency, angle, and other shifts are fairly common in computed observables when compared with experimental results. Convergence tests must be used with care, as the results don't always converge to the right answer as the number of unknowns is increased, and they can also be sensitive to factors other than the model accuracy, as discussed below.
model has been sampled finely enough. However, convergence to the correct answer cannot be guaranteed, and furthermore the associated matrix can become more and more ill-conditioned with increasing N. In the case of the thin-wire approximation, for example, nonphysical current and charge oscillations can occur if the segment length is made less than the wire diameter. Misleading results like those shown in Fig. 1 (these and other model results presented here come from NEC [6] unless otherwise indicated) can also be obtained, due to the fact that the conductance is a more stable quantity than the resistance, as well as the fact that the source model can introduce an additional effect having nothing to do with solution convergence per se. It's worth pointing out that the trends seen in Fig. 1 can themselves be modeled, either from a curve-fitting viewpoint, which is less desirable although possibly beneficial, or based on the underlying physics, the latter using a general approach called model-based parameter estimation (MBPE) [7]. For example, the dependence of the susceptance on N when the excitation is confined to a single segment can be approximated by the model B(N) ≈ X + Y/ln(L/N), which is based on knowledge of the feed-point (or gap-width) dependence. The parameters X and Y are estimated from samples of B for two (or more) values of N. For the results shown in Fig. 1b, X = 1.025×10⁻⁴ and Y = 2.0×10⁻³. No comparable model is needed for the conductance because it is relatively insensitive to the source-region details, stabilizing here at about N ~ 10. If the source region is maintained at a constant width, the conductance is effectively independent of the number of unknowns, while the susceptance is nearly so beyond N ~ 20 or so. B2. Boundary-Condition Checks--Errors in the continuity of tangential fields at a material surface (boundary-condition error) probably provide the most rigorous internal check that can be made of a computer model.
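Before turning to boundary-condition checks, the estimation of X and Y just described can be sketched numerically; the sample values below are synthetic rather than taken from Fig. 1, and the helper name is illustrative:

```python
import numpy as np

def fit_gap_model(samples, length=2.0):
    """Estimate X and Y in B(N) ~ X + Y/ln(length/N) from (N, B) samples
    by linear least squares; two samples suffice, more over-determine the fit."""
    N = np.array([s[0] for s in samples], dtype=float)
    B = np.array([s[1] for s in samples], dtype=float)
    A = np.column_stack([np.ones_like(N), 1.0 / np.log(length / N)])
    (X, Y), *_ = np.linalg.lstsq(A, B, rcond=None)
    return X, Y

# Synthetic check: generate samples from known X, Y and recover them.
X_true, Y_true = 1.025e-4, 2.0e-3    # the values quoted for Fig. 1b
data = [(n, X_true + Y_true / np.log(2.0 / n)) for n in (10, 20, 40)]
X_est, Y_est = fit_gap_model(data)
```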
For a perfect conductor this might be most simply done by examining the induced or total tangential electric field on the object's surface, as in Fig. 2. The total field, for example, provides the relative error in the boundary field when compared with the magnitude of the excitation field. Alternatively, the normal power flow to the surface (due to a nonzero electric field) yields an error measure that can be compared with the power supplied by the exciting field. This kind of measure can be used in a local sense for adaptive modeling by indicating where the power-flow error is larger than some acceptable value, thus indicating that additional sampling of the boundary current and tangential field is needed there. When integrated over the entire boundary, a global measure of the power-flow error is obtained. If the boundary-field error is examined more finely over the entire surface of an object being modeled than the sampling used for the model itself, this can require more computation than that needed to fill the impedance matrix. Thus, its use may be reserved for those situations where a new problem is being modeled or where it's suspected that the numerical results are unreliable. Fortunately, the boundary error wouldn't usually need to be examined over the entire object, but can, instead, be restricted to those areas where it's thought to be most required, the source region of an antenna, for example. B3. Model-Based Parameter Estimation--Another application of MBPE to CEM is to develop an estimate for the uncertainty of a computed observable by checking the consistency of data for that observable with Maxwell's equations [8]. The approach may be compared to using linear regression to assess the accuracy of data that should fall on a straight line. In the case of an electromagnetic observable, however, a straight line is an inappropriate fitting model. Rather, the fitting model should represent the behavior expected on physical grounds. For example, an EM frequency response is well-approximated by a pole series or, more generally, a rational function. The MBPE data-uncertainty check also requires that the data be over-sampled with respect to its rank. One approach involves computing the parameters of the fitting model from a subset of the data, and then using the difference between the fitting model and the remaining data to develop an uncertainty estimate. Another involves using all of the data and obtaining a least-squares solution for the fitting-model parameters, with the difference between the fitting model and the entire data set providing the uncertainty estimate. An example of the former approach is illustrated in Fig. 3, where the observable is the circumferential current on the front side of an infinite PEC cylinder illuminated by a normally incident plane wave. Five overlapping rational functions of frequency were used to obtain the result shown, having numerator-polynomial orders of 3, 4, 3, 4, and 3, respectively, with corresponding denominator-polynomial orders of 4, 5, 4, 5, and 4, requiring a total of 16 data samples (shown by the large open circles). The data samples used by each fitting model are 1-8, 1-10, 3-12, 5-14, and 5-16, having additive random noise of 10% relative to the maximum. Aside from the region near ka = 1, the unused data is seen to be rather randomly distributed about the average fitting model, with the overall average absolute excursion being 0.083.
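The least-squares variant of this check can be sketched with a linearized rational-function fit; the orders, data, and noise level below are synthetic, and the helper is an illustration of the idea rather than the procedure of [8]:

```python
import numpy as np

def rational_fit(x, y, n=2, m=2):
    """Linearized least-squares fit of y ~ P(x)/Q(x), with numerator order n,
    denominator order m, and Q's leading coefficient fixed to 1."""
    x, y = np.asarray(x, float), np.asarray(y, float)
    P_cols = np.vander(x, n + 1, increasing=True)            # 1, x, ..., x^n
    Q_cols = -y[:, None] * np.vander(x, m, increasing=True)  # -y, -y*x, ...
    coef, *_ = np.linalg.lstsq(np.hstack([P_cols, Q_cols]), y * x**m, rcond=None)
    p, b = coef[:n + 1], coef[n + 1:]
    def model(xx):
        xx = np.asarray(xx, float)
        return np.polyval(p[::-1], xx) / np.polyval(np.append(b, 1.0)[::-1], xx)
    return model

# Synthetic "frequency response": a single resonance plus additive noise.
rng = np.random.default_rng(0)
ka = np.linspace(0.2, 3.0, 40)
clean = 1.0 / ((ka - 1.5)**2 + 0.1)
noisy = clean + 0.01 * rng.standard_normal(ka.size)
fit = rational_fit(ka, noisy)
uncertainty = np.sqrt(np.mean((noisy - fit(ka))**2))  # RMS misfit
```

Here the RMS misfit between the noisy data and the fitted rational function plays the role of the uncertainty estimate for the data set as a whole.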
Numerous other computer experiments on various kinds of data [7,8] exhibit similar results, indicating that MBPE can be used to estimate the uncertainty of CEM data with reasonable reliability. Note also that the MBPE approach not only provides an estimate for the data accuracy, but yields an estimate for the actual response from the noisy data. Model-Based Parameter Estimation (MBPE) can also be used to obtain a continuous estimate of an observable to a specified uncertainty when performing computations or making measurements. An example of using an adaptive version of MBPE to estimate the radiation pattern of a uniform aperture is shown in Fig. 4. In this particular example, the specified acceptable uncertainty in the estimated pattern was increased from 0.1 dB to a maximum of 3 dB as the pattern level reached -30 dB, but other schemes could also be used. The fitting model used here to implement MBPE consists of a small number (usually less than 10) of discrete sources whose strengths are obtained from samples of the far field over 13 overlapping observation windows. It can be seen that the estimated pattern lies between the upper- and lower-bound estimates shown by the open and closed circles, respectively. C. External Checks External checks are those that employ data not generated within a modeling code itself, using the analytical properties of Maxwell's equations and solutions thereof, numerical results from other computer models, and experimental measurements.
Figure 4. MBPE is used here to develop an estimate of the radiation pattern of a uniform aperture to a specified uncertainty. The data samples used are shown by the large open circles, the solid line is the average of the 13 fitting models employed, and the small solid and open circles show upper- and lower-bound estimates for the average fitting model. C1. Using Analytical Results--Since there are relatively few closed-form solutions available for which results can be obtained to essentially arbitrary accuracy, analytical checks do not provide a rich source of external checks for validating model accuracy. Perhaps the most important role of analytical results in validating computer models is in such areas as energy conservation, reciprocity, etc., included in this discussion as a part of the internal checks available for validating a code. C2. Using Numerical Results--Many analytical and numerical choices must be made in developing a computer model, one being whether a time-domain or frequency-domain formulation will be used. Next, the field propagator must be chosen, among the possibilities being a Green's function, the Maxwell curl equations, a modal description, or geometrical ray theory and diffraction. As part of the numerical treatment, the basis and weight functions must be selected, as well as the numerical implementation, point sampling vs. a Galerkin procedure, for example, followed by how the resulting equations are to be solved. For these and other reasons, model intercomparability is not always straightforward. But if two, or more, computer models are to be compared quantitatively, ways of comparing their results are needed. Since CEM models can vary in any of the ways outlined above, it's often necessary that the results from one be transformed into the framework of the other, or perhaps that both be transformed into a third reference system.
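As a minimal sketch of such a cross-code comparison, two pattern cuts computed on different angular grids can be interpolated onto a common grid before a difference norm is formed; the patterns and grids below are synthetic, and the metric is one illustrative choice, not a standardized one:

```python
import numpy as np

def rms_pattern_difference(theta_a, patt_a, theta_b, patt_b, npts=181):
    """Interpolate two pattern cuts (dB vs. angle in degrees) onto a common
    grid and return their RMS difference in dB."""
    lo = max(theta_a.min(), theta_b.min())   # overlap region only
    hi = min(theta_a.max(), theta_b.max())
    grid = np.linspace(lo, hi, npts)
    a = np.interp(grid, theta_a, patt_a)
    b = np.interp(grid, theta_b, patt_b)
    return np.sqrt(np.mean((a - b)**2))

# Two hypothetical codes sampling the same smooth pattern on different grids,
# with a constant 0.2-dB level offset between them.
t1, t2 = np.linspace(0.0, 180.0, 91), np.linspace(0.0, 180.0, 61)
p1 = -12.0 * ((t1 - 90.0) / 90.0)**2
p2 = -12.0 * ((t2 - 90.0) / 90.0)**2 + 0.2
rms_diff = rms_pattern_difference(t1, p1, t2, p2)   # ~0.2 dB
```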
Once comparability issues are solved, literally any quantity computed by a modeling code can serve as an external check for another code. C3. Using Experimental Data--Finding that computed results agree with experimental data to within an acceptable error band is probably the most satisfying kind of check for a modeler, and the one most convincing to others as well. But experiments are themselves subject to uncertainty, not only with respect to the actual measurement, but in terms of whether the object under test is itself fabricated correctly. It's
Of course, were cost not an issue, it would be reasonable to perform every model computation to as high an accuracy, or to as little uncertainty, as possible. But this situation is rarely, if ever, the case. The point to be made is that the accuracy sought in the model results should be commensurate with the intended use to be made of them. A. Specifying CEM Results Accuracy-The Way Things Are Now Accuracy and validation statements made in oral presentations and in written publications are inadequate when they say things like "the results are in good agreement with . . . ," "the model is highly accurate," "excellent results are obtained," etc., as is now almost universally the case, because they are not quantitative. What the author means by such subjective statements and what the reader might conclude in looking at the same data can be very different. It's fair to ask: if the author won't/can't at least estimate what the uncertainty is in the results being presented, why should the material be published in the first place? B. Specifying CEM Results Accuracy-The Way Things Ought to Be More useful, objective, and quantitative statements might instead say "the error in the peak gain is 0.5 dB," "nulls in the scattering pattern are located to within 2 degrees," "the RMS difference between the computed input impedance and the experimental measurement is less than 3 dB over a 2:1 frequency range," etc. These statements should also be accompanied by an explanation of how these conclusions were reached, if that is not obvious from the statement itself. Finally, some commentary should be included about what kind of sampling density is needed in the model to achieve the estimated accuracy, what kind of operation count and frequency-scaling law is associated with getting these results, the associated storage required and its frequency dependence, and the anticipated increase in the cost to decrease the error further.
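As one concrete way to back up a statement of this kind, an RMS magnitude difference in dB between two impedance sweeps can be computed directly; the sweep values below are hypothetical, and the definition is one possible choice rather than a standardized one:

```python
import numpy as np

def rms_db_difference(z_model, z_meas):
    """RMS of the point-by-point magnitude ratio, in dB, between a computed
    and a measured impedance sweep."""
    ratio_db = 20.0 * np.log10(np.abs(np.asarray(z_model) / np.asarray(z_meas)))
    return float(np.sqrt(np.mean(ratio_db**2)))

# Hypothetical sweep in which the model overestimates |Z| by 10% everywhere.
z_meas = np.array([50 + 10j, 72 - 5j, 65 + 0j])
z_model = 1.1 * z_meas
rms_db = rms_db_difference(z_model, z_meas)   # ~0.83 dB
```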
In this context, it's also relevant to observe that a computer model can only be regarded as validated for problems already solved. Even small changes in parameters and problem type might result in large changes in the code's performance. Tolerance to parameter changes may be sensitive to the particular application, but can also depend on the user's perception, experience, and expectations. Therefore, validation and reporting estimated accuracy are integral, ongoing aspects of using any modeling code, no matter how extensively used it is. One way of reporting accuracy or uncertainty is the use of error bars or their equivalent. Error bars, or a range within which the correct result is estimated to lie with some specified confidence level, have traditionally been used for experimental data. The error bar provides an indication of the experimentalist's confidence in the results, gives a quantitative idea of their anticipated reproducibility, and indicates the degree to which the results are thought to be reliable. What has proven to work well in the experimental world should at least be considered for its possible adaptability to the computational world. How this might be accomplished may need considerable work, but it's high time that a start be made on the problem. It's also preferable that error measures be related to the intended use of the results. For example, the consequences of
Figure 5. MBPE is used here to estimate the accuracy of measured data. The average of five fitting models is shown by the solid line, as obtained from data samples indicated by the large, open circles. The unused data is shown by the solid points, with its deviation about the average fitting model multiplied by 3 to show the difference more clearly.

highly advisable that the modeler be acquainted with the details of the experiment. A number of personal examples could be cited, one of which involved the radar cross section of a long, thin wire. Initial comparison of the measured and computed results showed the two sets of data to be qualitatively similar, but with a systematic shift in angle between them. Further inquiry disclosed that the test wire was so thin as to be threadlike, and thus was taped to a Styrofoam rod to maintain its shape. Upon repeating the computation at a higher frequency, shifted by the square root of the rod's relative permittivity, the difference between the two sets of results was substantially reduced. An application of MBPE [8] for estimating the accuracy of experimental data, to determine its suitability for validating computed results, is illustrated in Fig. 5. Here the RCS of a metal cube provides the test data [9], where the fitting model is again a rational function of frequency. The maximum estimated error can be seen to be about 5% of the maximum RCS, indicating that this data could be used to validate computed results to this order of uncertainty.
making design decisions based on less-accurate data should be balanced against the cost of increasing the model accuracy. The variational dependence of far fields on boundary sources, as opposed to an antenna's input impedance being sensitive to the source distribution near the feed point, can reduce the accuracy needed when only a radiation pattern is of interest. Advantage should be taken of the possibility that at times order-of-magnitude estimates may be acceptable in the early stages of developing a design, whereas the final cut may require tenths-of-dB confidence levels. A fact that CEM modelers should always keep in mind is that cost-uncertainty tradeoffs are inherent in numerical modeling. Aside from the applications relevance of establishing the uncertainty of numerical results, doing so is important when such data is intended for distribution to others through reports, papers, and articles. A community-wide policy of requiring quantitative accuracy statements is the only responsible course in the long run. It would force CEMers to confront an issue now mostly avoided. It would level the playing field by not disadvantaging those few who now do report the uncertainty in their models. It would reassure the sponsors and customers of CEM software that the developers are tackling the problem and not leaving it to inexperienced or uninformed users alone to handle. Finally, it would return to the ethics of traditional science, where quantitative results were considered unacceptable unless accompanied by error estimates. C. A Recommended Policy for Describing Model Accuracy These observations suggest the need for a new policy in the EM community that explicitly addresses accuracy and uncertainty in a flexible, uniform, consistent, and fair manner. It should be flexible enough to permit a variety of acceptable choices to accommodate the varied resources available to those presenting results.
It should be uniformly implemented so that equivalent information will accompany all the forms in which CEM results are communicated, to ease both the reviewer's and reader's evaluation of the material and to ensure consistency in the results presented. Lastly, it should be imposed fairly, in that a good-faith effort at compliance is acceptable, while not requiring all authors to address the issue in the same way. Repeating a proposal originally made to ACES and the AP-S [10], and subsequently adopted by the AP-S Magazine, I suggest that some sort of formal validation policy be implemented in CEM. As a requirement for publication, any material including computed results must address the question of their validity/accuracy/uncertainty by including the following two statements: 1) "The results presented here are estimated to be accurate to _____," where quantitative statements such as "The error in peak gain is 0.5 dB," "Nulls in the scattering pattern are located to within 2 deg," "Input impedance is obtained to within 5 ohms," "The RMS difference between the two sets of data is 1 dB," etc. are made. 2) "This estimate is based on using the following kind(s) of validation exercise(s) _____," where whatever experimental, analytical, and/or computational validation has been used is summarized, or their equivalent, where point 2) could use anything the
author wishes, including citing personal experience as justification for the accuracy claimed. Information should also be required for the modeling parameters that are used, since these are intimately related to accuracy, using statements such as "Nominal sampling densities required to achieve these estimated accuracies are _____," where wavelength-dependent and/or geometry-dependent values are given. "The dependence of the operation count on frequency, f, needed to exercise the model reported here is estimated nominally to be Af^x," or some equivalent statement, where numerical values for A and x are given. Giving computer running times is useful, but that alone is not enough, because there is such variation in computer architectures that model-to-model comparison based on running time is not very informative. "The variable storage needed to exercise this model is estimated nominally to be Bf^y," where again numerical values for B and y are given. Finally, articles presenting numerical results should not limit such data entirely to a graphical format. While obviously useful, graphs are not very convenient for making quantitative comparisons with other data. It should be required that the data in at least one of the graphs also be presented in tabular form.
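The constants in such a scaling statement can be estimated from a handful of timed runs by linear regression in log-log space; the timing numbers below are invented for illustration:

```python
import numpy as np

def fit_power_law(f, t):
    """Estimate A and x in t ~ A * f**x from (frequency, cost) samples."""
    x, logA = np.polyfit(np.log(np.asarray(f, float)),
                         np.log(np.asarray(t, float)), 1)
    return float(np.exp(logA)), float(x)

# Hypothetical run times (seconds) versus frequency, following an f^3 trend
# such as a dense-matrix solution might show.
f = np.array([1.0, 2.0, 4.0, 8.0])
t = 0.5 * f**3
A, xexp = fit_power_law(f, t)   # A ~ 0.5, x ~ 3
```

The same regression applies unchanged to storage samples, yielding B and y.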
REFERENCES
1. R. E. Kleinman, National Radio Science Meeting, Boulder, CO, 1993.
2. Andrew L. Drozd, "Selected methods for validating computational electromagnetic modeling techniques," IEEE EMC Society Newsletter, Issue 208, Winter 2006, pp. 73-78.
3. E. K. Miller, "Verification and Validation of Computational Electromagnetics Software," invited paper in Conference Proceedings, EMB 01, Electromagnetic Computations, Methods and Applications, Uppsala University, Sweden, November 14-15, 2001, pp. 7-18.
4. E. K. Miller, "Characterization, Comparison, and Validation of Electromagnetic Modeling Software," ACES Journal, special issue on EM Computer Code Validation, pp. 8-24, 1989.
5. The American College Dictionary, Harper & Brothers Publishers, 1953.
6. G. J. Burke, Numerical Electromagnetics Code: NEC-4 Method of Moments, Lawrence Livermore National Laboratory, UCRL-MA-109338, 1992.
7. E. K. Miller, "Model-Based Parameter Estimation in Electromagnetics: Part I. Background and Theoretical Development," IEEE Antennas and Propagation Society Magazine, Vol. 40, No. 1, pp. 42-52; "Part II. Applications to EM Observables," Vol. 40, No. 2, pp. 51-64; "Part III. Applications to EM Integral Equations," Vol. 40, No. 3, pp. 49-66, 1998.
8. E. K. Miller, "Using Model-Based Parameter Estimation to Estimate the Accuracy of Numerical Models," in Proceedings of the 12th Annual Review of Progress in Applied Computational Electromagnetics, Naval Postgraduate School, Monterey, CA, pp. 588-595, 1996.
9. S. Mishra, David Florida Laboratory, Ottawa, Ontario, Canada, private communication, 1996.
10. E. K. Miller, "Requiring Quantitative Accuracy Statements in EM Data," in Proceedings of the Eleventh Annual Review of Progress in Applied Computational Electromagnetics, Naval Postgraduate School, Monterey, CA, March 20-24, pp. 1202-1210, 1995.
Transmission Line Fault Analysis Using a Matlab-Based Virtual Time Domain Reflectometer Tool
Levent Sevgi
Doğuş University, Electronics and Communication Engineering Department, Zeamet Sok. No. 21, Acıbadem / Kadıköy, 34722 Istanbul, Turkey
Abstract
Fault detection and identification along a finite-length transmission line, using a recently introduced Matlab-based virtual time domain reflectometer, is discussed. Estimation of complex loads based on the analysis of transmission-line signals in the time, frequency, and Laplace domains is presented.

Keywords: Transmission lines, time domain reflectometer, FDTD, MATLAB, simulation, visualization, fault detection, Laplace transform

1. Introduction

Investigation of the time domain (TD) responses of signals along transmission lines (TLs) requires solving the well-known TL equations, derived either from Maxwell's equations or by using TL circuit models (a typical two-wire TL, and a circuit model in terms of the primary per-unit-length parameters R [Ω/m], L [H/m], C [F/m], and G [S/m], are sketched in Fig. 1). Such an approach should take into account the physical parameters of the TL, the excitation, and the termination. The corresponding test/measurement method and instrument are TD reflectometry and the TD reflectometer (TDR), respectively [1]. TDRs are used to locate and identify faults along all types of cables, such as broken conductors, water damage, cuts, smashed cables, short circuits (SC), or open circuits (OC). The TDR principle is simple: a generator injects a pulse down a TL, and reflections from discontinuities and/or terminations are recorded. The distance between the generator and a fault is measured from the time delay between the incident pulse and the echo. Also, detailed analysis of the echo signal can reveal additional details of the faults or reflecting objects.

Parallel to the developments in computer technology and the experience gained in programming, and also because of the sharp increases in the costs of electronic devices/systems, virtual labs have become very attractive in electrical engineering (EE) and education. A set of virtual electromagnetic (EM) tools has been introduced [2-6] over the last couple of years to assist engineers, educators, as well as students, from propagators to antenna solvers, radar cross section (RCS) predictors to EM compatibility (EMC) simulators, etc. The most recent virtual tool is the TDRMeter simulator [7], which solves the TD TL equations

∂v(x,t)/∂x + L ∂i(x,t)/∂t + R i(x,t) = 0,   (1a)

∂i(x,t)/∂x + C ∂v(x,t)/∂t + G v(x,t) = 0,   (1b)

whose discretized (leapfrog FDTD) forms are

v^(n+1/2)(k) = [(C/Δt − G/2)/(C/Δt + G/2)] v^(n−1/2)(k) − [1/(C/Δt + G/2)] [i^n(k+1) − i^n(k)]/Δx,   (2a)

i^n(k) = [(L/Δt − R/2)/(L/Δt + R/2)] i^(n−1)(k) − [1/(L/Δt + R/2)] [v^(n−1/2)(k) − v^(n−1/2)(k−1)]/Δx.   (2b)
The integers, k and n, respectively, represent spatial (x) and time (t) indices, so that physical space and time values are specified via (x, t + t/2) = n+1/2 (k) and i(x, t) = in (k) by using xk = k x and tn = n t ( t/2 delay between voltages and currents arises from the leap-frog scheme [2]). The FDTD derivation of the TL equations, discretizations of the source and load nodes under different termination conditions with the help of node voltage and/or loop current methods, and the design principles and details of the Matlab-based virtual TDRMeter were discussed in [7]. Here, fault detection/identification along a TL, and predicting various complex terminations by using the virtual TDRMeter tool is presented. First, the TDRMeter is reviewed in Sec. 2. The TL echo analysis based on both Fourier and Laplace transformations are summarized in Sec. 3 together with characteristic examples and fault detec-
((x, t) and i(x, t) are the space- and time-dependent voltage and current, respectively), by using the finite-difference timedomain (FDTD) approach as:
Figure 1: (a) Finite length transmission line, the time domain voltage source vs(t) and source resistor Rs, terminated by a complex ZL load, (b) its loss-free equivalent circuit. L [H/m] and C [F/m] are unit length inductance and conductance, respectively.
2006 IEEE
75
a
Figure 2: The front panel of the TDRMeter package, parallel RC termination, a rectangular pulse traveling towards the load (a 50 transmission line, Rs=100, pulse width=400ps, RL =10, CL =5pF). tion/identification tests. Finally, the conclusions are outlined in Sec. 4 (it is strongly advised to the reader to review basic TL theory before proceeding to further discussions, see for example, [2, 8, 9] and their references).
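The leapfrog scheme of (2a)-(2b) can be sketched in a few lines of Python/NumPy. The published tool is Matlab-based and its exact terminal discretization is described in [7]; the loss-free re-implementation below, with semi-implicit lumped resistive terminations on the half cells at each end, is an illustrative assumption, not the TDRMeter's code. It uses the line parameters of Sec. 2 and the resistive load of Fig. 4 (RL = 350 Ω), and recovers the expected reflection coefficient of 0.75:

```python
import numpy as np

# Per-unit-length parameters of the example line: Z0 = sqrt(L/C) = 50 ohm
L, C = 250e-9, 100e-12           # H/m, F/m
vp = 2e8                         # wave speed, m/s
dx = 5e-3                        # 100 cells over 0.5 m
dt = dx / vp                     # "magic" time step, 25 ps
N = 100                          # number of current branches
Rs, RL = 50.0, 350.0             # matched source, resistive load (Fig. 4)

v = np.zeros(N + 1)              # node voltages v(k)
i = np.zeros(N)                  # currents i(k) between nodes k and k+1
beta = dt / (C * dx / 2)         # terminal (half-cell) update coefficient

t0, sig = 0.5e-9, 0.1e-9         # Gaussian source pulse (illustrative)
rec = []                         # record at the mid point (0.25 m)
for n in range(400):
    vs = np.exp(-((n * dt - t0) / sig) ** 2)
    i -= dt / (L * dx) * (v[1:] - v[:-1])        # current update, cf. (2b)
    v[1:-1] -= dt / (C * dx) * (i[1:] - i[:-1])  # interior voltage update, cf. (2a)
    a = beta / (2 * Rs)                          # Thevenin source node
    v[0] = ((1 - a) * v[0] + beta * (vs / Rs - i[0])) / (1 + a)
    a = beta / (2 * RL)                          # resistive load node
    v[-1] = ((1 - a) * v[-1] + beta * i[-1]) / (1 + a)
    rec.append(v[N // 2])

rec = np.array(rec)
A_inc = rec[:120].max()          # incident pulse peak at the mid point (~0.5 V)
A_ref = rec[120:].max()          # load echo peak
gamma = A_ref / A_inc            # approaches (RL - Z0)/(RL + Z0) = 0.75
```

With the "magic" time step Δt = Δx/v the interior propagation is exact, so the small residual error in `gamma` comes only from the lumped terminal nodes.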
Figure 3: (a) Signal vs. time at the 0.1 m observation point on the SC-terminated, 50 Ω TL with a G-type fault at 0.3 m (Gf = 4 S/m), Gaussian pulse (Rs = 50 Ω); (b) sketch of the problem.

The tool automatically saves signal vs. time data into a file named SigvsTime.dat for further off-line signal analysis. The counter at the top right displays the number of time steps left during the simulation. The popup menu at the top right contains a key selection: the TDRMeter can be used either as a TL simulator or as a TD reflectometer. By default, the selection is the TL simulator. The user specifies all input parameters and runs the tool to visualize time domain TL effects and/or analyze the recorded signals. The TDRMeter option may be used for fault detection/identification purposes. The TDRMeter is designed in such a way that the user specifies only the TL and generator parameters. Once the TDRMeter option is selected, the termination and fault blocks disappear. The load is selected automatically and randomly, and a fault is introduced at an arbitrary point along the TL, so the user can find out its type and/or numerical values (if possible) only by observing/analyzing the output plots, e.g., signal vs. time at a selected TL point. The user may run blind tests and observe problems along the TL after specifying the TL and source parameters and starting the simulations by pressing the Run button. The randomly generated parameters of these blind tests re-appear when the user re-selects the TR Line option at the end of the simulations. In this way, the user can check his results against the data displayed on the front panel of the TDRMeter.

The plot in Fig. 2 belongs to a 50 Ω, 0.5 m loss- and fault-free homogeneous (uniform) TL excited with a rectangular pulse of 400 ps duration through the source's internal resistor, and terminated by a parallel RC load (RL = 10 Ω, CL = 5 pF). The per-unit-length inductance and capacitance values are 250 nH/m and 100 pF/m, respectively (the corresponding characteristic impedance is Z0 = √(L/C) = 50 Ω), and the speed of the voltage and current waves along the line is v = 2×10^8 m/s. The TL is divided into 100 nodes; therefore, Δx = 5 mm. The time step is calculated to be Δt = Δx/v = 25 ps. With this choice, the voltage (or current) pulse propagates one node at a time; therefore, it will take 100 Δt for the injected pulse to reach the load. A total of 400 Δt allows two reflections from the load and two reflections from the source end.

Fig. 3a shows the signal vs. time plot (at 0.1 m) for a typical scenario where a 50 Ω, 0.5 m loss-free SC-terminated TL, excited by a Gaussian pulse generator with a 50 Ω internal resistor (i.e., a matched source), has a G-type fault at 0.3 m (Gf = 4 S/m). In this case, the incident voltage pulse reflects not only at the load, but also at the fault point, in either direction. A sketch of this scenario is drawn in Fig. 3b with the identification of the first three echoes (identification of the other three echoes is left to the reader).

Figure 4: Signal vs. time for the resistive termination, showing the incident rectangular pulse and the echo (reflected pulse) along a 50 Ω, 0.5 m TL (pulse length = 400 ps, Rs = 50 Ω, RL = 350 Ω).

1 Visit http://www3.dogus.edu.tr/lsevgi to download the TDRMeter package (click on EMC Virtual Tools to download TR_LINE_GUI). The Matlab software is required to use it.
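The arrival-time bookkeeping behind the Fig. 3 scenario, and the standard TDR distance rule d = v·Δt/2, can be checked with a few lines of Python (the geometry values are those of the example above):

```python
vp = 2e8            # wave speed on the line, m/s (Sec. 2 example)
x_obs = 0.1         # observation point, m
x_fault = 0.3       # G-type fault location, m

# Arrival times at the observer, measured from pulse injection at x = 0:
t_incident = x_obs / vp                                  # direct pulse
t_fault_echo = (x_obs + 2 * (x_fault - x_obs)) / vp      # pulse reflected at the fault

# TDR rule: the one-way distance from the observer to the discontinuity
# is half the round-trip delay times the propagation speed.
delay = t_fault_echo - t_incident
x_est = x_obs + vp * delay / 2   # recovers the 0.3 m fault location
```

The same rule, applied to the later echoes (fault-to-load and fault-to-source round trips), reproduces the arrival pattern sketched in Fig. 3b.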
For a simple resistive termination, as in Fig. 4, the echo has the same shape as the incident pulse, and the voltage reflection coefficient can be obtained directly from the ratio of the echo amplitude to the incident pulse amplitude:

Γ = (RL − Z0) / (RL + Z0)    (3)

Unfortunately, this method does not work for complex faults and/or terminations, as illustrated in Figs. 5a and 5b, which show signal vs. time variations for serial RL (RL = 50 Ω, LL = 250 nH) and parallel RC (RL = 50 Ω, CL = 5 pF) complex loads, respectively (the other parameters are Rs = Z0 = 50 Ω, TL length = 0.5 m, rectangular pulse duration = 700 ps). Although the shape of the echoes gives clues about the type of termination (e.g., an inductive load basically affects the DC portion of the echo pulse, while the rise and fall times are affected by a capacitive load), the amplitudes certainly cannot be used to calculate/measure the reflection coefficient.

Figure 5: Signal vs. time at the mid point of a 50 Ω, 0.5 m TL under (a) serial RL termination (RL = 50 Ω, LL = 250 nH), (b) parallel RC termination (RL = 50 Ω, CL = 5 pF). A matched termination is used at the source (Rs = 50 Ω). Pulse length = 400 ps.

3.1 Analysis in the Fourier (Frequency) Domain

The standard Fourier transform procedure for complex loads is as follows:
- Discriminate the incident and reflected pulses in time,
- Move to the frequency domain by applying the fast Fourier transform (FFT),
- Ratio the reflected pulse to the incident pulse within the source frequency band.

An example of the FFT procedure is given in Fig. 6. Here, a fault-free, 0.5 m-long, 50 Ω TL is connected to a matched generator at one end and to a parallel RC load (RL = 50 Ω, CL = 10 pF) at the other. The plot in Fig. 6a shows the signal vs. time recorded at the mid point of the TL, where one can easily discriminate the incident and load-reflected pulses. Applying the FFT procedure yields the voltage reflection coefficient vs. frequency given in Fig. 6b. The solid and dashed lines in the figure represent the results of the FFT procedure and the analytical exact solution, respectively. The excellent agreement shows the power of the procedure. As expected, the capacitor is an OC at DC and low frequencies, and the reflection coefficient is zero (since RL = Z0 = 50 Ω). At high frequencies, on the other hand, the capacitor acts as a SC and the modulus of the reflection coefficient approaches 1 (actually, taking the real part after the FFT yields Γ = −1).
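The three-step FFT procedure can be sketched in Python/NumPy. Since the TDRMeter itself is not reproduced here, the load-reflected pulse is synthesized from the analytical Γ(f) of the Fig. 6 parallel RC load (RL = 50 Ω, CL = 10 pF), and the procedure is then applied to the two time records; the Gaussian pulse shape and the time grid are illustrative assumptions:

```python
import numpy as np

Z0, RL, CL = 50.0, 50.0, 10e-12
Nt, dt = 4096, 5e-12
t = np.arange(Nt) * dt
f = np.fft.rfftfreq(Nt, dt)

# Incident pulse at the observation point (Gaussian, illustrative width)
vi = np.exp(-((t - 2e-9) / 100e-12) ** 2)

# Analytical reflection coefficient of a parallel RC termination
ZL = RL / (1 + 2j * np.pi * f * RL * CL)
gamma_th = (ZL - Z0) / (ZL + Z0)

# Synthesize the load-reflected pulse (stand-in for the recorded echo)
vr = np.fft.irfft(np.fft.rfft(vi) * gamma_th, Nt)

# Step 1: the incident and reflected pulses are separate records here.
# Step 2: move to the frequency domain.
# Step 3: ratio reflected to incident within the source band.
gamma_est = np.fft.rfft(vr) / np.fft.rfft(vi)

k = 20                                  # ~1 GHz bin, well inside the source band
err = abs(gamma_est[k] - gamma_th[k])   # agreement with the analytic curve
```

As in Fig. 6b, `gamma_est` vanishes at DC (RL = Z0) and its modulus approaches 1 at high frequencies; outside the source band, where the incident spectrum is negligible, the ratio is meaningless and should be discarded.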
Figure 6: (a) Signal vs. time at the mid point of a 50 Ω, 0.5 m TL under parallel RC termination (RL = 50 Ω, CL = 10 pF). A matched termination is used at the source (Rs = 50 Ω); pulse length = 400 ps. (b) Voltage reflection coefficient vs. frequency obtained with the FFT procedure; solid: TD simulation result, dashed: analytical exact solution.

Figure 7: (a) Signal vs. time at the mid point of a 50 Ω, 0.5 m TL under parallel RLC termination (RL = 50 Ω, CL = 5 pF, LL = 10 nH). A matched termination is used at the source (Rs = 50 Ω); pulse length = 400 ps. (b) Voltage reflection coefficient vs. frequency obtained with the FFT procedure; solid: TD simulation result, dashed: analytical exact solution.

Another example is given in Fig. 7. The same fault-free, 0.5 m-long, 50 Ω TL is connected to a matched generator at one end and to a parallel resonance circuit at the other (RL = 50 Ω, CL = 5 pF, LL = 10 nH). Fig. 7a shows the signal vs. time recorded at the mid point of the TL. The voltage reflection coefficient vs. frequency obtained with the procedure explained above is shown in Fig. 7b. The solid and dashed lines represent the off-line FFT results and the analytical exact solution, respectively. As expected, the reflection is minimal at the resonance frequency of about 711 MHz, calculated from fr = (2π√(LL CL))^−1.

It should be noted that one can still apply the FFT procedure even when the pulses cannot be discriminated in time. In this case, the TD simulations should be repeated twice: first along the TL with the unknown termination, then along the TL with a matched termination. The incident plus reflected pulses are recorded in the first simulation, while only the incident pulse exists and is recorded in the second run. Subtracting the second from the first yields the reflected-only pulse, and the FFT procedure may then be applied.

3.2 Analysis in the Laplace Domain

The shape of the echo may be used to predict the nature of the mismatch along the TL. One method is to use the Laplace transformation to obtain the time variation of the echo sr(t) analytically (see [9] for details). The procedure is as follows:
- Derive the Laplace transform of the generator (e.g., Vi/s for a step voltage of amplitude Vi),
- Write down the voltage reflection coefficient in the s-domain (for example, Γ(s) = (RL + sLL − Z0) / (RL + sLL + Z0) for a serial RL termination),
- Multiply these two, take the inverse Laplace transform of the product, and derive sr(t) analytically.

For example, sr(t) for the serial RL termination is derived as

sr(t) = Vi [1 + (RL − Z0)/(RL + Z0) + (1 − (RL − Z0)/(RL + Z0)) e^(−t/τ)],  τ = LL / (RL + Z0).    (4)

Here, τ is the time constant. Similarly, sr(t) for the parallel RC termination is

sr(t) = Vi [1 + (RL − Z0)/(RL + Z0)] (1 − e^(−t/τ)),  τ = CL RL Z0 / (RL + Z0).    (5)

Another method, without resorting to the Laplace transform, is to observe the echo signal in time at the time limits, i.e., at t = 0 and as t → ∞ (here, t = 0 corresponds to the instant at which the incident pulse hits the load). For example, the inductance of an RL load combination initially acts as an infinite impedance (OC), and full reflection occurs at t = 0. Its current builds up exponentially with time, and it acts as a SC as t → ∞. Fig. 8a sketches this scenario. A step voltage with amplitude Vi becomes 2Vi when it hits the serial RL pair connected in parallel to the TL (marked as t = 0 in the sketch). The voltage then decays exponentially and approaches Vi [1 + (RL − Z0)/(RL + Z0)] as t → ∞.

An example obtained with the TDRMeter is given in Fig. 8b. Here, the signal vs. time of a fault-free, uniform, 0.5 m-long 50 Ω TL, connected to an ideal generator (Rs = 0 Ω) at the left end and to a serial RL load at the other with RL = 50 Ω and LL = 27 nH, is shown. The response to the unit-amplitude step voltage at the mid point of the TL is recorded and plotted in the figure. Since RL = Z0 = 50 Ω, the final voltage value is also 1 V. The time delay between 2 V (at 3.8 ns) and 1.368 V (at 4.062 ns) is read from the SigvsTime.dat file and found to be τ = 0.262 ns. Using the time-constant equation given in the inset of Fig. 8a, the inductance value is calculated to be LL = 26.2 nH.

The capacitor value of a parallel RC combination may also be predicted from the time signature of the echo. A parallel capacitive load initially acts as a SC termination, so full reflection with 180° phase difference occurs at t = 0. The capacitor voltage builds up exponentially with time, and it acts as an OC as t → ∞. Fig. 9a sketches this scenario. A step voltage with amplitude Vi drops to 0 V when it hits the parallel RC pair (marked as t = 0 in the sketch). The voltage then increases exponentially and approaches Vi [1 + (RL − Z0)/(RL + Z0)] as t → ∞.

Another TDRMeter result is given in Fig. 9b. Here, the signal vs. time of a fault-free, uniform, 0.5 m-long 50 Ω TL, connected to an Rs = 50 Ω generator and a parallel RC load (RL = 50 Ω, CL = 50 pF), is shown at the mid point. Since RL = Z0 = 50 Ω, the initial and final voltages are both 0.5 V. The time instants marked t = 0 and t = τ in the figure are extracted from the SigvsTime.dat file as 3.775 ns and 5.01 ns, respectively, resulting in a time constant of τ = 1.235 ns. Using the equation in the inset of Fig. 9a, the value of the capacitor is calculated to be CL = 49.4 pF.

Very often, TLs suffer from more than individual inductive or capacitive termination effects, and resonances occur. Figs. 10 and 11 illustrate signal vs. time variations for terminations with serial and parallel resonance circuits, respectively. For these kinds of terminations, the Laplace transformation procedure may also be applied and analytical expressions of the echo signal may be derived. Although the values of the LC elements may not be predicted directly from the exponential variations, as done for the serial RL and parallel RC terminations above, they may still be predicted by applying various curve-fitting methods and/or novel algorithms such as artificial intelligence, genetic algorithms, etc.

Figure 8: (a) Sketch of the Laplace procedure: prediction of the inductive load from the time signature of the echo by measuring the time constant τ. (b) Signal vs. time (step response) at the mid point of a 50 Ω, 0.5 m TL under serial RL termination (RL = 50 Ω, LL = 27 nH). A matched termination is used at the source (Rs = 50 Ω). The value of the inductor calculated from the plot is 26.2 nH.

Figure 9: (a) Sketch of the Laplace procedure: prediction of the capacitive load from the time signature of the echo by measuring the time constant τ. (b) Signal vs. time (step response) at the mid point of a 50 Ω, 0.5 m TL under parallel RC termination (RL = 50 Ω, CL = 50 pF). A matched termination is used at the source (Rs = 50 Ω). The value of the capacitor calculated from the plot is 49.4 pF.

Figure 10: Signal vs. time at the mid point of a 50 Ω, 0.5 m TL for a step voltage source under serial resonance termination (ZL = 50 Ω, LL = 5 nH, CL = 5 pF, Rs = 50 Ω).
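The time-constant read-off of Fig. 8 can be sketched in Python. The step-response echo of Eq. (4) is generated for the Fig. 8b values (ideal source, RL = Z0 = 50 Ω, LL = 27 nH, so Vi = 1 V and the recorded voltage decays from 2 V to 1 V), τ is located as the first instant at which the decaying part has fallen by a factor of 1/e, and LL is then recovered from τ = LL/(RL + Z0):

```python
import numpy as np

Z0, RL, LL, Vi = 50.0, 50.0, 27e-9, 1.0
tau = LL / (RL + Z0)                      # 0.27 ns
gam_inf = (RL - Z0) / (RL + Z0)           # late-time reflection coefficient (= 0 here)

t = np.arange(0, 5e-9, 1e-12)             # time after the pulse hits the load
s = Vi * (1 + gam_inf + (1 - gam_inf) * np.exp(-t / tau))   # Eq. (4)

v0, vinf = s[0], Vi * (1 + gam_inf)       # 2 V initially, 1 V as t -> infinity
target = vinf + (v0 - vinf) / np.e        # level reached at t = tau (~1.368 V)
tau_est = t[np.argmax(s <= target)]       # first sample at or below the 1/e level
LL_est = tau_est * (RL + Z0)              # recovers ~27 nH
```

The same read-off, using τ = CL·RL·Z0/(RL + Z0) from Eq. (5) on the rising exponential, yields the capacitor value in the Fig. 9 example.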
Figure 11: Signal vs. time at the mid point of a 50 Ω, 0.5 m TL for a step voltage source under parallel resonance termination (ZL = 50 Ω, LL = 5 nH, CL = 5 pF, Rs = 50 Ω).

4. Conclusions

Fault detection/identification along TLs using the time domain reflectometer method has been reviewed. The recently introduced Matlab-based virtual TDRMeter tool is used for characteristic illustrations. The TDRMeter virtual tool can be used to teach, understand, test, and visualize TD TL characteristics and the echoes from various discontinuities and terminations. Inversely, the types and locations of faults can be predicted from the recorded data. Any electronic device and/or circuit includes TLs (i.e., coaxial or two-wire cables, parallel-plate lines, microstrip lines, etc.); therefore, they have transient EM characteristics. A wide range of EMC problems may be simulated with the TDRMeter virtual tool, such as conducted emissions and common- and differential-mode effects, both in the time and frequency domains. Also, reflections due to various types of transmission structures, the impedance effects of vias, signal and power line couplings, etc., on printed circuits with smaller and smaller dimensions make signal integrity one of the most important and complex EMC problems. The virtual tool may therefore be particularly helpful to a student or circuit designer in the evaluation of signal integrity.

References

[1] H. Fellner-Feldegg, "The measurement of dielectrics in the time domain," J. Phys. Chem., Vol. 73, pp. 613-623, 1969.
[2] L. Sevgi, Complex Electromagnetic Problems and Numerical Simulation Approaches, IEEE Press/John Wiley and Sons, NY, 2003.
[3] L. Sevgi, "A Ray Shooting Visualization Matlab Package for 2D Groundwave Propagation Simulations," IEEE Antennas and Propagation Magazine, Vol. 46, No. 4, pp. 140-145, Aug 2004.
[4] L. B. Felsen, F. Akleman, L. Sevgi, "Wave Propagation Inside a Two-dimensional Perfectly Conducting Parallel Plate Waveguide: Hybrid Ray-Mode Techniques and Their Visualisations," IEEE Antennas and Propagation Magazine, Vol. 46, No. 6, pp. 69-89, Dec 2004.
[5] L. Sevgi, Ç. Uluışık, F. Akleman, "A Matlab-based Two-dimensional Parabolic Equation Radiowave Propagation Package," IEEE Antennas and Propagation Magazine, Vol. 47, No. 4, pp. 164-175, Aug 2005.
[6] L. Sevgi, Ç. Uluışık, "A Matlab-based Visualization Package for Planar Arrays of Isotropic Radiators," IEEE Antennas and Propagation Magazine, Vol. 47, No. 1, pp. 156-163, Feb 2005.
[7] L. Sevgi, Ç. Uluışık, "A Matlab-based Transmission Line Virtual Tool: Finite-Difference Time-Domain Reflectometer," IEEE Antennas and Propagation Magazine, Vol. 48, No. 1, pp. 141-145, Feb 2006.
[8] S. Gedney, "EE699: FDTD Solution of the TR Line Equations," Lecture Notes (visit http://www.engr.uky.edu/~gedney).
[9] Agilent Technologies, "Time Domain Reflectometry Theory," Application Note 1304-2 (http://www.agilent.com).
Author's Biography
Levent Sevgi (BS 1982, MS 1984, PhD 1991) is with the Electronics and Communication Engineering Department of the Engineering Faculty at Doğuş University in Istanbul. He was a visiting scientist at the Weber Research Institute, Polytechnic University, New York, between 1988 and 1990. He has been involved in defense system development projects for nearly 15 years, including the design and installation of the Vessel Traffic System along the Turkish Straits. He worked with Raytheon Systems Canada in 1999 on HF radar-based Integrated Maritime Surveillance System trials. He served as the Chair of the Electronic Systems Department of TUBITAK in 2000. His research has focused on propagation in complex environments, analytical and numerical methods in electromagnetics, EMC/EMI modeling and measurements, antenna and RCS modeling, and novel radar systems. He is the author or co-author of several books and more than 100 journal and international conference papers.
I. INTRODUCTION
The Sandia Lightning Simulator (SLS) at Sandia National Laboratories simulates severe lightning strikes. It can produce a maximum peak current of 200 kA for a single stroke, 100 kA for a subsequent stroke, and several hundred Amperes of continuing current for hundreds of milliseconds. The SLS is currently being re-commissioned and refurbished after a decade of non-use. The single-stroke capability has been demonstrated up to 200 kA. The double-stroke and continuing current capabilities have been refurbished but not demonstrated at this time. This paper explains the capabilities and basic design of the SLS, its upgrades, and its diagnostic capabilities.
TABLE II. TYPICAL STS REQUIREMENTS AND KNOWN LIGHTNING PARAMETERS [1,2]
Figure 2. The Sandia Lightning Simulator.

Marx banks are used to generate the high peak current of a lightning stroke. To achieve an overdamped waveform into what is essentially a short-circuit load, a crowbar switch is used to short out the erected capacitance of each tank at approximately the time of peak current, separating the capacitance of the Marx banks from the load circuit. This creates a decaying (instead of oscillating) waveform into the load (or test object), assuming the load is inductive and resistive. The pulse width of the load current is largely determined by the inductance and resistance of the load. The energy delivered to the load depends on the load resistance relative to the total resistance of the output circuit, including the crowbar switch. The basic simulator circuit for each tank can be seen in Figure 3. In reality, some resistance, capacitance, and inductance are associated with the crowbar switch and its connections. In addition, the crowbar switch is not closed exactly at the peak current, but at some time before it. Before the crowbar switch closes, the simulator acts as an underdamped circuit (assuming the load resistance is relatively small) and the Marx voltage is approximately 90° out of phase with the Marx current. At peak current, the output voltages across the Marx banks are zero, or close to it. Some voltage needs to be present across the crowbar switch electrodes to trigger the switch. Therefore, the crowbar switch is typically triggered at a time corresponding to approximately 80% of peak current. This allows sufficient voltage to be present across the crowbar switch to trigger it reliably. Unfortunately, it also causes some energy to be retained in the Marx bank rather than delivered to the load. The ringing that can be seen at the current peak in Fig. 1 is due to the discontinuity introduced by the crowbar switching.
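The crowbar action described above can be illustrated numerically. The circuit values in the sketch below are hypothetical round numbers chosen only to give lightning-scale currents; they are not the SLS's actual parameters. A charged Marx capacitance rings into a series L-R load, and closing an ideal crowbar across the output once the current reaches about 80% of its peak on the rising edge converts the oscillatory discharge into a unidirectional, decaying pulse:

```python
# Hypothetical values: erected Marx capacitance, loop inductance/resistance,
# charge voltage. Chosen for illustration only, not SLS data.
C, Lc, R = 10e-6, 2e-6, 0.05
V0 = 100e3
dt, steps = 1e-8, 10000          # 100 us of simulated time

def run(crowbar_frac=None, ipk=None):
    """Semi-implicit Euler on the series RLC loop; optionally close an
    ideal crowbar (short across the capacitor) once the current first
    reaches crowbar_frac * ipk on the rising edge."""
    vc, i, closed = V0, 0.0, False
    hist = []
    for _ in range(steps):
        if crowbar_frac and not closed and i >= crowbar_frac * ipk:
            closed = True
        if closed:
            i += dt * (-R * i) / Lc      # capacitor bypassed: pure L-R decay
        else:
            i += dt * (vc - R * i) / Lc  # underdamped RLC discharge
            vc -= dt * i / C
        hist.append(i)
    return hist

free = run()                              # no crowbar: oscillating current
ipk = max(free)                           # first peak, roughly V0*sqrt(C/Lc)
cb = run(crowbar_frac=0.8, ipk=ipk)       # crowbar at ~80% of peak current

reversal_free = min(free) / ipk           # large negative swing without crowbar
undertest = max(cb) / ipk                 # crowbarred peak, ~0.8 of the free peak
```

The run without the crowbar swings strongly negative (the oscillation that would stress the Marx capacitors), while the crowbarred run never reverses and its peak is only ~80% of the free-running peak, illustrating the undertest trade-off noted in the text.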
V. DIAGNOSTICS

Current and voltage measurements are taken for each shot to monitor and diagnose the lightning simulator. These signals are sent back to a screen room via shielded coaxial cables. Three current viewing resistors in line with the simulator's return path measure the total current for each shot. The current inside each tank is measured with current viewing resistors. The high-voltage trigger generator signals are monitored with a combination of current transformers and current viewing resistors. The crowbar voltage is measured with a resistive divider, and the trigger signal to the crowbar switch lasers is monitored. It is important to monitor these signals because the timing between the peak Marx bank currents and the triggering of the crowbar switches is critical. If the crowbar switch fires too early into the rising edge of the Marx bank current, less current than desired is transmitted downstream to the test object, resulting in an undertest. If the crowbar switch fires too late or not at all, the Marx bank capacitors may be damaged by large oscillations in the current pulse.

To keep the electromagnetic noise generated during a shot from interfering with test object diagnostic data, the diagnostics are shielded and fed through a fiber optic system back to the screen room. Typically, the diagnostics are shielded in a metal instrumentation barrel. Within the barrel, fiber optic transmitters convert the analog diagnostic signals into optical signals, which are sent to the screen room via optical fibers. Pictures of the instrumentation barrel, diagnostics, and fiber optic transmitters can be seen in Figs. 4-6. In the screen room, the optical signal is converted back to an electrical signal and fed to an oscilloscope channel. Typical diagnostics are current viewing resistors and transformers, Rogowski coils (for current-derivative measurements), and common- and differential-mode voltage dividers. Other compatible diagnostics are pressure transducers, temperature sensors, and electric and magnetic field sensors (D-Dot and B-Dot, respectively), which may be desired when exposing a test object to indirect lightning electromagnetic fields.
VII. CONCLUSIONS

In conclusion, the Sandia Lightning Simulator has been refurbished and upgraded after almost a decade of non-use. Currently, the ability to simulate a severe single lightning stroke has been demonstrated. The double-stroke and continuing-current capabilities have been refurbished but not yet verified. Test objects can be subjected to direct lightning attachment, burnthrough, or coupling of magnetic fields due to nearby strikes. A variety of diagnostics can be used in conjunction with fiber optic transmitting systems, the use of which minimizes electromagnetic noise coupling from firing the simulator.

ACKNOWLEDGMENTS

The authors would like to thank and acknowledge the contributions of Parris Holmes and Matthew Kiesling in refurbishing, maintaining, and operating the Sandia Lightning Simulator. Their efforts and dedication have been key elements in the success of this endeavor. We would also like to thank Mike Dinallo for his technical assistance and knowledge.

REFERENCES

[1] F. W. Neilson, "An Extreme-Lightning Test Facility," internal report, Sandia National Laboratories.
[2] N. Cianos, E. T. Pierce, "A Ground Lightning Environment for Engineering Usage," Technical Report 1, Contract L.S. 2817A3, SRI Project 1834, for McDonnell-Douglas Astronautics Corp., Stanford Research Institute, Menlo Park, California, August 1972.
[3] G. H. Schnetzer, R. J. Fisher, and M. A. Dinallo, "Measured Responses of Internal Enclosures and Cables Due to Burnthrough Penetration of Weapon Cases by Lightning," SAND94-0312, Sandia National Laboratories, August 1994.
Author Biographies
Michele Caldwell
Michele is currently the manager of the Electromagnetic Qualification and Engineering Department at Sandia National Laboratories. She has worked in the department for the last four years on a variety of programs, including the recommissioning of the Lightning Simulator and a large TEM cell for pulsed testing. Her other roles at Sandia have included safety analyst and systems engineer.

Leonard Martinez
Leonard is currently the operator of the Sandia Lightning Simulator and the Electromagnetic Environments Simulator for the Electromagnetic Qualification and Engineering Department at Sandia National Laboratories. As the operator, Leonard is responsible for the maintenance and operation of the facilities, as well as for coordinating tests. He has over 20 years of experience designing, building, and operating a variety of pulsed power systems.