
Error Budgets for Optomechanical Modeling

Alson E. Hatheway
Alson E. Hatheway Inc., 595 East Colorado Blvd., Pasadena, CA 91101

ABSTRACT

The author has found that linear approximations in heat transfer, elasticity and optics are powerful tools that are adequate for solving the great majority of optomechanical engineering design problems. In his experience the analytical errors are minimized by assuming linearity in the equations and unifying the solution into a single analytical code. He suggests that organizations planning optomechanical analyses perform top-down error budgets of the processes involved in order to design the most cost-effective analysis for the project. An ISO standard is recommended as a guide to the error budgeting process.

Key words: integrated analysis, unified analysis, optomechanical analysis

1. INTRODUCTION

Over the last twenty years the industry has been developing software and data base managers to facilitate the more distributed forms of multidisciplinary analysis by automating the re-formatting of data files, developing interpolation routines and manipulating high-precision data bases to avoid the truncation and round-off associated with reading limited-precision output files. Some of these routines will also expand and reduce the sizes of data sets to match the requirements of existing analysis codes: expand a Sinda thermal analysis output of 200 temperatures to the 10,000 temperatures that may be required by a NASTRAN model, or reduce 60,000 NASTRAN displacements to just those (perhaps 600) that are associated with the deformed shapes of the optical surfaces. A typical flow diagram of such an analysis process is shown in Figure 1.

[Figure 1 is a flow diagram in which discipline-specific analysis codes exchange data through data bases, translator software, a process manager (e.g. MATLAB) and shared computer resources. The disciplines and representative codes shown include Zernike analysis (Fringe, 0Poly, SigFit), stray light (APART/PADE, STRAY, OARDAS), control systems (DISCOS, FAP, MATRIX X), optics (CODE V, Oslo, Zemax), structures (NASTRAN, ANSYS, Cosmos, IAC, OptiOpt, IMOS/MACOS), heat transfer (Sinda, TAP, MITAS), CAD/CAM (Intergraph, ProE, AutoCad), graphic post-processing, and other disciplines such as fluid mechanics, acoustics and diffraction.]

Figure 1. Integrated analysis flow diagram.

Optical Modeling and Performance Predictions, edited by Mark A. Kahan, Proceedings of SPIE 1
Vol. 5178 (SPIE, Bellingham, WA, 2004) · 0277-786X/04/$15 · doi: 10.1117/12.503539

During this same time the author has performed multidisciplinary analyses and solved engineering problems on whole optical systems ranging from laser battle stations through deep-UV lithographic lenses to fiber optic sensors and processors. In virtually all of these analyses the solutions relied entirely on the linearized versions of the equations for each of the disciplines used: heat, elasticity and optics are nearly always involved. The reasons for this choice were entirely pragmatic: economy and accuracy. By linearizing the equations the solution often required only one analytical code (economical) and avoided the many interpolations, extrapolations, format changes, truncations, data expansions and data reductions that are required to move a problem back and forth among analysis codes in order to achieve the final result (accuracy).

The result is that today the analyst of optomechanical systems may be confronted with a vast array of options and
choices that are unique to the discipline of optomechanics. A successful analysis or simulation requires the analyst to
make informed choices among these and often the information is very sparse or nonexistent. The fact that desired
information is not available does not mean that it should be ignored: in the process of making partly-informed decisions
the analyst should take note of the missing pieces and at least make educated guesses at them, if only as place-holders to
inspire the future development of the material. To this end the author has found top-down error budgeting to be an
essential element in planning and performing his multidisciplinary analyses.

The field of metrology (in which the author is also active) has faced a similar problem in establishing the accuracy of measurements, especially in approaching the nanometer scale of some of today's manufacturing (semiconductors, micro electro-mechanical systems and disk drives come to mind). In response, the ISO has developed a standard for expressing uncertainty in measurement [1]. Although it is expressly aimed at the metrology industry, its principles may be carried over into any field where errors and uncertainty are of concern. Optomechanical analysis and simulation certainly qualify. In addition to guiding the structure of a top-down error analysis it also offers rigorous guidance in combining all the contributors (correlated, un-correlated and perhaps partially correlated) into an overall estimate of the accuracy of a number. The author has found it to be a valuable source of guidance in estimating errors in analyses. The optical industry may wish to adopt it (or adapt it) to guide the engineering of some of our projected optical systems.

In the following sections the author presents error functions and error assessments for some of the steps he uses in his more unified analyses, including linear assumptions for various non-linear phenomena. They have been used successfully and repeatedly in his analyses and simulations. Some of these are used in the distributed analysis schemes presently in wide use and the author hopes they will be useful to all. Others are common to the more unified methods of analysis. They may fill in some of the current information gaps in error budgeting. Hopefully, others will contribute pieces and we may someday be able to perform better, more informed error budgeting in optomechanical analysis.

To illuminate his recommendations the author will discuss a thermo-elasto-optical analysis that he has reported on previously [9]. The analysis was performed on a fiber optic encoder system and the mechanical design is shown in Figure 2. The optical design includes 16 optical elements with 29 optical surfaces including lenses, mirrors and diffraction gratings.

Figure 2. Mechanical layout of spread-spectrum encoder.

Typical test data are shown in Figure 3. This figure plots the intensity of the radiation coupled to the output fiber as a function of increasing temperature for several different configurations of components. The problem was to determine the sources of thermal instability in the output coupling efficiency.

[Figure 3 ("Mirror Mount Stability Tests") plots coupling efficiency (0 to 1) against temperature (20 to 50 °C) for kinematic and elastic mirror mount configurations.]

Figure 3. Thermal instability in spread-spectrum encoder output.

2. MAGNITUDES OF SOME OF THE CONTRIBUTIONS

Many of these contributors have been discussed in detail elsewhere [2,3,4]. Of particular concern has been the uncertainty introduced by the expansions and contractions of the data set, for which there are no physical justifications or guidance, and the associated reformatting and interpolations. These processes are indigenous to the distributed form of integrated analysis.

Areas that have not been explored are the principal contributors to the more unified forms of simulation and the influ-
ence of assuming linearity in the phenomena being modeled. In the following discussions of linearity influences the
deviation of the linear assumption from the full non-linear behavior of a phenomenon will be quantified by the “devia-
tion fraction,” e, such that,

deviation fraction = e = linear value/non-linear value - 1. (1)

The deviation fraction quantifies the relative error of a linear assumption compared to the true non-linear value. The
author has found the evaluation of deviation fractions to be a reliable and rapid way to estimate the error associated with
linear assumptions.

2.1 Finite element convergence

The most important property of the finite element method is that it converges to the true solution of the linear elastic
equations as the mesh density (number of nodal points per unit volume) is increased without limit in the model. Note:
A single modeling variable under control of the analyst determines the accuracy of the solution. This property provides
the analyst a direct, if not always easy, method to assure himself of the accuracy of his work.

The convergence property is demonstrated in the simple problem of an aluminum cantilever beam that is 16 inches
square and 32 inches long and loaded by its own mass density with a transverse gravitational load. The proportions of
the beam were selected to require a solution for the shear deflections as well as the bending deflections. A simple finite
element model of this problem, using shell elements, is shown in Figure 4 with five different mesh densities.

Figure 4. Five models for a convergence study of a cantilever beam.



The simplest model contains only 6 nodal points and the most detailed one contains 561 nodal points. Each of these
models was analyzed for end deflection and the results are plotted in Figure 5.

[Figure 5, left panel ("Mesh Refinement"): calculated end deflection (in units of 1e-5 in.) vs. element size (0 to 20 in.) for the Z-gravity and Y-gravity load cases, with cubic fits y = -0.9557x^3 - 0.3606x^2 - 0.4532x + 7.5138 (Z gravity) and y = 9.498x^3 - 6.8815x^2 - 0.2462x + 7.4963 (Y gravity) and the linear solution marked. Right panel ("FEM Convergence"): deviation fraction (0.001 to 0.1) vs. number of nodes in the model (1 to 1000) on logarithmic axes.]

Figure 5. Convergence of the calculated deflection as the elements get smaller and the number of nodes increases.

As can be seen in the left figure, as the size of elements in the model decreases without limit the deflection converges to a fixed value. The linear elastic solution to this problem is 7.51 x 10^-5 inches (shown as the open triangle at the left-hand extremity of the figure). This allows for some of the sections near the free end to deform from the planar constraint at the fixed end. Note that the cantilever beam modeled with shell elements offers two directions for gravity, one with gravity normal to the plane of the elements (Z gravity) and the other with gravity in the plane of the elements (Y gravity). The two configurations show slightly different behavior in the coarse models but converge to the same value in the limit.

The figure on the right is a more conventional plot of the tendency of the error (deviation fraction) to be reduced by
about a decade for each one and one-half decades of increase in the number of nodes in the model.
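The scaling rule stated above can be restated as a quick planning sketch: one decade of error reduction per one and one-half decades of added nodes implies the deviation fraction falls roughly as N^(-2/3). The helper below is illustrative (its name and sample numbers are not from the paper's data):

```python
# A minimal sketch of the convergence scaling described in the text:
# deviation fraction e ~ N**(-2/3), where N is the number of nodes.
def nodes_for_target(e_now, n_now, e_target):
    """Estimate the node count needed to reach e_target, given a model
    with n_now nodes showing deviation fraction e_now."""
    # Inverting e ~ N**(-2/3):  N_target = N_now * (e_now/e_target)**1.5
    return n_now * (e_now / e_target) ** 1.5

# Driving the error down two decades (0.1 -> 0.001) costs about three
# decades of nodes (10 -> 10,000).
print(nodes_for_target(0.1, 10, 0.001))
```

This kind of estimate lets the mesh density be sized to the error budget before the model is built.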
The property of convergence has also been demonstrated [8] for a variety of other model configurations, with similar results. As the number of nodal points in the models is increased the deviation from linear theory is reduced without limit.

The corollary is also true: all finite element analyses have finite-sized errors in their results. But because the conver-
gence of the finite element method depends only upon mesh density an experienced analyst can readily estimate the
accuracy of his analysis. This deviation of the finite element solution from the true linear elastic solution must be
factored into the planning of the analysis.

2.2 Linear elasticity


The full theory of elasticity, as assembled at the end of the 19th century, is a set of non-linear differential equations [5]. The general solution to these equations has proved intractable except in a very limited number of special configurations. The great success of structural mechanics over the past century has been due to reducing the equations to a linear set that assumes that all displacements are small and that all materials exhibit linear stress vs. strain
curves. The small displacement assumption has had minimal influence on the calculation of stresses since they are
mostly dependent upon the applied load, not the attendant deflections.

Since structural mechanics in general and the finite element method in particular rely on the linear sub-set of elastic theory, an open question is: What is the magnitude of the error in deflections calculated using linear elastic theory?

It has been shown [6] that the contributors to non-linearity in elasticity may be controlled to very small numbers, less than one part in ten thousand, in some deflections of structural members experiencing substantial deformation. The particular



member studied was a slender cantilever beam deflected at its free end by a displacement equal to one-seventh of its
length. A study (Figure 6) of the resulting strains and displacements at the fixed end showed that the transverse force vs.
displacement curve was linear to about one part in 10,000 (e=0.0001).

Figure 6. The linearity of transverse bending of a cantilever beam.

The general opinion has held that linear elastic solutions are good enough. The author cautions that non-linearities can be substantial. The beam demonstrated in Figure 6 above also had about 0.5% (e=0.005) of foreshortening that was not predicted by the linear theory. Also, the author recently designed a coil spring assuming its spring rate to be a constant only to find that the slope of the force vs. displacement curve varied 15% (e=0.15) over the operating range of the displacements. In short, the effects of non-linearities in elasticity (both equations and materials) need to be considered by the analyst when planning an analysis and the associated uncertainties counted.

2.3 Heat transfer modeling

Optical instruments are sensitive to the thermal environment in which they operate. The dimensions of the instrument
may change as the bulk temperature goes slowly up or down and indexes of refraction may change as well. Materials of
different thermal expansion rates may cause bowing or bending of long members. These effects will not only change
the position, orientation and size of the image but they may also change the balance of the aberrations at the image and
materially reduce its resolution.

The instruments are also sensitive to transient thermal effects that may be caused by changes in the direction of sunlight,
the warmth of the hands of a user or the changing winds in the field. Thermal gradients will cause bowing and bending
of structural members, even those with uniform thermal expansion properties, and aggravate the image quality problems
if they get into the optical elements, either refractive or reflective.

In general heat transfer analysis must accommodate the non-linearities of thermal radiation and temperature dependent
properties as well as temperature and time dependent boundary conditions. For this reason very sophisticated software
has been developed to solve these problems. These non-linear equation solvers (usually applying finite difference techniques) are generally efficient in solving moderate-sized problems (a few tens of nodes to a few hundred nodes).

Most finite element structural analysis codes also perform heat transfer analysis, but only in the linear domain such as
conduction or forced convection. Some finite element codes implement radiation heat transfer solutions (including view
factors), with varying degrees of success. The finite element codes are efficient at solving problems with thousands of



nodes. Some of these codes also offer temperature dependent properties in heat transfer in which case the analyst must
specify an initial temperature estimate for each of the nodes. In this latter case the analyst may attempt non-linear
solutions by using the temperature results of one analysis as the initial temperature estimate for a repeat analysis.

A major source of error in thermo-elastic analysis is the translation of thermal analysis data from a finite difference code for use as temperature loads in a finite element code. The biggest source of these errors is that the thermal analyst doesn't use the same nodes as the structural analyst, and often the thermal models are considerably smaller (by factors of 10 to 1000). There are no physical laws to guide an expansion of the data set from, say, 100 nodes to 10,000 nodes: the process is pure mathematical extrapolation. The result is that although the general level of absolute temperatures may be about right, the process always generates local thermal gradients (and resulting deflections) that are bogus and often overlooks or minimizes other thermal gradients that may be important.

One work-around for these problems is to perform the thermal analysis on the finite element model. The biggest
handicap is the non-linearity of the thermal radiation. However, a careful analysis of the errors introduced by linearizing
the radiant heat transfer and a comparison with other alternatives may show it to have some advantages.

First, one needs a finite element code that permits temperature dependent thermal conductivity in thermal analysis. Such a code will look up the conductivity based upon the mean temperature of the element. If we take the radiation heat transfer equation,

q = FAεσ(Th^4 – Tc^4), (2)

and rearrange it to formulate a radiation heat transfer coefficient (conductance per unit of area and unit of temperature difference),

hr = q/[A(Th – Tc)] = Fεσ(Th^2 + Tc^2)(Th + Tc), (3)

we can then determine the error (deviation fraction, e) induced by calculating the conductance based upon the mean temperature, the average of Th and Tc, rather than using both temperatures. It will of course depend upon the difference between the two temperatures also. On this basis the deviation fraction becomes

e = (∆T/Tmean)^2/[4 + (∆T/Tmean)^2]. (4)

The deviation fraction tells us the relative size of the deviation of the mean-temperature solution from the two-temperature solution and is plotted in Figure 7.

[Figure 7 plots the deviation fraction, e (0 to -0.14), against the temperature ratio ∆T/Tmean (0 to 0.8).]

Figure 7. Error (as a deviation fraction) in the calculated radiation conductance using mean temperature only.



The abscissa is the ratio of the temperature difference to the mean temperature and the ordinate indicates the correspond-
ing error caused by calculating the conductance only on the mean temperature. One may see from this figure that if one
wishes to keep the error in the conductance below 1% and the mean temperature is about room temperature (300° K)
then the temperature difference must be less than 0.2 times the mean, about 60 K°. In the author’s experience, very few
optical instruments that operate at room temperature can tolerate such a large temperature difference during operation
and those that do take very special steps to isolate and contain the extreme temperatures. The vast majority of thermal
optomechanical problems involve small temperature differences of 1 to 5 K° for which the conductance based upon the
mean temperature is in error by less than 0.01% (1 part in 10,000). Since the conductance is proportional to the cube of the temperature, it follows that the temperature error is one-third of the conductance error. The finite element method
can calculate steady-state nodal temperatures in just a few iterations and in the great majority of cases the error in the
calculated temperatures can be less than 1 part in 10,000.
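The two numerical claims above can be checked directly. The sketch below (assuming unit emissivity and view factor) compares the exact two-temperature conductance of eq. (3) with its mean-temperature linearization and evaluates the deviation fraction of eq. (4):

```python
# Sketch: mean-temperature linearization of the radiation conductance.
SIGMA = 5.670374419e-8  # Stefan-Boltzmann constant, W/(m^2 K^4)

def h_two_temperature(th, tc):
    # eq. (3) with F = epsilon = 1
    return SIGMA * (th**2 + tc**2) * (th + tc)

def h_mean_temperature(th, tc):
    # conductance linearized about the mean temperature only
    tmean = 0.5 * (th + tc)
    return 4.0 * SIGMA * tmean**3

def deviation_fraction(th, tc):
    # eq. (4)
    r = (th - tc) / (0.5 * (th + tc))
    return r**2 / (4.0 + r**2)

# 60 K difference about a 300 K mean: conductance error just under 1%
e60 = 1.0 - h_mean_temperature(330.0, 270.0) / h_two_temperature(330.0, 270.0)
# 5 K difference about a 300 K mean: conductance error below 0.01%
e5 = 1.0 - h_mean_temperature(302.5, 297.5) / h_two_temperature(302.5, 297.5)
print(e60, e5)
```

The directly computed conductance error reproduces eq. (4) exactly, confirming that for typical 1 to 5 K differences the linearization error is negligible.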

The above error analysis assumed that both the hot and cold surface temperatures were independent variables. Many of the strongest radiant influences such as the sun, deep space and room temperature may be represented as fixed temperature sources or sinks. The radiant exchange conductance then depends on a single free temperature value, so there is no approximation in calculating the conductance; the conductance error converges to zero as the iterations proceed and does not limit the accuracy of the final nodal temperature calculations.

A similar analysis of transient thermal conditions leads to similar conclusions. The vast majority of thermal transients
that are debilitating to optical instruments involve thermal excursions of only a few degrees, typically much less than
about 6 K° (2% of the absolute temperature). These small temperature excursions at one of two surfaces (the typical
case) will have a 1.5% influence on the radiant conductance. One or two iterations may reduce the error by one or two
orders of magnitude if desired.

Most thermal problems in optomechanical design involve a small subset of the generalized heat transfer universe:
relatively uniform steady state temperatures in the instrument and relatively small temperature excursions during thermal
transients. Within these limits linearized equations can provide solutions with residual errors of less than 0.01% with a
few iterations to accommodate the temperature dependent properties. Many finite element codes may be set up to solve
these equations iteratively for large models (10,000 or more nodes).
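The iterative scheme described above can be sketched on a single node that exchanges heat by conduction with a warm wall and by radiation with a cooler sink; the radiation link is re-linearized about the latest temperatures on each pass. All conductances, areas and boundary temperatures below are invented for illustration:

```python
# Sketch: steady-state solve with a linearized radiation conductance.
SIGMA = 5.670374419e-8  # Stefan-Boltzmann constant, W/(m^2 K^4)
G_COND = 0.5            # conductive conductance to the wall, W/K (invented)
AREA = 0.1              # radiating area, m^2 (invented; F = epsilon = 1)
T_WALL, T_SINK = 320.0, 280.0   # fixed boundary temperatures, K (invented)

t = 300.0               # initial temperature estimate for the free node
for _ in range(5):      # a few iterations suffice, as the text notes
    # re-linearize the radiation link about the latest temperatures, eq. (3)
    g_rad = SIGMA * (t**2 + T_SINK**2) * (t + T_SINK) * AREA
    # linear steady-state balance: G_COND*(T_WALL - t) = g_rad*(t - T_SINK)
    t = (G_COND * T_WALL + g_rad * T_SINK) / (G_COND + g_rad)

# residual of the full non-linear balance; it vanishes as t converges
residual = G_COND * (T_WALL - t) - SIGMA * AREA * (t**4 - T_SINK**4)
print(t, residual)
```

Because the linearized conductance reproduces the exact radiant exchange once the temperatures stop changing, the iteration converges to the true non-linear solution.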

2.4 Linear Optical Calculations

2.4.1 Optical Path Length

Optical systems may be evaluated by calculating the difference in optical length of a number of rays transiting the
system. The optical path difference, OPD, is a useful quantity for evaluating the quality of the image. The optical
length of the rays may be several meters but a small (sub-micron) difference among them may be desired for high
quality imaging performance. An OPD of “zero” is perfect imaging. Thermal and structural influences may create
displacements and deflections of the optical surfaces that change the optical length of the rays. These mechanical
motions may affect the OPD.

The optical path length for a ray is calculated by computing the geometric length from one ray intercept point to the next, multiplying each length by the index of refraction and summing all of the segments for each ray transiting the system. The change in optical path length is normally determined by calculating the length twice, once for the undeformed system and again for the deformed system, and taking the difference between them for each ray. The change in optical path length for a ray will also be a change in the OPD (optical path difference) for the same ray, ∆OPD.

A simplified way to calculate the optical path length changes caused by mechanical motions in an optical system assumes that the ray motions are small and that the ray intercept point will move in a plane that is tangent to the surface at the initial ray intercept point. The geometry is shown in Figure 8.

The true ray intercept point will be on the spherical surface that passes through the initial ray intercept point. We may calculate the difference between the spherical (true) ray intercept and the planar (linear) ray intercept. This difference is the error that is generated when the analysis uses only the linear equations. The deviation fraction, e, is the ∆OPD error divided by the true ∆OPD,

e = ∆OPDerror/∆OPD = rid/(2R cos φ sin φ + rid), (5)

which for a given amount of ray intercept motion, rid, is dependent upon the local radius of curvature of the surface, R, and the angle of incidence, φ. Figure 9 shows the acceptable combinations of surface radius and angle of incidence for relative ∆OPD errors of 1:100 and 1:1,000 (deviation fractions of 0.01 and 0.001 respectively) assuming a ray intercept motion, rid, of 10 microns.

[Figure 8 illustrates the ray intercept geometry: the ray intercept motion, rid, on the surface of local radius R, the angle of incidence, φ, and the resulting ∆OPD and ∆OPD error.]

Figure 8. Ray intercept geometry.
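Eq. (5) is simple to evaluate for candidate surfaces when planning an analysis. A minimal sketch (the function name and sample values are illustrative):

```python
# Sketch: relative dOPD error of the tangent-plane (linear) approximation.
import math

def opd_deviation_fraction(rid, R, phi):
    """Eq. (5): rid = ray intercept motion (m), R = local radius of
    curvature (m), phi = angle of incidence (radians)."""
    return rid / (2.0 * R * math.cos(phi) * math.sin(phi) + rid)

# With 10 um of ray intercept motion, a 1 m radius surface at 0.5 rad
# incidence leaves the linear OPD calculation accurate to about 1e-5.
e = opd_deviation_fraction(10e-6, 1.0, 0.5)
print(e)
```

Smaller radii or near-zero incidence angles drive the deviation fraction up, consistent with the curves in Figure 9.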
[Figure 9 plots, for a ray intercept motion of ±10 µm, radius of surface R (0 to 1 meters) against angle of incidence φ (0 to 1.5 radians), with curves for deviation fractions of 0.01 and 0.001 and a region labeled "Acceptable OPD error".]

Figure 9. Relative ∆OPD error (as deviation fraction).

Note that the radius of curvature of the surface dominates the relative ∆OPD error chart only at very large and very small angles of incidence. At very small angles of incidence the linear ∆OPD change tends toward zero faster than the true ∆OPD, causing a singularity at angles of incidence equal to zero; the relative ∆OPD error goes to unity. As may be seen in Figure 10 the absolute values of the ∆OPD error remain very small as the angle of incidence approaches zero.

[Figure 10 plots, for a ray intercept motion of ±10 µm, radius of surface R (0 to 1 meters) against angle of incidence φ (0 to 1.5 radians), with curves for absolute ∆OPD errors of 10 nm and 100 nm and a region labeled "Acceptable OPD error".]

Figure 10. ∆OPD absolute errors.

The data presented here are for ray intercept motions of 10 microns on the spherical surface. Similar charts may be prepared for different values of this variable. As may be seen in the above figures there is a very large design space in which linear simplifications can produce calculated OPD changes that are accurate to 99%, and accuracy of 99.9% is achievable except at very high angles of incidence.

2.4.2 Image motion

If the analysis is concerned with the control of image motions it is often useful to assume a linear relationship between the position, orientation and size of the image and the position and orientation of each of the optical elements in the system. If we assume a flat object (XY) plane mapped onto a flat image (XY) plane as shown in Figure 11, the transform from the object plane to the image plane is indeed linear (the magnification, M), as is the transform between the element plane and the image plane (one minus the magnification, 1-M).

[Figure 11 shows the object and image planes along the Z axis of the lens, with object distance s, image distance s', object motion ∆s = Tzo and image motion ∆s' = Tzi.]

Figure 11. The image with respect to the lens and the object.

The transforms in the longitudinal (Z) direction are not linear, however. The influence function between the object and the image is [7]

Image motion/Object motion = Tzi/Tzo = M^2/(1 - MTzi/f) (6)



where f is the focal length of the optical element whose image is being analyzed. The numerator, M^2, is the influence coefficient (used in linear analysis) [7]. The quantity in the denominator, MTzi/f, is the deviation fraction, e,

e = MTzi/f. (7)

The deviation fraction tells just how much the linear assumption will be in error from the nonlinear solution. Figure 12 shows the acceptable magnification and focal length combinations for deviation fractions of 0.01 and 0.001. The figure assumes an object displacement of 10 microns.

[Figure 12 plots, for object motion of 0.01 millimeters, lens focal length (0 to 0.1 meters) against lens magnification (0 to 4.5), with curves for deviation fractions of 0.01 and 0.001 and a region labeled "Acceptable error".]

Figure 12. Acceptable combinations of focal length and magnification for object motions of 10 microns.

The influence function between the lens and the image is7

Image motion/Lens motion = Tzi/Tzl = (1 − M²)/[1 − M³Tzl/(f − MTzl − fM²)]  (8)

In this case the numerator, (1 − M²), is the influence coefficient used in linear analysis, and the deviation fraction that tells the magnitude of the error due to the linear assumption (from the denominator again) is7

e = M³Tzl/(f − MTzl − fM²).  (9)

Figure 13 shows the acceptable combinations of magnification and focal length for deviation fractions (that is, error due
to linearity) of 0.01 and 0.001. Again the figure assumes a lens displacement of 10 microns.
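Equation (8) is explicit in the lens motion Tzl, so it can be evaluated directly. A similar sketch (hypothetical names and parameters, not from the paper) compares it with the linear coefficient (1 − M²):

```python
def image_motion_lens(M, f, Tzl):
    """Image motion for an axial lens motion Tzl, per Eqs. (8) and (9).

    Returns (Tzi, e) where e is the deviation fraction of Eq. (9); the
    linear estimate is simply (1 - M**2) * Tzl.
    """
    e = M**3 * Tzl / (f - M * Tzl - f * M**2)   # Eq. (9)
    Tzi = (1.0 - M**2) * Tzl / (1.0 - e)        # Eq. (8)
    return Tzi, e

# Hypothetical case: 2x magnification, 50 mm focal length, 10 um lens motion
M, f, Tzl = 2.0, 0.050, 10e-6
linear = (1.0 - M**2) * Tzl          # linear estimate: -30.000 um
exact, e = image_motion_lens(M, f, Tzl)
print(linear, exact, e)              # |e| ~ 5e-4: linear is ~99.95% accurate
```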

Note that the general shape of the curves in this figure (for lens motions) is similar to that of the curves in the preceding figure (for object motions), with the exception of a singularity that occurs in the lens-motion curves at a magnification of 1.0. It is caused by the fact that the image motion goes to zero at unity magnification, so the image is insensitive to lens motions. This condition rarely causes difficulty in practice even though large relative errors may exist in the prediction: the absolute magnitudes of the predicted displacements also go to zero, though at a slower rate than the actual displacements, both becoming very small as the magnification approaches unity.

Additional charts may be prepared from the above equations covering different values of magnification, focal length and
motion (lens or object) but the salient features may be seen here. A very large design space is available in which linear



Figure 13. Acceptable combinations of focal length and magnification for lens motions of 10 microns. (Plot of lens focal length, 0 to 0.1 m, versus lens magnification, 0 to 4.5, for a lens motion of 0.01 mm; the acceptable-error regions are bounded by curves for deviation fractions of 0.01 and 0.001.)

analysis will give answers that are better than 99% accurate, and a significant space offers answers better than 99.9% accurate.

3. EXAMPLE

The utility of error budgeting is illustrated by an analysis the author performed on a fiber-optic spread-spectrum en-
coder9. It was a thermo-elasto-optical analysis to establish the sources of a thermal instability in the output power (see
Figure 3 above). The flow of the analysis is shown in Figure 14 and relies on the optomechanical constraint equations7
to predict registration errors at the output fiber.

Heat → Translate → Elasticity → Translate → Optics → Translate → Coupling

Figure 14. Flow diagram for analysis of fiber-optic spread-spectrum encoder.

The heat transfer code was MSC/cal and the elasticity code was MSC/pal2. These codes are intended to work together for thermo-elastic analysis, and MSC/cal produces an output file that is already formatted for use by MSC/pal2 as a nodal temperature load. This nodal temperature file is printed as text, so its accuracy is limited by truncation and round-off to three decimal places. There are no other sources of error or uncertainty in the translation of the thermal data for use in the elastic solution. The displacements from the elastic code are also output to three-decimal-place accuracy.
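The size of such a truncation/round-off contribution is easy to bound. A minimal sketch (illustrative temperature value, not the paper's data): writing a value to a text file with three decimal places introduces an absolute error of at most half the last retained place.

```python
# Illustrative nodal temperature (deg C) passed through a three-decimal
# text-file hand-off, as in the thermal-to-structural translation above.
T_exact = 23.4567891
T_written = float(f"{T_exact:.3f}")  # value as it appears in the file
err = abs(T_written - T_exact)
print(T_written, err)                # err is bounded by 0.0005
```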

The linearized optical equations, in the form of optomechanical constraint equations, are written into a computational
spreadsheet (Microsoft Excel) to machine accuracy (15 decimal places) and do not contribute a significant computa-
tional error other than their linearity, which is counted separately. The optomechanical constraint equations calculate the
registration errors (position, orientation and size) between the exit pupil of the output light (the image of the input fiber)
and the exit aperture at the output fiber. The registration errors are used to calculate the coupling efficiency at the output
fiber. This final calculation is highly non-linear and is performed in the spreadsheet as well.



The error budget for this analysis is shown in Figure 15. Several features of the analysis are worth noting: First, no
interpolation was necessary because all of the optical geometry was included in the thermal and elastic models; Second,

F-O Encoder System

  Stage (code)                Error contributors              Net, rss
  Thermal (MSC/cal)           Linear 0.001, FEM 0.015         0.0150
  Translator                  Trunc/rnd 0.003, Interpol 0     0.003
  Structural (MSC/pal2)       Linear 0.001, FEM 0.01          0.01005
  Translator                  Trunc/rnd 0.003, Interpol 0     0.003
  Optics (Microsoft Excel)    Linear 0.001, Interpol 0        0.001
  Translator                  Trunc/rnd 0, Interpol 0         0
  Coupling loss               Linear 0.05, Trunc/rnd 0        0.05

  Net, rss per element:    0.018601
  Number of elements:      16
  Net, rss of elements:    0.074404
  Net at coupling loss:    0.089644

Figure 15. Error budget for analysis of a fiber-optic encoder system.

the largest individual error contributors were the thermal and elastic models and these are under the direct control of the
thermal and elastic analysts; Third, all of the errors are assumed to be uncorrelated and combined by root-sum-square
methods; Fourth, the error or uncertainty associated with each element is multiplied by the square-root of the number of
elements to determine the uncertainty in the registration errors. The analysis predicts an error of less than 10% in the
value of the coupling loss.
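The Figure 15 roll-up can be reproduced directly from its tabulated contributors. The sketch below restates that arithmetic; the stage grouping follows Figure 15, while the variable and helper names are illustrative:

```python
from math import sqrt

def rss(*errs):
    """Root-sum-square combination of uncorrelated errors."""
    return sqrt(sum(x * x for x in errs))

# Per-element error contributors from Figure 15 (fractional uncertainties)
stages = [
    rss(0.001, 0.015),  # thermal model: linearity, FEM mesh
    rss(0.003, 0.0),    # thermal-to-structural translator: trunc/round-off
    rss(0.001, 0.01),   # structural model: linearity, FEM mesh
    rss(0.003, 0.0),    # structural-to-optics translator: trunc/round-off
    rss(0.001, 0.0),    # optics (constraint equations): linearity
    rss(0.0, 0.0),      # optics-to-coupling translator
]
per_element = rss(*stages)               # 0.018601
registration = sqrt(16) * per_element    # 16 elements -> 0.074404
net = rss(registration, 0.05)            # with coupling-loss linearity: 0.089644
print(per_element, registration, net)    # net error just under 10%
```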

The analytical data are compared to the test data for the corresponding mechanical design in Figure 16. The figure

Figure 16. Comparison of thermo-elasto-optical analysis to test data. (Plot of coupling efficiency, η, 0 to 1, versus temperature, 20 to 50 °C.)



shows reasonable agreement between the two. If more accurate analytical predictions were desired, the finite element models could be refined. However, the test data themselves were not considered to be much better than 90% accurate (10% in error), so refining the analysis probably would not materially improve the correlation with the test data. Even in the form shown, at only 90% accuracy, the analysis disclosed several contributors to the thermal instability; when they were eliminated, most of the thermal instability problem was corrected.

4. CONCLUSIONS

The author has performed numerous optomechanical analyses and simulations in environments ranging from deep space
to deep oceans. Most have relied on linearized equations of the disciplines being analyzed (heat, elasticity, controls,
optics, fluid mechanics, etc.) to provide predictable accuracy at a predictable price and schedule. This is made possible
largely because optomechanical engineering exists in the “small displacement domain,” small elastic deflections, small
temperature excursions and small optical ray motions.

As shown in the text the errors are determinate. The various parts of the analysis process may be designed to give the
desired accuracy with available resources. In most cases the largest error contributor (in the author’s work) has been the
mesh refinement (coarseness) of the finite element model. Central to the effort, however, is a top-down error budget that guides the development of the entire analysis process. The author has been using ISO standards1 to guide the budgeting process. The optical industry may want to develop its own standards that are tailored to its needs.

REFERENCES

1. Guide to the Expression of Uncertainty in Measurement (Geneva, Switzerland: International Organization for
Standardization, 1995).

2. Hatheway, A. E., “An overview of the finite element method in optical systems,” Analysis of Optical Structures,
Volume 1532 (Bellingham: SPIE, July, 1991).

3. Hatheway, A. E., “A Review of Finite Element Analysis Techniques: Capabilities and Limitations,” Optomechani-
cal Design, Volume CR43 (Bellingham: SPIE, July, 1992).

4. Genberg, V. and Michels, G., “Orthogonality of Zernike Polynomials,” Optomechanical Design and Engineering 2002, Volume 4771 (Bellingham: SPIE, July 2002) pp. 276-286.

5. Love, A. E. H., A Treatise on the Mathematical Theory of Elasticity, 4th ed. (New York: Dover Books, 1944).

6. Hatheway, A. E., “Controlling Non-linearities in Elastic Actuators,” Optomechanical and Precision Instrument
Design, Volume 2452 (Bellingham: SPIE, July, 1995).

7. Hatheway, A. E., Optomechanics and the Tolerancing of Instruments, a tutorial (Pasadena: Alson E. Hatheway Inc.,
2003).

8. MacNeal, R. H. and Harder, R. L., “A Proposed Standard Set of Problems to Test Finite Element Accuracy” (New York: AIAA, 1984).

9. Hatheway, A. E., “Thermo-elastic stability of a fiber optic spread-spectrum encoder,” Optomechanical Design and Engineering 2001, Volume 4444, ed. Alson E. Hatheway (Bellingham: SPIE, 2001).

