3. Digital-to-Analog Converter Testing
The output of an ideal DAC is related to its input by

vOUT = GDAC × DIN

where DIN is some integer value, vOUT is a real-valued output value, and GDAC is some real-valued proportionality constant. Because the input DIN is typically taken from a digital system, it may come in the form of a D-bit-wide base-2 unsigned integer expressed as

DIN = bD-1·2^(D-1) + bD-2·2^(D-2) + … + b1·2 + b0

where the coefficients b0 … bD-1 have either a 0 or 1 value.
Figure: A commonly used symbol for a DAC.
Coefficient bD-1 is regarded as the most significant bit (MSB) because it has the largest effect on the number, and coefficient b0 is known as the least significant bit (LSB), as it has the smallest effect on the number.
For a single LSB change at the input (i.e., ΔDIN = 1 LSB), the smallest voltage change at the output is ΔvOUT = GDAC × 1 LSB. Because this quantity is called upon frequently, it is designated VLSB and is referred to as the least significant bit step size. Consider the transfer characteristic of a 4-bit DAC with decoding equation

vOUT = 0.1 V × DIN

Here the DAC output, ranging from 0 to 1.5 V, is plotted as a function of the digital input.
For each input digital word, a single analog voltage level is produced, reflecting the one-to-one input–output mapping of the DAC.
Moreover, the LSB step size, VLSB, is equal to 0.1 V.
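This decoding relationship can be sketched in code; the function name and structure below are illustrative, not from the text, assuming the 4-bit, VLSB = 0.1 V example above:

```python
def dac_output(d_in: int, v_lsb: float = 0.1, bits: int = 4) -> float:
    """Ideal DAC decode: vOUT = VLSB * DIN for a D-bit unsigned input code."""
    if not 0 <= d_in < 2 ** bits:
        raise ValueError("input code out of range")
    return v_lsb * d_in

# One-to-one mapping: each of the 16 codes yields a single output level,
# spanning 0 V (code 0) to 1.5 V (code 15) in 0.1-V steps.
levels = [dac_output(code) for code in range(16)]
```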
Alternatively, we can speak of the gain of the DAC (GDAC) as the ratio of the range of output values to the range of input values. If we denote the full-scale output range as VFSR = VOUT,max − VOUT,min and the input integer range as 2^D − 1 (for the above example, 15 − 0 = 2^4 − 1), then the DAC gain becomes

GDAC = VFSR / (2^D − 1)

expressed in terms of volts per bit. Consequently, the LSB step size for the ideal DAC, in volts, can be written as

VLSB = VFSR / (2^D − 1)
Interestingly, if the terms VOUT,max and VOUT,min cover some arbitrary voltage range corresponding to an arbitrary range of digital inputs, then the DAC input–output behavior can be described in terms identical to the ideal DAC described above, except that an offset term is added as follows

vOUT = GDAC × DIN + Voffset
Analog-to-digital converters, denoted as ADCs, can be considered as encoding devices that map an input analog level into a digital word of fixed bit length, as captured by the symbol. Mathematically, we can represent the input–output encoding process in general terms with an equation mapping the analog input level to an integer output code.
Returning to our number line analogy, we recognize the ADC process is one that maps an
input analog value that is represented on a real number line to a value that lies on an
integer number line.
However, not all numbers on a real number line map directly to a value
on the integer number line.
Herein lies the challenge with the encoding process.
One solution to this problem is to divide the analog input full-scale range (VFSR) into 2^D equal-sized intervals and assign each interval a code number, according to

VLSB = VFSR / 2^D, where VFSR = VFS+ − VFS−

Here VFS− and VFS+ define the ADC full-scale range of operation. The transfer characteristic of a 4-bit ADC is shown for a full-scale input range between 0 and 1.5 V and an LSB step size VLSB of 0.09375 V.
The transfer characteristic of an ADC is not the same across all ADCs,
unlike the situation that one finds for DACs.
The reason for this comes back to the many-to-one mapping issue
described above.
Two common approaches used by ADC designers to define ADC
operation are based on the mathematical principle of rounding or
truncating fractional real numbers.
The transfer characteristics of these two types of ADCs are shown in Figure for a 4-bit example with VFS− = 0 V and VFS+ = 1.5 V.
If the ADC is based on the rounding principle, then the ADC transfer characteristic can be described in general terms as

DOUT = round( (vIN − VFS−) / VLSB )

and, if it is based on the truncating principle, as

DOUT = floor( (vIN − VFS−) / VLSB )

In both of these cases, the full-scale range is no longer divided into equal segments. These new definitions lead to a different value for the LSB step size than that described earlier. For these two cases, it is given by

VLSB = VFSR / 2^D
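A minimal sketch of the two encoding rules, assuming VLSB = VFSR / 2^D and clamping to the valid code range (the helper name and keyword arguments are illustrative):

```python
import math

def adc_encode(v_in, v_fs_minus=0.0, v_fs_plus=1.5, bits=4, mode="round"):
    """Quantize an analog input into a D-bit code by rounding or truncating."""
    v_lsb = (v_fs_plus - v_fs_minus) / 2 ** bits   # VLSB = VFSR / 2^D
    x = (v_in - v_fs_minus) / v_lsb
    code = math.floor(x + 0.5) if mode == "round" else math.floor(x)
    return max(0, min(2 ** bits - 1, code))        # clamp to 0 .. 2^D - 1
```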
There are several different encoding formats for ADCs and DACs, including unsigned binary, sign/magnitude, two's complement, one's complement, mu-law, and A-law.
One common omission in device data sheets is the DAC or ADC data format. The test engineer should always make sure the data format has been clearly defined in the data sheet before writing test code.
The most straightforward data format is unsigned binary, written as

DIN = bD-1·2^(D-1) + bD-2·2^(D-2) + … + b1·2 + b0
The least significant bit (LSB) step size is defined as the average step size of the DAC transfer curve.
It is equal to the gain of the DAC, in volts per bit.
Although it is possible to measure the approximate LSB size by simply dividing the full-scale range by the number of code transitions, it is more accurate to measure the gain of the best-fit line to calculate the average LSB size.
Using the results from the previous example, the 4-bit DAC’s LSB step
size is equal to 109.35 mV.
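The best-fit slope can be computed with an ordinary least-squares fit; the sketch below is illustrative, not from the text, and assumes one measured voltage per code:

```python
def best_fit_lsb(codes, volts):
    """Average LSB size, taken as the slope of the least-squares best-fit
    line through the measured (code, voltage) transfer curve."""
    n = len(codes)
    mean_x = sum(codes) / n
    mean_y = sum(volts) / n
    num = sum((x - mean_x) * (y - mean_y) for x, y in zip(codes, volts))
    den = sum((x - mean_x) ** 2 for x in codes)
    return num / den   # volts per bit
```

For an ideal transfer curve the slope equals the ideal LSB size; for a real DAC it gives the average step size without relying on any individual voltage.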
DC PSS
DAC DC power supply sensitivity (PSS) is easily measured by applying a fixed code to the DAC's input and measuring the DC gain from one of its power supply pins to its output. PSS for a DAC is therefore identical to the measurement of PSS in any other circuit, as described earlier.
The only difference is that a DAC may have different PSS performance
depending on the applied digital code.
Usually, a DAC will exhibit the worst PSS performance at its full-scale
and/or minus full-scale settings because these settings tie the DAC
output directly to a voltage derived from the power supply.
Worst-case conditions should be used once they have been determined
through characterization of the DAC.
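The measurement procedure can be sketched as follows; `measure_vout` and `set_supply` are stand-ins for whatever tester resources actually drive the supply pin and read the DAC output, and are assumptions of this sketch:

```python
def dc_pss(measure_vout, set_supply, v_nominal, delta=0.1):
    """DC PSS: change in DAC output per change in supply voltage (V/V),
    measured with a fixed digital code already applied to the DAC."""
    set_supply(v_nominal - delta)
    v_low = measure_vout()
    set_supply(v_nominal + delta)
    v_high = measure_vout()
    set_supply(v_nominal)                  # restore nominal conditions
    return (v_high - v_low) / (2 * delta)
```

In production, this would be repeated at the worst-case codes identified during characterization (typically full-scale and minus full-scale).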
TRANSFER CURVE TESTS: ABSOLUTE ERROR
The ideal DAC transfer characteristic or transfer curve is one in which
the step size between each output voltage and the next is exactly
equal to the desired LSB step size.
Also, the offset error of the transfer curve should be zero. Of course, physical DACs do not behave in an ideal manner; so we have to define figures of merit for their actual transfer curves.
One of the simplest, least ambiguous figures of merit is the DAC's maximum and minimum absolute error.
An absolute error curve is calculated by subtracting the ideal DAC
output curve from the actual measured DAC curve.
The values on the absolute error curve can be converted to LSBs by
dividing each voltage by the ideal LSB size, VLSB.
Mathematically, if we denote the ith value on the ideal and actual transfer curves as SIDEAL(i) and S(i), respectively, then we can write the normalized absolute error transfer curve ΔS(i) as

ΔS(i) = ( S(i) − SIDEAL(i) ) / VLSB
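The calculation above can be sketched directly (function name illustrative):

```python
def absolute_error(s_actual, s_ideal, v_lsb):
    """Normalized absolute error curve: dS(i) = (S(i) - S_IDEAL(i)) / VLSB,
    expressed in fractions of an LSB."""
    return [(a - b) / v_lsb for a, b in zip(s_actual, s_ideal)]
```

The maximum and minimum of the returned list are the DAC's maximum and minimum absolute errors in LSBs.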
MONOTONICITY
A monotonic DAC is one in which each voltage in the transfer curve is larger
than the previous voltage, assuming a rising voltage ramp for increasing codes.
(If the voltage ramp is expected to decrease with increasing code values, we
simply have to make sure that each voltage is less than the previous one.)
While the 4-bit DAC in the previous examples has a terrible set of absolute
errors, it is nevertheless monotonic.
Monotonicity testing requires that we take the discrete first derivative of the transfer curve, denoted here as S′(i), according to

S′(i) = S(i + 1) − S(i)
If the derivatives are all positive for a rising ramp input or negative for a
falling ramp input, then the DAC is said to be monotonic.
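A sketch of this test (illustrative helper names, using the S′(i) = S(i+1) − S(i) convention above):

```python
def first_derivative(s):
    """Discrete first derivative of the transfer curve: S'(i) = S(i+1) - S(i)."""
    return [s[i + 1] - s[i] for i in range(len(s) - 1)]

def is_monotonic(s, rising=True):
    """True if every step has the expected sign for the ramp direction."""
    steps = first_derivative(s)
    return all(d > 0 for d in steps) if rising else all(d < 0 for d in steps)
```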
DIFFERENTIAL NONLINEARITY
Notice that in the monotonicity example the step sizes are not uniform.
In a perfect DAC, each step would be exactly 100 mV corresponding to
the ideal LSB step size.
Differential nonlinearity (DNL) is a figure of merit that describes the
uniformity of the LSB step sizes between DAC codes.
DNL is also known as differential linearity error or DLE for short.
The DNL curve represents the error in each step size, expressed in
fractions of an LSB.
DNL is computed by calculating the discrete first derivative of the DAC's transfer curve, subtracting one LSB (i.e., VLSB) from the derivative result, and then normalizing the result to one LSB:

DNL(i) = ( S′(i) − VLSB ) / VLSB
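A direct sketch of this computation (illustrative name, S′(i) = S(i+1) − S(i)):

```python
def dnl(s, v_lsb):
    """DNL(i) = (S(i+1) - S(i) - VLSB) / VLSB, in fractions of an LSB."""
    return [((s[i + 1] - s[i]) - v_lsb) / v_lsb for i in range(len(s) - 1)]
```

Passing the best-fit, endpoint, or ideal value for `v_lsb` yields the best-fit, endpoint, or absolute DNL, respectively, as discussed next.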
As previously mentioned, we can define the average LSB size in one of
three ways.
We can define it as the actual full-scale range divided by the number of
code transitions (number of codes minus 1) or we can define the LSB
as the slope of the best-fit line.
Alternatively, we can define the LSB size as the ideal DAC step size.
The choice of LSB calculations depends on what type of DNL
calculation we want to perform.
There are four basic types of DNL calculation method: best-fit, endpoint, absolute, and best-straight-line.
Best-fit DNL uses the best-fit line's slope to calculate the average LSB size.
This is probably the best technique, since it accommodates gain errors
in the DAC without relying on the values of a few individual voltages.
Endpoint DNL is calculated by dividing the full-scale range by the number of transitions.
This technique depends on the actual values for the maximum full-scale (VFS+) and minimum full-scale (VFS−) levels.
As such, it is highly sensitive to errors in these two values and is therefore less ideal than the best-fit technique.
The absolute DNL technique uses the ideal LSB size derived from the
ideal maximum and minimum full-scale values.
This technique is less commonly used, since it assumes the DAC’s gain
is ideal.
The best-straight-line method is similar to the best-fit line method. The difference is that the best-straight-line method is based on the line that gives the best answer for integral nonlinearity (INL) rather than the line that gives the least squared errors. Integral nonlinearity will be discussed later in this chapter.
Since the best-straight-line method is designed to yield the best possible answer, it is the most relaxed specification method of the four.
It is used only in cases where the DAC or ADC linearity performance is
not critical.
Thus the order of methods from most relaxed to most demanding is best-straight-line, best-fit, endpoint, and absolute.
The choice of technique is not terribly important in DNL calculations.
Any of these techniques will produce nearly identical results, as long as the DAC does not exhibit grotesque gain or linearity errors.
DNL values of ±1/2 LSB are usually specified, with typical performance of ±1/4 LSB for reasonably good DAC designs. A 1% error in the measurement of the LSB size would result in only a 0.01 LSB error in the DNL results, which is tolerable in most cases.
The choice of technique is actually more important in the integral nonlinearity calculations.
INTEGRAL NONLINEARITY
RISE AND FALL TIME
Rise and fall time can also be measured from the digitized waveform collected during a settling time test.
Rise and fall times are typically defined as the time between two markers, one of which is 10% of the way between the initial value and the final value and the other of which is 90% of the way between these values, as depicted in Figure.
Other common marker definitions are 20% to 80% and 30% to 70%.
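The marker-based measurement can be sketched as follows; the function is illustrative and assumes the first and last samples of the digitized waveform represent the settled initial and final values:

```python
def rise_time(t, v, lo=0.10, hi=0.90):
    """Time between the lo and hi markers of a rising edge in a digitized
    waveform, interpolating linearly between samples."""
    v0, v1 = v[0], v[-1]

    def crossing(level):
        target = v0 + level * (v1 - v0)
        for i in range(len(v) - 1):
            # find the first rising segment that brackets the marker level
            if v[i] <= target <= v[i + 1] and v[i + 1] > v[i]:
                frac = (target - v[i]) / (v[i + 1] - v[i])
                return t[i] + frac * (t[i + 1] - t[i])
        raise ValueError("marker level not crossed")

    return crossing(hi) - crossing(lo)
```

Fall time follows the same idea with the sample order or marker levels reversed, and the 20%/80% or 30%/70% definitions simply change `lo` and `hi`.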
DAC-TO-DAC SKEW
Some types of DACs are designed for use in matched groups. For example,
a color palette RAM DAC is a device that is used to produce colors on video
monitors.
A RAM DAC uses a random access memory (RAM) lookup table to turn a
single color value into a set of three DAC output values, representing the
red, green, and blue intensity of each pixel.
These DAC outputs must change almost simultaneously to produce a clean
change from one pixel color to the next.
The degree of timing mismatch between the three DAC outputs is called
DAC-to-DAC skew.
It is measured by digitizing each DAC output and comparing the timing of
the 50% point of each output to the 50% point of the other outputs. There
are three skew values (R-G, G-B, and B-R), as illustrated in Figure 6.15.
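Under the assumption that each output has been digitized on a common time base, the skew measurement can be sketched as (illustrative names):

```python
def crossing_time(t, v):
    """Time at which a rising edge crosses the 50% point between its
    initial and final values, using linear interpolation."""
    mid = 0.5 * (v[0] + v[-1])
    for i in range(len(v) - 1):
        if v[i] <= mid <= v[i + 1] and v[i + 1] > v[i]:
            frac = (mid - v[i]) / (v[i + 1] - v[i])
            return t[i] + frac * (t[i + 1] - t[i])
    raise ValueError("50% point not crossed")

def dac_to_dac_skew(t, red, green, blue):
    """Absolute R-G, G-B, and B-R skews between three digitized outputs."""
    tr, tg, tb = (crossing_time(t, ch) for ch in (red, green, blue))
    return abs(tr - tg), abs(tg - tb), abs(tb - tr)
```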
Skew is typically specified as an absolute time value rather than a signed value.
GLITCH
ENERGY
Glitch energy, (GLITCH
or glitch impulse, isIMPULSE)
another specification common to high-
frequency DACs.
It is defined as the total area under the voltage-time curve of the glitches
in a DAC’s output as it switches across the largest major transition (i.e.,
01111111 to 10000000 in an 8-bit DAC) and back again.
As shown in Figure, the glitch area is defined as the area that falls outside the rated error band.
These glitches are caused by a combination of capacitive/inductive ringing
in the DAC output and skew between the timing of the digital bits feeding
the binary-weighted DAC circuits.
The parameter is commonly expressed in picosecond-volts (ps-V) or
equivalently, picovolt-seconds (pV-s). (These are not actually units of
energy, despite the term glitch energy.)
The area under the negative glitches is considered positive area, so the positive and negative glitch areas add rather than cancel.
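Given a digitized waveform, the glitch area can be sketched as a trapezoidal integration of the excursion beyond the error band (illustrative function, assuming a known settled value and band):

```python
def glitch_area(t, v, settled, band):
    """Total area (volt-seconds) of the digitized waveform falling outside
    the rated error band around the settled value, integrated trapezoidally.
    Negative-glitch area is counted as positive, so areas add, not cancel."""
    total = 0.0
    for i in range(len(v) - 1):
        # excursion beyond the +/- band at each pair of adjacent samples
        e0 = max(abs(v[i] - settled) - band, 0.0)
        e1 = max(abs(v[i + 1] - settled) - band, 0.0)
        total += 0.5 * (e0 + e1) * (t[i + 1] - t[i])
    return total
```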
CLOCK AND DATA FEEDTHROUGH
Clock and data feedthrough is another common dynamic DAC specification.
It measures the crosstalk from the various clocks and data lines in a
mixed-signal circuit that couple into a DAC output.
There are many ways to define this parameter; so it is difficult to list a
specific test technique.
However, clock and data feedthrough can be measured using a technique
similar to all the other tests in this section.
The output of the DAC is digitized with a high-bandwidth digitizer. Then the
various types of digital signal feedthrough are analyzed to make sure they
are below the defined test limits.
The exact test conditions and definition of clock and data feedthrough should be provided in the data sheet.
This measurement may require time-domain analysis, frequency-domain analysis, or both.
TESTS FOR COMMON DAC APPLICATIONS: DC
REFERENCES
As previously mentioned, the test list for a given DAC often depends on
its intended functionality in the system-level application. Many DACs
are used as simple DC references.
An example of this type of DAC usage is the power level control in a
cellular telephone.
As the cellular telephone user moves closer or farther away from a
cellular base station (the radio antenna tower), the transmitted signal
level from the cellular telephone must be adjusted.
The transmitted level may be adjusted using a transmit level DAC so
that the signal is just strong enough to be received by the base station
without draining the cellular telephone’s battery unnecessarily.
If a DAC is only used as a DC (or slow-moving) voltage or current reference, then its AC transmission parameters are probably unimportant. It would probably be unnecessary to measure the 1-kHz signal-to-total-harmonic-distortion ratio of a DAC whose purpose is to set the level of a cellular telephone's transmitted signal.
However, the INL and DNL of this DAC would be extremely important, as would its absolute errors, monotonicity, full-scale range, and output drive capabilities (output impedance).
Notable exceptions are signal-to-noise ratio and idle channel noise
(ICN).
These may be of importance if the DC level must exhibit low noise.
For example, the cellular telephone's transmitted signal might be corrupted by noise on the output of the transmit level control DAC, so these noise parameters may still need to be tested.
Dynamic tests are not typically performed on DC reference DACs, with
the exception of settling time.
The settling time of typical DACs is often many times faster than that
required in DC reference applications; so even this parameter is
frequently guaranteed by design rather than being tested in
production.
AUDIO RECONSTRUCTION
Audio reconstruction DACs are those used to reproduce digitized
sound. Examples include the voice-band DAC in a cellular telephone
and the audio DAC in a PC sound card.
These DACs are more likely to be tested using the transmission
parameters of Chapter 11, since their purpose is to reproduce arbitrary
audio signals with minimum noise and distortion.
The intrinsic parameters (i.e., INL and DNL) of audio reconstruction
DACs are typically measured only during device characterization.
Linearity tests can help track down any transmission parameter
failures caused by the DAC.
It is often possible to eliminate the intrinsic parameter tests once the
device is in production, keeping only the transmission tests.
Dynamic tests are not typically specified or performed on audio DACs.
Any failures in settling time, glitch energy, and so on, will usually
manifest themselves as failures in transmission parameters such as
signal-to-noise, signal-to-distortion, and idle channel noise.
DATA MODULATION
Data modulation is another purpose to which DACs are often applied.
The cellular telephone again provides an example of this type of DAC
application.
The IF section of a cellular telephone base-band modulator converts digital data into an analog signal suitable for transmission, similar to the signals used in modems.
Like the audio reconstruction DACs, these DACs are typically tested
using sine wave or multitone transmission parameter tests.
Again, the intrinsic tests like INL and DNL may be added to a
characterization test program to help debug the design.
However, the intrinsic tests are often removed after the transmission
parameters have been verified.
Dynamic tests such as settling time may or may not be necessary for these DACs, depending on the application.
Data modulation DACs also have very specific parameters such as error vector magnitude (EVM) or phase trajectory error (PTE).
Parameters such as these are very application-specific.
They are usually defined in standards documents published by the IEEE, NIST, or other government or industry organizations.
The data sheet should provide references to documents defining application-specific tests such as these.
The test engineer is responsible for translating the measurement
requirements into ATE-compatible tests that can be performed on a
production tester.
ATE vendors are often a good source of expertise and assistance in developing these application-specific tests.
VIDEO SIGNAL GENERATORS
As discussed earlier, DACs can be used to control the intensity and
color of pixels in video cathode ray tube (CRT) displays.
However, the type of testing required for video DACs depends on the
nature of their output. There are two basic types of video DAC
application, RGB and NTSC.
An RGB (red-green-blue) output is controlled by three separate DACs.
Each DAC controls the intensity of an electron beam, which in turn
controls the intensity of one of the three primary colors of each pixel as
the beam is swept across the CRT.
In this application, each DAC's output voltage or current directly controls the intensity of the beam. RGB DACs are typically used in computer monitors.
The NTSC format is used in transmission of standard (i.e., non-HDTV) analog
television signals.
It requires only a single DAC, rather than a separate DAC for each color.
The picture intensity, color, and saturation information is contained in the time-varying offset, amplitude, and phase of a 3.58-MHz sinusoidal waveform produced by the DAC.
Clearly this is a totally different DAC application than the RGB DAC
application.
These two seemingly similar video applications require totally different testing
approaches.
RGB DACs are tested using the standard intrinsic tests like INL and
DNL, as well as the dynamic tests like settling time and DAC-to-DAC
skew.
These parameters are important because the DAC outputs directly
control the rapidly changing beam intensities of the red, green, and
blue electron beams as they sweep across the computer monitor.
Any settling time, rise time, fall time, undershoot, or overshoot
problems show up directly on the monitor as color or intensity
distortions, vertical lines, ghost images, and so on.
The quality of the NTSC video DAC, by contrast, is determined by its ability to produce accurate amplitude and phase shifts in a 3.58-MHz sine wave while changing its offset.
This type of DAC is tested with transmission parameters like gain, signal-to-noise, differential gain, and differential phase.