3. Digital to Analog Converters Testing

DAC TESTING

BASICS OF DATA CONVERTERS

 Testing the intrinsic parameters of a digital-to-analog converter (DAC).
 Intrinsic parameters are those parameters that are intrinsic to the
circuit itself and whose parameters are not dependent on the nature of
the stimulus. This includes such measurements as absolute error,
integral nonlinearity (INL), and differential nonlinearity (DNL).
 For the most part, intrinsic measurements are related to the DC
behavior of the device.
 In contrast, the AC or transmission parameters, such as gain, gain
tracking, signal-to-noise ratio, and signal to harmonic distortion, are
strongly dependent on the nature of the stimulus signal.
 When testing a DAC or ADC, it is common to measure both intrinsic
parameters and transmission parameters for characterization.
 However, it is often unnecessary to perform the full suite of
transmission tests and intrinsic tests in production. The production
testing strategy is often determined by the end use of the DAC or ADC.
 For example, if a DAC is to be used as a programmable DC voltage
reference, then we probably do not care about its signal-to-distortion
ratio at 1 kHz. We care more about its worst-case absolute voltage
error.
 On the other hand, if that same DAC is used in a voice-band codec to
reconstruct voice signals, then we have a different set of concerns. We
do not care as much about the DAC’s absolute errors as we care about
their end effect on the transmission parameters of the composite audio
channel, comprising the DAC, lowpass filter, output buffer amplifiers,
and so on.
 Unlike digital circuits, which can be tested based on what they are
(NAND gate, flip-flop, counter, etc.), mixed-signal circuits are often
tested based on what they do in the system-level application (precision
voltage reference, audio signal reconstruction circuit, video signal
generator, etc.).
 Therefore, a particular analog or mixed-signal subcircuit may be copied
from one design to another without change, but it may require a totally
different suite of tests depending on its intended functionality in the
system-level application.
BASICS OF DATA CONVERTERS: PRINCIPLES OF DAC AND ADC CONVERSION
Digital-to-analog converters, denoted as DACs, can be considered as
decoding devices that accept some input value in the form of an
integer number and produce as output a corresponding analog
quantity in the form of a voltage, current, or other physical quantity.
 In most engineering applications such analog quantities are conceived
as approximations to real numbers. Adopting this view, we can model
the DAC decoding process involving a voltage output with an equation
of the form

vOUT = GDAC × DIN
 where DIN is some integer value, vOUT is a real-valued output value, and
GDAC is some real-valued proportionality constant. Because the input DIN
is typically taken from a digital system, it may come in the form of a
D-bit-wide base-2 unsigned integer number expressed as

DIN = bD–1·2^(D–1) + bD–2·2^(D–2) + … + b1·2^1 + b0·2^0

 where the coefficients b0…bD–1 have either a 0 or 1 value.
 A commonly used symbol for a DAC is shown in the accompanying figure.
 Coefficient bD–1 is regarded as the most significant bit (MSB), because it
has the largest effect on the number and the coefficient b0 is known as
the least significant bit (LSB), as it has the smallest effect on the
number.
 For a single LSB change at the input (i.e., ΔDIN = 1 LSB), the smallest
voltage change at the output is ΔvOUT = GDAC × 1 LSB. Because this
quantity is called upon frequently, it is designated as VLSB and is
referred to as the least significant bit step size.
 The transfer characteristic of a 4-bit DAC with decoding equation
vOUT = 0.1 V × DIN is shown in the figure. Here the DAC output, ranging
from 0 to 1.5 V, is plotted as a function of the digital input.
 For each input digital word, a single analog voltage level is produced,
reflecting the one-to-one input–output mapping of the DAC.
 Moreover, the LSB step size, VLSB, is equal to 0.1 V.
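The ideal decoding process above can be sketched in a few lines of Python. This is an illustrative sketch using the 4-bit, 0.1 V-per-LSB example from the text; the function name is ours:

```python
D = 4          # DAC resolution in bits (4-bit example from the text)
V_LSB = 0.1    # LSB step size in volts, so G_DAC = 0.1 V per bit

def dac_output(d_in):
    """Ideal DAC decode: v_OUT = G_DAC * D_IN (one voltage per code)."""
    if not 0 <= d_in <= 2 ** D - 1:
        raise ValueError("code out of range for a %d-bit DAC" % D)
    return V_LSB * d_in

# Full transfer curve: codes 0..15 map to 0.0 V .. 1.5 V
transfer_curve = [dac_output(code) for code in range(2 ** D)]
```

Each code produces exactly one voltage, which is the one-to-one mapping described above.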
 Alternatively, we can speak about the gain of the DAC (GDAC) as the
ratio of the range of output values to the range of input values as
follows

GDAC = (VOUT,max − VOUT,min) / (DIN,max − DIN,min)

 If we denote the full-scale output range as VFSR = VOUT,max − VOUT,min and
the input integer range as 2^D − 1 (for the above example, 15 − 0 = 2^4 − 1),
then the DAC gain becomes

GDAC = VFSR / (2^D − 1)

 expressed in terms of volts per bit. Consequently, the LSB step size for
the ideal DAC in volts can be written as

VLSB = GDAC × 1 LSB = VFSR / (2^D − 1)
 Interestingly enough, if the terms VOUT,max, VOUT,min cover some arbitrary
voltage range corresponding to an arbitrary range of digital inputs,
then the DAC input-output behavior can be described in identical terms
to the ideal DAC described above except that an offset term is added
as follows

vOUT = GDAC × DIN + VOFFSET
 Analog-to-digital converters, denoted as ADCs, can be considered as encoding devices that
map an input analog level into a digital word of fixed bit length, as captured by the ADC
symbol. Mathematically, we can represent the input–output encoding process in general
terms with the equation

DOUT = GADC × vIN
 Returning to our number line analogy, we recognize the ADC process is one that maps an
input analog value that is represented on a real number line to a value that lies on an
integer number line.
 However, not all numbers on a real number line map directly to a value
on the integer number line.
 Herein lies the challenge with the encoding process.
 One solution to this problem is to divide the analog input full-scale
range (VFSR) into 2^D equal-sized intervals and assign each interval a
code number, that is,

VLSB = VFSR / 2^D

 where VFS− and VFS+ define the ADC full-scale range of operation, that is,

VFSR = VFS+ − VFS−

 The transfer characteristic for a 4-bit ADC is shown for a full-scale input
range between 0 and 1.5 V and an LSB step size VLSB of 0.09375 V.
 The transfer characteristic of an ADC is not the same across all ADCs,
unlike the situation that one finds for DACs.
 The reason for this comes back to the many-to-one mapping issue
described above.
 Two common approaches used by ADC designers to define ADC
operation are based on the mathematical principle of rounding or
truncating fractional real numbers.
 The transfer characteristics of these two types of ADCs are shown in
the figure for a 4-bit example with VFS− = 0 V and VFS+ = 1.5 V.
 If the ADC is based on the rounding principle, then the ADC transfer
characteristics can be described in general terms as

DOUT = round(vIN / VLSB)

 whereas an ADC based on the truncating principle behaves as follows

DOUT = floor(vIN / VLSB)

 In both of these two cases, the full-scale range is no longer divided into
equal segments. These new definitions lead to a different value for the
LSB step size than that described earlier. For these two cases, it is given
by

VLSB = VFSR / (2^D − 1)

 Finally, to complete our discussion on ideal ADCs, we would like to point
out that the proportionality constant GADC can be expressed in terms of
bits per volt as

GADC = (2^D − 1) / VFSR

 For both the truncating- and rounding-based ADC, the gain is equal to
the reciprocal of the LSB step size, as is evident when the two
expressions are compared.
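The two ideal encoding rules can be sketched as follows for the 4-bit, 0 to 1.5 V example. Clamping out-of-range inputs to the end codes is our assumption, not something the text specifies:

```python
import math

D = 4
V_FS_MINUS, V_FS_PLUS = 0.0, 1.5
V_LSB = (V_FS_PLUS - V_FS_MINUS) / (2 ** D - 1)   # 0.1 V per LSB here

def adc_rounding(v_in):
    """Rounding ADC: D_OUT = round(v_IN / V_LSB), clamped to valid codes."""
    code = math.floor((v_in - V_FS_MINUS) / V_LSB + 0.5)
    return max(0, min(2 ** D - 1, code))

def adc_truncating(v_in):
    """Truncating ADC: D_OUT = floor(v_IN / V_LSB), clamped to valid codes."""
    code = math.floor((v_in - V_FS_MINUS) / V_LSB)
    return max(0, min(2 ** D - 1, code))
```

The many-to-one mapping is visible here: under the rounding rule, every input from 0.25 V up to just under 0.35 V lands on code 3.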
DATA FORMATS

 There are several different encoding formats for ADCs and DACs
including unsigned binary, sign/magnitude, two’s complement, one’s
complement, mu-law, and A-law.
 One common omission in device data sheets is DAC or ADC data
format. The test engineer should always make sure the data format
has been clearly defined in the data sheet before writing test code.
 The most straightforward data format is unsigned binary, written as

bD–1 bD–2 … b1 b0

whose equivalent base-10 integer value is found from

DIN = bD–1·2^(D–1) + bD–2·2^(D–2) + … + b1·2^1 + b0·2^0

 Unsigned binary format places the lowest voltage at code 0 and the
highest voltage at the code with all 1’s. For example, an 8-bit DAC with
a full-scale voltage range of 1.0 to 3.0 V would have the code-to-
voltage relationship shown
 One LSB step size is equal to the full-scale voltage range, VFS+ − VFS−,
divided by the number of DAC codes (i.e., 2^D) minus one

VLSB = (VFS+ − VFS−) / (2^D − 1)

 In this example, the voltage corresponding to one LSB step size is
equal to (3.0 V − 1.0 V)/255 = 7.843 mV.
 Sometimes the full-scale voltage is defined with an additional
imaginary code above the maximum code (i.e., code 256 in our 8-bit
example). If so, then the LSB size would be (3.0 V − 1.0 V)/256 =
7.8125 mV. This source of ambiguity should be clarified in the data
sheet.
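The two possible LSB definitions from the 8-bit, 1.0 to 3.0 V example can be checked numerically (a sketch; the helper name is ours):

```python
V_FS_MINUS, V_FS_PLUS, D = 1.0, 3.0, 8

# LSB defined over the number of codes minus one (255 steps)
lsb_255 = (V_FS_PLUS - V_FS_MINUS) / (2 ** D - 1)   # ~7.843 mV
# LSB defined with an imaginary code above the top code (256 steps)
lsb_256 = (V_FS_PLUS - V_FS_MINUS) / 2 ** D          # 7.8125 mV

def code_to_voltage(code, v_lsb):
    """Unsigned binary: code 0 sits at the lowest voltage."""
    return V_FS_MINUS + code * v_lsb
```

Note that with the imaginary-code definition, the top code (255) falls short of the 3.0 V full scale, which is exactly the ambiguity the data sheet should resolve.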
 Another common data format is two’s complement, written exactly the
same as an unsigned binary number, for example, bD–1 bD–2 … b1 b0.
 A two’s complement binary representation is converted to its
equivalent base-10 integer value using the equation

DIN = −bD–1·2^(D–1) + bD–2·2^(D–2) + … + b1·2^1 + b0·2^0
 A two’s complement binary formatted number can be used to express
both positive and negative integer values.
 Positive numbers are encoded the same as an unsigned binary in two’s
complement, except that the most significant bit must always be zero.
When the most significant bit is one, the number is negative.
 To multiply a two’s complement number by −1, all bits are inverted and
one is added to the result. The two’s complement encoding scheme for
the 8-bit DAC example is shown in the accompanying table.
 As is evident from the table, all outputs are made relative to the DAC’s
midscale value of 2.0 V.
 This level corresponds to input digital code 0. Also evident from this
table is that the LSB is equal to 5 mV. The midscale (MS) value is
computed from either of the following two expressions using knowledge
of the lower and upper limits of the DAC’s full-scale range, denoted
VFS– and VFS+, respectively, together with the LSB step size obtained
from Eq. (6.16), as follows:

VMS = VFS− + 2^(D−1) × VLSB    or    VMS = VFS+ − (2^(D−1) − 1) × VLSB
 Note that the two’s complement encoding scheme is slightly
asymmetrical since there are more negative codes than positive ones.
 One’s complement format is similar to two’s complement, except that
it eliminates the asymmetry by defining 11111111 as minus zero
instead of minus one, thereby making 11111111 a redundant code.
 One’s complement format is not commonly used in data converters
because it is not quite as compatible with mathematical computations
as two’s complement or unsigned binary formats.
 Sign/magnitude format is occasionally used in data converters.
 In sign/magnitude format, the most significant bit is zero for positive
values and one for negative values.
 A sign/magnitude formatted binary number expressed in the form
bD–1 bD–2 … b1 b0 has the following base-10 equivalent integer value:

DIN = (1 − 2·bD–1) × (bD–2·2^(D–2) + … + b1·2^1 + b0·2^0)
 Like one’s complement, sign/magnitude format also has a redundant negative
zero value.
 Table 6.3 shows sign/magnitude format for the 8-bit DAC example.
 The midscale level corresponding to input code 0 for this type of converter is
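The three binary formats above can be sketched as decoders operating on MSB-first bit strings (function names are ours):

```python
def from_unsigned(bits):
    """Unsigned binary, e.g. '11111111' -> 255."""
    return int(bits, 2)

def from_twos_complement(bits):
    """Two's complement: the MSB carries weight -2^(D-1)."""
    d = len(bits)
    value = int(bits, 2)
    return value - 2 ** d if bits[0] == '1' else value

def from_sign_magnitude(bits):
    """Sign/magnitude: MSB is the sign, remaining bits the magnitude."""
    magnitude = int(bits[1:], 2)
    return -magnitude if bits[0] == '1' else magnitude
```

The redundant negative zero of sign/magnitude is visible here: both '00000000' and '10000000' decode to 0, while in two's complement '10000000' decodes to −128, giving the asymmetry noted above.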
 Two other data formats, mu-law and A-law, were developed in the early
days of digital telephone equipment. Mu-law is used in North American
and related telephone systems, while A-law is used in European
telephone systems.
 Today the mu-law and A-law data formats are sometimes found not
only in telecommunications equipment but also in digital audio
applications, such as PC sound cards.
 These two data formats are examples of companded encoding
schemes.
 Companding is the process of compressing and expanding a signal as it
is digitized and reconstructed.
 The idea behind companding is to digitize or reconstruct large
amplitude signals with coarse converter resolution while digitizing or
reconstructing small amplitude signals with finer resolution.
 The companding process results in a signal with a fairly constant
signal-to-quantization-noise ratio, regardless of the signal strength.
 Compared with a traditional linear converter having the same number
of bits, a companding converter has a worse signal-to-noise ratio when
signal levels are near full scale, but better signal-to-noise ratios when
signal levels are small.
 This tradeoff is desirable for telephone conversations, since it limits the
number of bits required for transmission of digitized voice.
 Companding is therefore a simple form of lossy data compression.
 Figure 6.5 shows the transfer curve of a simple 4-bit companded ADC
followed by a 4-bit DAC.
 In a true logarithmic companding process such as the one in Figure 6.5,
the analog signal is passed through a linear-to-logarithmic conversion
before it is digitized.
 The logarithmic process compresses the signal so that small signals
and large signals appear closer in magnitude.
 Then the compressed signal may be digitized and reconstructed using
an ADC and DAC.
 The reconstructed signal is then passed through a logarithmic-to-linear
conversion to recover a companded version of the original signal.
 The mu-law and A-law encoding and decoding rules are a
sign/magnitude format with a piecewise linear approximation of a true
logarithmic encoding scheme.
 They define a varying LSB size that is small near 0 and larger as the
voltage approaches plus or minus full scale. Each of the piecewise
linear sections is called a chord.
 The steps in each chord are of a constant size. The piecewise
approximation was much easier to implement in the early days of
digital telecommunications than a true logarithmic companding
scheme, since the piecewise linear sections could be implemented with
traditional binary weighted ADCs and DACs.
 Today, the A-law and mu-law encoding and decoding process is often
performed using lookup tables combined with linear sigma-delta ADCs
and DACs having at least 13 bits of resolution.
 A more complete discussion of A-law and mu-law codec testing can be
found in the literature.
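The continuous logarithmic law that the chords approximate can be sketched with the standard μ-law formula (μ = 255 for North American systems; this shows the continuous law only, not the piecewise chord approximation used by real codecs):

```python
import math

MU = 255.0  # North American mu-law constant

def mu_compress(x):
    """Compress a signal in [-1, 1]: F(x) = sgn(x)*ln(1 + mu*|x|)/ln(1 + mu)."""
    return math.copysign(math.log1p(MU * abs(x)) / math.log1p(MU), x)

def mu_expand(y):
    """Inverse law: x = sgn(y)*((1 + mu)**|y| - 1)/mu."""
    return math.copysign(((1.0 + MU) ** abs(y) - 1.0) / MU, y)
```

Compressing a small input such as 0.01 yields roughly 0.23, so small signals occupy a much larger share of the quantizer's range, which is exactly the constant signal-to-quantization-noise behavior described above.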
COMPARISON OF DACS AND ADCS

 It is very important to note that a DAC represents a one-to-one
mapping function whereas an ADC represents a many-to-one mapping.
 For each digital input code, a DAC produces only one output voltage.
 An ADC, by contrast, produces the same output code for many
different input voltages.
 In fact, because an ADC’s circuits generate random noise and because
any input signal will include a certain amount of noise, the ADC
decision levels represent probable locations of transitions from one
code to the next.
 While DACs also generate random noise, this noise can be removed
through averaging to produce a single, unambiguous voltage level for
each DAC code.
 Therefore, the DAC transfer characteristic is truly a one-to-one
mapping of codes to voltages.
 The difference between DAC and ADC transfer characteristics prevents
us from using complementary testing techniques on DACs and ADCs.
 For example, a DAC is often tested by measuring the output voltage
corresponding to each digital input code.
 The test engineer might be tempted to test an ADC using the
complementary approach, applying the ideal voltage levels for each
code and then comparing the actual output code against the expected
code.
 Unfortunately, this approach is completely inappropriate in most ADC
testing cases, since it does not characterize the location of each ADC
decision level.
 Furthermore, this crude testing approach will often fail perfectly good
ADCs simply because of gain and offset variations that are within
acceptable limits.
 Although there are many differences in the testing of DACs and ADCs,
there are enough similarities that we have to treat the two topics as
one.
 Current output DACs are tested using the same techniques, using
either a current mode DC voltmeter or a calibrated current-to-voltage
translation circuit on the device interface board (DIB).
DAC FAILURE MECHANISMS
 The novice test engineer may be inclined to think that all N-bit DACs are
created equal and are therefore tested using the same techniques.
 As we will see, this is not the case. There are many different types of
DACs, including binary-weighted architectures, resistive divider
architectures, pulse-width-modulated (PWM) architectures, and
pulse-density-modulated (PDM) architectures (commonly known as
sigma-delta DACs).
 Furthermore, there are hybrids of these architectures, such as the multibit
sigma-delta DAC and segmented resistive divider DACs.
 Each of these DAC architectures has a unique set of strengths and
weaknesses.
 Each architecture’s weaknesses determine its likely failure mechanisms,
and these in turn drive the testing methodology.
 As previously noted, the requirements of the DAC’s system-level
application also determine which tests must be performed.
 Before we discuss testing methodologies for each type of DAC, we first
need to outline the DC and dynamic tests commonly performed on
DACs.
 The DC tests include the usual specifications like gain, offset, power
supply sensitivity, and so on.
 They also include converter-specific tests such as absolute error,
monotonicity, integral nonlinearity (INL), and differential nonlinearity
(DNL), which measure the overall quality of the DAC’s code-to-voltage
transfer curve.
 The dynamic tests are not always performed on DACs, especially those
whose purpose is to provide DC or low-frequency signals. However,
dynamic tests are common in applications such as video DACs, where
fast settling times and other high-frequency characteristics are key
specifications.
BASIC DC TESTS: CODE-SPECIFIC PARAMETERS
 DC DAC specifications sometimes call for specific voltage levels
corresponding to specific digital codes.
 For instance, an 8-bit two’s complement DAC may specify a voltage level
of 1.360 V ± 10 mV at digital code –128 and a voltage level of 2.635 V ±
10 mV at digital code +127.
 Alternatively, DAC code errors can be specified as a percentage of the
DAC’s full-scale range rather than an absolute error.
 In this case, the DAC’s full-scale range must first be measured to
determine the appropriate test limits. Common code-specific parameters
include the maximum full-scale (VFS+) voltage, minimum full-scale (VFS–)
voltage, and midscale (VMS) voltage.
 The midscale voltage typically corresponds to 0 V in bipolar DACs or a
center voltage such as VDD/2 in unipolar (single power supply) DACs.
 It is important to note that although the minimum full-scale voltage is
FULL-SCALE RANGE

 Full-scale range (VFSR) is defined as the voltage difference between
the maximum voltage and minimum voltage that can be produced by a
DAC. This is typically measured by simply measuring the DAC’s positive
full-scale voltage, VFS+, and then measuring the DAC’s negative
full-scale voltage, VFS–, and subtracting

VFSR = VFS+ − VFS−
DC GAIN, GAIN ERROR, OFFSET, AND OFFSET ERROR
 It is tempting to say that the DAC’s offset is equal to the measured
midscale voltage, VMS.
 It is also tempting to define the gain of a DAC as the full-scale range
divided by the number of spaces, or steps, between codes.
 These definitions of offset and gain are approximately correct, and in
fact they are sometimes found in data sheets specified exactly this
way.
 They are quite valid in a perfectly linear DAC.
 However, in an imperfect DAC, these definitions are inferior because
they are very sensitive to variations in the VFS–, VMS, and VFS+
voltage outputs while being completely insensitive to variations in all
other voltage outputs.
 The figure shows a simulated DAC transfer curve for a rather bad 4-bit DAC.
 Notice that code 0 does not produce 0 V, as it should.
 However, the overall curve has an offset near 0 V. Also, notice that the gain, if defined as
the full-scale range divided by the number of spaces between codes, does not match the
general slope of the curve.
 The problem is that the VFS+, VFS–, and VMS voltages are not in line with the general
shape of the transfer curve.
 A less ambiguous definition of gain and offset can be found by
computing the best-fit line for these points and then computing the
gain and offset of this line.
 For high-resolution DACs with reasonable linearity, the errors between
these two techniques become very small.
 Nevertheless, the best-fit line approach is independent of DAC
resolution; thus it is the preferred technique.
 A best-fit line is commonly defined as the line having the minimum
squared errors between its ideal, evenly spaced samples and the
actual DAC output samples. For a sample set S(i), where i ranges from
0 to N – 1 and N is the number of samples in the sample set, the best-fit
line is defined by its slope (DAC gain) and offset using a standard
linear equation having the form

Best_fit_line(i) = Gain × i + Offset
 The equations for slope and offset can be derived using various
techniques.
 One technique minimizes the partial derivatives with respect to slope
and offset of the squared errors between the sample set S and the
best-fit line. Another technique is based on linear regression.
 The equations derived from the partial derivative technique are

Gain = [N·Σ i·S(i) − (Σ i)·(Σ S(i))] / [N·Σ i² − (Σ i)²]

Offset = [Σ S(i) − Gain·(Σ i)] / N

where each sum is taken over i = 0 to N − 1.
 These equations translate very easily into a computer program.
 The values in the array Best_fit_line represent samples falling on the
least-squared-error line.
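The least-squares equations translate directly into code. The MATLAB routine itself is not reproduced in this text, so the following is a Python sketch of the same computation, reusing the names Gain, Offset, and Best_fit_line in spirit:

```python
def best_fit_line(samples):
    """Least-squares line through S(i), i = 0..N-1.

    Returns (gain, offset, line), where line[i] = gain * i + offset.
    """
    n = len(samples)
    sum_i = sum(range(n))
    sum_ii = sum(i * i for i in range(n))
    sum_s = sum(samples)
    sum_is = sum(i * s for i, s in enumerate(samples))
    gain = (n * sum_is - sum_i * sum_s) / (n * sum_ii - sum_i ** 2)
    offset = (sum_s - gain * sum_i) / n
    line = [gain * i + offset for i in range(n)]
    return gain, offset, line

# A perfectly linear 4-bit DAC recovers its 0.1 V/bit gain and zero offset
gain, offset, line = best_fit_line([0.1 * i for i in range(16)])
```

Because every sample contributes to the fit, a single bad code barely moves the computed gain and offset, which is exactly why this method is preferred over the endpoint definitions.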
 The program variable Gain represents the gain of the DAC, in volts per
bit. This gain value is the average gain across all DAC samples. Unlike
the gain calculated from the full-scale range divided by the number of
code transitions, the slope of the best-fit line represents the true gain
of the DAC.
 It is based on all samples in the DAC transfer curve and therefore is not
especially sensitive to any one code’s location. Gain error, ΔG,
expressed as a percent, is defined as

ΔG = 100% × (Gain − Gain_ideal) / Gain_ideal
 Likewise, the best-fit line’s calculated offset is not dependent on a
single code as it is in the midscale code method. Instead, the best-fit
line offset represents the offset of the total sample set.
 The DAC’s offset is defined as the voltage at which the best-fit line
crosses the y axis.
 The DAC’s offset error is equal to its offset minus the ideal voltage at
this point in the DAC transfer curve.
 The y axis corresponds to DAC code 0.
 In unsigned binary DACs, this voltage corresponds to Best_fit_line(1) in
the MATLAB routine.
 However, in two’s complement DACs, the value of Best_fit_line(1)
corresponds to the DAC’s VFS– voltage, and therefore does not
correspond to DAC code 0.
 In an 8-bit two’s complement DAC, for example, the 0 code point is
located at i = 128. Therefore, the value of the program variable
Offset does not correspond to the DAC’s offset. This discrepancy arises
simply because we cannot use negative index values in MATLAB code
arrays such as Best_fit_line(–128).
 Therefore, to find the DAC’s offset, one must determine which sample
in vector Best_fit_line corresponds to the DAC’s 0 code.
 The value at this array location is equal to the DAC’s offset.
LSB STEP SIZE

 The least significant bit (LSB) step size is defined as the average step
size of the DAC transfer curve.
 It is equal to the gain of the DAC, in volts per bit.
 Although it is possible to measure the approximate LSB size by simply
dividing the full-scale range by the number of code transitions, it is
more accurate to measure the gain of the best-fit line to calculate the
average LSB size.
 Using the results from the previous example, the 4-bit DAC’s LSB step
size is equal to 109.35 mV.
DC PSS
 DAC DC power supply sensitivity (PSS) is easily measured by applying
a fixed code to the DAC’s input and measuring the DC gain from one
of its power supply pins to its output. PSS for a DAC is therefore
identical to the measurement of PSS in any other circuit, as described
earlier.
 The only difference is that a DAC may have different PSS performance
depending on the applied digital code.
 Usually, a DAC will exhibit the worst PSS performance at its full-scale
and/or minus full-scale settings because these settings tie the DAC
output directly to a voltage derived from the power supply.
 Worst-case conditions should be used once they have been determined
through characterization of the DAC.
TRANSFER CURVE TESTS: ABSOLUTE ERROR
 The ideal DAC transfer characteristic or transfer curve is one in which
the step size between each output voltage and the next is exactly
equal to the desired LSB step size.
 Also, the offset error of the transfer curve should be zero. Of course,
physical DACs do not behave in an ideal manner; so we have to define
figures of merit for their actual transfer curves.
 One of the simplest, least ambiguous figures of merit is the DAC’s
maximum and minimum absolute error.
 An absolute error curve is calculated by subtracting the ideal DAC
output curve from the actual measured DAC curve.
 The values on the absolute error curve can be converted to LSBs by
dividing each voltage by the ideal LSB size, VLSB.

 Mathematically, if we denote the ith value on the ideal and actual
transfer curves as SIDEAL(i) and S(i), respectively, then we can write
the normalized absolute error transfer curve ΔS(i) as

ΔS(i) = [S(i) − SIDEAL(i)] / VLSB
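The absolute error calculation is a one-liner per code; a sketch with our own helper name:

```python
def absolute_error_curve(actual, ideal, v_lsb):
    """Normalized absolute error: (S(i) - S_IDEAL(i)) / V_LSB, in LSBs."""
    return [(a, ) for a in []] if not actual else [
        (a - i) / v_lsb for a, i in zip(actual, ideal)]

# Example: a DAC whose second code sits 25 mV high, with a 100 mV LSB
errors = absolute_error_curve([0.0, 0.125, 0.2], [0.0, 0.1, 0.2], 0.1)
worst = max(abs(e) for e in errors)   # worst-case absolute error in LSBs
```

The maximum and minimum of this curve are the figures of merit compared against the data-sheet limits.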
MONOTONICITY

 A monotonic DAC is one in which each voltage in the transfer curve is larger
than the previous voltage, assuming a rising voltage ramp for increasing codes.
(If the voltage ramp is expected to decrease with increasing code values, we
simply have to make sure that each voltage is less than the previous one.)
 While the 4-bit DAC in the previous examples has a terrible set of absolute
errors, it is nevertheless monotonic.
 Monotonicity testing requires that we take the discrete first derivative of the
transfer curve, denoted here as S′(i), according to

S′(i) = S(i) − S(i − 1)
 If the derivatives are all positive for a rising ramp input or negative for a
falling ramp input, then the DAC is said to be monotonic.
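The monotonicity check via the discrete first derivative can be sketched as follows (helper names are ours):

```python
def first_derivative(s):
    """Discrete first derivative: S'(i) = S(i) - S(i-1)."""
    return [s[i] - s[i - 1] for i in range(1, len(s))]

def is_monotonic(s, rising=True):
    """True if every step moves in the expected direction."""
    if rising:
        return all(step > 0 for step in first_derivative(s))
    return all(step < 0 for step in first_derivative(s))
```

A curve with badly uneven steps can still pass this test, as the 4-bit example in the text shows; uniformity of the steps is what DNL measures next.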
DIFFERENTIAL NONLINEARITY
 Notice that in the monotonicity example the step sizes are not uniform.
 In a perfect DAC, each step would be exactly 100 mV corresponding to
the ideal LSB step size.
 Differential nonlinearity (DNL) is a figure of merit that describes the
uniformity of the LSB step sizes between DAC codes.
 DNL is also known as differential linearity error or DLE for short.
 The DNL curve represents the error in each step size, expressed in
fractions of an LSB.
 DNL is computed by calculating the discrete first derivative of the DAC’s
transfer curve, subtracting one LSB (i.e., VLSB) from the derivative
result, and then normalizing the result to one LSB

DNL(i) = [S′(i) − VLSB] / VLSB
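Given a measured transfer curve and an average LSB size (from any of the definitions discussed next), the DNL computation is a direct transcription of the description above (a sketch; the helper name is ours):

```python
def dnl_curve(samples, v_lsb):
    """DNL(i) = (S(i) - S(i-1) - V_LSB) / V_LSB, in fractions of an LSB."""
    return [(samples[i] - samples[i - 1] - v_lsb) / v_lsb
            for i in range(1, len(samples))]

# A perfect step contributes 0; a 110 mV step against a 100 mV LSB gives +0.1
dnl = dnl_curve([0.0, 0.1, 0.21, 0.3], 0.1)
```

The maximum and minimum of this curve are compared against limits such as the ±1/2 LSB specification mentioned below.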
 As previously mentioned, we can define the average LSB size in one of
three ways.
 We can define it as the actual full-scale range divided by the number of
code transitions (number of codes minus 1) or we can define the LSB
as the slope of the best-fit line.
 Alternatively, we can define the LSB size as the ideal DAC step size.
 The choice of LSB calculations depends on what type of DNL
calculation we want to perform.
 There are four basic types of DNL calculation method: best-fit,
endpoint, absolute, and best-straight-line.
 Best-fit DNL uses the best-fit line’s slope to calculate the average LSB
size.
 This is probably the best technique, since it accommodates gain errors
in the DAC without relying on the values of a few individual voltages.
 Endpoint DNL is calculated by dividing the full-scale range by the
number of transitions.
 This technique depends on the actual values for the maximum full-scale
(VFS+) and minimum full-scale (VFS–) levels.
 As such it is highly sensitive to errors in these two values and is
therefore less ideal than the best-fit technique.
 The absolute DNL technique uses the ideal LSB size derived from the
ideal maximum and minimum full-scale values.
 This technique is less commonly used, since it assumes the DAC’s gain
is ideal.
 The best-straight-line method is similar to the best-fit line method. The
difference is that the best-straight-line method is based on the line that
gives the best answer for integral nonlinearity (INL) rather than the line
that gives the least squared errors. Integral nonlinearity will be
discussed later in this chapter.
 Since the best-straight-line method is designed to yield the best
possible answer, it is the most relaxed specification method of the four.
 It is used only in cases where the DAC or ADC linearity performance is
not critical.
 Thus the order of methods from most relaxed to most demanding is
best-straight line, best-fi t, endpoint, and absolute.
 The choice of technique is not terribly important in DNL calculations.
 Any of these techniques will result in nearly identical results, as
long as the DAC does not exhibit grotesque gain or linearity errors.
 DNL values of ±1/2 LSB are usually specified, with typical DAC
performance of ±1/4 LSB for reasonably good DAC designs. A 1% error
in the measurement of the LSB size would result in only a 0.01 LSB
error in the DNL results, which is tolerable in most cases.
 The choice of technique is actually more important in the integral
nonlinearity calculation.
INTEGRAL NONLINEARITY

 The integral nonlinearity curve is a comparison between the actual
DAC curve and one of three lines: the best-fit line, the endpoint line, or
the ideal DAC line.
 The INL curve, like the DNL curve, is normalized to the LSB step size.
As in the DNL case, the best-fit line is the preferred reference line,
since it eliminates sensitivity to individual DAC values.
 The INL curve can be calculated by subtracting the reference DAC line
(best-fit, endpoint, or ideal) from the actual DAC curve, dividing the
results by the average LSB step size according to

INL(i) = [S(i) − S_REF(i)] / VLSB
 Note that using the ideal DAC line is equivalent to calculating the
absolute error curve.
 Since a separate absolute error test is often specified, the ideal line is
seldom used in INL testing.
 Instead, the endpoint or best-fit line is generally used.
 As in DNL testing, we are interested in the maximum and minimum
value in the INL curve, which we compare against a test limit such as
±1/2 LSB.
 The INL curve is the integral of the DNL curve, thus the term “integral
nonlinearity”; DNL is a measurement of how consistent the step sizes
are from one code to the next. INL is therefore a measure of
accumulated errors in the step sizes.
 Thus, if the DNL values are consistently larger than zero for many
codes in a row (step sizes are larger than 1 LSB), the INL curve will
exhibit an upward bias.
 Likewise, if the DNL is less than zero for many codes in a row (step
sizes are less than 1 LSB), the INL curve will have a downward bias.
 Ideally, the positive error in one code’s DNL will be balanced by
negative errors in surrounding codes and vice versa.
 If this is true, then the INL curve will tend to remain near zero. If not,
the INL curve may exhibit large upward or downward bends, causing
INL failures.
 The INL integration can be implemented using a running sum of the
elements of the DNL.
 The ith element of the INL curve is equal to the sum of the first i–1
elements of the DNL curve plus a constant of integration.
 When using the best-fit method, the constant of integration is equal to
the difference between the first DAC output voltage and the
corresponding point on the best-fit curve, all normalized to one LSB.
 When using the endpoint method, the constant of integration is equal
to zero. When using the absolute method, the constant is set to the
normalized difference between the first DAC output and the ideal
output.
 In any running sum calculation it is important to use high-precision
mathematical operations to avoid accumulated math error in the running sum.
 Mathematically, we can express this process as

INL(i) = C + DNL(1) + DNL(2) + … + DNL(i − 1)

where C is the constant of integration described above.
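The running-sum integration is equally short (a sketch; the constant of integration c0 depends on which method is chosen, as described above):

```python
def inl_from_dnl(dnl, c0=0.0):
    """Integrate the DNL curve: INL starts at c0 and accumulates each element.

    For the endpoint method c0 = 0; the best-fit and absolute methods use a
    nonzero constant as described in the text.
    """
    inl = [c0]
    for d in dnl:
        inl.append(inl[-1] + d)
    return inl

# With endpoint-method DNL, the INL curve returns to zero at the last code
inl = inl_from_dnl([0.1, -0.05, -0.05])
```

In a production routine the accumulation should be done in double precision (or better), echoing the warning above about accumulated math error in running sums.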
 This is usually the easiest way to calculate DNL.
 The first derivative technique works well in DAC testing, but we will see
in the next chapter that the DNL curve for an ADC is easier to capture
than the INL curve.
 In ADC testing it is more common to calculate the DNL curve first and
then integrate it to calculate the INL curve.
 In either case, whether we integrate DNL to get INL or differentiate INL
to get DNL, the results are mathematically identical.
 Integral nonlinearity and differential nonlinearity are sometimes
referred to by the names integral linearity error (ILE) and differential
linearity error (DLE).
 However, the terms INL and DNL seem to be more prevalent in data sheets.
PARTIAL TRANSFER CURVES
 A customer or systems engineer may specify that only a portion of a DAC or ADC
transfer curve must meet certain specifications.
 For example, a DAC may be designed so that its VFS– code corresponds to 0 V.
 However, due to analog circuit clipping as the DAC output signal approaches
ground, the DAC may clip to a voltage of 100 mV. If the DAC is designed to
perform a specific function that never requires voltages below 100 mV, then the
customer may not care about this clipping.
 In such a case, the DAC codes below 100 mV are excluded from the offset, gain,
INL, DNL, and so on, specifications.
 The test engineer may then treat these codes as if they do not exist. This type of
partial DAC and ADC testing is becoming more common as more DACs and ADCs
are designed into custom applications with very specific requirements.
 General-purpose DACs are unlikely to be specified using partial curves, since the
customer’s application needs are unknown.
MAJOR CARRIER TESTING
 The techniques discussed thus far for measuring INL and DNL are
based on a testing approach called all-codes testing.
 In all-codes testing, all valid codes in the transfer curve are measured
to determine the INL and DNL values.
 Unfortunately, all-codes testing can be a very time-consuming process.
 Depending on the architecture of the DAC, it may be possible to
determine the location of each voltage in the transfer curve without
measuring each one explicitly.
 We will refer to this as selected-code testing.
 Selected-code testing can result in significant test-time savings, which
of course represents substantial savings in test cost.
 There are several selected-code testing techniques, the simplest of
which is the major carrier technique.
 Many DACs are designed using an architecture in which a series of binary-weighted
resistors or capacitors are used to convert the individual bits of the converter code into
binary-weighted currents or voltages.
 These currents or voltages are summed together to produce the DAC output.
 For instance, a binary-weighted unsigned binary D-bit DAC’s output can be described as a
sum of binary-weighted voltage or current values, W0, W1, . . . , WD–1,
multiplied by the individual bits of the DAC’s input code, b0, b1, . . . , bD–1.
The DAC’s output value is therefore equal to
VDAC = Base + bD–1 × WD–1 + . . . + b1 × W1 + b0 × W0
 If this idealized model of the DAC is sufficiently accurate, then we only
need to make D+1 measurements of DAC behavior and solve for the
unknown model parameters: W0, W1, . . . , WD–1 and the DC Base
term.
 Subsequently, we can cycle through all binary values and compute the
entire DAC transfer curve. This DAC testing method is called the major
carrier technique.
 The major carrier approach can be used for ADCs as well as DACs.
 The assumption of sufficient DAC or ADC model accuracy is only valid if
the actual superposition errors of the DAC or ADC are low.
 This may or may not be the case. The superposition assumption can
only be determined through characterization, comparing the all-codes
DAC output levels with the ones generated by the major carrier technique.
 The most straightforward way to obtain each model parameter Wn is to
set code bit bn to 1 and all others to zero. This is then repeated for each
code bit, for n from 0 to D–1.
 However, the resulting output levels are widely different in magnitude.
 This makes them difficult to measure accurately with a voltmeter,
since the voltmeter’s range must be adjusted for each measurement.
 A better approach that alleviates the accuracy problem is to measure
the step size of the major carrier transitions in the DAC curve, which
are all approximately 1 LSB in magnitude.
 A major carrier transition is defined as the voltage (or current)
transition between the DAC codes 2^n – 1 and 2^n.
 For example, the transition between binary 00111111 and 01000000 is
a major carrier transition.
 Major carrier transitions can be measured using a voltmeter’s
sample-and-difference mode, giving highly accurate measurements of the
major carrier transition step sizes.
 Once the step sizes are known, we can use a series of inductive
calculations to find the values of W0, W1, . . . , WD–1. We start by
realizing that we have actually measured the following values:
V0 = W0
V1 = W1 – W0
V2 = W2 – (W1 + W0)
. . .
VD–1 = WD–1 – (WD–2 + . . . + W1 + W0)
 The value of the first major transition, V0, is a direct measurement of
the value of W0 (the step size of the least significant bit).
 The value of W1 can be calculated by rearranging the second equation:
W1 = V1 + W0.
 Once the values of W0 and W1 are known, the value of W2 is
calculated by rearranging the third equation: W2 = V2 + W1 + W0, and
so forth.
 Once the values of W0–Wn are known, the complete DAC curve can be
reconstructed for each possible combination of input bits b0– bn using
the original model of the DAC described by Eq. (6.33).
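 The inductive calculation and the curve reconstruction can be sketched in Python, assuming the measured major transition step sizes V0 . . . VD–1 are supplied as a list (names are illustrative):

```python
def weights_from_major_transitions(v):
    # Vn = Wn - (W(n-1) + ... + W0), so Wn = Vn + sum of the lower weights.
    w, lower_sum = [], 0.0
    for vn in v:
        wn = vn + lower_sum
        w.append(wn)
        lower_sum += wn
    return w

def reconstruct_curve(w, base=0.0):
    # Superposition model: output(code) = base + sum of Wn for each set bit.
    d = len(w)
    return [base + sum(w[n] for n in range(d) if (code >> n) & 1)
            for code in range(2 ** d)]
```

 For an ideal 3-bit DAC with a 1-V LSB, all three major transitions measure 1 V, yielding weights of 1, 2, and 4 and a straight 8-level ramp.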
 The major carrier technique can also be used on signed binary and
two’s complement converters, although the codes corresponding to the
major carrier transitions must be chosen to match the converter’s
encoding scheme.
 For example, the last major transition for our two’s complement 4-bit
DAC example happens between code 1111 (decimal –1) and 0000
(decimal 0).
 Aside from these minor modifications in code selection, the major
carrier technique is the same as the simple unsigned binary approach.
OTHER SELECTED-CODE TECHNIQUES
 Besides the major carrier method, other selected-code techniques have
been developed to reduce the test time associated with all-codes
testing.
 The simplest of these is the segmented method. This method only
works for certain types of DAC and ADC architectures, such as the 12-
bit segmented DAC shown in Figure.
 Although most segmented DACs are actually constructed using a
different architecture than that in Figure, this simple architecture is
representative of how segmented DACs can be tested.
 The example DAC uses a simple unsigned binary encoding scheme
with 12 data bits, D11-D0.
 It consists of two portions, a 6-bit coarse resolution DAC and a 6-bit
fine resolution DAC.
 The LSB step size of the coarse DAC is equal to the full-scale range of
the fine DAC plus one fine DAC LSB.
 In other words, if the combined 12-bit DAC has an LSB size of VLSB,
then the fine DAC also has a step size of VLSB, while the coarse DAC
has a step size of 2^6 × VLSB.
 The output of these two 6-bit DACs can therefore be summed together
to produce a 12-bit DAC
 DAC output = coarse DAC contribution + fine DAC contribution
 Both the fine DAC and the coarse DAC are designed using a resistive
divider architecture rather than a binary-weighted architecture.
 Since major carrier testing can only be performed on binary-weighted
architectures, an all-codes testing approach must be used to verify the
performance of each of the two 6-bit resistive divider DACs.
 However, we would like to avoid testing each of the 2^12, or 4096, codes
of the composite 12-bit DAC.
 Using superposition, we will test each of the two 6-bit DACs using an
all-codes test.
 This requires only 2 × 2^6, or 128, measurements.
 We will then combine the results mathematically into a 4096-point all-
codes curve using a linear model of the composite DAC.
 Let us assume that through characterization, it has been determined
that this example DAC has excellent superposition.
 In other words, the step sizes of each DAC are independent of the
setting of the other DAC. Also, the summation circuit has been shown
to be highly linear.
 In a case such as this, we can measure the all-codes output curve of
the coarse DAC while the fine DAC is set to 0 (i.e., D5-D0 = 000000).
 We store these values into an array VDAC-COARSE(n), where n takes
on the values 0 to 63, corresponding to data bits D11-D6.
 Then we can measure the all-codes output curve for the fine DAC
while the coarse DAC is set to 0 (i.e., D11-D6 = 000000).
 These voltages are stored in the array VDAC-FINE(n), where n takes on
the values 0 to 63, corresponding to data bits D5-D0.
 Although we have only measured a total of 128 levels, superposition
allows us to recreate the full 4096-point DAC output curve by a simple
summation.
 Each DAC output value VDAC(i) is equal to the contribution of the
coarse DAC plus the contribution of the fine DAC
VDAC(i) = VDAC-COARSE(i div 64) + VDAC-FINE(i mod 64)
 Thus a full 4096-point DAC curve can be mathematically reconstructed
from only 128 measurements by evaluating this equation at each value
of i from 0 to 4095.
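 Assuming the arrays VDAC-COARSE and VDAC-FINE have already been measured, the reconstruction can be sketched in Python (names illustrative):

```python
def reconstruct_segmented(v_coarse, v_fine):
    # VDAC(i) = VDAC-COARSE(i div 64) + VDAC-FINE(i mod 64):
    # the upper 6 bits select the coarse level, the lower 6 the fine level.
    return [v_coarse[i >> 6] + v_fine[i & 0x3F]
            for i in range(len(v_coarse) * len(v_fine))]
```

 With two 64-entry arrays this evaluates all 4096 composite codes from only 128 measured levels.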
 Of course, this technique is totally dependent on the architecture of the
DAC.
 A more advanced selected-codes technique was developed at the
National Institute of Standards and Technology (NIST). This technique is
useful for all types of DACs and ADCs.
 It does not make any assumptions about superposition errors or
converter architecture.
 Instead, it uses linear algebra and data collected from production lots
to create an empirical model of the DAC or ADC.
 The empirical model only requires a few selected codes to recreate the
entire DAC or ADC transfer curve.
 Another similar technique uses wavelet transforms to predict the
overall performance of converters based on a limited number of
measurements.
DYNAMIC DAC TESTS: CONVERSION TIME (SETTLING
TIME)
 So far we have discussed only low-frequency DAC performance.
 The DAC DC tests and transfer curve tests measure the DAC’s static
characteristics, requiring the DAC to stabilize to a stable voltage or
current level before each output level measurement is performed.
 If the DAC’s output stabilizes in a few microseconds, then we might
step through each output state at a high frequency, but we are still
performing static measurements for all intents and purposes.
 A DAC’s performance is also determined by its dynamic characteristics.
 One of the most common dynamic tests is settling time, commonly referred to
as conversion time.
 Conversion time is defined as the amount of time it takes for a DAC to
stabilize to its final static level within a specified error band after a DAC code
has been applied.
 For instance, a DAC’s settling time may be defined as 1 μs to ±1/2 LSB.
 This means that the DAC output must stabilize to its final value plus or minus
a 1/2 LSB error band no more than 1 μs after the DAC code has been applied.
 This test definition has one ambiguity. Which DAC codes do we choose
to produce the initial and final output levels? The answer is that the
DAC must settle from any output level to any other level within the
specified time.
 Of course, to test every possibility, we might have to measure millions
of transitions on a typical DAC.
 As with any other test, we have to determine what codes represent the
worst-case transitions.
 Typically settling time will be measured as the DAC transitions from
minus full-scale (VFS–) to plus full-scale (VFS+) and vice versa, since
these two tests represent the largest voltage swing.
 The 1/2 LSB example uses an error band specification that is referenced
to the LSB size.
 Other commonly used definitions require the DAC output to settle within
a certain percentage of the full-scale range, a percentage of the final
voltage, or a fixed voltage range.
 So we might see any of the following specifications:
 settling time = 1 μs to ± 1% of full-scale range
 settling time = 1 μs to ± 1% of final value
 settling time = 1 μs to ±1 mV
 The test technique for all these error-band definitions is the same; we
just have to convert the error-band limits to absolute voltage limits
before calculating the settling time.
 The straightforward approach to testing settling time is to digitize the
DAC’s output as it transitions from one code to another and then use
the known time period between digitizer samples to calculate the
settling time.
 We measure the final settled voltage, calculate the settled voltage
limits (i.e., ±1/2 LSB), and then calculate the time between the digital
signal transition that initiates a DAC code change and the point at
which the DAC first stays within the error band limits, as shown in
Figure.
 In extremely high frequency DACs it is common to define the settling
time not from the DAC code change signal’s transition but from the time
the DAC passes the 50% point to the time it settles to the specified
limits, as shown in Figure.
 This is easier to calculate, since it only requires us to look at the DAC
output, not at the DAC output relative to the digital code.
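 The straightforward digitizer-based calculation can be sketched in Python; the sample spacing, error band (already converted to volts), and code-change time are assumed inputs, and names are illustrative:

```python
def settling_time(samples, t_sample, band, t_code_change=0.0):
    # Estimate the final settled value from the tail of the record
    # (assumes at least the last 10 samples are settled),
    # then find the last sample outside the +/-band error limits.
    final = sum(samples[-10:]) / 10
    last_outside = -1
    for i, v in enumerate(samples):
        if abs(v - final) > band:
            last_outside = i
    # Settling time: first instant the output stays within the band,
    # measured from the code-change event.
    return (last_outside + 1) * t_sample - t_code_change
```

 The same routine serves any of the error-band definitions above once the band has been converted to an absolute voltage.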
OVERSHOOT AND UNDERSHOOT
 Overshoot and undershoot can also be calculated from the samples
collected during the DAC settling time test.
 These are defined as a percentage of the voltage swing or as an
absolute voltage.
 The figure shows a DAC output with 10% overshoot and 2% undershoot on a VFS–
to VFS+ transition.
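 These percentages can be computed from the digitized record; the sketch below assumes a rising VFS– to VFS+ transition and expresses both values relative to the full voltage swing (names illustrative):

```python
def overshoot_undershoot(samples, v_start, v_final):
    # Overshoot: peak excursion beyond the final value.
    # Undershoot: peak excursion below the starting value.
    # Both expressed as a percentage of the voltage swing.
    swing = v_final - v_start
    overshoot = max(0.0, (max(samples) - v_final) / swing) * 100.0
    undershoot = max(0.0, (v_start - min(samples)) / swing) * 100.0
    return overshoot, undershoot
```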
RISE TIME AND FALL TIME

 Rise and fall time can also be measured from the digitized waveform collected
during a settling time test.
 Rise and fall times are typically defined as the time between two markers, one
of which is 10% of the way between the initial value and the final value and the
other of which is 90% of the way between these values, as depicted in Figure.
 Other common marker definitions are 20% to 80% and 30% to 70%.
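 The marker search can be sketched as follows (Python, rising edge assumed; the marker fractions are parameters, so the 20%/80% and 30%/70% definitions work as well):

```python
def rise_time(samples, t_sample, v_initial, v_final, lo=0.10, hi=0.90):
    # Locate the first samples crossing the lower and upper markers.
    v_lo = v_initial + lo * (v_final - v_initial)
    v_hi = v_initial + hi * (v_final - v_initial)
    t_lo = next(i for i, v in enumerate(samples) if v >= v_lo) * t_sample
    t_hi = next(i for i, v in enumerate(samples) if v >= v_hi) * t_sample
    return t_hi - t_lo
```

 Fall time is the mirror image, searching for downward crossings on a falling transition.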
DAC-TO-DAC SKEW
 Some types of DACs are designed for use in matched groups. For example,
a color palette RAM DAC is a device that is used to produce colors on video
monitors.
 A RAM DAC uses a random access memory (RAM) lookup table to turn a
single color value into a set of three DAC output values, representing the
red, green, and blue intensity of each pixel.
 These DAC outputs must change almost simultaneously to produce a clean
change from one pixel color to the next.
 The degree of timing mismatch between the three DAC outputs is called
DAC-to-DAC skew.
 It is measured by digitizing each DAC output and comparing the timing of
the 50% point of each output to the 50% point of the other outputs. There
are three skew values (R-G, G-B, and B-R), as illustrated in Figure 6.15.
 Skew is typically specified as an absolute time value, rather than a signed
value.
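 The 50%-point comparison can be sketched in Python, assuming rising transitions and equal sample spacing on each digitized output (names illustrative):

```python
def crossing_time(samples, t_sample):
    # Time of the first sample at or beyond the 50% point of the swing.
    mid = (min(samples) + max(samples)) / 2.0
    return next(i for i, v in enumerate(samples) if v >= mid) * t_sample

def dac_to_dac_skew(red, green, blue, t_sample):
    # Absolute R-G, G-B, and B-R skews between the 50% crossing times.
    tr, tg, tb = (crossing_time(s, t_sample) for s in (red, green, blue))
    return abs(tr - tg), abs(tg - tb), abs(tb - tr)
```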
GLITCH ENERGY (GLITCH IMPULSE)
 Glitch energy, or glitch impulse, is another specification common to
high-frequency DACs.
 It is defined as the total area under the voltage-time curve of the glitches
in a DAC’s output as it switches across the largest major transition (i.e.,
01111111 to 10000000 in an 8-bit DAC) and back again.
 As shown in Figure, the glitch area is defined as the area that falls outside
the rated error band.
 These glitches are caused by a combination of capacitive/inductive ringing
in the DAC output and skew between the timing of the digital bits feeding
the binary-weighted DAC circuits.
 The parameter is commonly expressed in picosecond-volts (ps-V) or
equivalently, picovolt-seconds (pV-s). (These are not actually units of
energy, despite the term glitch energy.)
 The area under the negative glitches is considered positive area, and it is
added to the total rather than subtracted.
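 The area calculation can be approximated from digitized samples with a simple rectangle rule; this is a sketch (names illustrative), with negative-glitch area counted as positive:

```python
def glitch_area(samples, t_sample, v_settled, band):
    # Sum the area (in volt-seconds) of all excursions outside the
    # +/-band error band around the settled value.
    area = 0.0
    for v in samples:
        err = abs(v - v_settled)
        if err > band:
            area += (err - band) * t_sample  # rectangle approximation
    return area
```

 The result, in volt-seconds, is conventionally quoted in picovolt-seconds (pV-s).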
CLOCK AND DATA FEEDTHROUGH
 Clock and data feedthrough is another common dynamic DAC
specification.
 It measures the crosstalk from the various clocks and data lines in a
mixed-signal circuit that couple into a DAC output.
 There are many ways to define this parameter; so it is difficult to list a
specific test technique.
 However, clock and data feedthrough can be measured using a technique
similar to all the other tests in this section.
 The output of the DAC is digitized with a high-bandwidth digitizer. Then the
various types of digital signal feedthrough are analyzed to make sure they
are below the defined test limits.
 The exact test conditions and definition of clock and data feedthrough
should be provided in the data sheet.
 This measurement may require time-domain analysis, frequency-domain
analysis, or a combination of the two.
TESTS FOR COMMON DAC APPLICATIONS: DC
REFERENCES
 As previously mentioned, the test list for a given DAC often depends on
its intended functionality in the system-level application. Many DACs
are used as simple DC references.
 An example of this type of DAC usage is the power level control in a
cellular telephone.
 As the cellular telephone user moves closer or farther away from a
cellular base station (the radio antenna tower), the transmitted signal
level from the cellular telephone must be adjusted.
 The transmitted level may be adjusted using a transmit level DAC so
that the signal is just strong enough to be received by the base station
without draining the cellular telephone’s battery unnecessarily.
 If a DAC is only used as a DC (or slow-moving) voltage or current
reference, then its AC transmission parameters are probably
unimportant. It would probably be unnecessary to measure the 1-kHz
signal to total harmonic distortion ratio of a DAC whose purpose is to
set the level of a cellular telephone’s transmitted signal.
 However, the INL and DNL of this DAC would be extremely important,
as would its absolute errors, monotonicity, full-scale range, and output
drive capabilities (output impedance).
 Notable exceptions are signal-to-noise ratio and idle channel noise
(ICN).
 These may be of importance if the DC level must exhibit low noise.
 For example, the cellular telephone’s transmitted signal might be
corrupted by noise on the output of the transmit level control DAC, so
the DAC’s output noise may need to be tested.
 Dynamic tests are not typically performed on DC reference DACs, with
the exception of settling time.
 The settling time of typical DACs is often many times faster than that
required in DC reference applications; so even this parameter is
frequently guaranteed by design rather than being tested in
production.
AUDIO RECONSTRUCTION
 Audio reconstruction DACs are those used to reproduce digitized
sound. Examples include the voice-band DAC in a cellular telephone
and the audio DAC in a PC sound card.
 These DACs are more likely to be tested using the transmission
parameters of Chapter 11, since their purpose is to reproduce arbitrary
audio signals with minimum noise and distortion.
 The intrinsic parameters (i.e., INL and DNL) of audio reconstruction
DACs are typically measured only during device characterization.
 Linearity tests can help track down any transmission parameter
failures caused by the DAC.
 It is often possible to eliminate the intrinsic parameter tests once the
device is in production, keeping only the transmission tests.
 Dynamic tests are not typically specified or performed on audio DACs.
 Any failures in settling time, glitch energy, and so on, will usually
manifest themselves as failures in transmission parameters such as
signal-to-noise, signal-to-distortion, and idle channel noise.
DATA MODULATION
 Data modulation is another purpose to which DACs are often applied.
 The cellular telephone again provides an example of this type of DAC
application.
 The IF section of a cellular telephone base-band modulator converts
digital data into an analog signal suitable for transmission, similar to
those used in modems.
 Like the audio reconstruction DACs, these DACs are typically tested
using sine wave or multitone transmission parameter tests.
 Again, the intrinsic tests like INL and DNL may be added to a
characterization test program to help debug the design.
 However, the intrinsic tests are often removed after the transmission
parameters have been verified.
 Dynamic tests such as settling time may or may not be necessary for
data modulation DACs, depending on the application.
 Data modulation DACs also have very specific parameters such as
error vector magnitude (EVM) or phase trajectory error (PTE).
 Parameters such as these are very application-specific.
 They are usually defined in standards documents published by the
IEEE, NIST, or other government or industry organizations.
 The data sheet should provide references to documents defining
application-specific tests such as these.
 The test engineer is responsible for translating the measurement
requirements into ATE-compatible tests that can be performed on a
production tester.
 ATE vendors are often a good source of expertise and assistance in
developing these application-specific tests.
VIDEO SIGNAL GENERATORS
 As discussed earlier, DACs can be used to control the intensity and
color of pixels in video cathode ray tube (CRT) displays.
 However, the type of testing required for video DACs depends on the
nature of their output. There are two basic types of video DAC
application, RGB and NTSC.
 An RGB (red-green-blue) output is controlled by three separate DACs.
 Each DAC controls the intensity of an electron beam, which in turn
controls the intensity of one of the three primary colors of each pixel as
the beam is swept across the CRT.
 In this application, each DAC’s output voltage or current directly controls
the intensity of the beam. RGB DACs are typically used in computer
monitors.
 The NTSC format is used in transmission of standard (i.e., non-HDTV) analog
television signals.
 It requires only a single DAC, rather than a separate DAC for each color.
 The picture intensity, color, and saturation information is contained in the
time-varying offset, amplitude, and phase of a 3.58-MHz sinusoidal waveform
produced by the DAC.
 Clearly this is a totally different DAC application than the RGB DAC
application.
 These two seemingly similar video applications require totally different testing
approaches.
 RGB DACs are tested using the standard intrinsic tests like INL and
DNL, as well as the dynamic tests like settling time and DAC-to-DAC
skew.
 These parameters are important because the DAC outputs directly
control the rapidly changing beam intensities of the red, green, and
blue electron beams as they sweep across the computer monitor.
 Any settling time, rise time, fall time, undershoot, or overshoot
problems show up directly on the monitor as color or intensity
distortions, vertical lines, ghost images, and so on.
 The quality of the NTSC video DAC, by contrast, is determined by its
ability to produce accurate amplitude and phase shifts in a 3.58-MHz
sine wave while changing its offset.
 This type of DAC is tested with transmission parameters like gain,
signal-to-noise, differential gain, and differential phase.