
Design-For-Testability

Course instructor: Mariana ILA, PhD

Lectures, Project Classes and Grading
Lectures: Tue 16:00-18:00, room A416
Project: Tue 18:00-19:00, room A410
Textbook
VLSI Test Principles and Architectures: Design for Testability, edited by Laung-Terng Wang, Cheng-Wen Wu, and Xiaoqing Wen, Elsevier Inc., 2006

Grading will be based on:
- Project (50%)
- Final exam (50%)

Projects
Groups of 2 people are strongly recommended.
Tentative schedule:
- Make your choice by week 3 (Tue)
- First update: late October
- Second update: late November
- Final presentation: January
The project may be shared with other classes you are taking.

Importance and challenges of testing


- Modern electronic testing has a history of over 40 years; the first ICs were developed in 1958 at TI and Fairchild Semiconductor.
- In 2005 the semiconductor industry reached roughly $230 billion in sales worldwide.
- The introduction of new nanometer technologies (below 90 nm geometries) has made semiconductor test grow steadily in importance.
- Test costs can now amount to 40% of overall product cost.
- To tackle the problems associated with testing, we need to address them at early stages of the design.
- It is therefore important to expose students and practitioners to the most advanced VLSI test techniques and DFT architectures, so that they can design better-quality products that can be reliably manufactured in quantity.

Importance of testing
- Moore's law: the scale of integration of ICs doubles roughly every 18 months: SSI (tens of transistors), MSI (hundreds), LSI (thousands), VLSI (hundreds of thousands and beyond).
- Technologies are now below 90 nm.
- Clock speeds have grown from 108 kHz in 1971 to several gigahertz today.
- A single faulty transistor or wire can make a whole 100-million-transistor chip faulty.

Rule of ten
How do we produce an electronic system?
1. Produce the ICs
2. Use the ICs to assemble PCBs
3. Use the PCBs to assemble the system
Rule of ten: the cost of detecting a faulty IC increases by an order of magnitude as we move through each stage of manufacturing, from device level to board level to system level, and finally to system operation in the field (illustrated below).
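A minimal Python sketch of what the rule of ten implies for the cost of catching one faulty IC; the absolute cost unit is an assumption, only the 10x ratios matter:

```python
# Rule-of-ten illustration: detection cost grows ~10x per manufacturing stage.
base_cost = 1.0  # assumed cost unit at the device (IC) level
for i, stage in enumerate(["device (IC) test", "board test", "system test", "field operation"]):
    print(f"{stage:>16}: ~{base_cost * 10**i:6.0f} cost units")
```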

Electronic testing: What, When
WHAT:
1. IC testing
2. PCB testing
3. System testing
WHEN:
1. At various manufacturing stages
2. During system operation

Electronic testing: Why, For Whom
WHY:
1. To find fault-free ICs, PCBs, and systems
2. To improve production yield by analyzing the cause of defects when faults are found
3. To ensure fault-free system operation and initiate repair procedures when faults are detected
FOR WHOM:
1. Designers
2. Test engineers
3. Product engineers
4. Managers
5. Manufacturers
6. End-users

VLSI Design Flow
[Flow: specs and partitioning → behavioral description, then RTL + testbench(es) (Testbench.v, RTL_descr.v) → verification (incl. simulation) → synthesis → place and route (P&R) → layout files (layout.v, layout.xx) → post-layout simulation → mask fabrication → fabricated device ready for testing]

VLSI Design Verification
Tools to assist the design verification process:
1. CAD simulation tools
2. Hardware emulation
3. Formal verification methods
Verification is very time-consuming and expensive, with a definite impact on time-to-market.
Many verification techniques are borrowed from test technology.
Also, the test stimuli developed for RTL verification, together with the associated output responses obtained from simulation, are often reused to test the VLSI device during manufacturing.

DUT Verification
[Figure: an n-input, m-output RTL description and the corresponding post-layout description are driven by the same testbench (test stimulus), and their responses are compared.]
So design verification can be considered a form of testing.
Once verified, the VLSI design goes to fabrication.
In parallel, test engineers develop test procedures based on the design specification and on the fault models associated with the implementation technology.

CUT Testing
[Figure: an input test stimulus is applied to inputs Input1..Input_n of the Circuit Under Test (CUT); outputs Output1..Output_m feed an output response analysis block. Pass = fault-free; Fail = faulty.]

Defects
Defect: a flaw or physical imperfection that may lead to a fault.
Statistical flaws in the materials and masks used to fabricate ICs are unavoidable, so it is impossible for 100% of manufactured ICs to be defect-free.
The first tests performed during the manufacturing process, the wafer-level tests, are there to detect these defects.

Testing flow during the manufacturing process
1. Fabricate the ICs on the wafer.
2. Extract and package the defect-free devices.
3. Retest the packaged devices to eliminate those damaged during packaging or put into defective packages; failing parts go to failure mode analysis (FMA).
4. Additional testing to ensure final quality before going to market (incl. measurement of parameters such as input/output timing, voltage, and current against the specs).
5. Additional burn-in or stress testing, in which chips are subjected to high temperatures and supply voltages.

Yield and Reject Rate
Yield = (number of acceptable parts) / (total number of parts fabricated)
Types of yield loss:
1. Catastrophic yield loss: due to random defects
2. Parametric yield loss: due to process variations

Undesirable situations during IC testing
1. A faulty device appears to be a good part and passes the test.
2. A good device fails the test and appears to be faulty.
Why:
- A poorly designed test (mainly case 1): when faulty ICs are finally found in a system, they are returned to the IC manufacturer, who performs failure mode analysis (FMA) to identify possible improvements to the VLSI development and manufacturing processes.
- The lack of DFT.

Reject Rate (Defect Level)
Reject rate = (number of faulty parts passing final tests) / (total number of parts passing final tests)
The reject rate gives an indication of the overall quality of the VLSI testing process.
E.g. a reject rate of 500 parts per million (PPM) may be considered acceptable, while a reject rate <= 100 PPM is considered high quality.
The goal of six-sigma manufacturing (also called zero defects) is 3.4 PPM or less.

Electronic System Manufacturing Process
[Flow: PCB fabrication → bare board test → PCB assembly → board test → unit assembly → unit test → system assembly → system test]

Physical Implementation of an IC
The microscopic world of the physical structure of an IC, with six levels of interconnection and a sub-micrometer effective transistor channel length.

Causes of defects
- Any small piece of dust or abnormality of geometrical shape.
- Process variations affecting transistor channel length, transistor threshold voltage, metal interconnect width and thickness, and inter-metal layer dielectric thickness will impact logical and timing performance.
- Randomly localized manufacturing imperfections can result in resistive bridging between metal lines, resistive opens in metal lines, improper via formation, etc.

Nanometer-scale structures vs. CMOS
CMOS: conventional complementary metal-oxide-semiconductor devices.
Nanometer-scale structures use sophisticated fabrication techniques and:
- have much lower current drive capabilities and are much more sensitive to noise-induced errors such as crosstalk;
- are more susceptible to failures of transistors and wires due to soft (cosmic-ray-induced) errors, process variations, electromigration, and material aging.

Nanometer-scale structures vs. CMOS (2)
Nanometer-scale structures allow higher integration and lower cost per transistor, but the difficulty of testing each transistor increases, due to the increased complexity of the VLSI device, the increased potential for defects, and the difficulty of detecting the faults produced by those defects.

Fault, error, failure
- A fault is a representation of a defect, reflecting a physical condition that causes a circuit to fail to perform in a required manner.
- A circuit error is a wrong output signal produced by a defective circuit.
- A failure is a deviation in the performance of a circuit or system from its specified behavior; it represents an irreversible state of a component, which must be repaired in order for it to provide its intended function.

Fault, error, failure (2)
Circuit defect → (may lead to) fault → (can cause) circuit error → (can result in) system failure

Exhaustive testing
- Test vector = input pattern.
- For an n-input circuit, 2^n = total number of possible test vectors.
- Exhaustive testing applies all 2^n input patterns and compares the responses against the fault-free ones (sketched below).
- Issues:
  - impossible to do for large n;
  - even if all patterns are applied, for a sequential circuit we still cannot guarantee that all possible internal states have been visited.
[Figure: the same CUT test setup as before; the input test stimulus is applied to Input1..Input_n, the responses Output1..Output_m go to output response analysis. Pass = fault-free; Fail = faulty.]
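A minimal Python sketch of exhaustive testing against a golden (fault-free) model. The 3-input function is an assumption chosen to match the stuck-at example later in these notes (y = x1·x2 + x2'·x3), and the "faulty" DUT is hypothetical:

```python
from itertools import product

def golden(x1, x2, x3):
    # assumed fault-free model: y = x1*x2 + (not x2)*x3
    return (x1 & x2) | ((1 - x2) & x3)

def exhaustive_test(dut, n_inputs=3):
    """Apply all 2^n input patterns and compare the DUT against the golden model."""
    for vec in product((0, 1), repeat=n_inputs):
        if dut(*vec) != golden(*vec):
            return False, vec            # first failing pattern -> device is faulty
    return True, None                    # all patterns matched -> device passes

faulty_dut = lambda x1, x2, x3: golden(x1, x2, 0)   # hypothetical defect: x3 seen as 0
print(exhaustive_test(faulty_dut))                  # (False, (0, 0, 1))

# Why this does not scale: the pattern count doubles with every added input.
for n in (3, 20, 64):
    print(f"n = {n:2d}: 2^n = {2**n} patterns")
```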

Structural testing
- Uses fault models.
- Saves time and improves test efficiency.
- Cannot guarantee detection of all possible defects.
- But the use of fault models provides a quantitative measure of the fault-detection capability of a given set of test vectors for the targeted fault model:
Fault coverage = (number of detected faults) / (total number of faults)

Fault coverage vs. fault detection efficiency
It is impossible to get 100% fault coverage because of undetectable faults: faults for which no test exists that distinguishes the fault-free circuit from a faulty circuit containing that fault (and it is difficult to identify how many of them there are).
Fault detection efficiency = (number of detected faults) / (total number of faults − number of undetectable faults)

Defect level
Defect level = 1 − yield^(1 − fault coverage)   (Williams)
Using this equation we can show that a PCB with 40 chips, each having 90% fault coverage and 90% yield, could result in a reject rate of 41.9% (419,000 PPM)!
Improving fault coverage can be easier and less expensive than improving manufacturing yield, because yield enhancements can be costly; generating test stimuli with high fault coverage is therefore very important (see the sketch below).
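A small Python sketch of the Williams equation, showing how the estimated per-chip defect level drops as fault coverage improves at a fixed 90% yield; the coverage values are illustrative:

```python
def defect_level(process_yield, fault_coverage):
    """Williams model: DL = 1 - Y^(1 - FC)."""
    return 1.0 - process_yield ** (1.0 - fault_coverage)

for fc in (0.90, 0.99, 0.999):
    dl = defect_level(0.90, fc)
    print(f"yield = 90%, fault coverage = {fc:.1%}: DL = {dl:.4%} ({dl * 1e6:,.0f} PPM)")
```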

Goal of test generation

To find an efficient set of test vectors that detects all faults considered for a given circuit


Fault models
- k = number of types of faults that can occur at each potential fault site (k = 2 for most fault models)
- n = number of possible fault sites, depending on the fault model
Assuming there can be only one fault in the circuit (the single-fault model, or single-fault assumption), the total number of possible single faults is:
Number of single faults = k × n

Multiple-fault model
Under the multiple-fault model, the total number of possible combinations of multiple faults is:
Number of multiple faults = (k + 1)^n − 1
(each fault site is either fault-free or in one of its k faulty states, and the all-fault-free combination is excluded)
Single-fault vs. multiple-fault model:
- The multiple-fault model is more accurate, but the number of faults becomes far too large for big k and n (worked out below).
- In practice, a test set with high fault coverage under the single-fault assumption also tends to achieve high fault coverage for the multiple-fault model.
- Typically the single-fault assumption is therefore used for test generation and evaluation.
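A quick check of the two formulas in Python, using k = 2 (stuck-at-0/1) and n = 9 signal lines as in the example circuit discussed next:

```python
k, n = 2, 9                        # two stuck-at values, nine possible fault sites
single   = k * n                   # single-fault model
multiple = (k + 1) ** n - 1        # each site fault-free or in one of k faulty states,
                                   # minus the all-fault-free combination
print(single, multiple)            # 18 vs 19682
```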

Equivalent faults; fault collapsing
Equivalent faults: two or more faults that result in identical faulty behaviour for all possible input patterns; they can be represented by any single fault from the set of equivalent faults.
As a result, the number of single faults that must be considered for test generation becomes smaller than k × n; this reduction is called fault collapsing.

Stuck-at faults
- A stuck-at fault transforms the correct value on the faulty signal line so that it appears to be stuck at a constant logic value, either logic 0 or logic 1, referred to as stuck-at-0 (SA0) or stuck-at-1 (SA1).
- A stuck-at fault can affect any signal line in a logic circuit: primary inputs (PIs), primary outputs (POs), internal gate inputs and outputs, fanout stems (sources), and fanout branches.

An example
[Figure: a three-input circuit with inputs x1, x2, x3, output y, and internal signal lines a to i; line b fans out to branches d and e.]
- 9 signal lines: a to i
- b = fanout source (stem); d, e = fanout branches
- 18 (9 × 2) possible faulty circuits under the single-fault assumption

The truth table for the fault-free and faulty circuits
Test vectors: 011, 100, 001, 110
These four vectors are enough for 100% single stuck-at fault coverage for this circuit (see the fault-simulation sketch after the table).

x1x2x3:         000 001 010 011 100 101 110 111
y (fault-free):  0   1   0   0   0   1   1   1
a SA0:           0   1   0   0   0   1   0   0
a SA1:           0   1   1   1   0   1   1   1
b SA0:           0   1   0   1   0   1   0   1
b SA1:           0   0   0   0   1   1   1   1
c SA0:           0   0   0   0   0   0   1   1
c SA1:           1   1   0   0   1   1   1   1
d SA0:           0   1   0   0   0   1   0   0
d SA1:           0   1   0   0   1   1   1   1
e SA0:           0   1   0   1   0   1   1   1
e SA1:           0   0   0   0   0   0   1   1
f SA0:           0   0   0   0   0   0   1   1
f SA1:           0   1   0   1   1   1   1   1
g SA0:           0   1   0   0   0   1   0   0
g SA1:           1   1   1   1   1   1   1   1
h SA0:           0   0   0   0   0   0   1   1
h SA1:           1   1   1   1   1   1   1   1
i SA0:           0   0   0   0   0   0   0   0
i SA1:           1   1   1   1   1   1   1   1
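The following Python sketch reproduces this analysis by brute-force fault simulation. The gate-level structure is inferred from the fault-free row of the table and the collapsed-fault count that follows (a = x1, c = x3, b = x2 driving branches d and e, f = NOT e, g = a AND d, h = f AND c, y = i = g OR h), so treat it as an assumption rather than the exact circuit from the slides:

```python
from itertools import product

LINES = "abcdefghi"

def simulate(x1, x2, x3, fault=None):
    """Evaluate the (assumed) example circuit; 'fault' = (line, stuck_value) or None."""
    v = {}
    def drive(name, value):                 # force the stuck-at value on the faulty line
        v[name] = fault[1] if fault and fault[0] == name else value
        return v[name]
    drive("a", x1); drive("b", x2); drive("c", x3)
    drive("d", v["b"]); drive("e", v["b"])  # fanout branches of stem b
    drive("f", 1 - v["e"])                  # inverter
    drive("g", v["a"] & v["d"])             # AND
    drive("h", v["f"] & v["c"])             # AND
    return drive("i", v["g"] | v["h"])      # OR -> primary output y

tests  = [(0, 1, 1), (1, 0, 0), (0, 0, 1), (1, 1, 0)]        # 011, 100, 001, 110
faults = [(line, sa) for line in LINES for sa in (0, 1)]     # 18 single stuck-at faults
detected = [f for f in faults
            if any(simulate(*t, fault=f) != simulate(*t) for t in tests)]
print(f"single stuck-at fault coverage: {len(detected)}/{len(faults)}")   # 18/18
```

Swapping `tests` for any other set of vectors lets you compute the fault coverage of an arbitrary test set for this circuit.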

Fault collapsing
Stuck-at fault collapsing typically reduces the total number of faults by 50 to 60%:
- SA0 at an input of an AND (NAND) gate is equivalent to SA0 (SA1) at the output of the gate.
- SA1 at an input of an OR (NOR) gate is equivalent to SA1 (SA0) at the output of the gate.
- SA0 (SA1) at the input of an inverter is equivalent to SA1 (SA0) at its output; for a buffer, SA0 (SA1) at the input is equivalent to SA0 (SA1) at the output.
- A stuck-at fault at the source (output of the driving gate) of a fanout-free net is equivalent to the same stuck-at fault at the destination (the gate input being driven).
Number of collapsed faults = 2 × (number of POs + number of fanout stems) + total number of gate (including inverter) inputs − total number of inverters
For our example: number of collapsed faults = 2 × (1 + 1) + 7 − 1 = 10 (worked out below).
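The same count, written out in Python with the example circuit's values (1 PO, 1 fanout stem, 7 gate inputs, 1 inverter):

```python
def collapsed_faults(pos, fanout_stems, gate_inputs, inverters):
    # 2 x (POs + fanout stems) + total gate inputs - total inverters
    return 2 * (pos + fanout_stems) + gate_inputs - inverters

print(collapsed_faults(pos=1, fanout_stems=1, gate_inputs=7, inverters=1))  # 10
```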

Theorems for detection of stuck-at faults in combinational logic circuits
Theorem 1.1: A set of test vectors that detects all single stuck-at faults on all primary inputs of a fanout-free combinational logic circuit will detect all single stuck-at faults in that circuit.
Theorem 1.2: A set of test vectors that detects all single stuck-at faults on all primary inputs and all fanout branches of a combinational logic circuit will detect all single stuck-at faults in that circuit.

Stuck-at fault model for sequential circuits
High-fault-coverage test generation for sequential circuits is much more difficult than for combinational circuits because, for most faults in a sequential circuit, it is necessary to generate sequences of test vectors.
DFT techniques are therefore frequently used to ease sequential circuit test generation.

Delay faults
Fault-free operation of a logic circuit requires:
- performing the logic function correctly, and
- propagating the correct logic signals along paths within a specified time limit.
A delay fault causes excessive delay along a path, such that the total propagation delay falls outside the specified limit.
Two types of delay fault models:
- Gate-delay fault (and transition fault) model: the time taken for a transition to pass from a gate's input to its output exceeds its specified range.
- Path-delay fault model: the cumulative propagation delay along a signal path through the CUT exceeds the specified limit.

Delay faults (2)
[Figure: a two-pattern test (v1, v2) applied to inputs x1, x2, x3 at t = 0; the launched transition propagates through gates with delays of 2 and 3 time units, reaching internal nodes at t = 2 and t = 5 and the output y at t = 7.]

Delay faults (3): issues due to nanometer technologies
- The portion of the delay contributed by gates shrinks, while the delay due to interconnect becomes dominant.
- As clock frequencies increase with scaling, on-chip inductances can play a role in determining the interconnect delay of long, wide wires such as those in clock trees and buses.
- Cross-coupling capacitance and inductance between interconnects increase, leading to severe crosstalk effects and, potentially, improper functioning of a chip.
- As a result, path delay is no longer simply the sum of the delays of the gates along the path.

Crosstalk effects
Two categories:
- Crosstalk glitch: a pulse provoked by coupling effects among interconnect lines; the magnitude of the glitch depends on the ratio of the coupling capacitance to the line-to-ground capacitance.
- Crosstalk delay: a signal delay provoked by the same coupling effects among interconnect lines; it may be produced even if the line drivers are balanced but have large loads (it adds to the gate and interconnect delays).
Hence the critical need to develop testing techniques for manufacturing defects that produce crosstalk effects.

Pattern sensitivity and coupling faults in high-density RAMs
- Pattern sensitivity fault: the contents of a cell, or the ability of a memory cell to change, is influenced by the contents of its neighboring cells.
- Coupling fault: a transition in one cell causes the content of another cell to change.
Memory testing therefore requires tests for pattern sensitivity faults, coupling faults, and stuck-at faults.

March LR Algorithm
One of the most efficient RAM test algorithms in terms of test time and fault-detection capability. It can detect:
- pattern sensitivity faults
- intra-word coupling faults
- bridging faults
Test time is on the order of 16N (N = number of address locations).
March LR test sequence (see the sketch below):
  ↕(w0); ↓(r0, w1); ↑(r1, w0, r0, r0, w1); ↑(r1, w0); ↑(r0, w1, r1, r1, w0); ↑(r0)
Notation: w0 = write 0 (or all 0s); r1 = read 1 (or all 1s); ↑ = address up; ↓ = address down; ↕ = either direction.
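A behavioral Python sketch of running March LR over a toy RAM model. The element order follows the reconstruction above (treat it as an assumption), and the injected read fault is hypothetical; a real memory BIST engine runs this sequence directly in hardware:

```python
N = 16                                        # toy memory size (number of addresses)

def march_lr(read_bit):
    """Run March LR; read_bit(mem, addr) models the read path so faults can be injected."""
    mem = [0] * N
    up, down = range(N), range(N - 1, -1, -1)
    elements = [(up,   "w0"),                 # 1 + 2 + 5 + 2 + 5 + 1 = 16 ops per address
                (down, "r0,w1"),
                (up,   "r1,w0,r0,r0,w1"),
                (up,   "r1,w0"),
                (up,   "r0,w1,r1,r1,w0"),
                (up,   "r0")]
    for order, element in elements:
        for addr in order:
            for op in element.split(","):
                expected = int(op[1])
                if op[0] == "w":
                    mem[addr] = expected
                elif read_bit(mem, addr) != expected:
                    return False              # observed value differs -> memory fails
    return True

print(march_lr(lambda m, a: m[a]))                   # fault-free RAM -> True
print(march_lr(lambda m, a: 0 if a == 5 else m[a]))  # cell 5 read as stuck-at-0 -> False
```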

Levels of Abstraction in Testing
- Behavioural and RTL levels
- Gate level
- Switch level: when the switch-level model of each gate in the netlist is substituted, we obtain an accurate abstraction of the netlist used for physical layout.
- Physical level: the most important for VLSI testing, because it provides the actual layout and routing information for the fabricated device.

Testing at Behavioural and RTL Levels
- Common ASIC design methodology: design, simulate, and synthesize at the RTL level.
- Black boxes or IP cores are often incorporated in SOC designs, with very little, if any, structural information available. Traditional automatic test pattern generation (ATPG) tools cannot effectively handle designs that include blocks whose implementation details are unknown or subject to change.
- However, several approaches to test pattern generation at RTL have been proposed. Most of them can generate test patterns of good quality, sometimes comparable to gate-level ATPG tools.
- They lack general applicability, though, and so are not widely accepted.

RTL testing, an example: f = abc + abc + xabc, where x = don't care

RTL Testing
Despite the drawbacks, it is desirable to move ATPG operations toward higher levels of abstraction while targeting new types of faults in deep-submicron devices.
The main advantages of high-level approaches are:
- compact test sets
- reduced computation time
This trend is expected to continue.

Gate-Level Testing
- At this level the stuck-at fault model can be applied, and many commercial ATPG and fault simulation tools are available.
- The stuck-at fault model is also commonly used to evaluate the effectiveness of the input stimuli used for simulation-based design verification; as a result, the design verification stimuli are often reused for fault detection during manufacturing testing.
- Delay fault models and delay testing are also traditionally based on the gate-level description.
- Still, test development at the gate level is not sufficient for deep-submicron designs.

Switch-Level Testing
- Transistor fault models (stuck-open and stuck-short) can be applied and evaluated based on the switch-level model of each gate in the netlist, which is an accurate abstraction of the netlist used for physical layout.
- Transmission-gate and tristate-buffer faults can also be tested at the switch level.
- A defect-based test methodology is also more effective with a switch-level model of the circuit: it contains more detailed structural information than a gate-level model and yields a more accurate defect-coverage analysis.
- Note, however, that the switch-level description is more complicated than the gate-level one for both ATPG and fault simulation.

Physical-Level Testing
- The physical level provides the actual layout and routing information for the fabricated device.
- It therefore gives the most accurate information for delay faults, crosstalk effects, bridging faults, and interconnect delays, enabling more accurate delay fault analysis.
- For deep-submicron IC chips, a distributed resistance-inductance-capacitance (RLC) model based on the physical layout is used to characterize the electrical properties of the interconnections.
- The RLC model is used to analyze and test for potential crosstalk problems.

Physical-Level Testing (2)
- Bridging fault sites can be determined by extracting the capacitance between wires from the physical design; this accurately identifies the wires that are adjacent and hence likely to sustain bridging faults.
- The capacitance between two adjacent wires grows with the length over which they run adjacent and shrinks as the distance between them increases.
- Therefore, the fault sites with the highest capacitance values can be targeted for test generation and evaluation, as these sites have the highest probability of incurring bridging faults.

Historical Review of VLSI Test Technology
- Automatic Test Equipment (ATE)
- Automatic Test Pattern Generation (ATPG)
- Fault simulation
- Digital circuit testing
- Analog and mixed-signal circuit testing
- DFT

Design for Testability (DFT)
Test engineers usually have to construct test vectors after the design is completed. This invariably requires a substantial amount of time and effort that could be avoided if testing were considered early in the design flow, so as to make the design more testable. As a result, the integration of design and test, referred to as design for testability (DFT), was proposed in the 1970s.

DFT Techniques
To test circuits structurally, we need to control and observe the logic values of internal lines, which is difficult, especially for sequential circuits. DFT techniques help identify those parts of a digital circuit that will be most difficult to test and assist in test pattern generation for fault detection.

DFT Techniques: Categories
- Ad-hoc DFT techniques
- Level-sensitive scan design (LSSD) / scan design
- Built-in self-test (BIST)

Ad-hoc DFT Techniques
Goal: to target only those portions of the circuit that would be difficult to test, and to add circuitry to improve their controllability or observability.
They use test point insertion to access internal nodes directly, e.g. a multiplexer inserted to control or observe an internal node.

Level-Sensitive Scan Design (LSSD)
- It is latch-based.
- Testability is improved by adding extra logic to each storage element in the circuit so that the elements can form a shift register, or scan chain.

Built-In Self-Test (BIST)
- Integrates a test pattern generator (TPG) and an output response analyzer (ORA) in the VLSI device, so that testing is performed internally to the IC.
- Because the test circuitry resides with the CUT, BIST can be used at all levels of testing, from wafer through system-level testing.

Board Testing
In the 1970s and 1980s, PCBs were tested by probing the backs of the boards with probes (also called nails) in a bed-of-nails tester. The probes contact various solder points on the PCB in order to force signal values at the component pins and monitor the output responses.
A PCB tester:
- can perform analog and digital functional tests;
- is designed to be modular and flexible enough to integrate different external instruments.

Board Testing (2)
Steps to test a PCB:
1. Bare board testing (to target shorts and opens on all interconnections)
2. Testing of the components to be assembled on the PCB
3. Testing of the assembled PCB on the PCB tester:
   - solder paste inspection
   - automated optical and X-ray inspection
   - in-circuit (bed-of-nails) test
Problem: when surface-mount devices appeared on PCBs in the mid-1980s, the pins of the packages no longer went through the board, so contact sites on the bottom of the PCB were no longer guaranteed.

Boundary Scan
Solution: boundary scan, proposed by JTAG, i.e. logic inserted to provide a scan path through all I/O buffers of the ICs, to assist in testing the assembled PCB.
[Figure: a boundary-scan cell applied to a bidirectional I/O buffer.]

Boundary Scan (2)
- The scan chain provides the ability to shift in test vectors, which are applied through the pads to the pins and interconnections on the PCB. The output responses are captured at the input buffers of other devices on the PCB and subsequently shifted out for fault detection (sketched below).
- Boundary scan provides access to the various signal nodes on a PCB without the need for physical probes.
- The Test Access Port (TAP) provides access to the boundary-scan chain through a four-wire serial bus interface, in conjunction with instructions transmitted over that interface.
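A behavioral Python sketch of a boundary-scan interconnect test: a pattern is (conceptually) shifted into the driving device's boundary cells, applied to the PCB nets, captured at the receiving device, and shifted out for comparison. The net model and the "open net" fault are hypothetical:

```python
def interconnect_test(drive_pattern, net_model):
    """Return True if every driven value is captured unchanged at the receiver."""
    driven   = list(drive_pattern)                              # after shift-in + update
    captured = [net_model(i, v) for i, v in enumerate(driven)]  # capture at input cells
    return captured == driven                                   # shifted out and compared

healthy   = lambda i, v: v                        # each net carries the value driven on it
open_net3 = lambda i, v: 0 if i == 3 else v       # hypothetical open/stuck-at-0 on net 3
pattern   = [1, 0, 1, 1, 0, 1, 0, 1]
print(interconnect_test(pattern, healthy))        # True  -> interconnect passes
print(interconnect_test(pattern, open_net3))      # False -> fault detected without probes
```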

Boundary Scan (3)
The boundary-scan interface also provides access to DFT features (LSSD or BIST) designed and implemented in the VLSI devices for board- and system-level testing. The Boundary Scan Description Language (BSDL) provides a mechanism with which IC manufacturers can describe the testability features in a chip.
[Figure: the 4-wire serial bus (TAP) interface.]

SOC Testing
SOCs incorporate embedded cores, which are difficult to access during testing.
In 1997 the IEEE began developing a scalable wrapper architecture and access mechanism, similar to boundary scan, to enable test access to embedded cores and to the interconnect between them. It:
- is independent of the underlying functionality of the SOC or of its individual embedded cores;
- creates the testability conditions needed for the detection and diagnosis of faults, and for debug and yield enhancement.

Cost of Manufacturing Testing
Example ATE: 0.5-1.0 GHz, analog instruments, 1,024 digital pins (see the sketch below).
ATE purchase price = $1.2M + 1,024 × $3,000 = $4.272M
Running cost (five-year linear depreciation) = depreciation + maintenance + operation
  = $0.854M + $0.085M + $0.5M = $1.439M/year
Test cost (24-hour ATE operation) = $1.439M / (365 × 24 × 3,600) ≈ 4.5 cents/second
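The slide's arithmetic, spelled out in Python (all dollar figures are the slide's assumptions); it reproduces the roughly 4.5 cents/second figure:

```python
ate_price    = 1.2e6 + 1024 * 3000                  # base system + 1,024 digital pins
depreciation = ate_price / 5                        # five-year linear depreciation
running_cost = depreciation + 0.085e6 + 0.5e6       # + maintenance + operation, per year
cost_per_s   = running_cost / (365 * 24 * 3600)     # ATE assumed busy 24 hours a day
print(f"price ${ate_price/1e6:.3f}M, running ${running_cost/1e6:.3f}M/yr, "
      f"~{cost_per_s*100:.2f} cents per test second")
```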

Design for Testability Basic Techniques

Controllability and Observability
- Controllability of a signal reflects the difficulty of setting the signal line to a required logic value from the primary inputs.
- Observability of a signal reflects the difficulty of propagating the logic value of the signal line to the primary outputs.

Testing of Combinational vs. Sequential Circuits
- As the number of levels of combinational logic increases, its testability decreases.
- Good testability for sequential circuits is difficult to achieve because:
  - they have many internal states, so setting a sequential circuit to a required internal state can require a very large number of input events;
  - it is difficult to identify the exact internal state of a sequential circuit from the primary outputs alone.
- A more structured approach for testing designs with large amounts of sequential logic is therefore required as part of a methodical DFT approach.

Structured DFT Approaches
Why:
- they allow DFT engineers to follow a methodical process for improving the testability of a design;
- they are much easier to automate: EDA vendors provide sophisticated DFT tools to simplify and speed up DFT tasks (e.g. scan design).

Ad-hoc DFT
[Table: typical ad-hoc DFT techniques; items A2-A6 are good design practices.]

Test Point Insertion (TPI) TPI is an ad hoc DFT technique for improving the controllability and observability of internal nodes. Testability analysis is typically used to identify the internal nodes where test points should be inserted, in the form of control or observation points.

Observation Point Insertion (OPI)
OP2 shows the structure of an observation point: a MUX plus a D flip-flop.
- With SE = 0 and an active CK, the logic values of the low-observability nodes are captured into the DFFs.
- With SE = 1, the 3 DFFs operate as a shift register, allowing the captured logic values to be observed at OP_output over successive clock cycles (see the sketch below).
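A behavioral Python sketch of this capture-then-shift operation for three observation-point flip-flops; the class and signal names are made up for illustration:

```python
class ObservationPoints:
    """SE = 0: capture the low-observability node values; SE = 1: shift them out."""
    def __init__(self, n):
        self.ff = [0] * n                          # one DFF per observation point

    def clock(self, se, node_values=None, scan_in=0):
        if se == 0:                                # capture mode: MUX selects the node
            self.ff = list(node_values)
            return None
        out = self.ff[-1]                          # shift mode: DFFs act as a shift register
        self.ff = [scan_in] + self.ff[:-1]
        return out                                 # bit visible at OP_output this cycle

ops = ObservationPoints(3)
ops.clock(se=0, node_values=[1, 0, 1])             # capture three internal node values
print([ops.clock(se=1) for _ in range(3)])         # shifted out over 3 cycles: [1, 0, 1]
```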

Control Point Insertion (CPI)
CP2 shows the structure of a control point: a MUX plus a D flip-flop; the original connection is cut.
- During normal operation, TM = 0.
- During test, TM = 1; the 3 DFFs form a shift register that shifts the CP_input values in, to control the destination end of the nodes.

Control Point Insertion (CPI) (2)
- The controllability of the node is dramatically improved.
- BUT additional delay appears in the logic path, so care must be taken not to insert control points on a critical path.
- Better still: add a scan point instead of a CP. A scan point is a combination of a CP and an OP, which allows the source end of the node to be observed as well.
