
VLSI DESIGN

VERIFICATION
HANDBOOK
All in One : Top 100 Essential
Interview Questions

Prasanthi Chanda
Top 100 Design Verification
Interview Questions
1. What is the difference between functional verification and
formal verification?

Functional verification ensures that a design behaves as expected under various test scenarios, using simulation-based techniques like directed and random tests. Formal verification uses mathematical proofs and static analysis to check design correctness without running test vectors.

Feature      | Functional Verification          | Formal Verification
Methodology  | Simulation-based                 | Static analysis
Corner cases | May miss some                    | Exhaustive
Scalability  | Scales to large designs, slowly  | Best for small blocks

2. Explain the difference between directed and random testing.

Feature          | Directed Testing             | Random Testing
Approach         | Manually written test cases  | Randomized inputs
Coverage         | Limited to pre-defined cases | Covers more scenarios
Debugging effort | Less                         | More


3. Explain the verification flow in ASIC design.
ASIC verification follows these steps:
1. Testbench Development – Create an environment using
SystemVerilog/UVM.
2. Functional Simulation – Simulate design under test (DUT)
with test vectors.
3. Assertion-Based Verification – Use SystemVerilog Assertions
(SVA) to check functional correctness.
4. Coverage Analysis – Check code and functional coverage.
5. Debugging & Regression Testing – Debug failures and rerun
tests.
6. Formal Verification – Prove correctness using static analysis.
7. Gate-Level Simulation (GLS) – Verify synthesized netlist with
timing.
8. Emulation & FPGA Prototyping – Validate with real hardware.

4. What are the different verification methodologies used in VLSI?

Directed Testing – Manually written test cases for specific scenarios.
Constrained-Random Verification (CRV) – Generates test scenarios using constraints.
Assertion-Based Verification (ABV) – Uses SystemVerilog Assertions (SVA) to check design properties.
Coverage-Driven Verification (CDV) – Monitors functional and code coverage to guide test creation.
UVM (Universal Verification Methodology) – A reusable object-oriented framework for verification.
5. What is constrained-random verification?
Constrained-random verification (CRV) generates random test
cases within defined constraints. It increases test coverage while
reducing manual effort.
class packet;
  rand bit [7:0]  addr;
  rand bit [15:0] data;
  constraint addr_c { addr inside {[8'h10 : 8'h50]}; }
endclass
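As an illustrative sketch, such a class can be randomized in a loop from an initial block (the repeat count here is arbitrary):

```systemverilog
initial begin
  packet p = new();
  repeat (4) begin
    if (!p.randomize())              // solver honors addr_c automatically
      $error("Randomization failed");
    $display("addr=%0h data=%0h", p.addr, p.data);
  end
end
```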

6. What is the difference between pre-silicon and post-silicon verification?

Feature         | Pre-Silicon Verification     | Post-Silicon Verification
Performed on    | Simulation, FPGA, emulation  | Actual silicon chip
Debugging tools | Simulators, waveform viewers | Logic analyzers, oscilloscopes
Purpose         | Detect logical bugs early    | Detect process and power issues

7. Explain the need for functional coverage in verification.

Functional coverage measures how well the verification plan exercises the design's features. It helps identify gaps in test execution.

covergroup cg;
  coverpoint opcode { bins arith_ops = {4'h0, 4'h1}; }
endgroup
8. What are the different types of coverage metrics in verification?

1. Code Coverage – Measures how much of the RTL is exercised.
   a. Statement Coverage
   b. Branch Coverage
   c. Toggle Coverage
2. Functional Coverage – Tracks whether all required design behaviors are tested.
3. Assertion Coverage – Checks whether assertions were triggered during simulation.

9. What is assertion-based verification (ABV)?

ABV uses SystemVerilog Assertions (SVA) to check for protocol violations and functional correctness during simulation.

property p_req_ack;
  @(posedge clk) req |=> ##[1:5] ack;
endproperty
assert property (p_req_ack);

10. How do you handle clock-domain crossing (CDC) issues in verification?

CDC verification ensures safe data transfer between different clock domains. Techniques include:
1. Handshake Synchronization – Use a request-acknowledge
mechanism.
2. Two-Flip-Flop Synchronization – Prevent metastability.
3. FIFO Synchronization – For bulk data transfer.
4. Static Analysis (Linting) – Detect potential CDC issues.
5. Dynamic Verification (Assertions & Gate-Level Simulations) –
Catch real-time failures.
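As a sketch of technique 2 above, a two-flip-flop synchronizer in the destination clock domain might look like this (port names are illustrative):

```systemverilog
module sync_2ff (
  input  logic clk_dst,  // destination-domain clock
  input  logic rst_n,    // active-low reset
  input  logic d_async,  // level signal from the source domain
  output logic d_sync    // synchronized output, safe in the clk_dst domain
);
  logic meta;  // first stage; the flop that may go metastable

  always_ff @(posedge clk_dst or negedge rst_n) begin
    if (!rst_n) begin
      meta   <= 1'b0;
      d_sync <= 1'b0;
    end else begin
      meta   <= d_async;  // capture the asynchronous input
      d_sync <= meta;     // second stage gives metastability time to resolve
    end
  end
endmodule
```

Note this handles single-bit level signals only; multi-bit buses need a FIFO or handshake, as listed above.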
11. What is a golden reference model in verification?

A golden reference model is a high-level, functionally correct model of a design used for verification. It acts as a trusted baseline against which the DUT (Design Under Test) is compared.
Purpose:
Provides expected outputs for given inputs.
Detects functional mismatches between DUT and expected
behavior.
Often written in C, C++, Python, SystemVerilog, or MATLAB.

12. What are the challenges in verifying a low-power design?

Low-power designs use techniques like clock gating, power gating,


and voltage scaling, which introduce verification challenges:
1. Power-Aware Verification – Checking functionality across
different power modes.
2. Retention Registers – Ensuring data retention when power
domains switch off.
3. Power-Intent Verification – Verifying UPF (Unified Power
Format) or CPF (Common Power Format) constraints.
4. X-Propagation Issues – Handling unknown states caused by
power gating.
5. Multiple Clock Domains – Synchronization between different
power islands.
13. Explain the difference between static and dynamic verification.

Feature    | Static Verification                 | Dynamic Verification
Definition | Analyzes code without executing it  | Checks behavior by running the design
Examples   | Linting, formal verification        | Simulation, emulation
Speed      | Faster                              | Slower

14. What are X-propagation issues, and how do you debug them?

X-propagation occurs when an unknown value ('X') spreads through a design, leading to unpredictable simulation behavior.
Causes:
Uninitialized signals.
Clocking issues.
Power-on reset failures.
Gated clocks or power domains.
Debugging Methods:
1. Waveform Analysis – Track 'X' sources in simulation.
2. Assertions & Coverage – Use assertions to detect 'X' early.
3. Deterministic Resets – Ensure all registers have explicit resets.
4. Formal Verification – Prove that no illegal 'X' states exist.

assert property (@(posedge clk) !$isunknown(data_out));


15. What is equivalence checking in verification?

Equivalence checking ensures that two representations of a design are functionally identical. It is mainly used to verify RTL vs. netlist after synthesis.
Techniques:
Formal Equivalence Checking (FEC) – Uses mathematical
proofs to compare RTL and netlist.
Golden Reference Model Comparison – Compares DUT output
with a high-level model.

16. Explain hardware-software co-verification.

Hardware-software co-verification ensures that both hardware (RTL) and software (drivers, firmware) work correctly together before tape-out.
Methods:
1. Simulation-Based Co-Verification – Running software on RTL
in a simulator.
2. Emulation-Based Co-Verification – Using FPGA/emulation to
execute real software.
3. Virtual Prototyping – Using a functional model to test
software early.
17. What is shift-left verification, and why is it important?

Shift-left verification means starting verification earlier in the design cycle to detect bugs sooner.
Why important?
Reduces costly late-stage errors in silicon.
Improves time-to-market by detecting bugs early.
Enhances coverage closure before synthesis.
Techniques:
Linting & Static Analysis – Detect early RTL issues.
Emulation & FPGA Prototyping – Test software before silicon.
Continuous Integration (CI) – Automate regression testing.

18. How do you debug a design that fails in simulation but works in FPGA?

1. Check Timing Differences – FPGAs use real clocks, but simulations may use ideal clocks.
2. Look for Uninitialized Signals – Simulation might detect ‘X’,
while FPGA assumes zero.
3. Check CDC Issues – Simulations might not fully expose clock-
domain crossing bugs.
4. Verify Testbench Assumptions – Ensure your simulation
testbench is realistic.
5. Gate-Level Simulation (GLS) – Run post-synthesis netlist
simulation to match FPGA behavior.
19. Explain the role of assertions in improving verification
efficiency.
Assertions are runtime checks that validate design properties
during simulation.
Benefits:
Detect bugs earlier in the simulation.
Improve debugging efficiency by pinpointing failures.
Reduce reliance on manual testbenches.

assert property (@(posedge clk) (req |-> ##[1:5] ack));

20. What are the key challenges in verifying a high-speed serial interface (like PCIe, USB, or DDR)?

1. Protocol Compliance – Ensuring adherence to standards (e.g., PCIe Gen4, USB 3.0).
2. Signal Integrity – Checking for jitter, crosstalk, and noise
issues.
3. Clock Domain Crossings (CDC) – Handling multiple clock
frequencies.
4. Error Recovery – Testing retransmission mechanisms for
corrupted data.
5. High-Speed Data Patterns – Verifying scrambling, encoding,
and decoding logic.
6. Latency & Performance Testing – Measuring throughput
under realistic workloads.
21. What are the advantages of SystemVerilog over Verilog?

SystemVerilog extends Verilog by adding features for verification, abstraction, and productivity.
Key Advantages:
Object-Oriented Programming (OOP) – Enables reusable
testbenches.
Assertions (SVA) – Adds formal checks for functional
verification.
Randomization – Supports constraint-driven test case
generation.
Interfaces – Improves modularity and reduces code
duplication.
Classes & Inheritance – Allows scalable and reusable
testbenches.
Functional Coverage – Provides better measurement of test
quality.

22. Explain the difference between logic and bit in SystemVerilog.

Feature       | logic           | bit
Values        | 0, 1, X, Z      | 0 or 1 only
Default state | 'X'             | 0
Use case      | Registers, nets | Compact storage, arrays
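A small sketch of the default-value difference:

```systemverilog
module demo;
  logic l;  // 4-state: uninitialized value is 'x
  bit   b;  // 2-state: uninitialized value is 0
  initial $display("l=%b b=%b", l, b);  // prints l=x b=0
endmodule
```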


23. What is a SystemVerilog mailbox, and how is it used?

A mailbox is a FIFO-based communication mechanism between threads or classes in SystemVerilog.
Usage:
Pass data between processes in testbenches.
Synchronize components in UVM.

mailbox #(int) mbx = new();
int data;

initial begin
  mbx.put(10);   // Send data
  mbx.get(data); // Receive data
end

24. How does the SystemVerilog interface improve verification?

Interfaces group signals and procedural code into a reusable module, improving readability and modularity.
Benefits:
Reduces redundant signal declarations in multiple modules.
Encapsulates testbench logic in a single place.
Improves maintainability when modifying signal lists.

interface bus_if;
logic clk, reset;
logic [31:0] data;
endinterface
25. Explain the use of modport in SystemVerilog interfaces.

modport defines directional constraints on interface signals for different modules.

interface bus_if;
  logic clk, reset;
  logic [31:0] data;
  modport master (input clk, reset, output data);
  modport slave  (input clk, reset, input  data);
endinterface

26. What is the role of rand and randc in SystemVerilog?

Feature    | rand                           | randc
Definition | Random values; repeats allowed | Cyclic: all values before any repeat
Use case   | General random stimulus        | Exhaustive sweeps of a value range
Example    | rand int x;                    | randc int y;

27. Explain the difference between deep and shallow copies in SystemVerilog.

Type       | Shallow Copy                                   | Deep Copy
Definition | Copies fields; nested object handles are shared | Copies the entire object, including nested objects
Impact     | Changes to shared objects affect the original   | Fully independent copy
Use case   | Quick, temporary duplication                    | Safely duplicating objects
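A minimal sketch of the difference (class names are illustrative):

```systemverilog
class header;
  int id;
endclass

class packet;
  int    data;
  header h = new();

  // Hand-written deep copy: clone the nested object as well
  function packet clone();
    packet p = new();
    p.data = data;
    p.h    = new();
    p.h.id = h.id;
    return p;
  endfunction
endclass

initial begin
  packet a = new();
  packet b = new a;      // shallow copy: a.h and b.h are the same object
  packet c = a.clone();  // deep copy: c.h is independent
  b.h.id = 5;            // also visible as a.h.id
  c.h.id = 9;            // does not affect a
end
```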

28. What is the UVM factory, and why is it important?

The UVM factory creates objects dynamically, allowing testbench flexibility.
Benefits:
Replaces objects at runtime without modifying testbenches.
Supports testbench reuse across different configurations.
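A minimal sketch of a factory override (class names are illustrative); after the override, every factory-created my_driver becomes an err_driver:

```systemverilog
class err_driver extends my_driver;
  `uvm_component_utils(err_driver)
  function new(string name, uvm_component parent);
    super.new(name, parent);
  endfunction
  // ... override the drive logic here, e.g., to inject errors ...
endclass

// In the test's build_phase, before components are created:
my_driver::type_id::set_type_override(err_driver::get_type());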
29. How does UVM improve verification efficiency compared to other methodologies?

Reusable components – Reduce testbench development time.
Factory-based object creation – Enables easy test reconfiguration.
Transaction-level modeling (TLM) – Simplifies communication between components.
Built-in coverage collection – Improves test quality measurement.
Automation & sequences – Reduce manual test creation effort.

30. Explain the difference between uvm_component and uvm_object.

Feature    | uvm_component                       | uvm_object
Definition | A testbench element with phases     | A data object without phases
Use case   | Agents, monitors, drivers           | Transactions, sequences
Example    | class my_driver extends uvm_driver; | class my_transaction extends uvm_object;

31. What is a uvm_sequence, and how does it interact with a uvm_driver?

A uvm_sequence generates stimulus, while a uvm_driver applies it to the DUT.
Interaction:
The uvm_sequence creates transactions.
The uvm_driver consumes transactions and drives DUT signals.
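The interaction can be sketched as follows (class names illustrative; constructors omitted for brevity):

```systemverilog
// Sequence side: produce a transaction
class my_seq extends uvm_sequence #(my_transaction);
  task body();
    my_transaction tx = my_transaction::type_id::create("tx");
    start_item(tx);                        // handshake with the sequencer
    if (!tx.randomize()) `uvm_error("SEQ", "Randomization failed")
    finish_item(tx);                       // blocks until the driver calls item_done()
  endtask
endclass

// Driver side: consume transactions and drive DUT pins
class my_driver extends uvm_driver #(my_transaction);
  task run_phase(uvm_phase phase);
    forever begin
      seq_item_port.get_next_item(req);    // blocks until a sequence sends an item
      // ... drive DUT signals from req ...
      seq_item_port.item_done();           // release the sequence
    end
  endtask
endclass
```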
32. Explain the use of TLM in UVM.

TLM (Transaction-Level Modeling) enables loosely coupled communication between components. It allows monitoring and scoreboard updates without direct coupling.

uvm_analysis_port #(my_transaction) analysis_port;

33. What is a uvm_monitor, and what role does it play in a testbench?

A uvm_monitor observes DUT signals and sends transaction data to the scoreboard.
Purpose:
Non-intrusive observation of DUT behavior.
Captures functional coverage.

34. How do you create a scoreboard in UVM?

A scoreboard compares expected vs. actual results.

class my_scoreboard extends uvm_scoreboard;
  uvm_analysis_imp #(my_transaction, my_scoreboard) analysis_imp;
endclass

35. What is uvm_analysis_port, and how is it used?

A uvm_analysis_port carries transactions from a monitor to subscribers such as a scoreboard.
The monitor writes transactions into the port.
The scoreboard receives them through its analysis implementation.
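The typical wiring, shown as fragments (names are illustrative):

```systemverilog
// In the monitor:
uvm_analysis_port #(my_transaction) ap;
// ... on observing a transaction:
ap.write(tx);  // broadcast to all connected subscribers

// In the environment's connect_phase:
monitor.ap.connect(scoreboard.analysis_imp);

// In the scoreboard, called automatically for each broadcast:
function void write(my_transaction t);
  // compare t against the expected result here
endfunction
```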
36. What are UVM callbacks, and where are they used?

UVM callbacks allow runtime modification of component behavior without changing the base class.
Example use cases:
Modifying driver behavior dynamically.
Injecting errors for fault testing.

37. Explain UVM phasing and its importance.

UVM phases manage testbench execution order, ensuring correct initialization and execution.
Key phases:
1. build_phase – Creates testbench components.
2. connect_phase – Connects components.
3. run_phase – Executes stimulus.
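The first two phases can be sketched in an environment class (names are illustrative):

```systemverilog
class my_env extends uvm_env;
  `uvm_component_utils(my_env)
  my_agent      agent;
  my_scoreboard sb;

  function new(string name, uvm_component parent);
    super.new(name, parent);
  endfunction

  function void build_phase(uvm_phase phase);
    super.build_phase(phase);
    agent = my_agent::type_id::create("agent", this);  // factory creation
    sb    = my_scoreboard::type_id::create("sb", this);
  endfunction

  function void connect_phase(uvm_phase phase);
    agent.monitor.ap.connect(sb.analysis_imp);         // wire monitor to scoreboard
  endfunction
endclass
```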

38. What is the difference between uvm_config_db and uvm_resource_db?

Feature | uvm_config_db                      | uvm_resource_db
Scope   | Hierarchical                       | Global
Usage   | Pass parameters between components | Store reusable resources

39. How do you create reusable UVM testbenches?

Use TLM for communication.
Use the UVM factory for flexible object creation.
Use modular sequences for different test scenarios.
40. What is `uvm_do_with, and how does it work?

The `uvm_do_with macro generates a transaction with inline constraints, ensuring the constraints are applied before the item executes. Expanded roughly by hand:

start_item(req);
if (!req.randomize() with { addr == 'h1000; })
  `uvm_error("SEQ", "Randomization failed")
finish_item(req);

41. What are concurrent and immediate assertions in SystemVerilog?

Concurrent assertions:
Evaluated over multiple clock cycles.
Used for protocol and temporal checks.

property p_handshake;
  @(posedge clk) req |=> ack;
endproperty
assert property (p_handshake);

Immediate assertions:
Evaluated instantly within the same simulation time step.
Used for combinational checks.

assert (a == b) else $error("Mismatch detected");

42. How do you check handshake protocols using assertions?

A typical handshake protocol ensures that ack follows req within a bounded number of cycles.

property p_handshake;
  @(posedge clk) req |=> ##[1:3] ack;
endproperty
assert property (p_handshake);
43. Explain the disable iff construct in SystemVerilog assertions.

Purpose: Temporarily disable an assertion when a reset or exception occurs.

property p_stable;
  @(posedge clk) disable iff (reset) (a == b);
endproperty
assert property (p_stable);

44. What is the purpose of assert and assume in SVA?

Keyword | Purpose
assert  | Checks design correctness in simulation
assume  | Constrains stimulus for formal verification

45. How do you use cover properties in SystemVerilog?

Cover properties measure how often an event occurs, contributing to functional coverage.

property p_transaction;
  @(posedge clk) req && ack;
endproperty
cover property (p_transaction);

46. What are layered assertions, and when are they used?

Definition: Multiple assertions layered to check different abstraction levels.
Use case:
A first assertion checks signal integrity.
A second assertion verifies protocol behavior.
47. How do you debug assertion failures in simulation?

Enable waveform dumps (.vcd or .fsdb).
Use $display or $error messages.
Use $assertoff / $asserton to isolate assertions.

48. Explain the difference between functional and code coverage.

Coverage Type       | Definition                          | Example
Functional coverage | Measures feature-level verification | Ensuring all transaction types are tested
Code coverage       | Measures execution completeness     | Checking whether every if statement executes

49. How do you measure branch coverage in verification?

Branch coverage tracks whether both the if and else branches execute.

// Check that both the true and false cases are covered
if (a > b)
  x = y;
else
  x = z;

50. What is cross-coverage, and why is it important?

Cross-coverage tracks combinations of multiple variables, ensuring all critical cases are tested. For example, it can ensure all opcode-operand combinations are exercised.

covergroup cg;
  coverpoint opcode;
  coverpoint operand;
  cross opcode, operand;
endgroup
51. How do you implement covergroups in SystemVerilog?

A covergroup defines coverage metrics for variables.

covergroup cg @(posedge clk);
  coverpoint data;
  coverpoint valid;
endgroup

52. What are illegal bins in coverage, and how do they help?

Illegal bins flag unwanted conditions during coverage collection; hitting one reports an error.

coverpoint opcode {
  illegal_bins illegal_values = {4'b1111}; // Illegal opcode
}

53. How do you write a covergroup to track protocol transactions?

covergroup protocol_cg;
  coverpoint req;
  coverpoint ack;
  cross req, ack;
endgroup

54. What is the difference between toggle coverage and path coverage?

Type            | Definition                     | Example
Toggle coverage | Tracks 0→1 and 1→0 transitions | clk toggling between 0 and 1
Path coverage   | Tracks unique execution paths  | if statements and loops
55. How do you ensure coverage closure in verification?

Increase random stimulus to trigger missing scenarios.
Use functional coverage bins to analyze gaps.
Run multiple simulations with different seeds.
Use ignore_bins and illegal_bins to refine coverage goals.

56. Explain how ignore_bins and illegal_bins work in covergroups.

Bin Type     | Purpose                       | Example
ignore_bins  | Excludes values from coverage | Ignore reserved opcodes
illegal_bins | Flags an error if hit         | Detect invalid states

57. What is the importance of weighted coverage?

Weighted coverage prioritizes critical scenarios when computing the overall coverage score. In SystemVerilog, weights are set per coverpoint (or covergroup) via option.weight:

coverpoint opcode {
  option.weight = 3; // this coverpoint counts 3x in the covergroup score
  bins common_ops = {0, 1, 2};
  bins rare_ops   = {15};
}

58. How can you merge coverage from multiple simulations?

Use the coverage-merging tools in your simulator (VCS, Questa, etc.).
Merge .ucdb or .db files from multiple runs.

vcover merge merged.ucdb run1.ucdb run2.ucdb

59. How do you check if a covergroup has hit all bins?

Use the built-in get_coverage() method to check the coverage percentage.

if (cg.get_coverage() == 100)
  $display("Coverage complete!");

60. How do you analyze coverage holes in a verification project?

Generate a coverage report (.ucdb or .cov).
Identify missing bins in functional coverage.
Increase stimulus targeting uncovered scenarios.
Use covergroups and cross coverage to improve coverage.

61. What are the common issues found in waveform debugging?

Glitches or spikes: Incorrect clocking or race conditions.
Unknown (X/Z) states: Uninitialized signals.
Incorrect timing: Mismatched clock edges.
Wrong signal transitions: Missing or extra transitions.
Data corruption: Invalid memory reads/writes.
Metastability: Improper clock domain crossings.

62. How do you debug mismatches between expected and actual results?

Use assertions: Catch incorrect values in simulation.
Enable debug messages: $display, UVM reports.
Print intermediate values: Check each transaction.
Run with different seeds: Identify randomness issues.
Check reference models: Ensure golden-model correctness.
63. What techniques can improve simulation performance?

Reduce waveform dumping (+access+r selectively).
Use a fixed random seed (+ntb_random_seed).
Increase the time step (#1ns → #10ns where possible).
Reduce log printing (+UVM_NO_RELNOTES in UVM).
Disable unused coverage collection.
Use optimized compilation (e.g., vlog -O2).

64. What is the use of $dumpfile and $dumpvars in Verilog?

They enable waveform dumping in .vcd format.

initial begin
  $dumpfile("dump.vcd");    // Specify the dump file
  $dumpvars(0, testbench);  // Dump all signals in the hierarchy
end

65. How do you debug memory corruption in a testbench?

Enable memory protection in tools (+check_mem in VCS).
Use debug prints for read/write locations.
Enable randomization debugging (+UVM_DEBUG in UVM).
Check for uninitialized memory (X states).
Use assertions for address bounds (assert(addr < MEM_SIZE);).
66. Explain the impact of zero-delay race conditions in simulation.

A race occurs when multiple processes update and read the same signal at the same simulation time, so the result depends on process ordering.
Fix: Use non-blocking assignments (<=) or explicit delays.

always @(posedge clk) a = b; // Blocking assignments: order-dependent result
always @(posedge clk) b = a;
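The non-blocking fix applied to the same example: both right-hand sides are sampled before either update, so a and b swap deterministically regardless of process ordering.

```systemverilog
always @(posedge clk) a <= b;  // non-blocking: reads the old value of b
always @(posedge clk) b <= a;  // non-blocking: reads the old value of a
```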

67. What are the different debugging tools available for verification?

Waveform viewers: GTKWave, DVE (Synopsys), Verdi, Questa Wave.
Log analysis: UVM reporting, grep, diff tools.
Assertions (SVA, PSL): Identify failures at runtime.
Memory debuggers: Valgrind, AddressSanitizer.
Coverage tools: vcover report (Questa), urg (VCS).

68. How do you handle unknown values (X and Z) in simulation?

Initialize registers (reg [3:0] a = 4'b0000;).
Use if (a === 1'bx) to catch unknowns.
Replace uninitialized values (+no_x for VCS).
Use assertions (assert(a !== 'bx);).
Improve reset logic (always @(posedge clk or posedge rst) if (rst) a <= 0;).
69. What are the common causes of non-deterministic behavior in testbenches?

Uninitialized variables (reg a; // X state initially).
Clock domain crossing issues.
Randomization issues (unconstrained randc).
Floating signals (high-Z or X propagation).

70. How do you debug a deadlock in UVM sequences?

Enable UVM logs (+UVM_VERBOSITY=UVM_DEBUG).
Check sequence status (seq.print();).
Use timeout monitors (`uvm_info("SEQ", "Waiting too long", UVM_MEDIUM)).
Check for TLM FIFO blockage (get_next_item() blocked).
Kill and restart the sequence (seq.kill();).

71. Explain how UVM reporting helps in debugging.

Provides verbosity levels (UVM_LOW, UVM_MEDIUM, UVM_HIGH).
Filters messages from the command line (+UVM_VERBOSITY=UVM_NONE).

72. How do you reduce test execution time in UVM?

Disable debug messages (+UVM_NO_RELNOTES).
Reduce waveform dumping depth.
Optimize constraints to reduce solver effort.
Run simulations in parallel (make -j8).
73. What is wave dumping, and how does it help in debugging?

Wave dumping captures simulation signals for offline analysis with GTKWave, DVE, or Questa Wave.

$dumpfile("wave.vcd");
$dumpvars(0, testbench);

74. How do you analyze simulation log files effectively?

Use grep for filtering logs (grep ERROR sim.log).
Search for UVM_ERROR and UVM_FATAL entries.
Enable timestamps in messages so failures can be correlated with waveforms.
Sort unique issues (sort sim.log | uniq -c).

75. What is the use of $time, $realtime, and $strobe in debugging?

Function  | Purpose                            | Example
$time     | Returns integer time               | $display("Time: %t", $time);
$realtime | Returns floating-point time        | $display("Real time: %f", $realtime);
$strobe   | Prints at the end of the time step | $strobe("Value of x: %d", x);

76. Explain back-annotated simulations and when they are required.

Definition: Uses post-layout timing data (e.g., SDF) in verification.
Use case:
Ensures accurate timing after synthesis and layout.
Detects setup-time and hold-time violations.
77. How do you validate performance metrics in verification?

Measure latency ($time - start_time).
Track bandwidth (throughput / time).
Profile transactions in the monitor or scoreboard.
Analyze power consumption with power-aware simulation.

78. What are the best practices for logging in UVM?

Use `uvm_info() instead of $display.
Set verbosity levels (+UVM_VERBOSITY=UVM_HIGH).
Write to separate log files (uvm_report_server).
Avoid logging redundant messages.

79. How do you identify redundant assertions in verification?

Run assertion coverage analysis (e.g., vcover report).
Check assertion hit counts in the coverage database.
Remove assertions that always pass trivially.

80. Explain how checkers improve debugging efficiency.

Checkers automate verification of protocols and pair naturally with assertions for protocol compliance.

if (valid && (addr > MAX_ADDR)) $error("Address out of range!");

81. What is hybrid verification, and when is it used?

Hybrid verification combines multiple verification methods, such as simulation, emulation, and formal verification.
Used when:
Simulation alone is too slow.
Emulation alone is too expensive.
Formal verification is infeasible for large designs.

82. How do you verify a multi-core processor?

Core-level testing: Check individual cores with directed and random tests.
Cache coherency verification: Use MESI/MOESI protocol checkers.
Interconnect verification: Check data consistency across cores.
Performance metrics: Validate latency, throughput, and power.
Concurrency testing: Run parallel workloads to detect race conditions.

83. Explain the role of a verification IP (VIP).

A VIP is a pre-verified component that models a protocol or interface (e.g., AXI, PCIe).
Provides:
Stimulus generation.
Scoreboarding & checkers.
Protocol compliance checking.
Reduces verification effort and accelerates time-to-market.
84. How do you validate an AI accelerator in an FPGA-based design?

Functional validation: Run known AI models (e.g., MNIST, ResNet).
Data path verification: Ensure correct matrix multiplications and activations.
Power & performance testing: Measure FLOPS/Watt.
Golden model comparison: Match results with a software AI framework (TensorFlow, PyTorch).
Latency & throughput testing: Optimize scheduling and memory access.

85. What are the challenges in verifying a RISC-V core?

Custom instruction support: Variability across implementations.
Compliance testing: Ensuring ISA compliance.
Memory model verification: Handling weak memory ordering.
Security validation: Checking side-channel vulnerabilities.
Debug & trace verification: Ensuring correctness of debug features.

86. Explain the use of ML-based verification techniques.

Machine learning is used to:
Predict coverage holes.
Optimize test generation.
Detect patterns in simulation failures.
Examples:
Reinforcement learning for adaptive stimulus.
Supervised learning to classify failing vs. passing tests.
87. How do you verify a power-gated design?

Power-aware simulation: Use UPF (Unified Power Format).
Retention checking: Ensure state retention across power cycles.
Leakage current testing: Verify minimal power draw in standby.
Glitch detection: Avoid unwanted wake-up transitions.

88. Explain the role of Portable Stimulus in verification.

Creates reusable, portable test scenarios across simulation, emulation, and silicon.
Uses the PSS (Portable Stimulus Standard) to describe complex sequences.
Reduces test-creation effort for SoCs with multiple interfaces.

89. What is mixed-signal verification, and how is it done?

Mixed-signal verification verifies both analog and digital components of a design.
Methods:
Co-simulation: Use tools like Cadence AMS or Mentor Questa ADMS.
Behavioral modeling: Model analog components in Verilog-AMS.
Monte Carlo analysis: Check variations in analog signals.
90. How do you handle large-scale regression testing?

Use parallel execution: Run tests across multiple machines.
Leverage cloud or emulation: Speed up execution.
Prioritize tests: Run high-risk tests first.
Use ML-based predictive analysis: Reduce redundant test runs.

91. What are the challenges in verifying a chiplet-based SoC?

Inter-chiplet communication: Ensuring correct packet exchange.
Heterogeneous integration: IP from different vendors.
Timing closure: Synchronization across chiplets.
Power & thermal management: Verifying dynamic voltage scaling.

92. How do you optimize coverage-driven verification for large designs?

Enable hierarchical coverage: Focus on the module level first.
Use coverage filters: Ignore unnecessary toggles.
Merge coverage across runs: Avoid redundant tests.
Leverage ML for coverage analysis: Identify missing scenarios.

93. Explain the difference between emulation and simulation.

Feature   | Simulation            | Emulation
Speed     | Slow                  | Orders of magnitude (~1000x) faster
Accuracy  | High (cycle-accurate) | Near-real-time hardware behavior
Debugging | Easy, full visibility | Limited visibility
Use case  | RTL & unit testing    | Full-system validation
94. How do you verify security features in a hardware design?

Check access control: Verify user privileges.
Run side-channel tests: Detect power-analysis attacks.
Fuzzing & fault injection: Stress-test security mechanisms.
Formal security verification: Use model checking.

95. What are the challenges in verifying automotive SoCs?

Functional safety (ISO 26262 compliance).
Long lifetime requirements: Designs must last 10+ years.
Real-time constraints: Hard deadlines for responses.
Environmental testing: Handling extreme temperature variations.

96. How do you perform post-silicon validation?

Bring-up testing: Power-up and functional tests.
JTAG & scan chain debugging: Access internal states.
Silicon debug with logic analyzers: Identify timing issues.
Automated workload testing: Run real-world applications.

97. What is shadow logic verification, and why is it important?

Shadow logic: Duplicated logic for redundancy (e.g., fault tolerance).
Importance: Ensures fault coverage and reliability.
Use case: Aerospace, automotive, and safety-critical applications.
98. How do you verify a NoC (Network-on-Chip)?

Functional verification: Check packet integrity.
Performance testing: Measure bandwidth and latency.
Traffic pattern simulation: Validate congestion handling.
Fault injection: Test resilience to failures.

99. Explain the verification challenges in DDR memory controllers.

High-speed timing constraints: DDR4/DDR5 operate at GHz speeds.
Protocol compliance: JEDEC standard conformance.
Signal integrity: Avoiding jitter and noise issues.
Multi-port arbitration: Ensuring fair memory access.

100. How do you perform verification signoff for tapeout?

Checklist for signoff:
Functional coverage ≥ 95%.
Code coverage (branch, FSM, toggle) ≥ 90%.
Lint & CDC checks passed.
Timing signoff done (STA).
Power & thermal analysis completed.