Module 3: System Verilog
Advanced VLSI
Jagannath B R
Assistant Professor
VVIET
Module 3a
Verification Guidelines: The verification process, methodology basics, directed testing, randomization.
Module 3b
Data Types: Built-in data types, fixed and dynamic arrays, queues, associative arrays, linked lists, array methods, choosing a type.
Why was System Verilog created?
• Constrained-random stimulus generation
• Functional coverage
• Higher-level structures, especially Object Oriented Programming
• Multi-threading and inter-process communication
• Support for HDL types such as Verilog's 4-state values
• Tight integration with the event simulator for control of the design
The Verification Process
[Figure: verification flow chart: does it find bugs? Yes / No]
The Verification Process
What types of bugs are lurking in the design? The easiest ones
to detect are at the block level, in modules created by a single
person.
After the block level, the next place to look for discrepancies is
at boundaries between blocks.
The Verification Process
As you start to integrate design blocks, they can stimulate each other,
reducing your workload. These multiple block simulations may uncover more
bugs, but they also run slower.
At the highest level of the DUT, the entire system is tested, but the simulation
performance is greatly reduced.
Once you have verified that the DUT performs its designated
functions correctly, you need to see how it operates when there are
errors.
Can the design handle a partial transaction, or one with corrupted data or control fields?
Just trying to enumerate all the possible problems is difficult, not to mention how the
design should recover from them. Error injection and handling can be the most challenging
part of verification.
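As a rough sketch of what error injection can look like in SystemVerilog, the class below generates mostly good transactions plus an occasional partial or corrupted one; the field names, error kinds, and distribution weights are assumptions made only for illustration.

class eth_txn;
  typedef enum {GOOD, PARTIAL, BAD_CRC, BAD_CTRL} kind_e;
  rand kind_e    kind;
  rand bit [7:0] data[];

  // Keep payloads bounded; mostly good traffic with a few error cases.
  constraint c_size { data.size() inside {[1:64]}; }
  constraint c_kind { kind dist {GOOD := 90, PARTIAL := 4,
                                 BAD_CRC := 3, BAD_CTRL := 3}; }
endclass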
The Verification Plan
The verification plan is closely tied to the hardware specification and contains a description of what features need to be exercised and the techniques to be used. These steps may include directed or random testing, assertions, HW/SW co-verification, emulation, formal proofs, and use of verification IP. For a more complete discussion on verification, see Bergeron (2006).
Basic Test bench Functionality
The purpose of a test bench is to determine the correctness of the design under test (DUT).
This is accomplished by the following steps and principles:
• Generate stimulus
• Constrained-random stimulus
• Layered testbench using transactors
• Test-specific code kept separate from the testbench
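As a concrete (and deliberately tiny) illustration of these steps, the sketch below generates random stimulus, applies it to a hypothetical registered adder, and checks the response; the DUT, its port names, and the timing are invented for this example.

// Hypothetical DUT: a registered 8-bit adder (invented for this example).
module dut (input  logic       clk,
            input  logic [7:0] a, b,
            output logic [8:0] sum);
  always_ff @(posedge clk) sum <= a + b;
endmodule

module tb;
  logic       clk = 0;
  logic [7:0] a, b;
  logic [8:0] sum;

  dut u_dut (.*);
  always #5 clk = ~clk;               // free-running clock

  initial begin
    repeat (20) begin
      a = $urandom_range(0, 255);     // generate stimulus
      b = $urandom_range(0, 255);
      @(posedge clk);                 // apply it to the DUT
      @(negedge clk);                 // capture the registered response
      if (sum !== a + b)              // check for correctness
        $error("mismatch: %0d + %0d != %0d", a, b, sum);
    end
    $finish;
  end
endmodule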
All these principles are related. Random stimulus is crucial for exercising complex designs. A directed
test finds the bugs you expect to be in the design, while a random test can find bugs you never
anticipated.
When using random stimulus, you need functional coverage to measure verification progress.
Once you start using automatically generated stimulus, you need an automated way to predict the
results, generally with a scoreboard or reference model.
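A minimal sketch of such a scoreboard is shown below, assuming a toy DUT that simply inverts each payload; the class names, the queue-based bookkeeping, and the "invert" reference model are illustrative assumptions.

class packet;
  rand bit [7:0] payload;
endclass

class scoreboard;
  bit [7:0] expected_q[$];                    // predictions, oldest first

  // Reference model: predict what the DUT should output for this packet
  // (here the DUT is assumed to invert the payload).
  function void predict(packet p);
    expected_q.push_back(~p.payload);
  endfunction

  // Compare an observed DUT output against the oldest prediction.
  function void check(bit [7:0] actual);
    bit [7:0] exp;
    if (expected_q.size() == 0) begin
      $error("unexpected output %0h", actual);
      return;
    end
    exp = expected_q.pop_front();
    if (actual !== exp)
      $error("mismatch: expected %0h, got %0h", exp, actual);
  endfunction
endclass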
A layered testbench helps you control the complexity by breaking the problem into manageable
pieces.
With appropriate planning, you can build a testbench infrastructure that can be shared by all tests and
does not have to be continually modified.
You just need to leave “hooks” where the tests can perform certain actions such as shaping the
stimulus and injecting disturbances.
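One common way to leave such hooks is with callback objects that individual tests can extend; the sketch below is a generic illustration, not the API of any particular methodology library, and all class and task names are assumed for the example.

class packet;
  rand bit [7:0] payload;
  bit            corrupt;
endclass

// Base callback: the shared testbench calls these empty hooks.
virtual class driver_cbs;
  virtual task pre_send(packet p); endtask
endclass

// Shared driver: runs every registered hook before sending a packet.
class driver;
  driver_cbs cbs[$];
  task send(packet p);
    foreach (cbs[i]) cbs[i].pre_send(p);
    // ... drive p onto the DUT interface here ...
  endtask
endclass

// Test-specific hook: occasionally corrupt a packet, without touching
// the driver or the rest of the testbench.
class corrupt_cbs extends driver_cbs;
  virtual task pre_send(packet p);
    if ($urandom_range(0, 9) == 0) p.corrupt = 1;
  endtask
endclass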
Building this style of testbench takes longer than a traditional directed testbench,
especially the self-checking portions, causing a delay before the first test can be
run. This gap can cause a manager to panic, so make this effort part of your
schedule.
In Figure 1-3, you can see the initial delay before the first random test runs.
Constrained-Random Stimulus
Figure 1-4 shows the coverage for constrained-random tests over the total
design space. First, notice that a random test often covers a wider space than
a directed one.
This extra coverage may overlap other tests, or may explore new areas that
you did not anticipate.
If these new areas expose a bug, you are in luck! If a new area is not legal, you
need to write more constraints to keep the random generation away from it.
Lastly, you may still have to write a few directed tests to find cases not
covered by any other constrained-random tests.
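A minimal constrained-random stimulus class might look like the sketch below; the transaction fields, address range, and length limits are assumptions chosen only to show how constraints keep the generated values legal.

class bus_txn;
  rand bit [31:0] addr;
  rand bit [7:0]  len;
  rand bit        write;

  // Constraints keep the random values inside the legal space.
  constraint c_addr { addr inside {[32'h0000_1000 : 32'h0000_FFFF]}; }
  constraint c_len  { len > 0; len <= 64; }
endclass

module gen_demo;
  initial begin
    bus_txn t = new();
    repeat (5) begin
      if (!t.randomize()) $error("randomization failed");
      $display("addr=%h len=%0d write=%b", t.addr, t.len, t.write);
    end
  end
endmodule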
Figure 1-5 shows the paths to achieve complete coverage. Start at the upper left with basic constrained-random tests.
In a real-world environment, the DUT's configuration becomes more random the longer the system has been in use.
To test this device, the engineer had to write several dozen lines of directed testbench code to
configure each channel.
In the real world, your device operates in an environment containing other components.
When you are verifying the DUT, it is connected to a testbench that mimics this environment.
You should randomize the entire environment configuration, including the length of the
simulation, number of devices, and how they are configured.
Of course, you need to create constraints to make sure the configuration is legal.
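For example, an environment configuration class along the following lines could randomize the device count, per-device enables, and test length while constraining the combination to stay legal; every field name and range here is an assumption for illustration.

class env_cfg;
  rand int unsigned num_devices;    // how many devices are present
  rand bit [7:0]    device_en;      // per-device enable bits
  rand int unsigned num_txns;       // how long the simulation runs

  constraint c_devices { num_devices inside {[1:8]}; }
  constraint c_length  { num_txns    inside {[100:10000]}; }
  // Legal configurations only: at least one enabled device, and never
  // more enables than devices actually present.
  constraint c_enables { $countones(device_en) inside {[1:num_devices]}; }
endclass

module cfg_demo;
  initial begin
    env_cfg cfg = new();
    if (!cfg.randomize()) $error("could not find a legal configuration");
    $display("devices=%0d enables=%b txns=%0d",
             cfg.num_devices, cfg.device_en, cfg.num_txns);
  end
endmodule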
Input data
A test that uses the shortest delays runs the fastest, but it won’t
create all possible stimulus. You can create a testbench that talks
to another block at the fastest rate, but subtle bugs are often
revealed when intermittent delays are introduced.
A block may function correctly for all possible permutations of
stimulus from a single interface, but subtle errors may occur
when data is flowing into multiple inputs.
What if the inputs arrive at the fastest possible rate, but the
output is being throttled back to a slower rate? What if stimulus
arrives on multiple inputs concurrently? What if it is staggered
with different delays? Use functional coverage to measure which
combinations have been randomly generated.
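The sketch below shows one way to randomize input timing: two stimulus streams run concurrently, each inserting a random number of idle cycles between transactions, so back-to-back, staggered, and delayed arrivals all occur; the stream names and delay range are illustrative assumptions.

module delay_demo;
  bit clk = 0;
  always #5 clk = ~clk;

  // One stimulus stream: a random number of idle cycles before each item.
  task automatic drive_stream(string name);
    repeat (20) begin
      int unsigned gap = $urandom_range(0, 5);  // 0 = fastest possible rate
      repeat (gap) @(posedge clk);
      $display("%0t: %s sends a transaction", $time, name);
      @(posedge clk);
    end
  endtask

  initial begin
    fork
      drive_stream("port_a");   // two inputs driven concurrently,
      drive_stream("port_b");   // each with its own staggered timing
    join
    $finish;
  end
endmodule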
Parallel random testing
A random test consists of the testbench code plus a random seed. If you run the
same test 50 times, each with a unique seed, you will get 50 different sets of
stimuli. Running with multiple seeds broadens the coverage of your test and
leverages your work.
You need to plan how to organize your files to handle multiple simulations. Each
job creates a set of output files such as log files and functional coverage data.
You can run each job in a different directory, or you can try to give a unique
name to each file.
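How the seed is passed to a simulation is tool-specific, but one portable approach is sketched below: read a seed from a plusarg (the +seed=<n> convention here is an assumption, not a standard) and reseed the testbench's random number generator with it.

module seed_demo;
  int unsigned seed;
  initial begin
    if (!$value$plusargs("seed=%d", seed))
      seed = 1;                        // default when no +seed is given
    process::self().srandom(seed);     // reseed this process's RNG
    repeat (5) $display("value = %0d", $urandom_range(0, 99));
  end
endmodule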
Functional Coverage
• The previous sections have shown how to create stimuli that can
randomly walk through the entire space of possible inputs. With
this approach, your testbench visits some areas often, but takes
too long to reach all possible states.
• Unreachable states will never be visited, even given unlimited
simulation time. You need to measure what has been verified in
order to check off items in your verification plan.
• The process of measuring and using functional coverage
consists of several steps.
• First, you add code to the testbench to monitor the stimulus
going into the device, and its reaction and response, to
determine what functionality has been exercised.
• Next, the data from one or more simulations is combined into
a report.
• Lastly, you need to analyze the results and determine how to
create new stimulus to reach untested conditions and logic.
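As a small illustration of the monitoring step, the covergroup below samples an assumed opcode and length on each clock and records which values and combinations the random stimulus has actually produced; the signal names and bin ranges are invented for the example.

module cov_demo;
  bit       clk = 0;
  bit [2:0] opcode;
  bit [7:0] len;
  always #5 clk = ~clk;

  // Record which opcodes, length ranges, and combinations were exercised.
  covergroup cg @(posedge clk);
    cp_op  : coverpoint opcode;
    cp_len : coverpoint len { bins small  = {[1:15]};
                              bins medium = {[16:127]};
                              bins large  = {[128:255]}; }
    op_x_len : cross cp_op, cp_len;
  endgroup
  cg cov_inst = new();

  initial begin
    repeat (50) begin
      opcode = $urandom_range(0, 7);
      len    = $urandom_range(1, 255);
      @(posedge clk);
    end
    $display("coverage = %0.1f%%", cov_inst.get_coverage());
    $finish;
  end
endmodule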
Feedback from functional coverage to stimulus