
Module 3

Advanced VLSI

Jagannath B R
Assistant Professor
VVIET
Module 3 a

Verification Guidelines: The verification process, basic testbench functionality, directed testing, methodology basics, randomization, constrained-random stimulus.
Module 3 b

Data Types: Built-in data types, fixed and dynamic arrays, queues, associative arrays, linked lists, array methods, choosing a type.
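The array flavors listed above can be sketched in a short fragment (a minimal illustration; all names are made up for this example):

```systemverilog
module array_types_demo;
  bit [7:0] fixed[4];     // fixed-size array: 4 elements, size set at compile time
  int       dyn[];        // dynamic array: sized at run time with new[]
  int       q[$];         // queue: grows and shrinks, push/pop at either end
  int       aa[string];   // associative array: sparse storage, keyed by string

  initial begin
    dyn = new[8];                 // allocate 8 elements
    q.push_back(7);               // append to the queue
    aa["mode"] = 3;               // create an entry on first write
    $display("dyn size=%0d, q[0]=%0d, aa[\"mode\"]=%0d",
             dyn.size(), q[0], aa["mode"]);
  end
endmodule
```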
Why was SystemVerilog created?

Verilog Hardware Description Language (HDL) became the most widely used language for describing hardware for simulation and synthesis. Hardware Verification Languages (HVLs) such as OpenVera and e were created; companies that did not want to pay for these tools instead spent hundreds of man-years creating their own custom tools. The donation of the OpenVera language formed the basis for the HVL features of SystemVerilog. Accellera's goal was met in November 2005 with the adoption of the IEEE standard P1800-2005 for SystemVerilog, IEEE (2005).
Verification Guidelines

This chapter introduces a set of guidelines and coding styles for designing and constructing a testbench that meets your particular needs.

These techniques use some of the same concepts as those shown in the Verification Methodology Manual for SystemVerilog (VMM), Bergeron et al. (2006), but without the base classes.
Typical features of an HVL that distinguish it from a Hardware Description Language such as Verilog or VHDL are:

• Constrained-random stimulus generation
• Functional coverage
• Higher-level structures, especially Object Oriented Programming
• Multi-threading and interprocess communication
• Support for HDL types such as Verilog's 4-state values
• Tight integration with the event simulator for control of the design
The Verification Process

A designer reads the hardware specification for a block, interprets the human language description, and creates the corresponding logic in a machine-readable form, usually RTL code. He or she needs to understand the input format, the transformation function, and the format of the output.

As a verification engineer, you must also read the hardware specification, create the verification plan, and then follow it to build tests showing that the RTL code correctly implements the features.

What types of bugs are lurking in the design? The easiest ones to detect are at the block level, in modules created by a single person. After the block level, the next place to look for discrepancies is at boundaries between blocks.
The Verification Process

To simulate a single design block, you need to create tests that generate stimuli from all the surrounding blocks.

As you start to integrate design blocks, they can stimulate each other, reducing your workload. These multiple-block simulations may uncover more bugs, but they also run slower.

At the highest level of the DUT, the entire system is tested, but the simulation performance is greatly reduced.

Once you have verified that the DUT performs its designated functions correctly, you need to see how it operates when there are errors. Can the design handle a partial transaction, or one with corrupted data or control fields? Just trying to enumerate all the possible problems is difficult, not to mention how the design should recover from them. Error injection and handling can be the most challenging part of verification.
The Verification Plan

The verification plan is closely tied to the hardware specification and contains a description of what features need to be exercised and the techniques to be used. These steps may include directed or random testing, assertions, HW/SW co-verification, emulation, formal proofs, and use of verification IP. For a more complete discussion on verification see Bergeron (2006).
Basic Testbench Functionality

The purpose of a testbench is to determine the correctness of the design under test (DUT). This is accomplished by the following steps:

• Generate stimulus
• Apply stimulus to the DUT
• Capture the response
• Check for correctness
• Measure progress against the overall verification goals
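The steps above can be sketched as a skeletal testbench. This is a minimal sketch, not a complete methodology: a hypothetical 8-bit `adder` DUT is assumed, and all names are illustrative.

```systemverilog
module tb;
  logic [7:0] a, b, sum;
  adder dut(.a(a), .b(b), .sum(sum));   // assumed DUT with ports a, b, sum

  initial begin
    repeat (10) begin
      a = $urandom;                     // generate stimulus
      b = $urandom;
      #1;                               // apply it and wait for a response
      if (sum !== 8'(a + b))            // capture and check the response
        $error("a=%0d b=%0d sum=%0d", a, b, sum);
    end
    $display("test done");              // progress would normally feed coverage
  end
endmodule
```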


Directed Testing

When verifying the correctness of a design, you may have used directed tests. Using this approach, you look at the hardware specification and write a verification plan with a list of tests, each of which concentrates on a set of related features.

Armed with this plan, you write stimulus vectors that exercise these features in the DUT. You then simulate the DUT with these vectors and manually review the resulting log files and waveforms to make sure the design does what you expect.

Once a test works correctly, you check it off in the verification plan and move on to the next.

This incremental approach makes steady progress.

What if you do not have the necessary time or resources to carry
out the directed testing approach? As you can see, while you
may always be making forward progress, the slope remains the
same. When the design complexity doubles, it takes twice as
long to complete or requires twice as many people. Neither of
these situations is desirable. You need a methodology that finds
bugs faster in order to reach the goal of 100% coverage.
Figure 1-2 shows the total design space and the features that get
covered by directed test cases.
In this space are many features, some of which have bugs. You
need to write tests that cover all the features and find the bugs.
Methodology Basics

• Constrained-random stimulus
• Functional coverage
• Layered testbench using transactors
• Common testbench for all tests
• Test-specific code kept separate from the testbench
All these principles are related.

Random stimulus is crucial for exercising complex designs.

A directed test finds the bugs you expect to be in the design, while a random test can find bugs
you never anticipated.

When using random stimulus, you need functional coverage to measure verification progress.

Once you start using automatically generated stimulus, you need an automated way to predict the
results, generally a scoreboard or reference model.

A layered testbench helps you control the complexity by breaking the problem into manageable
pieces.

Transactors provide a useful pattern for building these pieces.

With appropriate planning, you can build a testbench infrastructure that can be shared by all tests and
does not have to be continually modified.

You just need to leave “hooks” where the tests can perform certain actions such as shaping the
stimulus and injecting disturbances.
 Building this style of testbench takes longer than a traditional directed testbench, especially the self-checking portions, causing a delay before the first test can be run. This gap can cause a manager to panic, so make this effort part of your schedule.
 In Figure 1-3, you can see the initial delay before the first random test runs.
Constrained-Random Stimulus
Figure 1-4 shows the coverage for constrained-random tests over the total
design space. First, notice that a random test often covers a wider space than
a directed one.

This extra coverage may overlap other tests, or may explore new areas that
you did not anticipate.

If these new areas find a bug, you are in luck! If the new area is not legal, you
need to write more constraints to keep away from it.

Lastly, you may still have to write a few directed tests to find cases not
covered by any other constrained-random tests.
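A constrained-random test starts from a class with `rand` fields and `constraint` blocks. The sketch below is illustrative only: the class name, field names, and the "legal" address window are all assumptions for this example.

```systemverilog
class BusTran;
  rand bit [31:0] addr;
  rand bit [31:0] data;
  // Keep addresses inside an assumed legal window, word-aligned
  constraint c_legal { addr inside {[32'h1000:32'h1FFF]};
                       addr[1:0] == 2'b00; }
endclass

module test;
  initial begin
    BusTran tr = new();
    repeat (5) begin
      if (!tr.randomize())            // solver picks values meeting c_legal
        $fatal(1, "randomize failed");
      $display("addr=%h data=%h", tr.addr, tr.data);
    end
  end
endmodule
```

Each call to `randomize()` produces a new legal transaction; tightening or adding constraints is how you steer the stimulus away from illegal areas.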
Figure 1-5 shows the paths to achieve complete coverage. Start at the upper left with basic constrained-random tests. Run them with many different seeds. When you look at the functional coverage reports, find the holes, where there are gaps. Now you make minimal code changes, perhaps with new constraints, or injecting errors or delays into the DUT.
What Should You Randomize?
When you think of randomizing the stimulus to a
design, the first thing that you might think of is the
data fields. This stimulus is the easiest to create –
just call $random.

The problem is that this gives a very low payback in terms of bugs found. The primary types of bugs found with random data are data path errors, perhaps with bit-level mistakes. You need to find bugs in the control logic.
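One way to reach the control logic is to make the control fields `rand`, not just the payload. A hypothetical packet class (all names and ranges are illustrative):

```systemverilog
class Packet;
  typedef enum {READ, WRITE, INTERRUPT} kind_e;
  rand kind_e    kind;       // control field: exercises the decode logic
  rand bit [3:0] len;        // control field: exercises counters and FSMs
  rand bit [7:0] payload[];  // data field: low payback on its own
  // Tie the payload size to the randomized length
  constraint c_len { len inside {[1:8]};
                     payload.size() == len; }
endclass
```

Randomizing `kind` and `len` drives the DUT through different command sequences and lengths, which is where control-logic bugs tend to hide.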
Device and environment configuration
What is the most common reason why bugs are missed during testing of the RTL design?

Not enough different configurations are tried. Most tests just use the design as it comes out of reset, or apply a fixed set of initialization vectors to put it into a known state.

In a real-world environment, the DUT's configuration becomes more random the longer it runs. To test this device, the engineer had to write several dozen lines of directed testbench code to configure each channel.

In the real world, your device operates in an environment containing other components. When you are verifying the DUT, it is connected to a testbench that mimics this environment. You should randomize the entire environment configuration, including the length of the simulation, number of devices, and how they are configured. Of course, you need to create constraints to make sure the configuration is legal.
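Such a configuration can be captured in a randomizable class whose constraints keep it legal. This is a minimal sketch; the field names and ranges are assumptions for illustration:

```systemverilog
class Config;
  rand int unsigned num_channels;    // how many channels to bring up
  rand int unsigned run_cycles;      // length of the simulation
  rand bit          chan_enabled[];  // per-channel enable flags
  // Constraints keep the randomized configuration legal
  constraint c_legal {
    num_channels inside {[1:4]};
    chan_enabled.size() == num_channels;
    run_cycles inside {[100:10_000]};
  }
endclass
```

Randomizing one object like this at the start of each run replaces dozens of lines of per-channel directed setup code.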
Input data

When you read about random stimulus, you probably thought of taking a transaction such as a bus write or ATM cell and filling the data fields with random values. Actually, this approach is fairly straightforward, as long as you carefully prepare your transaction classes.
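With the transaction class prepared, an individual test can shape the random fields using an inline constraint. A hypothetical ATM-cell-like class is assumed here; the field names and the VCI range are illustrative:

```systemverilog
class AtmCell;
  rand bit [11:0] vci;           // virtual channel identifier
  rand bit [7:0]  payload[48];   // data fields filled with random values
endclass

module test;
  initial begin
    AtmCell cell = new();
    // The test narrows the stimulus with an inline "with" constraint
    if (!cell.randomize() with { vci inside {[1:15]}; })
      $fatal(1, "randomize failed");
    $display("vci=%0d payload[0]=%0d", cell.vci, cell.payload[0]);
  end
endmodule
```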
Protocol exceptions, errors, and violations
• If something can go wrong in the real hardware, you should
try to simulate it. Look at all the errors that can occur.
• What happens if a bus transaction does not complete?
• If an invalid operation is encountered?
• Does the design specification state that two signals are
mutually exclusive? Drive them both and make sure the
device continues to operate.
• Just as you are trying to provoke the hardware with ill-
formed commands, you should also try to catch these
occurrences.
• For example, recall those mutually exclusive signals. You
should add checker code to look for these violations
• Your code should at least print a warning message when this
occurs, and preferably generate an error and wind down the
test.
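The mutually-exclusive-signals checker mentioned above can be written as a concurrent assertion. This is a sketch under assumed names: `gnt0` and `gnt1` stand in for whatever pair of signals the design specification declares mutually exclusive.

```systemverilog
module mutex_checker(input logic clk, input logic gnt0, gnt1);
  // The spec (assumed) says gnt0 and gnt1 must never be high together
  property p_mutex;
    @(posedge clk) !(gnt0 && gnt1);
  endproperty

  assert property (p_mutex)
    else $error("Mutually exclusive signals gnt0/gnt1 asserted together");
endmodule
```

Binding a checker like this to the DUT turns a silent protocol violation into an immediate, visible test failure.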
Delays and synchronization

 A test that uses the shortest delays runs the fastest, but it won’t
create all possible stimulus. You can create a testbench that talks
to another block at the fastest rate, but subtle bugs are often
revealed when intermittent delays are introduced
 A block may function correctly for all possible permutations of
stimulus from a single interface, but subtle errors may occur
when data is flowing into multiple inputs.
 What if the inputs arrive at the fastest possible rate, but the
output is being throttled back to a slower rate? What if stimulus
arrives at multiple inputs concurrently? What if it is staggered
with different delays? Use functional coverage to measure what
combinations have been randomly generated.
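The intermittent-delay idea can be sketched as a driver task that inserts a random gap between transactions. This is a minimal sketch: `clk` and `send_one_transaction` are assumed to exist in the enclosing testbench, and the distribution weights are illustrative.

```systemverilog
task automatic drive_with_random_gaps(int n);
  int gap;
  repeat (n) begin
    // Mostly back-to-back traffic, occasionally a long pause
    if (!std::randomize(gap) with
        { gap dist {0 :/ 70, [1:3] :/ 25, [10:20] :/ 5}; })
      $fatal(1, "randomize failed");
    repeat (gap) @(posedge clk);   // stretch the inter-transaction delay
    send_one_transaction();        // assumed driver routine
  end
endtask
```

Weighting the distribution toward zero keeps throughput high while still probing the timing corners that back-to-back-only tests miss.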
Parallel random testing

 A random test consists of the testbench code plus a random seed. If you run the
same test 50 times, each with a unique seed, you will get 50 different sets of
stimuli. Running with multiple seeds broadens the coverage of your test and
leverages your work
 You need to plan how to organize your files to handle multiple simulations. Each
job creates a set of output files such as log files and functional coverage data.
You can run each job in a different directory, or you can try to give a unique
name to each file.
Functional Coverage

• The previous sections have shown how to create stimuli that can
randomly walk through the entire space of possible inputs. With
this approach, your testbench visits some areas often, but takes
too long to reach all possible states.
• Unreachable states will never be visited, even given unlimited
simulation time. You need to measure what has been verified in
order to check off items in your verification plan.
• The process of measuring and using functional coverage
consists of several steps.
• First, you add code to the testbench to monitor the stimulus
going into the device, and its reaction and response, to
determine what functionality has been exercised.
• Next, the data from one or more simulations is combined into
a report.
• Lastly, you need to analyze the results and determine how to
create new stimulus to reach untested conditions and logic.
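The "add code to monitor the stimulus" step is typically a covergroup sampling the DUT's inputs. A hypothetical monitor is sketched below; the signal names and bin boundaries are assumptions for illustration:

```systemverilog
module coverage_monitor(input logic       clk,
                        input logic [2:0] opcode,
                        input logic [3:0] len);
  covergroup cg @(posedge clk);
    coverpoint opcode;                        // which opcodes were exercised
    coverpoint len { bins short_ = {[1:4]};   // group lengths into bins
                     bins long_  = {[5:8]}; }
    cross opcode, len;                        // which combinations were hit
  endgroup

  cg cov = new();   // instantiate; samples automatically on each clock edge
endmodule
```

Simulator reports on `cg` then show exactly which bins and crosses remain unhit, pointing at the holes to target with new stimulus.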
Feedback from functional coverage to stimulus

 A random test evolves using feedback. The initial test can be run with many different seeds, creating many unique input sequences. Eventually the test, even with a new seed, is less likely to generate stimulus that reaches areas of the design space.
 As the functional coverage asymptotically approaches its
limit, you need to change the test to find new approaches
to reach uncovered areas of the design. This is known as
“coverage-driven verification.”
