2. Coverage types
• Coverage is a generic term for measuring the progress of verifying a design. There are two types of coverage: code coverage and functional coverage.
– Code coverage measures:
• How many lines of code have been executed (line coverage).
• Which paths through the code and expressions have been executed (path coverage).
• Which states and transitions of a state machine have been executed (FSM coverage).
• Whether single-bit variables have toggled between 0 and 1 (toggle coverage).
• Most simulators include a code coverage tool.
• Code coverage checks how thoroughly your tests exercised the implementation of the design specification, not the verification plan.
• 100% code coverage does not mean your design is completely verified.
3. Code Coverage
• The code coverage tool will not catch mistakes you make.
– Suppose your design implementation is missing a feature from the design specification. Code coverage will not catch it.
– Suppose you forget to implement the reset logic in your flop. Code coverage will not catch it; it will report that every line in the code is exercised.

module dff (output logic q,
            input  logic clk, d, reset_low);
  always @(posedge clk or negedge reset_low) begin
    q <= d;
  end
endmodule

• The reset logic was not implemented, and the code coverage tool will not catch it.
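• For comparison, a sketch of the flop the specification presumably intended (the module name dff_with_reset is made up here): it adds the asynchronous, active-low reset branch that the version above omits. Code coverage of the buggy version would still report every line as exercised.

module dff_with_reset (output logic q,
                       input  logic clk, d, reset_low);
  // Asynchronous, active-low reset branch missing from the buggy flop
  always @(posedge clk or negedge reset_low) begin
    if (!reset_low)
      q <= 1'b0;
    else
      q <= d;
  end
endmodule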
4. TestBench with Functional Coverage
[Block diagram: inside the test environment, the test drives a generator and driver into the DUT; a monitor, assertions, and a scoreboard/checker observe the DUT; functional coverage is collected from the monitored traffic.]
5. Functional Coverage
• Functional coverage tells you how much of the design has been exercised.
• If the design is exercised completely, for all possible scenarios, then coverage is 100%.
• Create a test plan with the list of tests that are needed, derived from the design specification.
• Look at the coverage results, then decide what actions to take in order to reach 100% coverage.
• The first action is to run the existing tests with more seeds and see if that achieves 100% coverage.
• The next step is to write more constraints and see if that closes the gap.
• Write directed tests only if absolutely necessary.
6. Coverage Convergence
• For example, if we are looking at coverage metrics for a processor:
– Did we cover all instructions?
– Did we cover all addressing modes?
– Did we cover all valid combinations of instructions and addressing modes?
• Functional coverage identifies tested and untested areas of your design and makes sure you test:
– Corner cases
– All states and transitions of every state machine
• Use a combination of seeds, constraints, and directed tests if needed to increase your coverage to 100%.
7. Functional Coverage
[Flow diagram: the design specification feeds both the design and the verification plan; the verification plan drives the tests run on the design; passing tests feed the coverage database, which feeds coverage analysis; failing tests go to debug.]
8. Functional Coverage
• As the previous diagram shows, each test is checked for pass/fail.
– A pass contributes to the coverage analysis.
– A fail has no significance for coverage.
– Functional coverage data is discarded if the test failed due to a design bug.
– The coverage data gives you the verification metrics: a measure of how many items in the verification plan, which is based on the design specification, are complete.
– If the design does not match the specification, then the coverage values have no meaning.
– Analyze coverage over multiple runs with different seeds, more constraints, and directed tests if needed to achieve the coverage goal.
9. covergroup
• As we have seen, functional coverage starts with writing the verification plan from the design specification.
• In SystemVerilog we then write an executable version of that plan using covergroups and cover points.
• The covergroup construct is a keyword that encapsulates the specification of a coverage model. It may include a clocking event that samples the values of variables and expressions, which are called cover points.
• Cover groups are defined using the keywords covergroup and endgroup.
• A covergroup instance is created using the new operator.
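• As a minimal sketch of the points above (the names cg, clk, and state are made up for illustration), a covergroup with a clocking event samples its cover points automatically on that event:

bit clk;
bit [1:0] state;

covergroup cg @(posedge clk);   // sampled on every positive clock edge
  coverpoint state;             // one cover point; automatic bins for each value
endgroup

cg cg_inst = new();             // instance created with the new operator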
10. covergroup
• Coverage model definition
– Encapsulated in the covergroup construct defined by the user
– Clocking events for sampling variables or expressions, which are cover points; this is done using the keyword coverpoint
– Cross coverage between cover points using the keyword cross
– Optional coverage options
• SystemVerilog functional coverage enables
– Variable and expression coverage as well as cross coverage
– Automatic and/or user-defined coverage bins
– Filtering conditions (illegal values, values to ignore, etc.)
– Automatic coverage sample triggering by events and sequences
– Dynamic query of coverage information via built-in methods
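• The cross keyword is not shown elsewhere in these slides, so here is a hedged sketch of cross coverage between two cover points (the opcode and mode fields and the name cg_cpu are made up, loosely following the earlier processor example under Coverage Convergence):

bit [3:0] opcode;   // hypothetical instruction opcode field
bit [1:0] mode;     // hypothetical addressing-mode field

covergroup cg_cpu;
  cp_op     : coverpoint opcode;
  cp_mode   : coverpoint mode;
  op_x_mode : cross cp_op, cp_mode;   // one bin per opcode/mode combination
endgroup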
11. covergroup
• Syntax is

covergroup buslogic;
  ……..
  ……..
endgroup

buslogic bl;
bl = new;

• An instance bl of the covergroup buslogic is created by assigning bl the result of new.
12. covergroup example
program test1(busifc.TB ifc);

  class BusTransaction;
    // properties / variables (data portion)
    rand bit [31:0] data;   // random variables
    rand bit [3:0]  port;   // 16 possible ports
  endclass

  BusTransaction bt;        // transaction handle sampled by the covergroup

  covergroup portdata;
    coverpoint bt.port;     // Measure coverage on the port field
  endgroup

  initial begin
    portdata pd;
    pd = new();                 // Instantiate group
    bt = new();
    repeat (32) begin           // Run a few cycles
      assert(bt.randomize());   // Create a transaction
      ifc.cb.port <= bt.port;   // and transmit
      ifc.cb.data <= bt.data;   // onto the interface
      pd.sample();              // Gather coverage
      @ifc.cb;                  // Wait a cycle
    end
  end
endprogram
13. covergroup example
• In this example a random transaction is created and driven onto the interface.
• The testbench samples the value of the port field using the portdata covergroup.
• There are 16 possible values and 32 random transactions. Did your testbench have enough coverage?
• vcs will generate a coverage report which gives you a summary of your coverage.
• If the coverage does not satisfy your requirements, you can rerun with a new seed value, add more constraints, etc.
14. Sample of vcs® output
Coverpoint Coverage report
CoverageGroup: portdata
Coverpoint: bt.port
Summary
  Coverage: 50
  Goal: 100
  Number of Expected auto-bins: 16
  Number of User Defined Bins: 0
  Number of Automatically Generated Bins: 7
  Number of User Defined Transitions: 0
Automatically Generated Bins
  Bin        # hits    at least
  ================================
  auto[1]    7         1
  auto[2]    7         1
  auto[3]    1         1
  auto[4]    5         1
  auto[5]    4         1
  auto[6]    2         1
  auto[7]    6         1
  ================================
15. Sample of vcs® output
• This is just how the vcs output would look; it is not a real output.
• You can see the testbench generated values for ports 1, 2, 3, 4, 5, 6, and 7, but nothing was generated for ports 0, 8, 9, 10, 11, 12, 13, 14, and 15.
• The "at least" option specifies how many hits are needed before a bin is considered covered.
• The easiest way to improve coverage is to generate more random transactions (more simulation cycles) or to try new seed values.
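• If new seeds are not enough, the earlier slides suggest adding constraints. As a hedged sketch (this inline constraint is made up for illustration, using the bt transaction from the covergroup example), the never-hit ports could be targeted directly:

// Bias generation toward the ports that were never hit (0 and 8-15)
assert(bt.randomize() with { port inside {0, [8:15]}; });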
16. Coverage bins
• Bins can be created implicitly or explicitly.
• If you specify cover points but do not specify any bins, then implicit bins are created.
• In the vcs example we were measuring coverage on each port, and since there are 16 ports it automatically creates 16 bins.
• The number of bins created can be controlled using the auto_bin_max option.

covergroup portdata;
  coverpoint bt.port
    { option.auto_bin_max = 8; }
endgroup

• In this case a total of 8 bins are created, half of 16, with each bin covering 2 ports.
17. Coverage Bins
• Explicit bin creation is the preferred way. Not all cover point values are of interest, so the user can define explicit bins when he knows exactly what he wants to cover. You can also name the bins.

covergroup portdata;
  coverpoint bt.port {
    bins portzero = {0};          // 1 bin for port 0
    bins portone  = {[1:5], 7};   // 1 bin for ports 1:5 and 7
    bins porttwo  = {6, [8:15]};  // 1 bin for ports 6 and 8:15
  }
endgroup // portdata

• In this case we created the 3 bins shown above.
18. Conditional Coverage
• You can add the iff keyword to add a condition to a coverpoint.

covergroup portdata;
  coverpoint bt.port iff (!ifc.reset);
endgroup

• This way you can turn off coverage of the ports during reset, assuming reset is active high.
• You can also use the start and stop functions to turn coverage on and off.
• Here is an example of that.
19. Conditional Coverage
initial begin
  BusTransaction bt;
  portdata pd;
  pd = new();                 // Instantiate group
  bt = new();
  #1 pd.stop();               // stop coverage
  ifc.reset = 1;              // start reset
  #100 ifc.reset = 0;         // end reset
  pd.start();                 // start coverage
  repeat (32) begin           // Run a few cycles
    assert(bt.randomize());   // Create a transaction
    ifc.cb.port <= bt.port;   // and transmit
    ifc.cb.data <= bt.data;   // onto the interface
    pd.sample();              // Gather coverage
    @ifc.cb;                  // Wait a cycle
  end
end
20. Covergroup range
• You can use $ on the right side of a range to specify the upper limit.
• You can use $ on the left side of a range to specify the lower limit.

covergroup portdata;
  coverpoint bt.port {
    bins portzero = {[$:5]};   // 1 bin for ports 0:5
    bins portone  = {[6:7]};   // 1 bin for ports 6 and 7
    bins porttwo  = {[8:$]};   // 1 bin for ports 8:15
  }
endgroup // portdata
21. Creating bins for enumerated types
• For enumerated data types, SystemVerilog creates a bin for each value of the enumerated type.

typedef enum { idle, decode, data_transfer } fsmstate;
fsmstate cstate, nstate;

covergroup cg_fsm;
  coverpoint cstate;
endgroup

• In this case it creates one bin each for idle, decode, and data_transfer.
• If you want to group multiple values into a single bin, then you have to define your bins explicitly, just as we did before; see the sketch below.
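• For example, a sketch that groups the two non-idle states into a single bin (the bin names idle_state and active are made up for illustration):

covergroup cg_fsm;
  coverpoint cstate {
    bins idle_state = {idle};                   // one bin for idle
    bins active     = {decode, data_transfer};  // one bin for both active states
  }
endgroup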
22. Transition Coverage
• Individual state transitions can be specified for a cover point.
• For example, you can check whether port went from 0 to 1, 2, or 3.

covergroup portdata;
  coverpoint bt.port {
    bins p1 = (0 => 1), (0 => 2), (0 => 3);   // Measure transition coverage
  }
endgroup

• Wildcard states and transitions:
– The wildcard keyword is used to specify states and transitions in which x, z, or ? is treated as a wildcard for 0 or 1. Here is a way to create a coverpoint with bins for even and odd values:

bit [2:0] port;
covergroup cp;
  coverpoint port {
    wildcard bins even = {3'b??0};
    wildcard bins odd  = {3'b??1};
  }
endgroup
23. Coverage group with event trigger
• In this case the coverage is sampled only when a trigger event happens. For example, you can sample coverage when the transaction-ready event is triggered from the testbench.
• e.g.

event transaction_ready;
covergroup cp @(transaction_ready);
  coverpoint ifc.cb.port;   // measure coverage for port
endgroup

• In this case we measure coverage on the coverpoint only when the event is triggered.
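• As a hedged sketch of how the testbench side might fire that event (the surrounding driver code is assumed), the -> operator triggers it after a transaction has been driven:

ifc.cb.port <= bt.port;   // drive the transaction
ifc.cb.data <= bt.data;
-> transaction_ready;     // trigger the event; cp samples its cover points here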
24. Cover group in a class
• A cover group can be defined in a module, a program, or a class.
• In every case it has to be instantiated explicitly to start measuring coverage.
• For a class you do not need to create a separate instance name; you can use the original cover group name.
• A cover group is sampled whenever there are new values for its variables or expressions.
• The sampling can be done by calling the sample function or by using an event to trigger the cover group, as we saw under "Coverage group with event trigger".
25. Functional coverage in a class

class BusTransaction;
  rand bit [31:0] data;   // random variables
  rand bit [3:0]  port;   // 16 possible ports
  ……………
  ……………
endclass

class BusTransactor;
  BusTransaction bt;
  mailbox tr_in;

  covergroup portdata;
    coverpoint bt.port;
  endgroup

  function new(mailbox tr_in);
    portdata = new();       // Instantiate embedded covergroup
    this.tr_in = tr_in;
  endfunction

  task main;
    forever begin
      tr_in.get(bt);              // Get next transaction
      ifc.cb.port <= bt.port;     // Send into DUT
      ifc.cb.data <= bt.data;
      portdata.sample();          // Gather coverage
    end
  endtask
endclass