
Making Autonomous Cars Safer – One Chip at a Time

Apurva Kalia, Vice President, R&D
Ann Keffer, Product Management Director

© Accellera Systems Initiative 1


Agenda

• Automotive Market
• Complex Challenges
• ISO 26262 and Basic Safety
• Functional Safety Methodology

© Accellera Systems Initiative 2


The Automotive Market

© Accellera Systems Initiative 3


Automotive Semiconductor Growth

© Accellera Systems Initiative 4


Forces Shaping the Automotive Industry
"Automotive Revolution – Perspective towards 2030," a 2016 McKinsey report, identified four areas deemed particularly important in shaping the auto industry through 2030:

• Vehicle electrification – advances to solve high battery costs; proliferation of charging infrastructure
• Increased connectivity – 5G deployment; telematics services; V2I and V2V
• Autonomous driving – advances in ADAS deployment; cost-effective Level 3 and Level 4 by 2020–2025
• Growth of shared mobility services – ride-sharing services; car-sharing services

© Accellera Systems Initiative 5


Autonomous Driving Vehicles after 2020

• Amount of electronics is growing fast
• ADAS based on complex SoCs to enable high-performance computing
• Safety-critical ADAS applications have stringent requirements on
  – Functional Safety
  – Security
  – Reliability

[Figure: levels of driving automation as defined in SAE International standard J3016 – Level 0 (No Automation) and Level 1 (Driver Assistance) through Level 2 (Partial), Level 3 (Conditional), Level 4 (High) and Level 5 (Full Automation), annotated as "hands off," "eyes off" and "mind off." The human driver monitors the driving environment at the lower levels; the automated driving system monitors it at the higher levels. Vehicles in production today sit at the lower levels, while vehicles in development target the higher levels.]
© Accellera Systems Initiative 6


Automotive Opportunities and Focus Areas

ADAS (Camera, Radar, Lidar) – sensor fusion, high-performance computing
- Scalability
- High resolution
- Low power
- Vision + CNN
- Memory bandwidth
- Safety and Security is a must!

Infotainment (Audio, Voice, ANC, …) – basic ADAS features, highly integrated cockpit
- Scalability
- Connectivity
- In-vehicle networking
- SW app availability
- Comprehensive I/F support
- Basic ADAS features

Automotive SoC Sign-off (Safety, Security, Reliability; ISO 26262, AEC-Q100, …) – qualification of new SoCs
- Safety, Security and Reliability
- FMEDA not sufficient for SoCs
- Integrated FMEDA and safety verification flow
- Interfaces to RM & tracing tools
© Accellera Systems Initiative 7
Complex Challenges

© Accellera Systems Initiative 8


The Megatrends Dilemma

[Figure: efficient electric vehicles (reduce emissions) and safe autonomous cars (enhanced safety), driven by government regulations and the EURO NCAP program. Image sources: Volvo, BMW]

Need low-power, small footprint, high-performance SoCs
© Accellera Systems Initiative 9
Making a Car Autonomous

[Figure: a vehicle instrumented with vision, radar, LiDAR and audio sensors all around the car.
Passive features: rear view camera, rear object detection, vision enhancement, parking assist / auto park, auto-dimming headlights, voice recognition, blind spot detection, cabin noise reduction, 360 view, emergency recognition, spatial audio for warnings, lane detection and following, sign recognition, traffic signal recognition.
Active features: front collision avoidance braking, rain/snow/fog removal, adaptive cruise control, pedestrian tracking/avoidance, 360-degree hazard awareness, eye focus detection, collision avoidance, driver monitoring, rear collision detection, vehicle detection/avoidance, fusion (radar, LiDAR, image correlation), system functional safety, system data control.]

Complicated Convolutional Neural Networks

Radar point cloud: ~10-100 KB/sec    LiDAR point cloud: ~10-70 MB/sec    Digital camera: ~20-40 MB/sec

Automated and reliable object recognition using Convolutional Neural Networks

Need a high-performance, low-power hardware platform to combine and analyze point clouds and accurately identify objects
© Accellera Systems Initiative 11
Automotive SoC Verification Challenges
[ADAS SoC example]

• Systematic Failure Verification
• Random Failure Verification
• Concurrent SW Development
• Requirements Traceability
• Use Case Verification
• Performance Verification
• Security Verification
• Automotive Protocol Verification
• Mixed Signal Verification
• Functional Safety Verification

Multiple verification and validation platforms

© Accellera Systems Initiative 12
ISO 26262 and Failure Mode Effects and Diagnostic Analysis

© Accellera Systems Initiative 13


Functional Safety standards

ISO 26262 defines


• Processes to follow
• Hardware/software performance to achieve
• Safety documentation to produce
• Software tools compliance process

© Accellera Systems Initiative 14


Functional Safety Definition—ISO 26262
"Absence of unreasonable risk due to hazards caused by malfunctioning behavior of electrical and/or electronic systems" (ISO 26262)

Malfunction → How much harm can the malfunction cause? (risk) → What level of safety integrity (risk reduction) is needed? → ASIL (Automotive Safety Integrity Level)

[Figure: example malfunctions spanning the ASIL range – dashboard, airbag not firing, braking. ASIL examples for illustration purposes only]
© Accellera Systems Initiative 15


ASIL Determination Example—ISO 26262
For illustration purposes only

Malfunction: ABS system failure          Safety Goal: Prevent ABS failure

Hazard Analysis – What unintended situations (hazards) could happen? → Loss of stability on a split-µ surface

Risk Analysis
• How likely is the hazard to happen? (Exposure) → oil spill, gravel, water, potholes, …
• How harmful is the hazard? (Severity) → Car may spin out of control and crash
• How controllable is the system if the hazard occurs? (Controllability) → dashboard, driver

ASIL Determination – What level of safety (risk reduction) does the system need?
• How likely can the malfunction be? → FIT (Failure In Time)
• How often does the system need to catch it and get to a safe situation? → DC (Diagnostic Coverage)

ASIL (Automotive Safety Integrity Level) ⇒ FIT (Failure In Time), Diagnostic Coverage (DC)


© Accellera Systems Initiative 16
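The Severity/Exposure/Controllability reasoning above reduces to the well-known ISO 26262-3 lookup table. A minimal Python sketch (not from the presentation; the class values chosen are illustrative) of that mapping:

    # Illustrative sketch (not from the presentation) of how ISO 26262-3 combines
    # Severity (S1-S3), Exposure (E1-E4) and Controllability (C1-C3) classes into
    # an ASIL: S3/E4/C3 gives ASIL D, and each one-class reduction in any factor
    # drops the result one level toward QM (quality management, no ASIL).

    def determine_asil(severity: int, exposure: int, controllability: int) -> str:
        """severity in 1..3, exposure in 1..4, controllability in 1..3."""
        score = severity + exposure + controllability        # ranges 3..10
        table = {10: "D", 9: "C", 8: "B", 7: "A"}
        return table.get(score, "QM")

    # An ABS failure judged S3 (severe harm), E4 (high exposure), C3 (hard to
    # control) comes out at ASIL D; relaxing exposure to E2 drops it to ASIL B.
    print(determine_asil(3, 4, 3))   # -> D
    print(determine_asil(3, 2, 3))   # -> B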
ISO 26262—Design and Safety Flow

Concept Phase (ISO 26262 Part 3) → Safety Goals, ASIL, FIT budget (e.g., < 10 FIT)
System Design (ISO 26262 Part 4) → Technical Safety Requirements, ASIL, FIT budget (e.g., < 1 FIT)
SW Design (ISO 26262 Part 6) and HW Design (ISO 26262 Part 5) → SW/HW Technical Safety Requirements, ASIL, FIT budget (e.g., < 0.1 FIT)

[Figure: an example SoC and its many elements – CPU/GPU (ARM/x86), DSP, sensors, memory interfaces (NAND Flash, DDR/LPDDR, HBM, HMC, Wide IO, SD/eMMC, UFS, ONFi, TGL), connectivity (Ethernet, PCIe, USB, MIPI®, HDMI, MHL, M-PCIe™, SSIC, SDIO, DP/eDP), analog blocks (AFE, ADC/DAC, PLL, DLL, LDO, POR, PVT), audio/voice, image/video, baseband, custom logic, memory, and system peripherals]

FIT gets distributed from the item down to each of its elements

© Accellera Systems Initiative 17
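To make the FIT-distribution point concrete, here is an illustrative Python sketch (the budget, element names and gate counts are made up, not from the presentation) that apportions an item-level FIT budget to elements in proportion to gate count:

    # Illustrative sketch: apportion an item-level FIT budget to the elements of
    # a design in proportion to gate count. The 10 FIT budget and element sizes
    # are made-up values; a real allocation follows the project's safety concept.

    def distribute_fit(budget_fit: float, gates_by_element: dict) -> dict:
        total_gates = sum(gates_by_element.values())
        return {name: budget_fit * gates / total_gates
                for name, gates in gates_by_element.items()}

    elements = {"cpu": 500_000, "dsp": 300_000, "ethernet": 120_000, "mipi": 80_000}
    for name, fit in distribute_fit(10.0, elements).items():
        print(f"{name:10s} {fit:5.2f} FIT")   # e.g. cpu gets 5.00 FIT of the budget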
Break

© Accellera Systems Initiative 18


ASIL Hardware Metrics

ASIL   Failure Rate   SPFM           LFM
A      < 1000 FIT     Not relevant   Not relevant
B      < 100 FIT      > 90%          > 60%
C      < 100 FIT      > 97%          > 80%
D      < 10 FIT       > 99%          > 90%

• FIT   Failure In Time (1 failure / 10^9 hours)
• PMHF  Probabilistic Metric for random Hardware Failures
• SPFM  Single Point Fault Metric
• LFM   Latent Fault Metric

FIT and Diagnostic Coverage are the inputs to the SPFM and LFM formulas.

[Figure: a functional block (e.g., a CPU) producing output data alongside a safety mechanism – monitor, detection, alarm generation – whose checker raises an alarm output; reset and clock feed both. The diagnostic coverage of the faults determines whether a safety goal violation can occur undetected.]
© Accellera Systems Initiative 19
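A small Python sketch (illustrative metric values) that checks computed hardware metrics against the ASIL B/C/D targets in the table above:

    # Illustrative sketch checking computed hardware metrics against the ASIL
    # targets above (failure rate in FIT, SPFM and LFM in percent). Thresholds
    # follow ISO 26262-5; the metric values used below are made up. ASIL A only
    # constrains the failure rate, so it is omitted here.

    TARGETS = {                     # ASIL: (max FIT, min SPFM %, min LFM %)
        "B": (100.0, 90.0, 60.0),
        "C": (100.0, 97.0, 80.0),
        "D": (10.0, 99.0, 90.0),
    }

    def meets_asil(asil: str, fit: float, spfm: float, lfm: float) -> bool:
        max_fit, min_spfm, min_lfm = TARGETS[asil]
        return fit < max_fit and spfm > min_spfm and lfm > min_lfm

    print(meets_asil("B", fit=42.0, spfm=93.5, lfm=71.0))   # True
    print(meets_asil("D", fit=42.0, spfm=93.5, lfm=71.0))   # False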


Functional Safety Life Cycle Main Tasks
• The silicon provider is asked to execute five main activities to implement a Functional Safety life cycle in light of the random hardware failure capability:

Life Cycle – Selection of ISO 26262 process requirements and tailoring of the development process for the specific SoC (Safety Manager)

Safety Concept – Definition of the assumed safety requirements for the HW component, for the development of the SoC (Safety Architect)

Safety Analysis – FMEA / FMEDA / DFA (Safety Engineer)

Metrics Computation – Compute the hardware architectural metrics (SPFM, LFM) and PMHF based on the defined safety concept (Safety Engineer)

Reviews/Confirmations – Perform the applicable verification reviews, confirmation reviews, safety audit and assessment (Auditor)

• The Safety Manager is the person in charge of defining and tracking the Functional Safety process, defining the work products, defining the documentation templates, and executing internal reviews.
ISO 26262—Functional Safety Principles

Systematic Failures (e.g., software bug) → Design/Analysis
• Addressed by processes (planning, traceability, documentation, specs, …)
• Strictness of the processes depends on the ASIL level

Random Failures (e.g., component malfunction, noise injection) → Verification
• Considers permanent failures and transient effects
• Includes safety mechanism design and integration to handle faults
• Demonstrated by reliability calculations / verification of failure rates
• Failure rates and diagnostic coverage requirements depend on the ASIL

ISO 26262 covers both random and systematic errors

© Accellera Systems Initiative 21
Functional Safety Metrics
• Target metric values according to the ASIL (Automotive Safety Integrity Level)
• Architectural metrics (measured in %)
  – SPFM: Single Point Fault Metric
    The single point fault metric reveals whether or not the coverage by the safety mechanisms (i.e., the DC), to prevent risk from single point faults in the hardware architecture, is sufficient. Single point faults are faults in an element that lead directly to the violation of a safety goal. ⇒ A high SPFM means that the set of safety mechanisms has a high capability to cover dangerous faults, resulting in a high DC.

  – LFM: Latent Fault Metric
    The latent fault metric reveals whether or not the coverage by the safety mechanisms, to prevent risk from latent faults in the hardware architecture, is sufficient. Latent faults are multiple-point faults whose presence is not detected by a safety mechanism. A latent fault becomes dangerous when a second fault appears and is not detected because of the previously occurring latent fault. ⇒ A high LFM means that the set of safety mechanisms has a high capability to cover multiple-fault (multiple = 2) scenarios.
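A simplified Python sketch of how SPFM and LFM follow from per-failure-mode failure rates and diagnostic coverage (illustrative values; this is a reduced form of the ISO 26262-5 formulas that assumes every failure mode has some safety mechanism, i.e., no pure single-point faults):

    # Simplified, illustrative SPFM/LFM computation. For each failure mode:
    # lam = dangerous failure rate in FIT, dc = diagnostic coverage of the SM,
    # dc_lat = coverage of latent faults. Values are made up, not the FMEDA above.

    failure_modes = [
        {"name": "bus_itf",  "lam": 0.047, "dc": 0.30, "dc_lat": 0.60},
        {"name": "decoder",  "lam": 0.019, "dc": 0.60, "dc_lat": 0.90},
        {"name": "reg_bank", "lam": 0.146, "dc": 0.95, "dc_lat": 0.90},
    ]

    total    = sum(fm["lam"] for fm in failure_modes)
    residual = sum(fm["lam"] * (1 - fm["dc"]) for fm in failure_modes)
    latent   = sum(fm["lam"] * fm["dc"] * (1 - fm["dc_lat"]) for fm in failure_modes)

    spfm = 1 - residual / total               # single point fault metric
    lfm  = 1 - latent / (total - residual)    # latent fault metric
    print(f"SPFM = {spfm:.1%}, LFM = {lfm:.1%}")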
Functional Safety Metrics
• Absolute metric
  – PMHF: Probabilistic Metric for (Random) Hardware Failures
    The sum of the single-point, residual and multiple-point fault contributions, expressed in FITs. ⇒ A low PMHF means a low probability that the SoC, including its safety mechanisms, fails without any detection. It is measured in FIT: 1 FIT = the probability that one failure occurs in 10^9 hours. It represents the probability of violating the safety goal.
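A minimal sketch of a first-order PMHF estimate in FIT (made-up residual rates; the dual-point-fault term of the full ISO 26262-5 formula is ignored here):

    # Illustrative sketch: a first-order PMHF estimate as the sum of the residual
    # (undetected dangerous) failure rates in FIT, ignoring the dual-point-fault
    # term of the full ISO 26262-5 formula. 1 FIT = 1 failure per 1e9 device-hours.

    residual_fit = [0.033, 0.008, 0.007]      # made-up per-failure-mode residuals
    pmhf_fit = sum(residual_fit)              # ~0.048 FIT

    hours = 10_000                            # assumed vehicle operating lifetime
    prob  = pmhf_fit * 1e-9 * hours           # expected failures over that lifetime
    print(f"PMHF ~ {pmhf_fit:.3f} FIT -> ~{prob:.2e} expected failures in {hours} h")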
Work Products and Documentation
• The most relevant documents to be produced during a Functional Safety development and to be used during an assessment:

• Safety Plan – company-related process quality standards, product life cycle, product responsibilities, tool qualification, project activities plan, … (ISO 26262-2:2018, 6.4.3.9)
• Configuration Management Plan – process to control that work products can be uniquely identified and reproduced in a controlled manner at any time, e.g. bug tracking and documentation (ISO 26262-8:2018)
• Change Management Plan – process to manage changes to safety-related work products throughout the safety lifecycle: impact analysis, revisioning, … (ISO 26262-8:2018)
• Safety Requirements – design and safety mechanism requirements compliant with the technical safety report and the system requirements, traceable (ISO 26262-5:2018, Clause 6)
• Requirements Traceability Report – shows the backward and forward traceability of the requirements (ISO 26262-5:2018, 7.4.2.5)
• HW Design Verification Plan – description of the techniques and measures to avoid systematic faults: the pass and fail criteria for the verification, the metrics, the verification environment, the tools used for verification, the regression strategy (ISO 26262-11:2018, 5.1.9, Table 30; ISO 26262-8:2018, Clause 9; ISO 26262-5:2018, 7.4.4, Table 3)
• HW Design Verification Report – results of the verification measures (typically metrics-driven verification), derogations, … (ISO 26262-8:2018, Clause 9)
• Safety Analysis Report – FMEA, FMEDA: safety scope description, base failure rate calculation, fault models applied, analysis assumptions, analysis results, fault injection strategy (how to execute the measures, which workloads, sampling, …), expert judgment evidences, … (ISO 26262-9:2018, Clause 8; ISO 26262-11, 4.6)
• Analysis of Dependent Failures Report – DFA analysis, assumptions, adopted measures and results (ISO 26262-9:2018, Clause 7)
• Confirmation Measure Reports – confirmation reviews of: safety plan, safety analysis, software tool criteria evaluation report, completeness of the safety case, … (ISO 26262-2:2018, Table 1)
• Safety Manual – applied safety life cycle, safety goal, safety scope, AoU description, fault models, safety mechanism descriptions, safety results summary, … (ISO 26262-11, 4.5.4.9)
FMEDA – Capture and Analyze Safety Goals

[Table: example FMEDA worksheet. Settings: permanent base failure rate 1.20E-05 FIT per NAND2-equivalent gate, transient base failure rate 1.64E-03 FIT per flip-flop; computed SPFMp 59.97%, SPFMt 52.76%, LFM not calculated. Each row captures, per part/subpart (BUS_ITF, DECODER, VIC, CPU ALU, FETCH, BUS, …): the failure mode (e.g., "Wrong data transaction caused by a fault in the AHB interface"), gate and flop counts, permanent and transient failure rates (λp, λt), safe fractions, failure mode distribution, diagnostic coverage (DCp, DCt), and the HW safety mechanism credited (E2E, CTRL FLOW, WD, INT MONITOR, PARITY, HW REDUNDANT, RANGE CHK, STL, …).]

A SM can cover more than one FM; one FM can be covered by multiple SMs
© Accellera Systems Initiative 25
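An illustrative Python sketch of the arithmetic behind one such FMEDA row, using the base failure rates from the table above; the safe fraction, FM distribution and DC values are example numbers, not a claim about the real IP:

    # Illustrative FMEDA-row sketch. Base rates come from the settings above
    # (permanent: 1.20e-5 FIT per gate; transient: 1.64e-3 FIT per flip-flop).
    # Gate/flop counts echo the BUS_ITF row; the rest are example values.

    P_FIT_PER_GATE = 1.20e-5
    T_FIT_PER_FLOP = 1.64e-3

    def fmeda_row(gates, flops, fm_dist, safe_frac, dc):
        lam_p = gates * P_FIT_PER_GATE * fm_dist      # permanent rate of this FM
        lam_t = flops * T_FIT_PER_FLOP * fm_dist      # transient rate of this FM
        dangerous = (lam_p + lam_t) * (1 - safe_frac)
        residual  = dangerous * (1 - dc)              # escapes the safety mechanism
        return lam_p, lam_t, residual

    lam_p, lam_t, residual = fmeda_row(gates=836, flops=23,
                                       fm_dist=1.0, safe_frac=0.26, dc=0.30)
    print(f"lam_p={lam_p:.4f} FIT  lam_t={lam_t:.4f} FIT  residual={residual:.4f} FIT")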
FMEDA Analysis
• The user defines the FMEDA hierarchy starting from the design requirements
• Parts and subparts are not one-to-one with the physical implementation

FMEDA hierarchy (from the requirements): the CPU part is split into the subparts BUS_ITF, DECODER, VIC, ALU and FETCH, each with its failure modes – wrong data transaction caused by a fault in the AHB interface; incorrect instruction flow caused by a fault in the decode logic; un-intended execution / not-executed interrupt request; corrupt data or value in the register bank shadow; incorrect instruction result in the multiplier, adder or divider; corrupt data or value in the register bank; incorrect instruction flow in the pipeline controller, the branch logic (wrong branch prediction) or the fetch logic.

[Figure: design hierarchy – CPU → core → bus_if, dec_hi, dec_lo, vic_int, vic_ctrl, alu, fsm_pipe, branch_buffer, branch_fsm, fetch_unit]
FMEDA Analysis
• The user provides a textual description of the FMs (for every subpart) figured out during the failure functional analysis
• The FM definition comes from a cause-effect analysis by the user, starting from the specs or the RTL
  – e.g., from the ALU specs: FM4 – "Corrupt data or value caused by a fault in the register bank shadow"
  – e.g., the ALU function has six different ways to fail (register bank, register bank shadow, adder, multiplier, divider, pipeline FSM)

[Figure: the FMEDA failure-mode list alongside the ALU design hierarchy – alu → reg_bank, reg_shadow, add, mul, div, fsm_pipe]
FMEDA Validation
• FM mapping is performed by the user by associating FMs (defined in the FMEDA) to design instances (hierarchical full path names)

[Figure/table: design hierarchy with instance full path names and design information (non-structural modules excluded) – bus_if: 810 gates / 21 flops; dec: 295 / 12; vic_int: 70 / 2; vic_ctrl: 50 / 6; reg_shadow: 1650 / 20; add: 1100 / 40; mul: 1200 / 60; div: 1500 / 80; reg_bank: 2240 / 60; fsm_pipe: 2320 / 73; branch_fsm: 98 / 4; branch_buffer: 1420 / 35; total 12753 gates / 413 flops. An example FM (FM10) is shown mapped onto its corresponding design instances.]
FMEDA Validation
• Before executing the fault injection campaigns, an FMEDA plan shall be finalized
• The FMEDA validation is executed on an FM basis, meaning that a specific fault campaign is executed for every FM
• The user supplies, again on an FM basis, observation points and detection points according to the verification requirements supplied by the safety engineer (see the sketch below)
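An illustrative sketch (all instance, strobe and FM names are hypothetical) of the per-FM information such a plan might carry – the mapped design instances, the functional observation points and the SM detection points:

    # Illustrative per-failure-mode campaign plan; every name below is a
    # hypothetical placeholder, not taken from the presented design.

    fault_campaigns = {
        "FM1_wrong_ahb_data": {
            "instances":   ["cpu.core.bus_if"],
            "observe":     ["cpu.core.bus_if.hrdata", "cpu.core.bus_if.hresp"],
            "detect":      ["cpu.sm.e2e_checker.alarm"],
            "fault_model": ["stuck-at-0", "stuck-at-1"],
        },
        "FM4_reg_shadow_corrupt": {
            "instances":   ["cpu.core.alu.reg_shadow"],
            "observe":     ["cpu.core.alu.result"],
            "detect":      ["cpu.sm.parity_checker.alarm"],
            "fault_model": ["stuck-at-0", "stuck-at-1", "transient"],
        },
    }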
Use Cases
• As SoC complexity grows, a modular approach is required to initiate an FMEDA and execute its validation
• A team-based FMEDA approach should also be supported to allow splitting the job among different teams, enabling an IP-based methodology
• An IP could be provided by a 3rd-party IP provider and will come with its own FMEDA
© Accellera Systems Initiative 31
Build a Holistic Solution
• Integrate safety mechanisms to reduce the FIT
• Positive testing (functional verification)
  – Verify proper functionality prior to safety verification
• Negative testing (assess diagnostic capability)
  – Targeted tests to confirm failure mode assumptions
  – Statistical tests to ensure design function integrity
  – Transient fault testing to provide evidence of safety mechanism integrity

[Figure: requirements traceability and verification/safety planning feeding both functional verification and functional safety verification of a design with diagnostics, sharing a common database and results]

© Accellera Systems Initiative 32
Build Chips for Safe Autonomous Automobiles

Current Need
• A dedicated functional safety verification methodology and process for these safety-critical IPs and SoCs
• Safety analysis in semiconductors, such as fault injection, fault metrics, base failure rate estimation, interfaces within distributed developments, and handling of hardware Intellectual Property (IP)

Methodology
• A holistic methodology which combines analytical methodologies such as FMEDA with dynamic fault simulation and formal analysis based methodologies to significantly reduce the safety verification effort and achieve faster product certification

Metrics
• ISO 26262 recommends the Single Point Fault Metric (SPFM) and the Latent Fault Metric (LFM) for the component (IPs and SoCs)
• Measured for each of the identified safety goals associated with the safety-critical modules within the IPs and/or SoCs

© Accellera Systems Initiative 33
Safety Verification Challenges and More
[ADAS SoC example]

• Systematic Failure Verification
• Failure Mode Definition
• Safety Mechanism Design
• Fault Campaign Planning
• Safety Requirement Traceability
• Fault Set (+ Optimization)
• Execution
• Verification Environment Re-use
• Multiple Engines Support
• Link to FMEDA (Metrics Calculation)
• Safety Certified IPs
• Tool Confidence Level (TCL)
© Accellera Systems Initiative 34
Break

© Accellera Systems Initiative 35


Safety Verification Methodology

• FMEDA FS architecture analysis is key to reducing the overall FS effort
• Start serial fault injection early on RTL; reuse the same TB and coverage
• Common fault coverage DB to integrate results across engines

[Figure: example 18-month schedule overlapping Architecture, IP/Subsystem D&V, SoC Integration, SoC Implementation, SoC Verification, FMEDA, Fault Campaign Planning, IP/SS Serial Fault Sims, SoC RTL Fault Sims, SoC GL Fault Sims, and SoC SW-Driven Long Tests]

• Start early with FMEDA, fault campaign planning, and flow set-up
• RTL (serial) fault campaigns to clean the flow, select tests, and debug – fewer faults, and RTL simulation is faster
• Complete fault campaigns at gate level for sign-off
• Hardware-accelerated (concurrent) fault simulation for the SW-driven long tests
Typical Functional Safety Workflow

Safety design: define failure modes → design safety mechanisms → FMEDA analysis (FIT/DC estimation from the design information) → goals (ASIL) met? If not, add SMs and repeat.

Verification: design verification plan + testbench → fault list generation → fault list optimization → fault injection campaign → fault DB → metrics met? If not, use new tests/patterns and repeat → safety report.

Traceability and verification management span the whole flow.
© Accellera Systems Initiative 37
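As a toy illustration of the "goals met? if not, add SMs" loop in the workflow above (the numbers and the fixed coverage gain per safety mechanism are made up):

    # Toy, runnable sketch of the iterative loop above: keep adding safety
    # mechanisms (each assumed to add a fixed amount of diagnostic coverage)
    # until the coverage goal for the targeted ASIL is met. All values are
    # illustrative; a real flow would re-run FMEDA and fault campaigns instead.

    def run_workflow(base_dc: float, dc_goal: float, dc_gain_per_sm: float = 0.05):
        dc, sms_added = base_dc, 0
        while dc < dc_goal:                  # "Goals met?" -> "No: add SMs"
            sms_added += 1
            dc = min(1.0, dc + dc_gain_per_sm)
        return dc, sms_added

    dc, n = run_workflow(base_dc=0.82, dc_goal=0.99)   # e.g. an ASIL D style goal
    print(f"reached DC={dc:.2f} after adding {n} safety mechanisms")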
Safety Verification Solution

[Figure: functional verification and safety verification sharing verification tracking of the functional and safety requirements – tool planner, FMEDA plan, tests, verification environment, fault list and fault list optimization, the SoC/subsystem design, a functional coverage runs DB and a fault results DB, the functional verification and fault campaign management tools, and the safety analysis reports]

• Unified functional + safety verification flow and engines
• Integrated fault campaign management across formal, simulation, and emulation
• Common fault results database unifies diagnostic coverage
• Proven requirements traceability, enabling FMEDA integration
© Accellera Systems Initiative 38
Example Design and FMEDA

© Accellera Systems Initiative 39


Safety Mechanisms in Ethernet IP

[Figure: Ethernet GEM IP block diagram – AMBA AHB/AXI and APB interfaces, MDIO master, configuration registers, packet buffer, AVB DMA queue, MIB stats, TSN MAC with L3/L4 filter and 1588 TSU, Tx/Rx with FCS and pause handling, RGMII / GMII(MII) / TBI PCS – annotated with its safety mechanisms:]

• Parity or ECC on CSRs
• Parity or redundancy of the packet/descriptor buffer
• Failure status interrupt
• DMA descriptor address range checking
• Parity protection at timestamp generation
• Redundancy compare
• IP/TCP checksum
• Illegal packet filter
• Ethernet frame FCS
• Anti-lockup watchdog
• Loopbacks
© Accellera Systems Initiative 40


GEM Block – FMEDA Analysis

• TSU, λ = 0.0719 FIT – Fault in TSU compare pulse (FM distribution 0.9%). Effect: the TSU compare interrupt is incorrect. SM implemented: compare logic is duplicated.
• TSU, λ = 0.0719 FIT – Fault in TSU seconds increment pulse (0.9%). Effect: the TSU seconds interrupt is incorrect. SM implemented: interrupt logic is duplicated.
• TSU, λ = 0.0719 FIT – Fault in generation of the TSU strobe pulse to the registers (0.9%). Effect: the timer value may not be captured, or may be captured incorrectly. SM implemented: strobe pulse logic is duplicated.
• TSU, λ = 0.0719 FIT – Fault in TSU timer output value (97.3%). Effect: the TX/RX timestamp is corrupted, the TSU timer value output to the local system is invalid, and the timer value read back in the registers is also invalid. SM implemented: timer logic is duplicated.
• Registers, λ = 0.3013 FIT – Fault in static configuration outputs from the registers (95%). Effect: unpredictable behavior of the IP. SM implemented: parity generation and detection.
© Accellera Systems Initiative 41


Ethernet IP – GEM Block

[Figure: GEM_TOP block diagram – AXI interface, EDMA (TX/RX DMA), TX and RX memories, TSN MAC with FCS filter and L3/L4, PCS, GMII/RGMII/TBI, APB registers and the Time Stamp Unit (TSU) with its protection logic. The failure modes are annotated on the blocks whose faults cause them: FM1–FM6 on the registers and the TSU / TSU protection (SM) block, with block faults injected and strobes placed on the functional outputs and the checker output (CO).]
© Accellera Systems Initiative 42
GEM Block – FMEDA Verification

Block/Subblock  λ [FIT]  Failure Mode                                                   FM Distribution  DC Estimated  DC Achieved
TSU             0.0719   Fault in TSU compare pulse                                     0.9%             95%           96%
TSU             0.0719   Fault in TSU seconds increment pulse                           0.9%             95%           98%
TSU             0.0719   Fault in generation of the TSU strobe pulse to the registers   0.9%             95%           78%
TSU             0.0719   Fault in TSU timer output value                                97.3%            95%           100%
Registers       0.3013   Fault in static configuration outputs from the registers       95%              90%           92.5%
© Accellera Systems Initiative 43


© Accellera Systems Initiative 44
Safety Mechanisms in Ethernet IP

[Figure: the same annotated GEM block diagram and safety-mechanism list shown earlier, repeated here as a recap]
© Accellera Systems Initiative 45


ADAS Platform – SM Enabled

[Figure: ADAS platform SoC – a Tensilica Vision P5 (IVP-EP / Xtensa) vision DSP with I/D caches, I-RAM/D-RAM and DMA; SM-enabled 128 KB on-chip system SRAM blocks; an AXI/AHB/APB system interconnect (NoC) with its own SMs, ECC, auto bus lock-up detection; and peripherals (I2C, UART, QSPI, Boot ROM, timer, Ethernet MAC, SD/SDIO/eMMC, MIPI CSI-2 Rx and MIPI DSI with DPHYs, Pixel2AXI / AXI2Pixel, camera and display interfaces). Annotations: ECC-enabled memories for the Vision P5, Boot ROM self tests (checksum, timer), periodic tests in software, duplication, software-enabled safety mechanisms.]

1. Standalone (out-of-context) verification
2. SM-enabled automotive Ethernet IP, verified in context
3. Top-level SMs outside the IP boundaries – ECC-enabled memories, clock monitors, bus lock-up detection
GEM Block Diagram – SM View

[Figure: GEM_TOP block diagram annotated with its safety mechanisms – ECC protection and data-path parity protection on the TX/RX memories; data-path and lockup protection in the EDMA and MAC; DMA descriptor address/data protection; lockup, bus-response-error, underrun and overrun protection on the AXI interface; register access protection; parity and redundancy protection on the Time Stamp Unit (TSU); IP/TCP checksum; loopback; redundancy protection; illegal packet filter; Ethernet frame FCS]
GEM Block Diagram – Fault Campaign View

[Figure: GEM_TOP block diagram with the failure modes (FM1–FM10) mapped onto the registers, the TSU and the TSU protection (SM checker) logic; block faults are injected, and the functional outputs (FO) and checker output (CO) form the strobe list]

Fault classification:
1. Dangerous Detected (D) – faults observable on both the functional outputs and the SM checker output
2. Dangerous Undetected (U) – faults observable on the functional outputs but not detected by the SM checker
3. Safe Faults (UT) – faults not observable on either the functional outputs or the SM checker outputs

FO and CO strobe list
ISO 26262 Compliant Fault Classification

Legend: DD = dangerous detected faults; DU = dangerous undetected faults; S = safe faults (not violating the safety goal); NC = not classified as S, DD or DU; DC = Diagnostic Coverage.

[Figure: classification flow – formal observability analysis (architectural and functional) first marks part of the total fault list as Safe (S); the remaining faults are fault-injected with the workload (WL) patterns, observing the functional outputs (dangerous = violates the safety goal) and the safety mechanism, yielding DD, DU and S faults plus NC (not classified) faults; formal analysis then classifies the remaining NC faults (DD', DU', S'); DC% and S% are calculated per failure mode, and the workload can be improved iteratively.]

Expert judgment is optional. If applied, the user shall provide additional evidence in place of fault injection and formal analysis to justify the expert judgment.
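The classification rules above condense to a few lines of Python (FO = observed at a functional output, CO = observed at the SM checker output; illustrative sketch, not a tool API):

    # Illustrative fault classification: a fault observed at the functional
    # outputs (FO) is dangerous; if it is also observed at the safety-mechanism
    # checker output (CO) it is detected. Faults never reaching FO are safe;
    # anything not resolved by simulation or formal analysis stays NC.

    def classify(observed_fo: bool, observed_co: bool, resolved: bool = True) -> str:
        if not resolved:
            return "NC"          # not classified -> candidate for formal analysis
        if observed_fo and observed_co:
            return "DD"          # dangerous detected
        if observed_fo:
            return "DU"          # dangerous undetected
        return "S"               # safe (does not violate the safety goal)

    faults = [(True, True), (True, False), (False, False)]
    print([classify(fo, co) for fo, co in faults])   # ['DD', 'DU', 'S']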
Demo Setup and Run (VNC)
• Five failure modes were chosen for the demo to showcase the solution and its automation capabilities

Pre-run:
– fm_tsu_comp_pulse – Fault in TSU compare pulse – showcases the ranking capability and undetected faults, as the SM is not implemented
– fm_tsu_tmr_op_val – Fault in TSU timer output value – showcases the ranking capability and detected faults, as the SM is implemented
– fm_tsu_sec_incr – Fault in TSU seconds increment pulse – run campaign for the TSU module
– fm_tsu_tmr_op_val – Fault in generation of the TSU strobe pulse to the registers – run campaign for the TSU module

Live run:
– fm_tsu_tmr_op_val_samp – Fault in generation of the TSU strobe pulse to the registers – run campaign for the TSU module

• TSU, λ = 0.0719 FIT – Fault in TSU compare pulse (FM distribution 0.9%). Effect: the TSU compare interrupt is incorrect. SM implemented: incomplete.
• TSU, λ = 0.0719 FIT – Fault in TSU seconds increment pulse (0.9%). Effect: the TSU seconds interrupt is incorrect. SM implemented: incomplete.
• TSU, λ = 0.0719 FIT – Fault in generation of the TSU strobe pulse to the registers (0.9%). Effect: the timer value may not be captured, or may be captured incorrectly. SM implemented: incomplete.
• TSU, λ = 0.0719 FIT – Fault in TSU timer output value (97.3%). Effect: the TX/RX timestamp is corrupted; the TSU timer value output to the local system and the register read-back are invalid. SM implemented: timer is duplicated.
• Registers, λ = 0.3013 FIT – Fault in static configuration outputs from the registers (95%). Effect: unpredictable behavior of the IP. SM implemented: parity generation and detection.
Fault Campaign Executor – Interface

Inputs from the FMEDA (campaign initiator):
• Fault List – definition of the faults to be injected
• Strobe List – definition of the observation points

Inputs from the FS verification engineer:
• Test List – tests to be used during the campaign
• Campaign Configuration – defines the campaign parameters

Outputs of the campaign executor:
• Annotated Fault List – the fault classification is back-annotated
• Reports – various kinds according to the use case (e.g., diagnostic coverage)
Fault Campaign Executor – Interface

[Figure: the campaign initiator (e.g., the FMEDA) supplies the fault list, strobe list, test list and campaign configuration to a Preparation → Execution → Reporting flow, which produces the annotated fault list and reports (e.g., diagnostic coverage)]

• Test selection – execute the user-defined list of tests
• Good simulation – fault instrumentation; generate strobe data for each selected test
• Fault simulation setup – prepare the fault simulation, including static and dynamic (formal) fault-set optimization
• Fault simulation execution – simulate each fault with the selected tests
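An illustrative sketch (hypothetical field and signal names, not a real tool format) of the executor's four inputs and its outputs as plain data structures:

    # Illustrative data structures for the campaign executor interface; every
    # field and signal name below is a hypothetical placeholder.

    from dataclasses import dataclass, field
    from typing import Dict, List

    @dataclass
    class CampaignSpec:
        fault_list: List[str]            # faults to inject (from the FMEDA)
        strobe_list: List[str]           # observation / detection points
        test_list: List[str]             # tests to run for each fault
        config: Dict[str, str] = field(default_factory=dict)   # campaign parameters

    @dataclass
    class CampaignResult:
        annotated_faults: Dict[str, str]     # fault -> DD / DU / S / NC
        diagnostic_coverage: float

    spec = CampaignSpec(
        fault_list=["u_tsu.cmp_pulse:SA0", "u_tsu.cmp_pulse:SA1"],
        strobe_list=["gem_top.tsu_cmp_irq", "gem_top.sm_alarm"],
        test_list=["tsu_compare_test"],
        config={"engine": "simulation", "sampling": "none"},
    )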
Campaign Reports – Abstract

[Figure: example campaign report chart showing, per failure mode, the faults proven safe by formal analysis, the detected faults, and the undetected faults that suggest the workload may need to be improved]

Test Coverage = D / (D + U)

Fault Coverage = D / (D + U + UT)
Campaign Reports – Abstract

[Figure: example report for a sampled fault campaign – sampled faults processed, with the same coverage measures]

Test Coverage = D / (D + U)

Fault Coverage = D / (D + U + UT)


© Accellera Systems Initiative 55
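The two coverage measures transcribe directly into code (D = dangerous detected, U = dangerous undetected, UT = safe/unobservable; counts are example values):

    # Direct transcription of the two coverage measures above; the fault counts
    # are made-up example values.

    def test_coverage(d: int, u: int) -> float:
        return d / (d + u)

    def fault_coverage(d: int, u: int, ut: int) -> float:
        return d / (d + u + ut)

    d, u, ut = 940, 35, 225
    print(f"Test coverage  = {test_coverage(d, u):.1%}")       # 96.4%
    print(f"Fault coverage = {fault_coverage(d, u, ut):.1%}")  # 78.3%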
Summary
• Autonomous cars are coming, and 'Mind-Off' driving is expected to be real by the mid-2020s
• ISO 26262 is the automotive standard that defines the processes to follow, the performance levels to achieve for hardware and software, and the tool compliance process
• A systematic analysis technique such as the FMEDA is essential for meeting the ISO 26262 metrics
• The complexity of ADAS SoCs requires a new, holistic approach to functional verification and functional safety
• Functional safety and functional verification are complementary problems
• A multi-engine, automated solution is required to meet ASIL certification goals in a timely manner

© Accellera Systems Initiative 56


Questions

© Accellera Systems Initiative 57


www.cadence.com/automotive
© 2018 Cadence Design Systems, Inc. All rights reserved worldwide. Cadence, the Cadence logo, and the other Cadence marks found at www.cadence.com/go/trademarks are trademarks or registered trademarks of Cadence Design Systems, Inc. All other trademarks are the property of their respective holders.