SystemVerilog 2009 & 2012 Enhancements, by Subash John
This document summarizes enhancements made to SystemVerilog in 2009 and 2012. The 2009 enhancements included final blocks, bit selects of expressions, edge detection for DDR logic, fork-join improvements, and display enhancements. The 2012 enhancements extended enums, added scale factors for real constants and mixed-signal assertions, introduced aspect-oriented programming features, and removed X-optimism using new keywords. It also proposed signed operators and discussed some high-level problems not yet addressed.
- Author: Kuan-Yu Liao, Chia-Yuan Chang, and James Chien-Mo Li, National Taiwan University
- Publication: IEEE Transactions on Computer-Aided Design of Integrated Circuits and Systems, 2011
Paper: SCOTCH: Improving Test-to-Code Traceability using Slicing and Conceptual Coupling
Authors: Abdallah Qusef, Gabriele Bavota, Rocco Oliveto, Andrea De Lucia, David Binkley
Session: Research Track Session 3: Dynamic Analysis
This document discusses randomization using SystemVerilog. It begins by introducing constraint-driven test generation and random testing. It explains that SystemVerilog allows specifying constraints in a compact way to generate random values that meet the constraints. The document then discusses using objects to model complex data types for randomization. It provides examples of using SystemVerilog functions like $random, $urandom, and $urandom_range to generate random numbers. It also discusses constraining randomization using inline constraints and randomizing objects with the randomize method.
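The constrained-random flow this summary describes can be sketched outside SystemVerilog. Below is a minimal Python analogue of `$urandom_range` and `randomize()` with an inline constraint, implemented as rejection sampling; all names here are illustrative, not SystemVerilog API:

```python
import random

def urandom_range(lo, hi, rng=random):
    """Rough analogue of SystemVerilog's $urandom_range: uniform in [lo, hi]."""
    return rng.randint(lo, hi)

def randomize_with(constraint, lo=0, hi=255, max_tries=10_000):
    """Crude analogue of obj.randomize() with an inline constraint:
    keep drawing until the constraint predicate holds."""
    for _ in range(max_tries):
        value = urandom_range(lo, hi)
        if constraint(value):
            return value
    raise RuntimeError("constraint not satisfiable within max_tries")

# Example: a payload length constrained to multiples of 4 in [16, 64]
length = randomize_with(lambda v: 16 <= v <= 64 and v % 4 == 0)
assert 16 <= length <= 64 and length % 4 == 0
```

Real constraint solvers resolve constraints analytically rather than by rejection; the sketch only mirrors the user-facing contract that every returned value satisfies the constraint.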
This document is a lab report submitted by Bhukya Ramesh Naik for an embedded systems design lab at the National Institute of Technology Calicut. The report details 13 experiments conducted using a PSoC microcontroller, including blinking LEDs, switch interfaces, LCD interfaces, timers, PWM, analog to digital conversion, and controlling RGB LEDs using both software and hardware. It also describes a project to control home appliances using DTMF tones. The report includes the aim, block diagrams, code, and results for each experiment.
SystemVerilog Assertions verification with SVAUnit - DVCon US 2016 Tutorial, by Amiq Consulting
This document provides an overview of SystemVerilog Assertions (SVAs) and the SVAUnit framework for verifying SVAs. It begins with an introduction to SVAs, including types of assertions and properties. It then discusses planning SVA development, such as identifying design characteristics and coding guidelines. The document outlines implementing SVAs and using the SVAUnit framework, which allows decoupling SVA definition from validation code. It provides an example demonstrating generating stimuli to validate an AMBA APB protocol SVA using SVAUnit. Finally, it summarizes SVAUnit's test API and features for error reporting and test coverage.
Unit Testing and Continuous Integration, FreeNest 1.4, by JAMK
This document discusses unit testing, integration testing, and continuous integration. It provides information on different levels of testing including unit/module/component testing, integration testing, system testing, and acceptance testing. It also describes techniques for unit testing like test-driven development. Continuous integration is introduced as running automated tests and building software with each code commit. Static and dynamic code analysis, code coverage, and using stubs/mocks for integration testing are also summarized.
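As a concrete instance of the stub/mock technique mentioned above, here is a minimal Python `unittest` example; the function under test and all names are invented for illustration:

```python
import unittest
from unittest import mock

def fetch_and_double(client):
    """Unit under test: depends on an external service client."""
    return 2 * client.get_value()

class FetchAndDoubleTest(unittest.TestCase):
    def test_doubles_stubbed_value(self):
        # Replace the real dependency with a stub returning a fixed value,
        # so the unit is exercised in isolation from the service.
        stub = mock.Mock()
        stub.get_value.return_value = 21
        self.assertEqual(fetch_and_double(stub), 42)
        stub.get_value.assert_called_once()

if __name__ == "__main__":
    unittest.main(exit=False)
```

In a continuous-integration setup, a test like this runs automatically on every commit, which is exactly the feedback loop the document describes.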
The document discusses several advanced verification features in SystemVerilog including the Direct Programming Interface (DPI), regions, and program/clocking blocks. The DPI allows Verilog code to directly call C functions without the complexity of Verilog PLI. Regions define the execution order of events and include active, non-blocking assignment, observed, and reactive regions. Clocking blocks make timing and synchronization between blocks explicit, while program blocks provide entry points and scopes for testbenches.
TMPA-2017: Evolutionary Algorithms in Test Generation for digital systems, by Iosif Itkin
TMPA-2017: Tools and Methods of Program Analysis
3-4 March, 2017, Hotel Holiday Inn Moscow Vinogradovo, Moscow
Evolutionary Algorithms in Test Generation for digital systems
Yuriy Skobtsov, Vadim Skobtsov, St. Petersburg Polytechnic University
For the presentation, follow the link: https://www.youtube.com/watch?v=gUnKmPg614k
Visit our website:
www.tmpaconf.org
www.exactprosystems.com/events/tmpa
The document describes the implementation of 16-bit and 64-bit shift registers using VHDL in data flow modeling. It includes the VHDL code, test bench, and simulation results for shift registers that shift the values in the input register right by 1 bit position on the positive edge of the clock. The 16-bit shift register outputs the shifted value on q1 and the 64-bit shift register outputs the shifted value on q2. The design and functionality of both shift registers are verified through simulation.
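The shift-register behavior described above can be modeled in a few lines of Python for quick cross-checking against the VHDL simulation; the zero-fill at the MSB is an assumption, since the summary does not say what is shifted in:

```python
def shift_right(reg, width):
    """One clock edge of the shift register: logical right shift by one
    bit position, zero filled at the MSB (assumed), masked to `width` bits."""
    return (reg >> 1) & ((1 << width) - 1)

# Drive a 16-bit register (q1 in the document) for two "clock edges"
q1 = 0b1000_0000_0000_0001
for _ in range(2):
    q1 = shift_right(q1, 16)
print(bin(q1))   # 0b10000000000000 -- the LSB fell out, the MSB moved down twice
```

The same function with `width=64` models the 64-bit register on q2.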
This document provides an introduction to VHDL, including:
- VHDL allows modeling and developing digital systems through modules that can be reused, with in/out ports and behavioral or structural specification.
- Models can be tested through simulation and used for synthesis.
- There are three ways to specify models: dataflow, behavioral, and structural. Behavioral models describe algorithms, structural models compose subsystems.
- A test bench applies inputs to verify a model's outputs through simulation.
This document provides an introduction to verification and the Universal Verification Methodology (UVM). It discusses different types of verification including simulation, functional coverage, and code coverage. It describes how simulators work and their limitations in approximating physical hardware. It also covers topics like event-driven simulation, cycle-based simulation, co-simulation, and different types of coverage metrics used to ensure a design is fully tested.
The document provides information about a lab manual for Verilog programs for the 4th year 1st semester Electronics and Communication Engineering course. It includes the course objectives, outcomes, list of experiments and programs to be covered. The programs include designing basic logic gates using Verilog HDL, a 2-to-4 decoder, and layout and simulation of CMOS circuits. It provides Verilog code examples for logic gates and the 2-to-4 decoder along with simulation results. It also includes theory and vivas related to the experiments.
This document discusses various topics related to computer graphics and input devices, including:
1. It provides an overview of polar coordinates and how to convert between polar and Cartesian coordinates.
2. It describes different input device modes including request, sample, and event mode and provides examples of each.
3. It covers color information and graphics functions in OpenGL related to color, including color tables, pixel arrays, and color functions.
4. It discusses additional graphics functions in OpenGL related to points, lines, polygons and filling algorithms.
The document discusses the building blocks of a SystemVerilog testbench. It describes the program block, which encapsulates test code and allows reading/writing signals and calling module routines. Interface and clocking blocks are used to connect the testbench to the design under test. Assertions, randomization, and other features help create flexible testbenches to verify design correctness.
The document discusses various ATPG (Automatic Test Pattern Generation) methods and algorithms. It provides an introduction to ATPG, explaining that ATPG generates test patterns to detect faults in circuits. It then covers major ATPG classifications like pseudorandom, ad-hoc, and algorithmic. Several algorithmic ATPG methods are described, including the D-algorithm, PODEM, FAN, and genetic algorithms. Sequential ATPG is more complex due to memory elements. The summary reiterates that testing large circuits is difficult and many ATPG methods have been developed for combinational and sequential circuits.
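The core ATPG idea, finding an input pattern on which the faulty circuit's output differs from the fault-free one, can be shown with a toy exhaustive generator in Python. The circuit and fault site are invented for illustration; real algorithms like the D-algorithm and PODEM search far more cleverly than brute force:

```python
from itertools import product

def good_circuit(a, b, c):
    """Fault-free toy combinational circuit: y = (a AND b) OR c."""
    return (a & b) | c

def faulty_circuit(a, b, c):
    """Same circuit with the internal AND gate output stuck-at-0."""
    and_out = 0            # injected stuck-at-0 fault
    return and_out | c

def find_test_patterns():
    """Keep every input pattern that distinguishes good from faulty,
    i.e. every pattern that detects the fault."""
    return [p for p in product((0, 1), repeat=3)
            if good_circuit(*p) != faulty_circuit(*p)]

print(find_test_patterns())   # [(1, 1, 0)] is the only detecting pattern
```

For sequential circuits the same comparison must hold over a sequence of cycles, which is why the document notes sequential ATPG is harder.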
This tutorial is intended for verification engineers who must validate algorithmic designs. It presents the detailed steps for implementing a SystemVerilog verification environment that interfaces with a GNU Octave mathematical model. It describes the SystemVerilog-to-C++ communication layer and its challenges, such as proper process creation and activation and synchronization handling for the piped algorithm. The implementation is illustrated for Ncsim, VCS, and Questa.
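The piped-synchronization challenge such tutorials deal with can be illustrated with a tiny Python sketch: a child process stands in for the external mathematical model, and the parent must flush each request and block on the reply to stay in lockstep. The doubling "model" is invented for illustration:

```python
import subprocess
import sys

# Child process standing in for the external model: reads one number per
# line, writes back its double. flush on both ends keeps the pipe in lockstep.
model = subprocess.Popen(
    [sys.executable, "-c",
     "import sys\nfor line in sys.stdin: print(float(line) * 2.0, flush=True)"],
    stdin=subprocess.PIPE, stdout=subprocess.PIPE, text=True)

model.stdin.write("21\n")
model.stdin.flush()                     # without the flush, the child may never see the request
reply = model.stdout.readline().strip()
print(reply)                            # 42.0

model.stdin.close()
model.wait()
```

The same request/flush/blocking-read discipline is what a DPI-based bridge to an external process must implement, whatever the simulator.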
The document provides guidelines for writing SystemVerilog code, including:
- Use descriptive names, consistent formatting, and comments to document code clearly.
- Structure code into classes that encapsulate related functionality and name classes after their purpose.
- Declare private class members with m_ prefix and define class methods externally.
- Organize files into directories based on functionality for better maintenance of code.
The document provides an overview of using SystemVerilog coverage, including:
- The two types of functional coverage in SystemVerilog: cover properties and covergroups
- Examples of defining cover properties and covergroups
- Tips for using shorthand notation, adding covergroup arguments, and utilizing coverage options to make covergroups more flexible and reusable
The document provides an overview of the UVM configuration database and how it is used to store and access configuration data throughout the verification environment hierarchy. Key points include: the configuration database mirrors the testbench topology; it uses a string-based key system to store and retrieve entries in a hierarchical and scope-controlled manner; and the automatic configuration process retrieves entries during the build phase and configures component fields.
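The string-keyed, scope-controlled lookup described above can be sketched as a toy Python class. This is a deliberate simplification of `uvm_config_db` semantics: real UVM resolves conflicts by precedence and build-phase hierarchy, while here the last matching setting simply wins:

```python
import fnmatch

class ConfigDB:
    """Toy analogue of the UVM configuration database: values are stored
    under (scope-glob, field) string keys and retrieved by hierarchical path."""
    def __init__(self):
        self._entries = []

    def set(self, scope, field, value):
        self._entries.append((scope, field, value))

    def get(self, path, field, default=None):
        result = default
        for scope, f, value in self._entries:
            if f == field and fnmatch.fnmatch(path, scope):
                result = value        # last matching entry wins (simplified)
        return result

db = ConfigDB()
db.set("env.*", "num_agents", 2)            # broad default for the whole env
db.set("env.agent0.*", "num_agents", 4)     # more specific override
print(db.get("env.agent0.driver", "num_agents"))   # 4
print(db.get("env.agent1.driver", "num_agents"))   # 2
```

The wildcard scopes are what lets a test configure components deep in the hierarchy without knowing their exact paths, mirroring how the database "mirrors the testbench topology".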
Reusable Continuous-Time Analog SVA Assertions, by Régis SANTONJA
This paper shows how SystemVerilog Assertions (SVA) modules can be bound to analog IP blocks, whether they are modeled at behavioral or transistor level, enabling the assertions to become a true IP deliverable that can be reused at SoC level. It also highlights how DPIs can address analog-assertion specifics, such as eliminating hierarchical paths, especially when probing currents. The paper also demonstrates how to switch seamlessly between digital (wreal) and analog models without breaking the assertions. Finally, it demonstrates how one can generate an adaptive clock to continuously assert analog properties whose stability over time is critical, such as current or voltage references or supplies.
The document discusses assertion based verification and interfaces in SystemVerilog. It describes immediate assertions which execute in zero simulation time and can be placed within always blocks. Concurrent assertions check properties over time and are evaluated at clock edges. The document also introduces interfaces in SystemVerilog which allow defining communication ports between modules in a single place, reducing repetitive port definitions. Interfaces can include protocol checking and signals can be shared between interface instances.
Upgrading to System Verilog for FPGA Designs, Srinivasan Venkataramanan, CVC, FPGA Central
This document discusses upgrading FPGA designs to SystemVerilog. It presents an agenda that covers SystemVerilog constructs for RTL design, interfaces, assertions, and success stories. It then discusses the SystemVerilog-FPGA ecosystem. The presenter has over 13 years of experience in VLSI design and verification and has authored books on verification topics including SystemVerilog assertions. SystemVerilog is a superset of Verilog-2001 and offers enhanced constructs for modeling logic, interfaces, testbenches and connecting to C/C++.
An integrated approach for designing and testing specific processors, VLSICS Design
This paper proposes a validation method for the design of a CPU in which, in parallel with the development of the CPU, a testbench is manually described that performs automated testing on the instructions as they are described. The testbench consists of the original program memory of the CPU and is also coupled to the internal registers, ports, stack, and other components of the project. The program memory sends the instructions requested by the processor and checks the results of those instructions, progressing or not with the tests. The proposed method resulted in a CPU compatible with the instruction set and registers present in the PIC16F628 microcontroller. To show the usability and success of the debugging method employed, this work shows that the CPU developed is capable of running real programs generated by compilers existing on the market. The proposed CPU was mapped onto an FPGA and, using Cadence tools, synthesized in silicon.
This document discusses code coverage and functional coverage. It defines code coverage as measuring how much of the source code is tested by verification. It describes different types of code coverage like statement coverage, block coverage, conditional coverage, branch coverage, path coverage, toggle coverage and FSM coverage. It then discusses functional coverage, which measures how much of the specification is covered, rather than just the code. It notes some advantages of functional coverage over code coverage.
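The distinction drawn above can be made concrete with a small Python sketch that computes both metrics for a hypothetical run; all numbers and bin names are invented:

```python
def coverage_report(executed_stmts, total_stmts, hit_bins, all_bins):
    """Code coverage     = fraction of source statements that executed.
    Functional coverage  = fraction of specification bins observed."""
    code_cov = 100.0 * len(executed_stmts) / total_stmts
    func_cov = 100.0 * len(hit_bins & all_bins) / len(all_bins)
    return code_cov, func_cov

# 9 of 10 statements ran, but only 2 of 4 specified scenarios were seen:
code_cov, func_cov = coverage_report(
    executed_stmts=set(range(9)), total_stmts=10,
    hit_bins={"single_read", "single_write"},
    all_bins={"single_read", "single_write", "burst", "error_response"})
print(code_cov, func_cov)   # 90.0 50.0
```

The gap between the two numbers is the document's point: high code coverage can coexist with untested specification features.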
The document provides an overview of QuickTest Professional (QTP) and its key features and functionality. It discusses QTP's basic features, the elements that make up the QTP tool window like the test pane, active screen, and data table. It also summarizes the QTP testing process which involves planning, generating tests through recording or programming, enhancing tests with checkpoints and parameters, debugging, running tests, and reporting results.
The document describes the implementation of various digital logic circuits like D latch, D flip flop, JK flip flop, multiplexers, decoders, counters etc. using VHDL. It includes the VHDL code, test benches and synthesis reports for each circuit. The aim is to design the circuits in behavioral and structural modeling and verify their functionality.
The document discusses using algorithmic test generation to improve functional coverage in existing verification environments. It describes limitations of current constrained random stimuli generation techniques for complex designs. Algorithmic test generation uses rule graphs and action functions to efficiently target coverage goals without requiring extensive changes to verification environments. A case study shows algorithmic test generation achieved coverage goals over 600x faster than constrained random for an AXI bus bridge design while requiring minimal changes to the testbench.
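A rule graph in this sense can be approximated as a directed graph of legal protocol actions. The sketch below uses an invented mini-protocol, not the paper's AXI rules, and generates only sequences that the graph permits:

```python
import random

# Invented rule graph for a toy bus protocol: each state lists the legal
# next actions. Real rule graphs also attach action functions to the edges.
RULES = {
    "idle":        ["start_read", "start_write"],
    "start_read":  ["complete"],
    "start_write": ["complete"],
    "complete":    ["idle"],
}

def generate_sequence(length, rng=random):
    """Random walk over the rule graph: every emitted sequence is legal
    by construction, unlike unconstrained random stimulus."""
    state, seq = "idle", []
    for _ in range(length):
        state = rng.choice(RULES[state])
        seq.append(state)
    return seq

seq = generate_sequence(6)
# every consecutive transition obeys the rules
assert all(b in RULES[a] for a, b in zip(["idle"] + seq, seq))
```

Biasing the walk toward edges whose coverage bins are still empty is one way such generators target coverage goals directly.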
Processor Verification Using Open Source Tools and the GCC Regression Test Suite, DVClub
The document summarizes a case study using open source tools to verify the OpenRISC 1200 processor implementation against its reference architectural simulation using the GCC regression test suite. Key aspects included:
1) Using the 53,000+ test GCC regression test suite to verify the SystemC design model against the reference Or1ksim architectural simulator.
2) Initial results found errors in both the RTL implementation and Or1ksim reference model, helping to improve both.
3) Connecting the GNU Debugger to drive the SystemC model via a remote serial protocol server, allowing the GCC regression tests to be used for verification.
The document discusses Open-DO, an open source initiative for developing safety-critical software. It provides an overview of Open-DO concepts like FLOSS, agile development practices, and high-integrity certification. Updates on Open-DO include new community projects, conferences, and tools to support qualifications. Formal methods like Couverture and Hi-Lite are presented as ways to verify properties and generate verification conditions for proof.
This document describes a software-based technique for detecting soft errors in processor-based digital architectures. The technique works by transforming the original application code into a new version that duplicates variables and operations to check for bit flips. Experimental results from fault injection and radiation testing on a DSP processor show the technique can efficiently detect errors, with a detection rate around 90% and low failure rate. While the transformed code requires more memory and runs slower, the low-cost software-only approach makes it suitable for safety-critical low-cost applications.
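The transformation the document describes can be sketched in Python: each variable and operation is duplicated, and a mismatch between the two copies flags a soft error. The routine and the single-bit fault injection are illustrative, not the paper's actual code transformer:

```python
def duplicated_add(a, b):
    """Hardened version of r = a + b: the variables and the operation are
    duplicated, and the copies are compared before the result is used."""
    a1, a2 = a, a              # duplicated variables
    b1, b2 = b, b
    r1, r2 = a1 + b1, a2 + b2  # duplicated operation
    if r1 != r2:
        raise RuntimeError("soft error detected")
    return r1

def run_with_bit_flip(a, b, flip_bit=0):
    """Same routine with a simulated bit flip injected into one copy of `a`,
    as a fault-injection campaign would do."""
    a1, a2 = a, a ^ (1 << flip_bit)   # corrupt the second copy only
    r1, r2 = a1 + b, a2 + b
    return "detected" if r1 != r2 else "escaped"

print(duplicated_add(2, 3))      # 5
print(run_with_bit_flip(2, 3))   # detected
```

The memory and runtime overhead the document mentions is visible even here: every value is stored twice and every operation runs twice.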
Bounded Model Checking for C Programs in an Enterprise Environment, AdaCore
This document discusses using bounded model checking to analyze C programs at scale in an enterprise environment. It describes compiling thousands of software packages using a tool called goto-cc that converts C code to an intermediate representation. This allows running verification tools to find bugs. Many bugs were found and reported, improving quality. The goal is to focus on developing verification methods and analyzing a large codebase to find more bugs and security issues.
The document is a tutorial that introduces the C-Script block in PLECS, which allows implementing custom controllers and components using C code. It discusses how the C-Script block interfaces with the simulation engine via function calls, its parameters including sample time settings, and provides exercises to implement a mathematical function and digital PI controllers with and without calculation delays.
1. The document discusses implementing distributed mClock in Ceph for quality of service (QoS). It describes implementing QoS units at the pool, RBD image, and universal levels.
2. It covers inserting delta/rho/phase parameters into Ceph classes for distributed mClock. Issues addressed include the number of shards and background I/O.
3. An outstanding-I/O-based adaptive throttle is introduced to suspend mClock scheduling if the I/O load is too high. Testing showed it effectively maintained maximum throughput.
4. Future plans include improving the mClock algorithm, extending QoS to individual RBD images, adding metrics, and testing in various environments, in collaboration with the community.
DVClub 2012: TLM-Based Software Control of UVCs for Vertical Verification Re..., by Amit Bhandu
To provide full controllability to the C test developer over the verification components, a virtual layer can be created using the capabilities of the TLM 2.0 layer in both SystemC and UVM.
This virtual layer exposes the sequences of the UVC into SystemC TLM 2.0, which enables embedded software engineers to configure and control the verification IPs from embedded software and generate the same advanced stimuli and exhaustive coverage as provided by UVCs.
The result is a TLM vertical verification reuse methodology that enables reuse of the IP verification environment and test cases in the SoC verification/validation environment.
Fall 2016 Insurance Case Study – Finance 360, Loss Control (Loss.docx), by lmelaine
The document provides instructions for a coursework assignment on analyzing loss control policies of an S&P 500 company. Students are asked to select a company and analyze its policies focusing on environmental loss prevention, catastrophic loss prevention, or employee-related risk management. The analysis should address the likelihood and potential impacts of losses, as well as the company's loss control activities. Requirements include an 8-12 page paper with discussion of the topic areas, works cited, and any attachments. The paper is due by the deadline for extra credit activities in the course.
Kroening et al, v2c a verilog to c translatorsce,bhopal
The document describes v2c, a tool that translates Verilog to C. v2c accepts synthesizable Verilog as input and generates equivalent C code called a "software netlist". The translation is based on Verilog's synthesis semantics and preserves cycle accuracy and bit precision. The generated C code can then be used for hardware property verification, co-verification, simulation, and equivalence checking by leveraging software verification techniques.
This document appears to be a thesis submitted by Conor McMenamin for their B.Sc. in Computational Thinking at Maynooth University. The thesis investigates existing standards for selecting elliptic curves for use in elliptic curve cryptography (ECC) and whether it is possible to manipulate the standards to exploit weaknesses. It provides background on elliptic curve theory, cryptography, and standards. The document outlines requirements and proposes designing a system to test manipulating the standards by choosing curves with a user-selected parameter ("BADA55") to simulate exploiting a weakness. It describes implementing and testing the system before concluding and discussing future work.
IRJET- Design and Characterization of MAEC IP CoreIRJET Journal
This document describes the design and implementation of a Message Authentication Code (MAC) with integrated error correction capability called MAEC. MAEC uses a cellular automata (CA) based error correcting code to provide resilience against random errors during transmission. The key steps are: (1) Data is padded and partitioned into blocks, (2) A random maximal-length CA rule is selected based on a key using Rabin's irreducibility test, (3) Checkbytes are computed by encoding the data blocks using the CA, (4) The checkbytes and a key are mixed using NMix to generate the MAC tag, (5) During verification, received checkbytes are compared to recomputed checkbytes using the CA to detect errors.
HKG15-300: Art's Quick Compiler: An unofficial overviewLinaro
HKG15-300: Art's Quick Compiler: An unofficial overview
---------------------------------------------------
Speaker: Matteo Franchin
Date: February 11, 2015
---------------------------------------------------
★ Session Summary ★
One of the important technical novelties introduced with the recent release of Android Lollipop is the replacement of Dalvik, the VM which was used to execute the bytecode produced from Java apps, with ART, a new Android Run-Time. One interesting aspect of this upgrade is that the use of Just-In-Time compilation was abandoned in favour of Ahead-Of-Time compilation. This delivers better performance [1], while also leaving a good margin for future improvements. ART was designed to support multiple compilers. The compiler that shipped with Android Lollipop is called the “Quick Compiler”. It is simple, fast, and derived from Dalvik’s JIT compiler. In 2014 our team at ARM worked in collaboration with Google to extend ART and its Quick Compiler to add support for 64-bit and for the A64 instruction set. These efforts culminated in the recent release of the Nexus 9 tablet, the first 64-bit Android product to hit the market. Despite Google’s intention of replacing the Quick Compiler with the so-called “Optimizing Compiler”, the job of the Quick Compiler is not yet over. Indeed, the Quick Compiler will remain the only usable compiler in Android Lollipop. Therefore, all competing parties in the Android ecosystem have a huge interest in investigating and improving this component, which will very likely be one of the battlegrounds in the Android benchmark wars of 2015. This talk aims to give an unofficial overview of ART’s Quick Compiler. It will first focus on the internal organisation of the compiler, adopting the point of view of a developer who is interested in understanding its limitations and strengths. The talk will then move on to exploring the output produced by the compiler, discussing possible strategies for improving the generated code, while keeping in mind that this component may have a limited life-span, and that any long-term work would be better directed towards the Optimizing Compiler.
[1] The ART runtime, B. Carlstrom, A. Ghuloum, and I. Rogers, Google I/O 2014, https://www.youtube.com/watch?v=EBlTzQsUoOw
--------------------------------------------------
★ Resources ★
Pathable: https://hkg15.pathable.com/meetings/250804
Video: https://www.youtube.com/watch?v=iho-e7EPHk0
Etherpad: N/A
---------------------------------------------------
★ Event Details ★
Linaro Connect Hong Kong 2015 - #HKG15
February 9-13th, 2015
Regal Airport Hotel Hong Kong Airport
---------------------------------------------------
http://www.linaro.org
http://connect.linaro.org
This document discusses a framework for certifying workflows in a component-based cloud computing platform for high-performance computing (HPC) services. It presents a Scientific Workflow Component Certifier (SWC2) that uses the mCRL2 model checking toolset to verify workflows meet safety and liveness properties. It describes translating scientific workflows specified in SAFeSWL into the mCRL2 input language and evaluating default and application-specific properties. As a case study, it models and certifies a MapReduce workflow in the HPC Shelf system, checking 20 formal properties and measuring certification times for different components.
Keynote (Mike Muller) - Is There Anything New in Heterogeneous Computing - by...AMD Developer Central
Keynote presentation, Is There Anything New in Heterogeneous Computing, by Mike Muller, Chief Technology Officer, ARM, at the AMD Developer Summit (APU13), Nov. 11-13, 2013.
This document provides an overview of Verilog HDL (Hardware Description Language) for modeling digital circuits. It outlines different modeling styles in Verilog like gate-level, data-flow, and behavioral modeling. Gate-level modeling describes systems using basic logic gates. Data-flow modeling uses continuous assignments to model signal flow. Behavioral modeling describes designs using procedural constructs like always and initial blocks. Examples are provided for basic gates, a 4-to-1 multiplexer, 2-to-4 decoder, and behavioral modeling of conditions and case statements.
Accelerated Mac OS X Core Dump Analysis training public slidesDmitry Vostokov
The slides from Software Diagnostics Services Mac OS X core dump analysis training. The training description: "Learn how to analyse app crashes and freezes, navigate through process core memory dump space and diagnose corruption, memory leaks, CPU spikes, blocked threads, deadlocks, wait chains, and much more. We use a unique and innovative pattern-driven analysis approach to speed up the learning curve. The training consists of practical step-by-step exercises using GDB and LLDB debuggers highlighting more than 30 memory analysis patterns diagnosed in 64-bit process core memory dumps. The training also includes source code of modelling applications written in Xcode environment, a catalogue of relevant patterns from Software Diagnostics Institute, and an overview of relevant similarities and differences between Windows and Mac OS X user space memory dump analysis useful for engineers with Wintel background. Audience: software technical support and escalation engineers, system administrators, software developers, security professionals and quality assurance engineers."
RCA OCORA: Safe Computing Platform using open standardsAdaCore
The presentation discussed the development of a Safe Computing Platform (SCP) specification by the RCA and OCORA consortia for railway applications. PikeOS was identified as providing the hard real-time operating system and hypervisor core for the SCP. Connext DDS was presented as the real-time publish-subscribe middleware to enable safety-certified communications on the SCP. The SCP approach was also discussed as potentially applicable to other industries requiring mixed criticality cloud computing like automotive.
Long-lived software is a challenge. This was seen very clearly a couple of years ago in the “US COBOL crisis”, but the reasons are less clearly understood, and are worth exploring. The speaker works in Computer Algebra, where “younger” systems are 30-40 years old, and the algorithmic kernel of SageMath, the newest major system, is actually 55 years old, and the people who can debug it are in single figures. More recently, very substantial retooling was required to enable Line 14, the driverless line, of the Paris Métro to be extended. Having reviewed these cases, the speaker will make some tentative suggestions for the management of long-lived software.
Rust and the coming age of high integrity languagesAdaCore
This document discusses Rust and its role as a "high integrity language" that can help address memory safety issues. It provides an overview of Rust's key features like ownership and borrowing that enforce memory safety. It argues that memory safety is becoming an increasingly important issue and that tools like Rust may see more adoption as industries face growing pressure to address vulnerabilities caused by memory safety problems. While Rust's success depends on broader trends, its focus on memory safety positions it well to help industries grappling with this challenge.
SPARKNaCl: A verified, fast cryptographic libraryAdaCore
SPARKNaCl https://github.com/rod-chapman/SPARKNaCl is a new, freely-available, verified and fast reference implementation of the NaCl cryptographic API, based on the TweetNaCl distribution. It has a fully automated, complete and sound proof of type-safety and several key correctness properties. In addition, the code is surprisingly fast - out-performing TweetNaCl's C implementation on an Ed25519 Sign operation by a factor of 3 at all optimisation levels on a 32-bit RISC-V bare-metal machine. This talk will concentrate on how "Proof Driven Optimisation" can result in code that is both correct and fast.
Developing Future High Integrity Processing SolutionsAdaCore
Rolls-Royce has been developing high integrity digital processing solutions for its safety critical aerospace engine controllers since the 1980s. By the turn of the century, the electronics industry experienced an inflection point. This resulted in a shift to a consumer driven market and a much-reduced focus on the harsh environment electronics and the extended life cycles required by the aerospace industry. As a result, Rolls-Royce took the decision to design its own microprocessor, and for the last 25 years, has been successfully developing harsh environment safety critical processing solutions for all its aerospace engines.
Alongside the ever-increasing performance expectations, the past few years have seen cyber-security become a major driver in new processor developments. This presents new and interesting development challenges that will need to be addressed.
Taming event-driven software via formal verificationAdaCore
Event-driven software can be found everywhere, from low-level drivers, to software that controls and coordinates complex subcomponents, and even in GUIs. Typically, event-driven software is characterised as consisting of a number of stateful components that communicate by sending messages to each other. Event-driven software is notoriously difficult to test. There are often many different sequences of events, and because the exact order of the events will affect the state of the system, it can be easy for bugs to lurk in obscure un-tested sequences of events. Even worse, reproducing these bugs can be difficult due to the need to reproduce the exact sequence of events that led to the issue.
Formal verification is one method of solving this: rather than writing tests to check each of the different possible sequences of events, automated formal verification could be used to verify that the software is correct no matter what sequence of events is observed. In this talk, we will look at what capabilities are required to ensure that this will be successful, including what it means for event-driven software to be correct, and how to ensure that the verification can scale to industrial-sized software projects.
Pushing the Boundary of Mostly Automatic Program ProofAdaCore
With the large-scale verification of complex programs like compilers and microkernels, program proof has realised the grand challenge of creating a “verifying compiler” proposed by Sir Tony Hoare in 2003. Still, the effort and expertise required for developing the program and its proof to feed to the “verifying compiler” will exceed the V&V budget of most projects. Another approach gaining traction is to automate the proof as much as possible. More specifically, by tailoring the proof tool to the strengths of a target programming language, leveraging an array of automatic provers, and limiting the ambition of proof to those properties for which proof can be mostly automated. This is the approach we are following in SPARK. In this talk, we will survey what properties can be “mostly” proved automatically, and what this means in terms of effort and expertise.
RCA OCORA: Safe Computing Platform using open standardsAdaCore
The railway sector is facing a major transition as it moves towards more fully automated systems on both the train and infrastructure side. This, in turn, requires the development of appropriate, future-proof connectivity and IT platforms.
The Reference Control Command and Signalling Architecture (RCA) and Open Control Command and Signalling Onboard Reference Architecture (OCORA) have developed a functional architecture for future trackside and onboard functions. The RCA OCORA open Control Command Signalling (CCS) on-board reference architecture introduces a standardized separation of safety-relevant and non-safety-relevant railway applications and the underlying IT platforms. This allows rail operators to decouple the very distinct life cycles of the domains and aggregate multiple railway applications on common IT platforms.
Based on a Safe Computing Platform (SCP), the architecture accommodates a Platform Independent Application Programming Interface (PI API) between safety-relevant railway applications and IT platforms. This approach supports the portability of railway applications among IT platform realisations from different vendors.
Two of its authors will discuss the RCA OCORA architecture with emphasis on its safe computing framework. The talk will review the required operating system standards and discuss the newly released DDS Reference Implementation for Safe Computing Platform Messaging. While designed for rail, this architecture will have elements of interest for other industries.
Product Lines and Ecosystems: from customization to configurationAdaCore
Digitalization is concerned with a fundamental shift in value delivery to customers from transactional to continuous. For R&D this requires adopting processes such as DevOps and continuous deployment. Systems engineering companies using platforms need to adjust their ways of working and be cognisant of the role of the ecosystem surrounding them to capitalize on this transformation. The keynote talk will discuss these developments and provide industrial examples from Software Center, a collaboration between 17 large, international companies and five universities with the intent of accelerating the digital transformation of the European software intensive industry.
Securing the Future of Safety and Security of Embedded SoftwareAdaCore
Daniel Rohrer, VP of Software Product Security at NVIDIA, discussed NVIDIA's journey to adopting the SPARK subset of the Ada programming language and the AdaCore tooling for improving software security and safety. NVIDIA was motivated by increasing complexity of systems, criticality of failures, and limitations of existing techniques. They selected SPARK and AdaCore due to the decidable nature of the language, credible ecosystem, emphasis on provability over testing, ability to scale, and responsiveness of AdaCore. NVIDIA piloted the use of SPARK on firmware to gain security and safety benefits while targeting a small codebase. The presentation covered benefits of SPARK for verification and alternatives considered.
Spark / Ada for Safe and Secure Firmware DevelopmentAdaCore
The document discusses using SPARK for secure and safe firmware development. It notes that firmware is written mostly in C, which is prone to security vulnerabilities. SPARK aims to address this by using formal verification methods, improved static analysis, and developer contracts to find and prevent bugs. The document outlines NVIDIA's usage of SPARK for security processors and safety-critical code. While SPARK faces challenges regarding adoption due to its differences from C, NVIDIA is taking a phased approach to adoption by starting with proof of concepts and increasing usage over time for its most critical firmware components.
Introducing the HICLASS Research Programme - Enabling Development of Complex ...AdaCore
The document discusses the HICLASS research program which aims to enable UK industry to build the most complex, connected, and cyber-secure avionic systems. It is a £32 million project over 4 years led by Rolls-Royce with 16 funded partners and 2 unfunded partners. The project will develop integrated solutions to address increasing challenges with system integrity, complexity, connectivity, security, and safety as systems continue to grow in scale and complexity. It will focus on developing technologies in areas like model-based development, verification, security, and future hardware platforms.
The Future of Aerospace – More Software Please!AdaCore
The document discusses the Aerospace Technology Institute's (ATI) role in leading technology development for the UK aerospace sector. It outlines ATI's strategy and funding portfolio worth £3.9 billion to 2026. Specific initiatives discussed include the SECT-AIR and HICLASS projects, which helped establish the UK's excellence in safety critical software. The document also notes opportunities for startups in sustainability and Industry 4.0 technologies through the ATI Boeing Horizon X Accelerator program.
Adaptive AUTOSAR - The New AUTOSAR ArchitectureAdaCore
Adaptive AUTOSAR is a new architecture from AUTOSAR that is designed to support more flexible, dynamic, and connected vehicle functions beyond what classic AUTOSAR currently supports. It features a dynamic operating system, strong application isolation, soft real-time capabilities, and higher resource availability compared to classic AUTOSAR. Both classic and adaptive AUTOSAR support functional safety through product measures like software partitioning, protection mechanisms, and diagnostics as well as process measures in development like requirements specification and testing.
Using Tiers of Assurance Evidence to Reduce the Tears! Adopting the “Wheel of...AdaCore
The document proposes an alternative approach called the "Wheel of Qualification" (WoQ) to assess software safety assurance for systems that cannot fully meet traditional regulatory standards. The WoQ uses a visual model to holistically represent different types of evidence from various sources, including process evidence, system integrator evidence, and government acceptance evidence. It aims to create an open dialogue about evidence availability and move beyond compliance to a more engineering-based assessment. The approach is being used in an existing platform qualification strategy and could help qualify legacy or non-traditional systems, but requires caution and further refinement.
Software Engineering for Robotics - The RoboStar TechnologyAdaCore
The document describes RoboStar Technology's approach to software engineering for robotics. It involves modeling robot behavior and physical systems, automatically generating simulations from models, and testing and verifying properties through model checking and simulation. The goal is to produce certified control software for safe robotic systems through this model-driven development approach.
MISRA C provides guidelines for using the C programming language in safety-critical systems. The document discusses how MISRA C relates to ISO 26262, which specifies functional safety standards for automotive systems. Key points include that MISRA C addresses many criteria specified in ISO 26262 for suitable programming languages, such as enforcing low complexity, using a language subset, strong typing, and defensive implementation techniques. The document also discusses how to achieve and demonstrate compliance with the MISRA C guidelines.
Application of theorem proving for safety-critical vehicle softwareAdaCore
The document discusses applying formal verification techniques like theorem proving to automotive software for safety-critical functions. It provides background on software safety requirements and discusses fault avoidance versus fault tolerance approaches. The document then presents a case study where theorem proving is used to verify a software function for autonomous vehicle control. It explains the process of breaking the software into portions and verifying each portion using logical proofs of pre and post conditions. The document highlights benefits of theorem proving over testing in providing a logical proof that software is bug-free, but also notes limitations like not verifying timing behavior.
The Application of Formal Methods to Railway Signalling SoftwareAdaCore
This document discusses the application of formal methods to railway signalling software. It provides an overview of Systerel, a company that creates solutions for real-time and safety critical systems using formal methods. The document then describes various formal techniques like Event-B and Software-B used for modeling systems a priori. It also discusses formal techniques like formal data validation and Systerel Smart Solver used a posteriori. It provides details on high-end tools developed by Systerel like Rodin Platform, B-to-C Translator, and OVADO2. It also gives an example of a large project where formal methods were applied to the development of zone controller subsystem of a communication-based train control system.
Multi-Core (MC) Processor Qualification for Safety Critical SystemsAdaCore
This document discusses challenges in qualifying multi-core processors for safety critical systems and presents initial research findings and an interim solution. It outlines that while multi-core processors provide performance benefits, they introduce new assurance challenges due to issues like cache coherence, interference between cores, and unpredictability. Practical experiments showed different software types being most impacted by different types of "enemy processes" designed to stress CPU, bus, or cache usage. The document proposes a stepped approach starting with restricting multi-core capabilities as understanding improves before enabling full multi-core functionality.
What is Model Context Protocol(MCP) - The new technology for communication bw...Vishnu Singh Chundawat
The MCP (Model Context Protocol) is a framework designed to manage context and interaction within complex systems. This SlideShare presentation will provide a detailed overview of the MCP Model, its applications, and how it plays a crucial role in improving communication and decision-making in distributed systems. We will explore the key concepts behind the protocol, including the importance of context, data management, and how this model enhances system adaptability and responsiveness. Ideal for software developers, system architects, and IT professionals, this presentation will offer valuable insights into how the MCP Model can streamline workflows, improve efficiency, and create more intuitive systems for a wide range of use cases.
Special Meetup Edition - TDX Bengaluru Meetup #52.pptxshyamraj55
We’re bringing the TDX energy to our community with 2 power-packed sessions:
🛠️ Workshop: MuleSoft for Agentforce
Explore the new version of our hands-on workshop featuring the latest Topic Center and API Catalog updates.
📄 Talk: Power Up Document Processing
Dive into smart automation with MuleSoft IDP, NLP, and Einstein AI for intelligent document workflows.
Dev Dives: Automate and orchestrate your processes with UiPath MaestroUiPathCommunity
This session is designed to equip developers with the skills needed to build mission-critical, end-to-end processes that seamlessly orchestrate agents, people, and robots.
📕 Here's what you can expect:
- Modeling: Build end-to-end processes using BPMN.
- Implementing: Integrate agentic tasks, RPA, APIs, and advanced decisioning into processes.
- Operating: Control process instances with rewind, replay, pause, and stop functions.
- Monitoring: Use dashboards and embedded analytics for real-time insights into process instances.
This webinar is a must-attend for developers looking to enhance their agentic automation skills and orchestrate robust, mission-critical processes.
👨🏫 Speaker:
Andrei Vintila, Principal Product Manager @UiPath
This session streamed live on April 29, 2025, 16:00 CET.
Check out all our upcoming Dev Dives sessions at https://community.uipath.com/dev-dives-automation-developer-2025/.
AI and Data Privacy in 2025: Global TrendsInData Labs
In this infographic, we explore how businesses can implement effective governance frameworks to address AI data privacy. Understanding it is crucial for developing effective strategies that ensure compliance, safeguard customer trust, and leverage AI responsibly. Equip yourself with insights that can drive informed decision-making and position your organization for success in the future of data privacy.
This infographic contains:
-AI and data privacy: Key findings
-Statistics on AI data privacy in today's world
-Tips on how to overcome data privacy challenges
-Benefits of AI data security investments.
Keep up-to-date on how AI is reshaping privacy standards and what this entails for both individuals and organizations.
Increasing Retail Store Efficiency How can Planograms Save Time and Money.pptxAnoop Ashok
In today's fast-paced retail environment, efficiency is key. Every minute counts, and every penny matters. One tool that can significantly boost your store's efficiency is a well-executed planogram. These visual merchandising blueprints not only enhance store layouts but also save time and money in the process.
Noah Loul Shares 5 Steps to Implement AI Agents for Maximum Business Efficien...Noah Loul
Artificial intelligence is changing how businesses operate. Companies are using AI agents to automate tasks, reduce time spent on repetitive work, and focus more on high-value activities. Noah Loul, an AI strategist and entrepreneur, has helped dozens of companies streamline their operations using smart automation. He believes AI agents aren't just tools—they're workers that take on repeatable tasks so your human team can focus on what matters. If you want to reduce time waste and increase output, AI agents are the next move.
Designing Low-Latency Systems with Rust and ScyllaDB: An Architectural Deep DiveScyllaDB
Want to learn practical tips for designing systems that can scale efficiently without compromising speed?
Join us for a workshop where we’ll address these challenges head-on and explore how to architect low-latency systems using Rust. During this free interactive workshop oriented for developers, engineers, and architects, we’ll cover how Rust’s unique language features and the Tokio async runtime enable high-performance application development.
As you explore key principles of designing low-latency systems with Rust, you will learn how to:
- Create and compile a real-world app with Rust
- Connect the application to ScyllaDB (NoSQL data store)
- Negotiate tradeoffs related to data modeling and querying
- Manage and monitor the database for consistently low latencies
UiPath Community Berlin: Orchestrator API, Swagger, and Test Manager APIUiPathCommunity
Join this UiPath Community Berlin meetup to explore the Orchestrator API, Swagger interface, and the Test Manager API. Learn how to leverage these tools to streamline automation, enhance testing, and integrate more efficiently with UiPath. Perfect for developers, testers, and automation enthusiasts!
📕 Agenda
Welcome & Introductions
Orchestrator API Overview
Exploring the Swagger Interface
Test Manager API Highlights
Streamlining Automation & Testing with APIs (Demo)
Q&A and Open Discussion
👉 Join our UiPath Community Berlin chapter: https://community.uipath.com/berlin/
This session streamed live on April 29, 2025, 18:00 CET.
Check out all our upcoming UiPath Community sessions at https://community.uipath.com/events/.
Linux Support for SMARC: How Toradex Empowers Embedded DevelopersToradex
Toradex brings robust Linux support to SMARC (Smart Mobility Architecture), ensuring high performance and long-term reliability for embedded applications. Here’s how:
• Optimized Torizon OS & Yocto Support – Toradex provides Torizon OS, a Debian-based easy-to-use platform, and Yocto BSPs for customized Linux images on SMARC modules.
• Seamless Integration with i.MX 8M Plus and i.MX 95 – Toradex SMARC solutions leverage NXP’s i.MX 8M Plus and i.MX 95 SoCs, delivering power efficiency and AI-ready performance.
• Secure and Reliable – With Secure Boot, over-the-air (OTA) updates, and LTS kernel support, Toradex ensures industrial-grade security and longevity.
• Containerized Workflows for AI & IoT – Support for Docker, ROS, and real-time Linux enables scalable AI, ML, and IoT applications.
• Strong Ecosystem & Developer Support – Toradex offers comprehensive documentation, developer tools, and dedicated support, accelerating time-to-market.
With Toradex’s Linux support for SMARC, developers get a scalable, secure, and high-performance solution for industrial, medical, and AI-driven applications.
Do you have a specific project or application in mind where you're considering SMARC? We can help with a free compatibility check and support a quick time-to-market.
For more information: https://www.toradex.com/computer-on-modules/smarc-arm-family
#StandardsGoals for 2025: Standards & certification roundup - Tech Forum 2025BookNet Canada
Book industry standards are evolving rapidly. In the first part of this session, we’ll share an overview of key developments from 2024 and the early months of 2025. Then, BookNet’s resident standards expert, Tom Richardson, and CEO, Lauren Stewart, have a forward-looking conversation about what’s next.
Link to recording, transcript, and accompanying resource: https://bnctechforum.ca/sessions/standardsgoals-for-2025-standards-certification-roundup/
Presented by BookNet Canada on May 6, 2025 with support from the Department of Canadian Heritage.
AI EngineHost Review: Revolutionary USA Datacenter-Based Hosting with NVIDIA ...SOFTTECHHUB
I started my online journey with several hosting services before stumbling upon Ai EngineHost. At first, the idea of paying one fee and getting lifetime access seemed too good to pass up. The platform is built on reliable US-based servers, ensuring your projects run at high speeds and remain safe. Let me take you step by step through its benefits and features as I explain why this hosting solution is a perfect fit for digital entrepreneurs.
How Can I use the AI Hype in my Business Context?Daniel Lehner
𝙄𝙨 𝘼𝙄 𝙟𝙪𝙨𝙩 𝙝𝙮𝙥𝙚? 𝙊𝙧 𝙞𝙨 𝙞𝙩 𝙩𝙝𝙚 𝙜𝙖𝙢𝙚 𝙘𝙝𝙖𝙣𝙜𝙚𝙧 𝙮𝙤𝙪𝙧 𝙗𝙪𝙨𝙞𝙣𝙚𝙨𝙨 𝙣𝙚𝙚𝙙𝙨?
Everyone’s talking about AI but is anyone really using it to create real value?
Most companies want to leverage AI. Few know 𝗵𝗼𝘄.
✅ What exactly should you ask to find real AI opportunities?
✅ Which AI techniques actually fit your business?
✅ Is your data even ready for AI?
If you’re not sure, you’re not alone. This is a condensed version of the slides I presented at a Linkedin webinar for Tecnovy on 28.04.2025.
How Can I use the AI Hype in my Business Context?Daniel Lehner
DO-178C OOT supplement: A user's perspective
1. DO178C/ED12C OOT
A User’s Perspective
Cyrille Comar, Hugues Bonnin, Fred Rivard
Certification Together International Conference,
Toulouse, October 2010
2. CTIC 2010 2
Agenda
3 examples of DO178C/OOT usage:
1. Inheritance: Liskov Substitution Principle (LSP) with Ada
2. Virtualization: the Java Virtual Machine case
3. Dynamic Memory Management: a Java Garbage Collector example
4. Local Type Consistency (TC)
To mitigate inheritance vulnerabilities, local type consistency has to be demonstrated: this property reliably limits the inheritance mechanism.
TC is referred to in:
◦ OO.4.4 n.: if reuse is planned, maintenance of TC shall be described.
◦ OO.5.2.2 j.: in design activities, the class hierarchy with TC must be developed with the associated LLRs.
◦ OO.6.7: specific verification for Local Type Consistency has to be done, with an added objective in table A-7 (OO-10).
5. Local Type Consistency (TC)
1. Formal Methods:
◦ Precondition weakening
◦ Postcondition strengthening
2. Unit Testing (on the LLRs associated with class methods):
◦ Run all tests associated with a class using objects of child classes
3. Pessimistic Testing:
◦ Verify that all dispatching calls are covered by tests exercising all methods potentially reachable from a dispatch point
6. TC by Formal Analysis
Ada 2012 syntax:

type Class1 is tagged private;
procedure Method (C : in out Class1; I : Integer) with
  Pre  => I > 0,
  Post => C.Updated;

type Class2 is new Class1 with private;
procedure Method (C : in out Class2; I : Integer) with
  Pre  => I >= 0,
  Post => C.Updated and C.Sorted;

Liskov Substitution Principle:
• Precondition is weakened
• Postcondition is strengthened
One must demonstrate that:
• I > 0  ==>  I >= 0
• C.Updated and C.Sorted  ==>  C.Updated
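These two implications are discharged formally by SPARK (next slide), but the idea can be illustrated outside a prover. The following is a minimal Java sketch, with class and variable names of my own choosing, that merely checks the implications by exhaustive sampling rather than proof:

```java
import java.util.function.IntPredicate;
import java.util.stream.IntStream;

// Sketch only: a sampling check of the two LSP obligations. SPARK proves
// these implications formally; here they are just tested over a range.
public class LspCheck {
    public static void main(String[] args) {
        IntPredicate parentPre = i -> i > 0;   // Class1: Pre => I > 0
        IntPredicate childPre  = i -> i >= 0;  // Class2: Pre => I >= 0

        // Precondition weakening: parentPre(i) ==> childPre(i).
        boolean preOk = IntStream.rangeClosed(-1000, 1000)
            .allMatch(i -> !parentPre.test(i) || childPre.test(i));

        // Postcondition strengthening: (Updated and Sorted) ==> Updated,
        // checked over all four truth-value combinations.
        boolean postOk = true;
        for (boolean updated : new boolean[]{false, true})
            for (boolean sorted : new boolean[]{false, true})
                postOk &= !(updated && sorted) || updated;

        System.out.println("pre weakening holds: " + preOk);
        System.out.println("post strengthening holds: " + postOk);
    }
}
```

Sampling can only refute an implication, never establish it; that is exactly why the supplement points at formal methods for this objective.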
7. TC by Formal Analysis (2)
SPARK = a small Ada subset + logical annotations. SPARK supports limited OO features and already performs this verification.

type Class1 is tagged private;
function Updated (C : Class1) return Boolean;
function Sorted (C : Class1) return Boolean;
procedure Method (C : in out Class1; I : Integer);
--# pre I > 0;
--# post Updated(C);

type Class2 is new Class1 with private;
procedure Method (C : in out Class2; I : Integer);
--# pre I >= 0;
--# post Updated(C) and Sorted(C);

SPARK produces 2 VCs (Verification Conditions):

H1: updated(fld_inherit(c)) .
H2: sorted(fld_inherit(c)) .
->
C1: updated(fld_inherit(c)) .

H1: i > 0 .
->
C1: i >= 0 .
8. TC by Unit Testing
With a proper organization of unit testing, this verification is relatively easy to put in place:
◦ Each class has a mirror “test” class
◦ Each method has a mirror “test” method
  Low-Level Requirements are associated with methods; the corresponding test cases are associated with the “mirror” test method
◦ Group all the tests related to a class in a test suite
◦ Apply this test suite to objects of the class (verifies the LLRs associated with the class)
◦ Apply this test suite to objects of subclasses (verifies type consistency)
9. TC by Unit Testing
package Example is
  type T1 is tagged private;
  procedure M1 (X : T1);
  function F1 (X : T1) return Integer;

  type T2 is new T1 with private;
  overriding procedure M1 (X : T2);
  -- inherits F1 (X : T2)
end Example;

package Example.Unit_Tests is
  type Test_T1 is new Root_Class_Test with
    record Ptr : access T1'Class; end record;
  procedure Test_M1 (X : Test_T1);
  procedure Test_F1 (X : Test_T1);

  type Test_T2 is new Test_T1 with private;
  overriding procedure Test_M1 (X : Test_T2);
  -- inherits Test_F1 (X : Test_T2)
end Example.Unit_Tests;
[Slide diagram: UML classes T1 (+M1, +F1) and T2 (+M1) mirrored one-to-one by T1_Test (+Test_M1, +Test_F1, -Ptr) and T2_Test (+Test_M1); LLRs on M1 (LLR1_M1, LLR2_M1, …) trace to test cases (LLR1_M1_TestCase1, LLR1_M1_TestCase2, …).]
10. TC by Unit Testing
package body Example.Test_Suites is
  procedure T1_Test_Suite (T : Test_T1) is …
  procedure T2_Test_Suite (T : Test_T2) is
  begin
    Test_M1 (T);
    Test_F1 (T); -- call inherited test
  end T2_Test_Suite;
end Example.Test_Suites;

procedure My_Test is
  T2_Obj : Test_T2 := (Root_Class_Test with new T2);
begin
  -- regular testing on T2
  Example.Test_Suites.T2_Test_Suite (T2_Obj);
  -- verify that T2 can substitute T1 safely
  Example.Test_Suites.T1_Test_Suite (Test_T1 (T2_Obj));
end My_Test;
11. TC by Pessimistic Testing
Locate all dispatching calls in the application.
For each, infer every method that can be called.
Verify that requirements-based testing covers all such cases.
12. TC by Pessimistic Testing
procedure Do_Something (Obj1 : T1'Class; Obj2 : T2'Class) is
begin
  …
  Obj1.M1;        -- may dispatch to T1's M1 or T2's M1
  …
  Val := Obj2.F1; -- dispatches to T2's F1
  …
end Do_Something;

…
Do_Something (My_Obj1, My_Obj2);
…
Do_Something (My_Obj2, My_Obj2);
…
These calls are enough to achieve statement coverage, but not enough for Type Consistency verification; they must be completed with “pessimistic testing”.
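The same pessimistic-testing idea can be sketched in Java (class names mirror the slide's T1/T2; doSomething standing in for the dispatch point is my own naming): every method potentially reachable from the dispatching call must be exercised by some test.

```java
import java.util.HashSet;
import java.util.Set;

// Sketch of pessimistic testing: the dispatching call in doSomething can
// reach either T1.m1 or the override T2.m1, so the test set must
// exercise both targets, not just the ones a statement-coverage run hits.
public class PessimisticTest {
    static class T1 { String m1() { return "T1.m1"; } }
    static class T2 extends T1 { @Override String m1() { return "T2.m1"; } }

    // The dispatch point under analysis.
    static String doSomething(T1 obj) { return obj.m1(); }

    public static void main(String[] args) {
        Set<String> covered = new HashSet<>();
        covered.add(doSomething(new T1()));  // exercises T1's m1
        covered.add(doSomething(new T2()));  // exercises T2's m1
        // Both potentially reachable methods are now covered.
        System.out.println(covered.contains("T1.m1")
                        && covered.contains("T2.m1"));
    }
}
```

In a real project the set of reachable targets per dispatch point would come from static analysis of the class hierarchy, not from hand enumeration as here.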
14. Multilayering needs
Virtualization has multiple known benefits for productivity and industrialisation:
◦ SW/HW independence
◦ easier simulation
◦ improved portability
But for safety too:
◦ breakdown of complexity (“divide and conquer”)
◦ in the case of Java:
  stability of Java bytecode (10+ years)
  formal properties of the bytecode
But with DO178-B...
15. DO178-B approach vs DO178C/OOT approach
[Slide diagram: the DO178-B lifecycle (Specification, Design, Code, Executable on target) leaves no room for Java byte code when virtualization is introduced; the DO178C/OOT lifecycle inserts a Byte-Code (on VM) level between Code and the Executable (on target).]
OO.4: “The target environment is either a target computer or a combination of virtualization software and a target computer. Virtualization software also needs to comply with DO-178C/ED-12C and applicable supplements.”
16. DO178C ref. on virtualization
OO.4.2 m.
◦ “Describe any planned use of virtualization” and “This data [byte code] should be treated as executable code”
OO.C.7.7
◦ the main vulnerability is that “the code of a given virtualization layer may be considered to be data; consequently, tracing may be neglected, and verification may be insufficient”
OO.11.7 g., OO.11.8 f.
◦ standards (design and code) must include constraints on the usage of virtualization
17. Development principle for a Java Software (1/2)
[Slide diagram: two stacked lifecycles. The Java Application has its own Specification, Design, Code, and Executable levels; it runs on a JVM Platform, which in turn runs on the HW target and has its own Specification, Design, Code, and Executable levels.]
18. Development principle for a Java Software (2/2)
Test principles: an “IMA-like” process.
◦ Application on JVM: the main part of the application HLR and LLR tests (application executed on the JVM)
◦ JVM on target: the main part of the JVM HLR and LLR tests (JVM executed on the HW)
◦ Application on JVM on target: a small part of integration tests
19. Constraints on Application devt.
The development of the application is not changed, but the “executable object code” is Java bytecode, and the target is a JVM.
This allows tests to be executed on any JVM, provided that the test environment is representative of the final HW target.
◦ The standardisation of the JVM greatly helps with this demonstration.
20. Constraints on JVM devt. (1)
Development of the JVM must be done at least at the same SW level as the application.
The JVM HLR and LLR are principally described in the Java Virtual Machine Specification (the “blue book”).
Robust and deterministic algorithms must be chosen, and described in the LLRs, to implement the JVM (see for example the Garbage Collector in the next part).
◦ The simpler the choices, the easier the demonstration.
21. Constraints on JVM devt. (2): JVM tests strategy
[Slide diagram: a single test battery of JVM Java tests is run in two stages. Stage 1: the JVM tests, as JVM Java bytecode, execute on a Test JVM. Stage 2: the JVM, as target bytecode, executes on the HW target.]
23. DO178C ref. on Dynamic Memory Management
OO.C.7.6
◦ vulnerabilities are listed and explained, with guidelines
OO.5.2.2 (design activities):
◦ k. “As part of the software architecture, develop a strategy for memory management”
OO.11.7 g. and OO.11.8 f.
◦ standards (design and code) must include constraints on the usage of memory management
OO.6.8
◦ specific verification for Dynamic Memory Management has to be done, with an added objective in table A-7 (OO-11), covering all the vulnerabilities explained in OO.C
24. Memory Management: Table OO.C.7.6.3, where the sub-objectives are addressed
(MMI: Memory Management Infrastructure; AC: Application)
With automatic heap allocation, the application transfers its dynamic memory management problems to the infrastructure; this is a main advantage of using a Garbage Collector (GC).

Sub-objectives (OO.6.8.2):
Technique                  | a   | b   | c   | d  | e   | f   | g
Object pooling             | AC  | AC  | AC  | AC | AC  | N/A | MMI
Stack allocation           | AC  | MMI | MMI | AC | AC  | N/A | MMI
Scope allocation           | MMI | MMI | MMI | AC | AC  | MMI | MMI
Manual heap allocation     | AC  | AC* | AC  | AC | AC  | N/A | MMI
Automatic heap allocation  | MMI | MMI | MMI | AC | MMI | MMI | MMI
25. 7 vulnerabilities in DMM
a. Ambiguous References
b. Fragmentation Starvation
c. Deallocation Starvation
d. Heap Memory Exhaustion
e. Premature Deallocation
f. Lost Update and Stale Reference
g. Time-bound Allocation or Deallocation
(Slide annotation: with automatic heap allocation, the MMI addresses each of these except Heap Memory Exhaustion, which remains with the application; see Table OO.C.7.6.3.)
26. Verify GC by tests against vulnerabilities
These verification points are a sort of minimal requirements for a DMM infrastructure. They can all be tested by adequate stress tests.
For example, property e., “Premature Deallocation”:
◦ 6.8.2.e states: “Verify that reference consistency is maintained, that is, each object is unique, and is only viewed as that object.”
◦ One test could be:
  one thread fills an array with objects;
  another thread randomly compares cells of the array (a[x] == a[y]);
  a third thread destroys the objects.
  This process is repeated at a high rate and over a long period.
  The comparison must never be true.
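A rough Java sketch of that stress test follows. The thread roles, array size, and 200 ms duration are illustrative choices, not from the slide, and “destroying” an object is modeled by dropping its reference, since on a JVM the GC under test performs the actual deallocation. On a correct GC the comparison can never be true; observing equality between distinct cells would reveal a premature deallocation and reuse.

```java
import java.util.Random;
import java.util.concurrent.atomic.AtomicBoolean;

// Sketch of the reference-consistency stress test (OO.6.8.2.e): three
// threads fill, compare, and release array cells concurrently while the
// GC runs underneath. Every object placed in the array is unique, so two
// distinct cells must never be seen holding the same reference.
public class GcStressSketch {
    static final int N = 64;
    static final Object[] a = new Object[N];

    public static void main(String[] args) throws InterruptedException {
        for (int i = 0; i < N; i++) a[i] = new Object();
        AtomicBoolean stop = new AtomicBoolean(false);
        AtomicBoolean failed = new AtomicBoolean(false);

        Thread filler = new Thread(() -> {      // thread 1: (re)fills cells
            Random r = new Random(1);
            while (!stop.get()) a[r.nextInt(N)] = new Object();
        });
        Thread checker = new Thread(() -> {     // thread 2: compares cells
            Random r = new Random(2);
            while (!stop.get()) {
                int x = r.nextInt(N), y = r.nextInt(N);
                Object ox = a[x], oy = a[y];
                if (x != y && ox != null && ox == oy) failed.set(true);
            }
        });
        Thread clearer = new Thread(() -> {     // thread 3: releases objects
            Random r = new Random(3);
            while (!stop.get()) a[r.nextInt(N)] = null;
        });

        filler.start(); checker.start(); clearer.start();
        Thread.sleep(200);   // a real campaign runs at high rate, for hours
        stop.set(true);
        filler.join(); checker.join(); clearer.join();
        System.out.println(failed.get() ? "FAIL: duplicate reference" : "OK");
    }
}
```

The slide's point is that such a test exercises the memory management infrastructure, not the application: the pass criterion is an invariant of the GC itself.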
27. Verify GC by analysis against vulnerabilities (1/2)
The fine characteristics of the GC give supplementary LLRs:
◦ Stop-the-world / concurrent
◦ Mark-sweep / copy
◦ Compact / not compact
◦ Exact / conservative pointers
◦ Work-based / time-based ...
28. Verify GC by analysis against vulnerabilities (2/2)
These characteristics can be used to give some sound verification of the vulnerabilities. For example,
b. Fragmentation Starvation,
c. Deallocation Starvation, and
g. Time-bound Allocation or Deallocation
are well demonstrated by Schoeberl's work on a concurrent-copy GC with periodic collection.
30. Conclusion
The DO178C/OOT supplement is a real guide for going to certification with OO features:
◦ it gives the necessary constraints to make OO programs safe
◦ it gives sufficient genericity to accept any known OO technology
◦ it gives didactic material (APP.C)
Thanks to this new DO178 version, modern OO technology will finally be embedded in our modern aircraft.