Functional Safety in ML-based Cyber-Physical Systems - Lionel Briand
This document discusses verification and validation of machine learning systems used in cyber-physical systems. It presents research on developing practical and scalable techniques to systematically verify the safety of deep neural network-based systems. The goals are to efficiently test for safety violations and explain any violations found to enable risk assessment. The document outlines challenges in verifying DNN components and proposes focusing on testing entire DNN-based systems. It reviews existing work and identifies limitations, such as focusing only on single images rather than scenarios involving object dynamics. Standards like ISO 26262 and SOTIF that require testing under different environmental conditions are also discussed. Explanations of any misclassifications found during testing are important for interpreting results and performing risk analysis.
Automated Testing of Autonomous Driving Assistance Systems - Lionel Briand
This document discusses automated testing techniques for autonomous driving assistance systems (ADAS). It proposes using decision tree classification models and a multi-objective genetic search algorithm (NSGA-II) to efficiently explore the complex scenario space of ADAS. The objectives are to identify critical, failure-revealing test scenarios by characterizing input conditions that lead to safety violations, such as the car hitting a pedestrian. Because simulator-based testing of the automated emergency braking system is computationally expensive, decision trees guide the search by partitioning the input space into homogeneous regions.
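To make the idea concrete, here is a minimal sketch (not the paper's actual implementation) assuming three scenario parameters and a hypothetical simulate() stand-in for the expensive simulator; a decision tree trained on executed scenarios screens candidates so that only those falling in regions classified as critical are simulated:

```python
import numpy as np
from sklearn.tree import DecisionTreeClassifier

rng = np.random.default_rng(42)
LOW, HIGH = [20, 0.5, 0.0], [90, 2.5, 1.0]   # vehicle speed, pedestrian speed, fog

def simulate(scenario):
    # Hypothetical stand-in for the expensive physics-based simulation:
    # returns True when the scenario is critical (pedestrian is hit).
    veh, ped, fog = scenario
    return veh > 60 and fog > 0.7 and ped > 1.5

# Seed the classifier with an initial random sample of executed scenarios.
X = rng.uniform(LOW, HIGH, size=(200, 3))
y = np.array([simulate(s) for s in X])
tree = DecisionTreeClassifier(max_depth=4).fit(X, y)

# Search step: generate many candidates cheaply, but simulate only those
# the tree places in a region of the input space classified as critical.
candidates = rng.uniform(LOW, HIGH, size=(2000, 3))
promising = candidates[tree.predict(candidates) == 1]
failures = [s for s in promising[:20] if simulate(s)]
print(f"{len(failures)} failure-revealing scenarios executed")
```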
This document summarizes four research projects conducted in collaboration with industry partners on search-based software testing. It discusses projects on testing PID controllers with Delphi, robustness testing a video conferencing system with Cisco, environment-based testing of a seismic acquisition system with WesternGeco, and stress testing safety-critical drivers in the oil and gas industry with Kongsberg. It also outlines lessons learned from the collaborations and discusses effective models of collaborative research and innovation between academia and industry.
Enabling Automated Software Testing with Artificial Intelligence - Lionel Briand
1. The document discusses using artificial intelligence techniques like machine learning and natural language processing to help automate software testing. It focuses on applying these techniques to testing advanced driver assistance systems.
2. A key challenge in software testing is scalability as the input spaces and code bases grow large and complex. Effective automation is needed to address this challenge. The document describes several industrial research projects applying AI to help automate testing of advanced driver assistance systems.
3. One project aims to develop an automated testing technique for emergency braking systems in cars using a physics-based simulation. The goal is to efficiently explore complex test scenarios and identify critical situations like failures to avoid collisions.
Research-Based Innovation with Industry: Project Experience and Lessons Learned - Lionel Briand
The document discusses lessons learned from research projects conducted in collaboration with industry partners, focusing on defining problems, understanding context factors, developing domain models, and verifying requirements. It provides examples of projects in various domains like subsea systems, automotive, and satellite systems. The goal is to share success criteria and practical guidelines for performing industry-relevant engineering research.
Automated Testing of Autonomous Driving Assistance Systems - Lionel Briand
This document discusses automated testing of autonomous driving assistance systems. It begins by introducing autonomous systems and their testing challenges due to large and complex input spaces and lack of explicit specifications. The document then describes an approach that combines evolutionary algorithms and decision tree classification models to guide testing towards critical scenarios. Evolutionary algorithms are used to search the input space while decision trees learn to predict scenario criticality and guide the search towards critical regions. The technique iteratively refines the decision tree model and focuses search on critical regions identified in the trees. The goal is to efficiently generate failure-revealing test cases and characterize input conditions that lead to critical situations.
OCLR: A More Expressive, Pattern-Based Temporal Extension of OCL - Lionel Briand
This document introduces OCLR, a temporal extension of the Object Constraint Language (OCL) for expressing temporal properties. It discusses limitations of existing temporal logics and extensions of OCL for expressing temporal properties. It then presents the grammar and key features of OCLR, including support for Dwyer's pattern system of temporal property patterns (e.g. universality, existence, absence, response, precedence patterns) and precise specification of event scopes and distances between events. OCLR aims to provide a more expressive and pattern-based way to specify temporal properties within the model-driven engineering approach compared to existing temporal extensions of OCL.
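For intuition about the pattern system, the sketch below checks Dwyer's "response" pattern (every stimulus must eventually be followed by a response) over a finite event trace; it illustrates the pattern's semantics in plain Python rather than actual OCLR syntax:

```python
def holds_response(trace, p, s):
    """Dwyer's 'response' pattern, global scope, on a finite trace:
    every occurrence of p must be followed, later, by an occurrence of s."""
    pending = False
    for event in trace:
        if event == p:
            pending = True
        elif event == s:
            pending = False
    return not pending  # no p left waiting for its s at trace end

assert holds_response(["req", "x", "ack", "req", "ack"], "req", "ack")
assert not holds_response(["req", "x", "ack", "req"], "req", "ack")
```

OCLR's scope and distance constructs refine exactly this kind of check, e.g. restricting it to events between two delimiters or bounding how far apart p and s may be.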
Scalable and Cost-Effective Model-Based Software Verification and Testing - Lionel Briand
This document describes research on using model-based techniques to generate stress test cases for embedded software. A constraint programming approach is used to model the software system, hardware platform, and performance requirements. The model includes properties of threads, activities, and the scheduling policy. The approach searches for values of tunable parameters, such as delays, that maximize CPU usage while satisfying constraints, in order to evaluate the system under worst-case conditions and help verify that it meets safety standards. The generated test cases effectively stress the system by selecting parameter values that guide the execution towards maximum resource consumption.
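A toy analogue of the constraint model, using the z3 solver's Python bindings and hypothetical task durations: tunable arrival delays are chosen so as to maximize the number of tasks demanding the CPU at a single instant, which is the stress-test intuition in miniature:

```python
from z3 import Int, Optimize, And, If, Sum, sat

durations = [3, 5, 2, 4]                      # hypothetical task execution times
delays = [Int(f"d{i}") for i in range(len(durations))]
t = Int("t")                                  # time instant under scrutiny

opt = Optimize()
for d in delays:
    opt.add(And(d >= 0, d <= 10))             # bounds on the tunable delays
opt.add(t >= 0)

# A task is active at t if it has arrived and has not yet finished.
active = Sum([If(And(delays[i] <= t, t < delays[i] + durations[i]), 1, 0)
              for i in range(len(durations))])
opt.maximize(active)                          # maximize concurrent CPU demand

if opt.check() == sat:
    m = opt.model()
    print("delays:", [m[d] for d in delays], "peak load at t =", m[t])
```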
Automated Inference of Access Control Policies for Web Applications - Lionel Briand
This document proposes an approach to automatically infer access control policies for web applications through dynamic analysis and machine learning. The approach involves exploring the application to discover resources, analyzing resource access data, inferring access rules using decision trees, assessing rule consistency, and targeted testing. An evaluation on two applications found the approach was effective in discovering resources and inferring correct policies for one application. Inconsistencies in the inferred rules also helped detect some access control issues in the applications.
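A minimal sketch of the rule-inference step on hypothetical access observations; the printed decision-tree rules are the inferred policy, and rules that contradict the intended policy flag potential access control issues:

```python
from sklearn.tree import DecisionTreeClassifier, export_text

# Hypothetical access log: (is_admin, is_owner, resource_id) -> granted?
X = [[1, 0, 1], [1, 0, 2], [0, 1, 1], [0, 0, 1], [0, 0, 2], [0, 1, 2]]
y = [1, 1, 1, 0, 0, 1]

tree = DecisionTreeClassifier().fit(X, y)
print(export_text(tree, feature_names=["is_admin", "is_owner", "resource_id"]))
# Inspecting the printed rules (e.g., "is_owner <= 0.5 and is_admin <= 0.5
# -> deny") exposes the inferred policy; inconsistencies between inferred
# and specified rules are candidates for targeted testing.
```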
Testing of Cyber-Physical Systems: Diversity-driven Strategies - Lionel Briand
Lionel Briand discusses strategies for testing cyber-physical systems using diversity-driven approaches. He outlines challenges in verifying controllers and decision-making components in cyber-physical systems due to large input spaces and expensive model execution. Briand proposes maximizing diversity of test cases to improve fault detection. He describes using diversity of input signals, output signals, and failure patterns to generate test cases. Search algorithms are used to find test cases that maximize diversity or reveal specific failure patterns. The strategies are shown to significantly outperform coverage-based and random testing on Simulink models.
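One simple way to operationalize diversity, sketched here under the assumption that test inputs are fixed-length signal vectors, is greedy max-min selection: each new test maximizes its Euclidean distance to the closest test already chosen:

```python
import numpy as np

def select_diverse(candidates, k):
    """Greedy max-min selection: each pick maximizes its distance
    to the nearest already-selected test input."""
    selected = [candidates[0]]
    while len(selected) < k:
        dists = [min(np.linalg.norm(c - s) for s in selected)
                 for c in candidates]
        selected.append(candidates[int(np.argmax(dists))])
    return np.array(selected)

rng = np.random.default_rng(0)
signals = rng.uniform(-1, 1, size=(200, 50))   # 200 candidate input signals
suite = select_diverse(signals, k=10)
print(suite.shape)                              # (10, 50)
```

Diversity over output signals or failure patterns follows the same scheme; only the representation and the distance function change.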
Automated Test Suite Generation for Time-Continuous Simulink Models - Lionel Briand
This document summarizes an approach for automated test suite generation for Simulink models with time-continuous behaviors. It discusses two main challenges with existing Simulink testing techniques: incompatibility with the underlying SAT/SMT-based techniques which cannot handle features like time-continuous blocks, and low fault revealing ability when test oracles are manual. The proposed approach uses search-based test generation driven by output diversity and failure patterns to generate test cases that are more likely to reveal faults. An evaluation compares the fault detection capability of the approach to Simulink Design Verifier and finds that the proposed output diversity technique outperforms it. The approach is implemented in a tool called SimCoTest.
Testing the Untestable: Model Testing of Complex Software-Intensive Systems - Lionel Briand
This document discusses model testing as an approach to testing complex, software-intensive systems that are difficult or impossible to fully automate. It presents model testing as shifting the focus of testing from implemented systems to executable models that capture relevant system behavior and properties. Model testing aims to find and execute high-risk test scenarios in large input spaces and help guide targeted testing of implemented systems. Challenges include defining testable models that include dynamic and uncertain behavior, performing effective test selection, and detecting failures under uncertainty.
This document discusses search-based testing and its applications in software testing. It outlines some key strengths of search-based software testing (SBST) such as being scalable, parallelizable, versatile, and flexible. It also discusses some limitations of search-based approaches for problems that require formal verification to establish properties for all possible usages. The document compares classical optimization approaches, which build solutions incrementally, to stochastic optimization approaches used in SBST, which sample solutions in a randomized way. It notes that while testing can find bugs, it cannot prove their absence. Finally, it discusses how SBST can be combined with other techniques like constraint solving and machine learning.
Scalable Software Testing and Verification of Non-Functional Properties throu... - Lionel Briand
This document discusses scalable software testing and verification of non-functional properties through heuristic search and optimization. It describes several projects with industry partners that use metaheuristic search techniques like hill climbing and genetic algorithms to generate test cases for non-functional properties of complex, configurable software systems. The techniques address issues of scalability and practicality for engineers by using dimensionality reduction, surrogate modeling, and dynamically adjusting the search strategy in different regions of the input space. The results provided worst-case scenarios more effectively than random testing alone.
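A condensed sketch of the surrogate-modeling idea with hypothetical data: a cheap regression model learned from a few expensive runs ranks thousands of candidate configurations so that only the most promising reach the real system:

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor

def expensive_run(cfg):
    # Hypothetical stand-in for executing the real configurable system
    # and measuring, e.g., worst-case response time.
    return cfg[0] ** 2 + 3 * cfg[1] + np.sin(5 * cfg[2])

rng = np.random.default_rng(1)
X_seen = rng.uniform(0, 1, size=(30, 3))                 # 30 expensive runs
y_seen = np.array([expensive_run(c) for c in X_seen])
surrogate = RandomForestRegressor(n_estimators=100).fit(X_seen, y_seen)

candidates = rng.uniform(0, 1, size=(5000, 3))           # cheap to generate
scores = surrogate.predict(candidates)                   # cheap to score
worst = candidates[np.argsort(scores)[-5:]]              # likely worst cases
results = [expensive_run(c) for c in worst]              # only 5 real runs
print(max(results))
```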
Can we predict the quality of spectrum-based fault localization? - Lionel Briand
The document discusses predicting the effectiveness of spectrum-based fault localization techniques. It proposes defining metrics to capture aspects of source code, test executions, test suites, and faults. A dataset of 341 instances with 70 variables is generated from Defects4J projects, classifying instances as "effective" or "ineffective" based on fault ranking. Analysis identifies the most influential metrics, finding a combination of static, dynamic, and test metrics can construct a prediction model with excellent discrimination, achieving an AUC of 0.86-0.88. The results suggest effectiveness depends more on code and test complexity than fault type/location, and entangled dynamic call graphs hinder localization.
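The evaluation setup can be pictured with standard tooling; the sketch below uses placeholder data of the stated shape (341 instances, 70 variables) and reports cross-validated AUC:

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(7)
X = rng.normal(size=(341, 70))       # placeholder: 341 instances x 70 metrics
y = rng.integers(0, 2, size=341)     # placeholder: effective vs ineffective

model = RandomForestClassifier(n_estimators=200, random_state=0)
auc = cross_val_score(model, X, y, cv=10, scoring="roc_auc")
print(f"mean AUC: {auc.mean():.2f}")  # the study reports 0.86-0.88 on real data
```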
Test Case Prioritization for Acceptance Testing of Cyber Physical Systems - Lionel Briand
1) The document describes a multi-objective search-based approach for minimizing and prioritizing acceptance test cases for cyber physical systems to address challenges like time overhead, uncertainties in execution time, and risks of hardware damage.
2) The approach models acceptance tests, minimizes test cases by removing redundant operations, and prioritizes test cases using Monte Carlo simulation to estimate execution times while optimizing for criticality and risk (a simulation sketch follows this list).
3) An empirical evaluation on an industrial case study of in-orbit satellite testing shows the approach generates test suites that cover more test cases and have lower hardware risk within time budgets compared to manually created test suites.
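A minimal Monte Carlo sketch of the execution-time estimation step, assuming per-test execution-time distributions are available: sample suite durations repeatedly and estimate the probability that the suite fits the time budget:

```python
import numpy as np

rng = np.random.default_rng(3)
# Hypothetical per-test execution times: (mean, std) in minutes.
tests = [(12.0, 2.0), (8.0, 1.5), (20.0, 5.0), (6.0, 0.5)]
budget = 45.0

samples = np.array([[max(rng.normal(m, s), 0.0) for m, s in tests]
                    for _ in range(10_000)])
total = samples.sum(axis=1)
print(f"P(suite fits budget) ~ {np.mean(total <= budget):.2f}")
# Prioritization can then prefer orderings whose prefix maximizes covered
# criticality while keeping this probability acceptably high.
```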
A practical guide for using Statistical Tests to assess Randomized Algorithms... - Lionel Briand
This document provides guidance on using statistical tests to properly evaluate randomized algorithms in software engineering. It discusses the challenge of determining whether one technique is genuinely better than another and the need for multiple runs to avoid conclusions driven purely by randomness. Statistical tests help establish whether enough runs were performed, though with enough runs even tiny differences can come out as statistically significant, which is why effect sizes matter. The document advocates systematic use of multiple runs and statistical tests to validate results rather than relying on luck, and presents examples where this was crucial, including search-based test generation and vulnerability testing techniques.
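The guide's two staple recommendations, the Mann-Whitney U test and the Vargha-Delaney A12 effect size, are straightforward to apply; a sketch over two hypothetical sets of runs (e.g., coverage achieved by two randomized techniques):

```python
from scipy.stats import mannwhitneyu

a = [0.81, 0.84, 0.79, 0.88, 0.83, 0.85, 0.82, 0.86, 0.80, 0.87]  # technique A
b = [0.78, 0.80, 0.77, 0.82, 0.79, 0.81, 0.76, 0.83, 0.78, 0.80]  # technique B

stat, p = mannwhitneyu(a, b, alternative="two-sided")

def a12(x, y):
    """Vargha-Delaney effect size: P(X > Y) + 0.5 * P(X == Y)."""
    gt = sum(1 for i in x for j in y if i > j)
    eq = sum(1 for i in x for j in y if i == j)
    return (gt + 0.5 * eq) / (len(x) * len(y))

print(f"p = {p:.4f}, A12 = {a12(a, b):.2f}")  # A12 > 0.5 favors technique A
```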
AN EMPIRICAL STUDY ON THE POTENTIAL USEFULNESS OF DOMAIN MODELS FOR COMPLETEN... - Lionel Briand
The study empirically examined the potential usefulness of domain models for detecting incompleteness in natural language requirements (RQ1). Domain models were found to provide useful hints towards requirements omissions, with higher sensitivity to unspecified versus underspecified requirements. Additionally, the sensitivity of domain models in detecting omissions varied based on the intrinsic properties of requirements documents, such as the frequency of key phrases (RQ2). Specifically, key phrases in requirements provided an accurate way to predict domain model sensitivity to omissions without full analysis. Therefore, domain models show promise as an external source for requirements completeness checking.
Testing Dynamic Behavior in Executable Software Models - Making Cyber-physica... - Lionel Briand
This document discusses testing dynamic behavior in executable software models for cyber-physical systems. It presents challenges for model-in-the-loop (MiL) testing due to large input spaces, expensive simulations, and lack of simple oracles. The document proposes using search-based testing to generate critical test cases by formulating it as a multi-objective optimization problem. It demonstrates the approach on an advanced driver assistance system and discusses improving performance with surrogate modeling.
Metamorphic Security Testing for Web Systems - Lionel Briand
Metamorphic testing is proposed to address the oracle problem in web security testing. Relations capture necessary properties between multiple inputs and outputs that must hold when a system is not vulnerable. Experiments on commercial and open source systems show the approach has high sensitivity (58.33-83.33%) and specificity (99.43-99.50%), detecting vulnerabilities without many false alarms. Extensive experiments with 22 relations achieved similar results for Jenkins and Joomla.
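A sketch of one such relation, with a hypothetical fetch() helper standing in for the HTTP layer: an access-controlled page must not return the same content when requested without credentials; if it does, access control is likely broken:

```python
def check_auth_relation(url, fetch, auth_session):
    """Illustrative metamorphic relation: the output for an authorized
    request and for the same request without credentials must differ
    whenever the resource is access-controlled."""
    with_auth = fetch(url, session=auth_session)
    without_auth = fetch(url, session=None)
    if with_auth == without_auth:
        return f"possible access-control vulnerability at {url}"
    return None

# Usage sketch: run the relation over every URL crawled while logged in.
# findings = [r for u in crawled_urls
#             if (r := check_auth_relation(u, fetch, session)) is not None]
```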
Analyzing Natural-Language Requirements: The Not-too-sexy and Yet Curiously D... - Lionel Briand
The document discusses challenges in analyzing natural language requirements and how natural language processing (NLP) techniques can help address these challenges. It describes challenges faced by industry such as ensuring compliance with templates, handling domain knowledge, enabling traceability and change impact analysis, and configuring requirements. It then discusses approaches developed through collaborative research to help with template conformance checking, change impact analysis between requirements, and analyzing impact of changes from requirements to design. The approaches leverage NLP techniques such as text chunking, syntactic and semantic analysis. Evaluation with industrial partners found the approaches to be effective at analyzing hundreds of requirements with high accuracy and limiting unnecessary inspection effort during change.
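For a taste of the underlying NLP building blocks, the sketch below uses spaCy (assuming the en_core_web_sm model is installed) to apply a simple, illustrative template-conformance check of the kind described:

```python
import spacy

nlp = spacy.load("en_core_web_sm")

def conforms_to_template(requirement):
    """Illustrative check: the requirement should use the modal 'shall'
    followed by a verb phrase (a crude stand-in for template checking)."""
    doc = nlp(requirement)
    tokens = [t.lower_ for t in doc]
    if "shall" not in tokens:
        return False
    after_shall = doc[tokens.index("shall") + 1:]
    return any(t.pos_ == "VERB" for t in after_shall)

print(conforms_to_template("The airbag control unit shall deactivate "
                           "the airbag within 50 ms."))         # True
print(conforms_to_template("Airbag deactivation is desirable."))  # False
```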
HITECS: A UML Profile and Analysis Framework for Hardware-in-the-Loop Testing... - Lionel Briand
This document describes HITECS, a UML profile and analysis framework for specifying and analyzing hardware-in-the-loop (HiL) test cases for cyber-physical systems. HITECS addresses challenges with HiL testing such as risks of hardware damage, time budget constraints, and environmental uncertainties. It provides a modeling language to specify test platforms, behaviors, analyses, and schedules. HITECS also supports model checking of test case assertions and simulation of test execution times to help evaluate test cases prior to execution. An empirical evaluation on a satellite testing case study found that HITECS helps engineers define effective assertions, verify test cases efficiently, and accurately estimate execution times.
Comparing Offline and Online Testing of Deep Neural Networks: An Autonomous C... - Lionel Briand
This document summarizes a study comparing offline and online testing of deep neural networks (DNNs) used for autonomous driving. The study found that simulator-generated data can be reliably used as a substitute for real-world data when testing DNNs offline. When comparing offline and online test results, the study found that offline testing is more optimistic and misses many safety violations detected through online testing. The study concludes that online testing is preferable to offline testing for ensuring the safety of DNNs used in autonomous driving systems.
Testing Autonomous Cars for Feature Interaction Failures using Many-Objective... - Lionel Briand
This document proposes a search-based testing approach to automatically detect undesired feature interactions in self-driving systems during early development stages. It defines hybrid test objectives that combine coverage-based, failure-based, and unsafe overriding criteria. A tailored many-objective search algorithm is used to generate test cases that satisfy the objectives. An empirical evaluation on two industrial case study systems found the hybrid objectives revealed significantly more feature interaction failures than baseline objectives. Domain experts validated the identified failures were previously unknown and suggested ways to improve the feature integration logic.
Artificial Intelligence for Automated Software Testing - Lionel Briand
This document provides an overview of applying artificial intelligence techniques such as metaheuristic search, machine learning, and natural language processing to problems in automated software testing. It begins with introductions to software testing, relevant AI techniques including genetic algorithms, machine learning, and natural language processing. It then discusses search-based software testing (SBST) as an application of metaheuristic search to problems in test case generation and optimization. Examples are provided of representing test cases as chromosomes for genetic algorithms and defining fitness functions to guide the search for test cases that maximize code coverage.
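A minimal sketch of that encoding: a chromosome is a vector of input values, and the fitness is a branch-distance measure that reaches zero when a target branch is covered (illustrative toy program, not from the document):

```python
import random

def program_under_test(x, y):
    if x * 2 == y + 10:            # target branch to cover
        return "target"
    return "other"

def fitness(chromosome):
    # Branch distance: how far the inputs are from satisfying x*2 == y+10.
    x, y = chromosome
    return abs(x * 2 - (y + 10))   # 0 means the target branch is covered

population = [(random.randint(-100, 100), random.randint(-100, 100))
              for _ in range(50)]
for _ in range(200):               # simple mutation-only evolution
    parent = min(population, key=fitness)
    child = (parent[0] + random.randint(-3, 3),
             parent[1] + random.randint(-3, 3))
    population.append(child)
    population.sort(key=fitness)
    population = population[:50]   # keep the fittest individuals

best = min(population, key=fitness)
print(best, program_under_test(*best))   # fitness 0 -> prints "target"
```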
PUMConf: A Tool to Configure Product Specific Use Case and Domain Models in a... - Lionel Briand
The document describes PUMConf, a tool that helps configure product-specific use case and domain models from product line models. PUMConf allows modeling variability directly in use case diagrams, specifications, and domain models without using feature models. It provides automated consistency checking of product line models, interactive configuration support with consistency checking of decisions, and automated generation of product-specific models from configured product line models. An evaluation in an industrial case study found the approach and tool to be practical and beneficial for configuring product models in industrial settings.
Decision Support for Security-Control Identification Using Machine Learning - Lionel Briand
The document discusses using machine learning to provide automated decision support for identifying relevant security controls based on analyzing historical risk assessment data from past projects. It outlines extracting features from historical assessment records and building a classification model to predict which security controls should apply to new projects based on similar past projects. The goal is to help address challenges around the time-consuming nature of manually identifying appropriate security controls.
Applying Product Line Use Case Modeling in an Industrial Automotive Embedde... - Lionel Briand
1. The document describes a refined approach to product line use case modeling called PUM that was applied in an automotive embedded system project at IEE.
2. PUM models variability in use case diagrams, specifications, and domain models using extensions to existing modeling artifacts like use case diagrams and restricted use case modeling.
3. The approach aims to support change management and impact analysis in product lines while limiting additional modeling overhead.
Automating System Test Case Classification and Prioritization for Use Case-Dr... - Lionel Briand
This document proposes an approach to automate system test case classification, creation of new test cases, and prioritization for use case-driven testing in product line engineering. The approach classifies previous test cases as reusable, retestable or obsolete based on use case scenario models. It generates guidance to update test cases based on differences between old and new scenarios. It also identifies new scenarios not covered by previous tests to define new test cases. Finally, it prioritizes the test suite for a new product using test execution history and other information. The overall goal is to maximize reuse of test assets from existing products and rely on requirements rather than source code analysis.
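The classification step can be pictured as simple set logic over scenario models; a sketch with hypothetical scenario representations:

```python
def classify_test(test, old_scenarios, new_scenarios):
    """Illustrative classification for a new product:
    - reusable:   its scenario is unchanged between products
    - retestable: the scenario still exists but its steps changed
    - obsolete:   the scenario no longer exists."""
    sid = test["scenario"]
    if sid not in new_scenarios:
        return "obsolete"
    if new_scenarios[sid] == old_scenarios.get(sid):
        return "reusable"
    return "retestable"

old = {"S1": ["insert card", "enter PIN", "dispense"], "S2": ["check balance"]}
new = {"S1": ["insert card", "enter PIN", "verify limit", "dispense"]}
print(classify_test({"scenario": "S1"}, old, new))  # retestable
print(classify_test({"scenario": "S2"}, old, new))  # obsolete
```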
Incremental Reconfiguration of Product Specific Use Case Models for Evolving ... - Lionel Briand
1. The document discusses an approach for incrementally reconfiguring product-specific use case models when configuration decisions evolve over time.
2. Product-specific models are regenerated by focusing only on the changed decisions and their side effects, preserving unaffected parts and existing trace links.
3. The approach involves matching decision model elements before and after changes, calculating differences, and using these to reconfigure use case diagrams and specifications while generating an impact report.
Improving layout and workload of manufacturing system using Delmia Quest simu... - AM Publications
This paper describes a case study of analyzing and optimizing the facility layout of a manufacturing cell using a systematic search method and a Quest computer simulation model with graphical representation of the manufacturing processes. The objectives of the simulation model were to obtain a layout design that achieves high productivity in the flexible manufacturing system (FMS), to determine bottleneck locations, and to establish the optimal batch size. The Quest software proved to be a powerful tool for assessing what changes should be made to a manufacturing cell before implementing manufacturing improvements and/or making actual capital investments. The aim of this study is to understand the cell and its behaviour with regard to production, and to use the simulation software to change, analyse and improve the cell.
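The same style of analysis can be prototyped outside Quest; a minimal discrete-event sketch with SimPy and hypothetical cycle times, showing how utilization figures expose the bottleneck in a two-machine cell:

```python
import simpy

PARTS, T1, T2 = 50, 4.0, 6.0     # part count and per-machine cycle times

def part_flow(env, m1, m2, busy):
    with m1.request() as r:       # stage 1
        yield r
        start = env.now
        yield env.timeout(T1)
        busy["m1"] += env.now - start
    with m2.request() as r:       # stage 2
        yield r
        start = env.now
        yield env.timeout(T2)
        busy["m2"] += env.now - start

env = simpy.Environment()
m1, m2 = simpy.Resource(env, 1), simpy.Resource(env, 1)
busy = {"m1": 0.0, "m2": 0.0}
for _ in range(PARTS):
    env.process(part_flow(env, m1, m2, busy))
env.run()
for m, b in busy.items():
    print(f"{m}: utilization {b / env.now:.0%}")   # m2 saturates first
```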
Automated and Scalable Solutions for Software Testing: The Essential Role of ... - Lionel Briand
1) Modeling plays an essential role in enabling automated and scalable software testing solutions across many industrial domains like automotive, aerospace, and healthcare.
2) Models of requirements, system architecture, and environment behavior can be used to guide test generation, derive oracles, and enable early system testing through simulation.
3) Effective test automation solutions combine models with techniques like optimization, constraint solving, and natural language processing to address challenges of scalability, oracle generation, and exploring large test input spaces.
The Maestro framework implemented by the validation group at Cirrus Logic provides GUI-based test automation and management for mixed signal validation. It leads to a 66% reduction in testing time through a modular structure with configuration files, a MATLAB GUI, and reusable validation scripts. Key benefits include abstracted test development and execution, standardized methodologies, and a system for monitoring and logging test results.
The Cloud computing paradigm emerged by establishing new resource provisioning and consumption models. Together with improved resource management techniques, these models have contributed to an increase in the number of application developers who favor partially or completely migrating their applications to a highly scalable, pay-per-use infrastructure. In this paper we derive a set of functional and non-functional requirements and propose a process-based approach to support the optimal distribution of an application in the Cloud in order to handle workloads that fluctuate over time. Using the TPC-H workload as the basis, and by means of empirical workload analysis and characterization, we evaluate the performance of the application's persistence layer under different deployment scenarios using generated workloads with particular behavior characteristics.
Gale Technologies - A Leading Innovative Software Solutions Provider Explains... - Galetech
This document outlines a 7-step process for automating a network test lab using physical-layer switches. The steps include: 1) Assessing automation needs; 2) Deciding what to automate; 3) Choosing infrastructure like switches; 4) Choosing a lab management solution; 5) Planning the architecture; 6) Setting the administrative framework; and 7) Integrating with existing automation. The goal is to enable dynamic test beds, shorten test cycles, and improve testing through remote access and parallel tests. Physical-layer switches allow automated reconfiguration to quickly set up and run multiple tests.
How Manual Testers Can Break into Automation Without Programming Skills - Ranorex
Adoption of test automation has not happened as quickly as organizations need. As more companies adopt agile development as their software development lifecycle, more features are implemented and released more quickly. This leaves less time for full regression testing of the system, which should nonetheless still be done. Manual testers therefore need to grow into test automation roles as well.
Learn how to make this jump as a manual tester and where to focus first, e.g., automation test structure, object recognition, and results interpretation.
An Algorithm Based Simulation Modeling For Control of Production Systems - IJMER
This document describes an algorithm-based simulation approach for modeling and controlling flexible production systems. The approach models both the physical production system and the control system to evaluate their integrated performance. Key features include:
1) The approach integrates control system design into the physical simulation to evaluate their combined impact.
2) The algorithm-based design is extensible and allows modeling of different control programs and production system designs.
3) Finite automata formalism provides a mathematical foundation for logical and quantitative analysis of the system (see the sketch after this list).
4) The framework facilitates robust controller models that can resolve issues like deadlocks and accommodate failures.
5) Analysts can evaluate how different control programs and production system designs impact overall system performance.
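As a toy illustration of the automata-based analysis (not the paper's model), the sketch below explores a small transition system and reports reachable states with no outgoing transitions, i.e., deadlocks a controller must avoid or resolve:

```python
from collections import deque

# Toy finite automaton of a two-machine cell: states are (m1, m2) statuses.
transitions = {
    ("idle", "idle"): [("busy", "idle")],
    ("busy", "idle"): [("idle", "busy")],
    ("idle", "busy"): [("busy", "busy"), ("idle", "idle")],
    ("busy", "busy"): [],          # both blocked waiting on each other
}

def reachable_deadlocks(start):
    """BFS over the automaton; reachable states with no outgoing
    transition are deadlocks."""
    seen, queue, deadlocks = {start}, deque([start]), []
    while queue:
        s = queue.popleft()
        succ = transitions.get(s, [])
        if not succ:
            deadlocks.append(s)
        for n in succ:
            if n not in seen:
                seen.add(n)
                queue.append(n)
    return deadlocks

print(reachable_deadlocks(("idle", "idle")))   # [('busy', 'busy')]
```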
Project FoX: A Tool That Offers Automated Testing Using a Formal Approach - Ivo Neskovic
"Software Testing is the process of executing a program or system with the intent of finding errors.", Myers, 1979. The most important activity in this process is designing the required set of effective test cases. Thus, the problem is narrowed down to determining the exact number of required test cases and increasing their effectiveness.
Project FoX is a production-ready tool developed in Java, which offers Java developers the opportunity to leverage the proven theories and concepts of formal testing using generalized state automata (X-Machines) as a theoretical model of computation. The formal testing strategy FoX applies is proven to generate a complete test set that ensures the correctness of the implementation with respect to the specification.
FoX implements a novel, fully automated testing process, ranging from complete test set generation to test preparation and execution. This method can be applied to any Java-based software system, regardless of its underlying technologies. Utilizing a formal approach provides unambiguous test cases that are objective and do not depend on the tester's experience and intuition.
The formal testing strategy provides functional testing that tests not only for the desired system behaviour (the system does what it should) but also tests that the system has no undesired behaviour (the system does not do anything it should not do).
This short presentation will strive to give the audience an overview of the formal testing methodology and a demonstration of the tool (FoX). It will also showcase a real-life demo of the project FoX, applied to a Java SE application and will discuss how the methodology can be applied to any Java EE or ME application.
Anyone with a software engineering background will be able to easily follow the talk and understand the benefits which this process offers to modern day software engineering.
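For a flavor of how test sets are derived from state-based models, here is a sketch over a plain finite automaton (a simplification of an X-Machine) that generates a transition cover: one input sequence per transition, each reached via a shortest path from the initial state:

```python
from collections import deque

# Toy specification automaton: state -> {input: next_state}
fsm = {
    "logged_out": {"login": "logged_in"},
    "logged_in": {"browse": "logged_in", "logout": "logged_out"},
}

def shortest_paths(start):
    """BFS giving, for each state, a shortest input sequence reaching it."""
    paths, queue = {start: []}, deque([start])
    while queue:
        s = queue.popleft()
        for inp, nxt in fsm[s].items():
            if nxt not in paths:
                paths[nxt] = paths[s] + [inp]
                queue.append(nxt)
    return paths

def transition_cover(start="logged_out"):
    reach = shortest_paths(start)
    return [reach[s] + [inp] for s in fsm for inp in fsm[s]]

for seq in transition_cover():
    print(seq)
# [['login'], ['login', 'browse'], ['login', 'logout']]
```

Complete X-Machine test generation additionally exercises the processing functions on each transition, which is what yields the correctness guarantee relative to the specification.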
Advisian dynamic process simulation capability, June 2019 - Advisian
Dynamic process simulation predicts not only how a system is expected to behave when operating at its targeted design point, but also how it will behave when away from that design point.
Making Model-Driven Verification Practical and Scalable: Experiences and Less... - Lionel Briand
The document discusses experiences and lessons learned from making model-driven verification practical and scalable. It describes several projects collaborating with industry partners to develop model-based solutions for verification. Key challenges addressed include achieving applicability for engineers, scalability to large systems, and developing solutions informed by real-world problems. Lessons learned emphasize the importance of collaborative applied research, defining problems in context, and validating solutions realistically.
Learning Software Performance Models for Dynamic and Uncertain Environments - Pooyan Jamshidi
This document provides background on Pooyan Jamshidi's research related to learning software performance models for dynamic and uncertain environments. It summarizes his past work developing techniques for modeling and optimizing performance across different systems and environments, including using transfer learning to reuse performance data from related sources to build more accurate models with fewer measurements. It also outlines opportunities for using transfer learning to adapt performance models to new environments and systems.
The document discusses two approaches - statechart execution and goal-oriented - for reconciling system requirements and runtime behavior in autonomic computing systems. The statechart execution approach uses state machine models defined at design time to select alternative component implementations at runtime. The goal-oriented approach identifies high-level goals, services, and constraints and monitors runtime behavior to determine when goals are not met and trigger remedial actions. Both approaches aim to leverage requirements and design to facilitate system adaptation while avoiding unpredictable behavior, but differ in how runtime support is provided.
A tale of bug prediction in software development - Martin Pinzger
This document discusses using fine-grained source code changes (SCC) to predict bug-prone files in software projects. It presents research that analyzed SCC data from Eclipse projects to predict bugs. The research found that SCC correlated more strongly with bugs than traditional code churn measures, and SCC-based models better predicted bug-prone files and estimated the number of bugs in files compared to models using code churn.
The document summarizes the role of testing in the software development life cycle (SDLC). It discusses SDLC models like waterfall and V-model and covers the software testing life cycle. This includes test planning, use case scenarios, test cases, test types like unit, integration, and system testing. It also discusses test deliverables like scenarios and test cases and the bug life cycle.
Nowadays, we are surrounded by systems of systems, autonomous systems, interconnected systems, and distributed heterogeneous systems of ever-increasing architectural complexity.
Keeping these systems operational is a challenge, as the number of potential failures that may affect their availability also increases drastically. To optimize availability, maintenance activities have to be designed during the system's design phase.
Whatever the implementation choice, detection, diagnosis, or prevention of failures requires tests.
The push towards autonomy also calls for embedded detection and prevention capabilities, and thus for joint reasoning and decision-making between system engineers and maintenance engineers so that solutions are shared across their respective activities.
In this presentation, we talk about the ability of a system designed with Capella to be tested, including in the maintenance phase. This means to interconnect several kinds of models representing different perspectives: System Design (MBSE), RAMS Analysis (Reliability, Availability, Maintainability and Safety) and Testability.
We present how a MBSE approach with Capella can be used to initiate a testability study performed with the eXpress tool from DSI International.
Precise and Complete Requirements? An Elusive Goal - Lionel Briand
The document discusses the challenges of achieving precise and complete requirements upfront in software development projects. It notes that while academics assume detailed requirements are needed, practitioners find this difficult to achieve in reality due to limited resources, uncertainty, and changing needs. The document provides perspectives from practice that emphasize starting with prototypes and visions rather than detailed specifications. It also summarizes research finding diverse requirements practices across different domains and organizations. The document concludes that while precise requirements may be desirable, they are often elusive goals, and the focus should be on achieving compliance and delivering working software.
Large Language Models for Test Case Evolution and Repair - Lionel Briand
Large language models show promise for test case repair tasks. LLMs can be applied to tasks like test case generation, classification of flaky tests, and test case evolution and repair. The paper presents TaRGet, a framework that uses LLMs for automated test case repair. TaRGet takes as input a broken test case and code changes to the system under test, and outputs a repaired test case. Evaluation shows TaRGet achieves over 80% plausible repair accuracy. The paper analyzes repair characteristics, evaluates different LLM and input/output formats, and examines the impact of fine-tuning data size on performance.
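The input/output format question can be made concrete with a sketch; the prompt layout and the call_llm placeholder below are hypothetical, not TaRGet's actual format:

```python
def build_repair_prompt(broken_test: str, sut_diff: str) -> str:
    """Assemble a test-repair prompt from the broken test code and the
    change to the system under test (illustrative layout only)."""
    return (
        "The following test no longer compiles or passes after a change "
        "to the system under test.\n\n"
        f"### Broken test\n{broken_test}\n\n"
        f"### Code change (diff)\n{sut_diff}\n\n"
        "### Task\nReturn only the repaired test method."
    )

# repaired = call_llm(build_repair_prompt(broken_test, sut_diff))
# A repair is 'plausible' if it compiles and passes against the new version.
```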
Metamorphic Testing for Web System Security - Lionel Briand
This document summarizes a presentation on metamorphic testing for web system security given by Nazanin Bayati on September 13, 2023. Metamorphic testing uses relations between the outputs of multiple test executions to test systems when specifying expected outputs is difficult. It was applied to web systems by generating follow-up inputs based on transformations of valid interactions and checking that output relations held. The approach detected over 60% of vulnerabilities in tested systems and addressed more vulnerability types than static and dynamic analysis tools. It provides an effective and automated way to test for security issues in web systems.
Simulator-based Explanation and Debugging of Hazard-triggering Events in DNN-... - Lionel Briand
This document proposes a method called SEDE (Simulator-based Explanations for DNN Errors) to automatically generate explanations for errors in DNN-based safety-critical systems by constraining simulator parameters. SEDE first identifies clusters of error-inducing images, then uses an evolutionary algorithm to generate simulator images within each cluster, including failing, passing, and representative images. SEDE extracts rules characterizing the unsafe parameter space and uses the generated images to retrain DNNs, improving accuracy compared to alternative methods. The paper evaluates SEDE on head pose and face landmark detection DNNs in terms of generating diverse cluster images, delimiting unsafe spaces, and enhancing DNN performance.
This document summarizes a research paper on using grey-box fuzzing (MOTIF) for mutation testing of C/C++ code in cyber-physical systems (CPS). It introduces mutation testing and grey-box fuzzing, and proposes MOTIF which generates a fuzzing driver to test functions with live mutants. An empirical evaluation compares MOTIF to symbolic execution-based mutation testing on three subject programs. MOTIF killed more mutants within 10,000 seconds and was able to test programs that symbolic execution could not handle due to limitations like floating-point values. Seed inputs alone killed few mutants, showing the importance of fuzzing. MOTIF is an effective approach for mutation testing of CPS software.
Data-driven Mutation Analysis for Cyber-Physical Systems - Lionel Briand
Data-driven mutation analysis is proposed to assess if test suites for cyber-physical systems properly exercise component interoperability. Fault models are developed for different data types and dependencies, and are used to automatically generate mutants by injecting faults. Empirical results on industrial systems demonstrate the feasibility and effectiveness of the approach in identifying test suite shortcomings and poor oracles.
Many-Objective Reinforcement Learning for Online Testing of DNN-Enabled Systems - Lionel Briand
This document proposes MORLOT (Many-Objective Reinforcement Learning for Online Testing) to address challenges in online testing of DNN-enabled systems. MORLOT leverages many-objective search and reinforcement learning to choose test actions. It was evaluated on the Transfuser autonomous driving system in the CARLA simulator using 6 safety requirements. MORLOT was significantly more effective and efficient at finding safety violations than random search or other many-objective approaches, achieving a higher average test effectiveness for any given test budget.
ATM: Black-box Test Case Minimization based on Test Code Similarity and Evolu... - Lionel Briand
1. The document presents ATM, a new approach for black-box test case minimization that transforms test code into abstract syntax trees and uses tree-based similarity measures and genetic algorithms to minimize test suites.
2. ATM was evaluated on the DEFECTS4J dataset and achieved a fault detection rate of 0.82 on average, significantly outperforming existing techniques, while requiring only practical execution times.
3. The best configuration of ATM used a genetic algorithm with a combined similarity measure, achieving a fault detection rate of 0.80 within 1.2 hours on average.
Black-box Safety Analysis and Retraining of DNNs based on Feature Extraction ... - Lionel Briand
The document is a journal paper that proposes a method for black-box safety analysis and retraining of deep neural networks (DNNs) based on feature extraction and clustering of failure-inducing images. The method uses a pre-trained VGG16 model to extract features from failure images, clusters the features using DBSCAN, selects clusters that likely caused failures, and retrains the DNN to improve safety based on images in problematic clusters. An empirical evaluation on various DNNs for tasks like gaze detection showed the method effectively determined failure causes through clustering and improved models with fewer images than other approaches.
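A condensed sketch of the pipeline's first two steps with standard libraries, assuming a directory of failure-inducing images (paths below are hypothetical):

```python
import numpy as np
from tensorflow.keras.applications.vgg16 import VGG16, preprocess_input
from tensorflow.keras.preprocessing import image
from sklearn.cluster import DBSCAN

# Pre-trained VGG16 as a fixed feature extractor (pooled conv features).
extractor = VGG16(weights="imagenet", include_top=False, pooling="avg")

def features(path):
    img = image.load_img(path, target_size=(224, 224))
    x = preprocess_input(np.expand_dims(image.img_to_array(img), axis=0))
    return extractor.predict(x, verbose=0)[0]

paths = ["failures/img_001.png", "failures/img_002.png"]  # hypothetical files
X = np.stack([features(p) for p in paths])

labels = DBSCAN(eps=0.5, min_samples=2, metric="cosine").fit_predict(X)
# Each non-noise cluster groups failures with a likely common root cause;
# its images are candidates for augmenting the retraining set.
print(labels)
```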
PRINS: Scalable Model Inference for Component-based System Logs - Lionel Briand
PRINS is a technique for scalable model inference of component-based system logs. It divides the problem into inferring individual component models and then stitching them together. The paper evaluates PRINS on several systems and compares its execution time and accuracy to MINT, a state-of-the-art model inference tool. Results show that PRINS is significantly faster than MINT, especially on larger logs, with comparable accuracy. However, stitching component models can result in larger overall system models. The paper contributes an empirical evaluation of the PRINS technique and makes its implementation publicly available.
Revisiting the Notion of Diversity in Software TestingLionel Briand
The document discusses the concept of diversity in software testing. It provides examples of how diversity has been applied in various testing applications, including test case prioritization and minimization, mutation analysis, and explaining errors in deep neural networks. The key aspects of diversity discussed are the representation of test cases, measures of distance or similarity between cases, and techniques for maximizing diversity. The document emphasizes that the best approach depends on factors like information access, execution costs, and the specific application context.
Applications of Search-based Software Testing to Trustworthy Artificial Intel...Lionel Briand
This document discusses search-based approaches for testing artificial intelligence systems. It covers testing at different levels, from model-level testing of individual machine learning components to system-level testing of AI-enabled systems. At the model level, search-based techniques are used to generate test inputs that target weaknesses in deep learning models. At the system level, simulations and reinforcement learning are used to test AI components integrated into complex systems. The document outlines many open challenges in AI testing and argues that search-based approaches are well suited to address them, given the complex, non-linear behaviors of AI systems.
Autonomous Systems: How to Address the Dilemma between Autonomy and SafetyLionel Briand
Autonomous systems present safety challenges due to their complexity and use of machine learning. Two key approaches are needed to address these challenges: (1) design-time assurance cases to validate safety requirements and (2) run-time monitoring architectures to detect unsafe behavior. Automated testing techniques leveraging metaheuristics and machine learning can help provide evidence for assurance cases and learn conditions to guide run-time monitoring. However, more industrial experience is still needed to properly validate these approaches at scale for autonomous systems.
Mathematicians, Social Scientists, or Engineers? The Split Minds of Software ...Lionel Briand
This document discusses the split identities of software engineering researchers between being mathematicians, social scientists, or engineers. It notes there are three main communities - formal methods and guarantees, human and social studies, and engineering automated solutions - that have different backgrounds, languages, and research methods. While diversity is good, the communities need to be better connected to work together to solve problems. The document calls for more demand-driven, collaborative research with industry to have a greater impact and produce practical solutions.
Reinforcement Learning for Test Case PrioritizationLionel Briand
1) The document discusses using reinforcement learning for test case prioritization in continuous integration environments. It compares different ranking models (listwise, pairwise, pointwise) and reinforcement learning algorithms.
2) Pairwise and pointwise ranking models generally perform better than listwise, and pairwise training times are better than pointwise. The best configuration is pairwise ranking with the ACER algorithm.
3) When compared to traditional machine learning ranking models, the best reinforcement learning configuration provides significantly better ranking accuracy than the state-of-the-art MART model.
4) However, relying solely on test execution history may not provide sufficient features for an accurate prioritization policy, regardless of the approach. Enriched datasets with more features are therefore needed.
Mutation Analysis for Cyber-Physical Systems: Scalable Solutions and Results ...Lionel Briand
The document summarizes a paper that presents Mutation Analysis for Space Software (MASS), a scalable and automated pipeline for mutation testing of cyber-physical systems software in the space domain. The pipeline includes steps to create mutants, sample and prioritize mutants, discard equivalent mutants, and compute mutation scores. An empirical evaluation on space software case studies found that MASS provides accurate mutation scores with fewer sampled mutants compared to other sampling approaches. It also enables significant time savings over non-optimized mutation analysis through test case prioritization and reduction techniques. MASS helps uncover weaknesses in test suites and ensures thorough software testing for safety-critical space systems.
On Systematically Building a Controlled Natural Language for Functional Requi...Lionel Briand
The document presents a qualitative methodology for systematically building a controlled natural language (CNL) for functional requirements. It describes extracting requirements from software requirements specifications, identifying codes within the requirements, labeling and grouping the requirements, creating a grammar by identifying the content in requirements and deriving grammar rules. An evaluation of the developed CNL called Rimay showed it could express 88% of requirements from unseen documents and reached stability after analyzing three documents.
Efficient Online Testing for DNN-Enabled Systems using Surrogate-Assisted and...Lionel Briand
This document proposes SAMOTA, a surrogate-assisted many-objective optimization approach for online testing of DNN-enabled systems. SAMOTA uses global and local surrogate models to replace expensive function evaluations. It clusters local data points and builds individual surrogate models for each cluster, rather than one model for all data. An evaluation on a DNN-enabled autonomous driving system shows SAMOTA achieves better test effectiveness and efficiency than alternative approaches, and clustering local data points leads to more effective local searches than using a single local model. SAMOTA is an effective method for online testing of complex DNN systems.
Exploring Wayland: A Modern Display Server for the FutureICS
Wayland is revolutionizing the way we interact with graphical interfaces, offering a modern alternative to the X Window System. In this webinar, we’ll delve into the architecture and benefits of Wayland, including its streamlined design, enhanced performance, and improved security features.
TestMigrationsInPy: A Dataset of Test Migrations from Unittest to Pytest (MSR...Andre Hora
Unittest and pytest are the most popular testing frameworks in Python. Overall, pytest provides some advantages, including simpler assertion, reuse of fixtures, and interoperability. Due to such benefits, multiple projects in the Python ecosystem have migrated from unittest to pytest. To facilitate the migration, pytest can also run unittest tests, thus, the migration can happen gradually over time. However, the migration can be timeconsuming and take a long time to conclude. In this context, projects would benefit from automated solutions to support the migration process. In this paper, we propose TestMigrationsInPy, a dataset of test migrations from unittest to pytest. TestMigrationsInPy contains 923 real-world migrations performed by developers. Future research proposing novel solutions to migrate frameworks in Python can rely on TestMigrationsInPy as a ground truth. Moreover, as TestMigrationsInPy includes information about the migration type (e.g., changes in assertions or fixtures), our dataset enables novel solutions to be verified effectively, for instance, from simpler assertion migrations to more complex fixture migrations. TestMigrationsInPy is publicly available at: https://github.com/altinoalvesjunior/TestMigrationsInPy.
Not So Common Memory Leaks in Java WebinarTier1 app
This SlideShare presentation is from our May webinar, “Not So Common Memory Leaks & How to Fix Them?”, where we explored lesser-known memory leak patterns in Java applications. Unlike typical leaks, subtle issues such as thread local misuse, inner class references, uncached collections, and misbehaving frameworks often go undetected and gradually degrade performance. This deck provides in-depth insights into identifying these hidden leaks using advanced heap analysis and profiling techniques, along with real-world case studies and practical solutions. Ideal for developers and performance engineers aiming to deepen their understanding of Java memory management and improve application stability.
Who Watches the Watchmen (SciFiDevCon 2025)Allon Mureinik
Tests, especially unit tests, are the developers’ superheroes. They allow us to mess around with our code and keep us safe.
We often trust them with the safety of our codebase, but how do we know that we should? How do we know that this trust is well-deserved?
Enter mutation testing – by intentionally injecting harmful mutations into our code and seeing if they are caught by the tests, we can evaluate the quality of the safety net they provide. By watching the watchmen, we can make sure our tests really protect us, and we aren’t just green-washing our IDEs to a false sense of security.
Talk from SciFiDevCon 2025
https://www.scifidevcon.com/courses/2025-scifidevcon/contents/680efa43ae4f5
Join Ajay Sarpal and Miray Vu to learn about key Marketo Engage enhancements. Discover improved in-app Salesforce CRM connector statistics for easy monitoring of sync health and throughput. Explore new Salesforce CRM Synch Dashboards providing up-to-date insights into weekly activity usage, thresholds, and limits with drill-down capabilities. Learn about proactive notifications for both Salesforce CRM sync and product usage overages. Get an update on improved Salesforce CRM synch scale and reliability coming in Q2 2025.
Key Takeaways:
Improved Salesforce CRM User Experience: Learn how self-service visibility enhances satisfaction.
Utilize Salesforce CRM Synch Dashboards: Explore real-time weekly activity data.
Monitor Performance Against Limits: See threshold limits for each product level.
Get Usage Over-Limit Alerts: Receive notifications for exceeding thresholds.
Learn About Improved Salesforce CRM Scale: Understand upcoming cloud-based incremental sync.
What Do Contribution Guidelines Say About Software Testing? (MSR 2025)Andre Hora
Software testing plays a crucial role in the contribution process of open-source projects. For example, contributions introducing new features are expected to include tests, and contributions with tests are more likely to be accepted. Although most real-world projects require contributors to write tests, the specific testing practices communicated to contributors remain unclear. In this paper, we present an empirical study to understand better how software testing is approached in contribution guidelines. We analyze the guidelines of 200 Python and JavaScript open-source software projects. We find that 78% of the projects include some form of test documentation for contributors. Test documentation is located in multiple sources, including CONTRIBUTING files (58%), external documentation (24%), and README files (8%). Furthermore, test documentation commonly explains how to run tests (83.5%), but less often provides guidance on how to write tests (37%). It frequently covers unit tests (71%), but rarely addresses integration (20.5%) and end-to-end tests (15.5%). Other key testing aspects are also less frequently discussed: test coverage (25.5%) and mocking (9.5%). We conclude by discussing implications and future research.
Landscape of Requirements Engineering for/by AI through Literature ReviewHironori Washizaki
Hironori Washizaki, "Landscape of Requirements Engineering for/by AI through Literature Review," RAISE 2025: Workshop on Requirements engineering for AI-powered SoftwarE, 2025.
Supporting Change in Product Lines within the Context of Use Case-driven Development and Testing
1. Supporting Change in Product Lines within the Context of Use Case-driven Development and Testing
Lionel Briand
SnT Centre for Security, Reliability and Trust
University of Luxembourg, Luxembourg
[Logo: SVV, the Software Verification & Validation Lab (svv.lu)]
5. Common “PLM” Practice
[Diagram: Requirements analysts derive STO requirements for customers C1, C2, and C3, each set evolving from the previous one by copy and modify; test engineers derive the STO test suite for each customer from its requirements, each suite evolving from the previous one by select, prioritize, and modify.]
6. PLM Practice
• Despite many years of academic research, most companies follow ad hoc product line practices.
• There is no systematic, convenient way to handle variability in requirements across customers.
• Configuration of product-specific (PS) requirements is manual, expensive, and error prone.
• Regression testing in product families is manual and expensive.
7. Root Causes?
• Representations of requirements and variability matter
• Assumptions about requirements modeling
• Overhead: variability modeling (e.g., feature modeling), traceability, etc.
• Lack of adequate, practical tooling providing benefits
• Solution: make realistic assumptions, minimize overhead
8. Research Questions
• RQ1: How to model variability in use case and domain models without additional traceability to feature models?
• RQ2: How to support requirements analysts in
- making requirements configuration decisions?
- generating product-specific use case and domain models?
- performing change impact analysis in use case models of a product family?
• RQ3: How to classify and prioritize system test cases for new products in an effective manner?
9. Proposed Methodology
[Diagram: The methodology starts from modeling variability in use case and domain models, producing the PL use case diagram, PL use case specifications, and PL domain model. These feed the interactive and automated configuration of PS use case and domain models, change impact analysis for configuration decision changes, and incremental reconfiguration of PS models, yielding the PS use case diagram, PS use case specifications, PS domain model, and an impact analysis report. Automated test case classification and prioritization then produces a prioritized test suite for the new product. (PL: product line; PS: product specific.)]
10. Product Line Use Case Modeling Method (PUM)
[Diagram: Same methodology overview as above, highlighting the first step: modeling variability in use case and domain models.]
11. Related Work in Product Line Use Case Driven Development
• Relating feature models and use cases [Griss et al., 1998; Eriksson et al., 2009; Buhne et al., 2006]
• Modeling variability either in use case diagrams or in use case specifications [Azevedo et al., 2012; John and Muthig, 2004; Halmans and Pohl, 2003]
These approaches entail additional modeling and traceability effort in practice.
12. Objective
• Model variability in use case models in a practical way by:
- relying only on artifacts commonly used in use case driven development
- enabling automated guidance for product configuration and testing
13. Modeling Method: PUM
A modeling method that covers three artifacts:
1. Model variability in use case diagrams (reuse existing work)
2. Model variability in use case specifications (introduce new extensions)
3. Model variability in domain models (reuse existing work)
[Yue et al., TOSEM'13; Ziadi and Jezequel, SPLC'06; Halmans and Pohl, SoSyM'03]
14. Overview of PUM
[Diagram: Step 1 elicits the product line models, namely the PL use case diagram, PL use case specifications, and PL domain model, using UML profiles and natural language processing. Step 2 checks consistency among the artifacts and produces a list of inconsistencies.]
15. Excerpt of STO Product Line Use Case Diagram
[Diagram: The STO System use case diagram with actors Sensors, Tester, and STO Controller; essential use cases Recognize Gesture, Identify System Operating Status, and Provide System Operating Status; variation points Storing Error Status, Clearing Error Status, and Method of Clearing Error Status; and variant use cases Store Error Status, Clear Error Status, Clear Error Status via Diagnostic Mode, and Clear Error Status via IEE QC Mode. Callouts highlight a variation point and a variant use case.]
16. Excerpt of STO Product Line Use Case Diagram
[Diagram: Same excerpt, zooming in on the optional (0..1) variation point Storing Error Status, included by Identify System Operating Status, with its variant use case Store Error Status.]
17. Excerpt of STO Product Line Use Case Diagram
[Diagram: Same excerpt, highlighting the mandatory variation point Method of Clearing Error Status (cardinality 1..1) and its variant use cases Clear Error Status via Diagnostic Mode and Clear Error Status via IEE QC Mode.]
19. Excerpt of STO Product Line Use Case Diagram
[Diagram: Same excerpt, highlighting the <<require>> dependency from the variation point Clearing Error Status (variant use case Clear Error Status) to Storing Error Status (variant use case Store Error Status), both optional with cardinality 0..1.]
20. Modeling Method: PUM
[Diagram: Overview of PUM repeated. Step 1 elicits the product line models; step 2 checks conformance and consistency among the artifacts, producing a list of inconsistencies.]
21. Restricted Use Case Modeling: RUCM
• RUCM is a use case modeling approach based on:
- a use case template
- a set of well-defined restriction rules and keywords
[Yue et al., TOSEM'13]
22. RUCM Use Case Example
Use Case: Recognize Gesture
Brief Description: The System is recognizing a gesture.
Primary Actor: Device Sensors
Secondary Actors: STO Controller
Basic Flow
1. The system REQUESTS the move capacitance FROM the sensor.
2. INCLUDE USE CASE Identify system operating status.
3. The system VALIDATES THAT the operating status is valid.
4. The system SENDS the valid kick status TO STO controller.
Postcondition: The kick status has been sent.
Specific Alternative Flow
RFS 3
1. ABORT.
Postcondition: The operating status has been set as not OK and the STO Controller has not been informed about the gesture.
23. RUCM Extension (1)
• Keyword: INCLUDE VARIATION POINT: ...
• Variation points can be included in basic or alternative flows of use cases
Use Case: Identify system operating status
Basic Flow
1. The system VALIDATES THAT the ROM is valid.
2. The system VALIDATES THAT the RAM is valid.
3. The system VALIDATES THAT the sensors are valid.
4. The system VALIDATES THAT there is no error detected.
Specific Alternative Flow
RFS 4
1. INCLUDE VARIATION POINT: Storing error status.
2. ABORT.
24. RUCM Extension (2)
• Keyword: VARIANT for non-mandatory use cases
Variant Use Case: Clear Error Status
Basic Flow
1. The Tester SENDS the clear error status request TO the System.
2. INCLUDE VARIATION POINT: Method of clearing error status.
Postcondition: The stored errors have been cleared and the clear error status answer for successful clearing has been provided to the Tester.
25. RUCM Extension (3)
• Keyword: OPTIONAL for non-mandatory steps and non-mandatory alternative flows
Variant Use Case: Provide System User Data via Standard Mode
Basic Flow
1. OPTIONAL STEP: The system VALIDATES THAT the “switch off communication” feature is disabled.
2. OPTIONAL STEP: The system SENDS calibration data TO the Tester.
3. OPTIONAL STEP: The system SENDS trace data TO the Tester.
4. OPTIONAL STEP: The system SENDS error data TO the Tester.
5. OPTIONAL STEP: The system SENDS sensor data TO the Tester.
OPTIONAL Specific Alternative Flow
RFS 1
1. ABORT.
Postcondition: The switch off communication feature has been enabled.
26. RUCM Extension (4)
• Keyword: V for variant order of steps
Variant Use Case: Provide System User Data via Standard Mode
Basic Flow
1. OPTIONAL STEP: The system VALIDATES THAT the “switch off communication” feature is disabled.
V1. OPTIONAL STEP: The system SENDS calibration data TO the Tester.
V2. OPTIONAL STEP: The system SENDS trace data TO the Tester.
V3. OPTIONAL STEP: The system SENDS error data TO the Tester.
V4. OPTIONAL STEP: The system SENDS sensor data TO the Tester.
OPTIONAL Specific Alternative Flow
RFS 1
1. ABORT.
Postcondition: The switch off communication feature has been enabled.
27. Modeling Method: PUM
[Diagram: Overview of PUM repeated as a transition slide.]
29. Modeling Method: PUM
[Diagram: Overview of PUM repeated as a transition slide.]
30. Natural Language Processing
NLP identifies the parts of each use case specification. In the example below, it recognizes a variant use case, an input step, a variation point, and a postcondition.
Variant Use Case: Clear Error Status  [variant use case]
Basic Flow
1. The Tester SENDS the clear error status request TO the System.  [input step]
2. INCLUDE VARIATION POINT: Method of clearing error status.  [variation point]
Postcondition: The stored errors have been cleared and the clear error status answer for successful clearing has been provided to the Tester.  [postcondition]
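To make the keyword-based structure concrete, here is a minimal Python sketch (not the authors' NLP pipeline; the pattern list and the classify_line helper are illustrative) that tags lines of an extended-RUCM specification using the keywords introduced on the previous slides:

import re

PATTERNS = [
    ("variant_use_case", re.compile(r"^Variant Use Case:\s*(.+)$")),
    ("use_case", re.compile(r"^Use Case:\s*(.+)$")),
    ("postcondition", re.compile(r"^Postcondition:\s*(.+)$")),
    ("variation_point", re.compile(r"INCLUDE VARIATION POINT:\s*(.+?)\.?$")),
    ("include_use_case", re.compile(r"INCLUDE USE CASE\s+(.+?)\.?$")),
    ("optional_step", re.compile(r"OPTIONAL STEP:\s*(.+)$")),
    ("input_step", re.compile(r"\bSENDS\b.+\bTO\b")),  # actor-to-system step
]

def classify_line(line):
    # Return (category, payload) for one specification line.
    line = line.strip()
    for category, pattern in PATTERNS:
        match = pattern.search(line)
        if match:
            payload = match.group(1) if match.groups() else line
            return category, payload
    return "plain_step", line

spec = [
    "Variant Use Case: Clear Error Status",
    "1. The Tester SENDS the clear error status request TO the System.",
    "2. INCLUDE VARIATION POINT: Method of clearing error status.",
    "Postcondition: The stored errors have been cleared.",
]
for line in spec:
    print(classify_line(line))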
31. Consistency Checks
• The tool for our modeling method:
- checks conformance of the use case specifications with the extended RUCM template
- checks consistency between the use case diagram and the use case specifications
- checks consistency between the use case specifications and the domain model
32. Use Case Models Consistency Example
[Figure: A use case diagram shown side by side with the corresponding use case specifications.]
33. Evaluation
• We evaluate our modeling method in terms of adoption effort, expressiveness, and tool support through a questionnaire study and a case study
• Presentation of our: 1) modeling method, 2) detailed examples from STO, and 3) tool demo
• Participants were also encouraged to provide open, written comments
34. Model Sizes
Use case model:
- Essential use cases: 15 use cases, 5 variation points, 15 basic flows, 70 alternative flows, 269 steps, 75 condition steps
- Variant use cases: 14 use cases, 3 variation points, 14 basic flows, 132 alternative flows, 479 steps, 140 condition steps
- Total: 29 use cases, 8 variation points, 29 basic flows, 202 alternative flows, 748 steps, 215 condition steps
Domain model:
- Essential classes: 42
- Variant classes: 12
- Total: 54 classes
35. Participants
• Interviews with seven participants who:
- hold various roles at IEE: software development and process manager, software engineer, and system engineer
- have substantial industry experience, ranging from seven to thirty years
- have previous experience with use case-driven development and modeling
36. Example Questions from the Questionnaire
• "Do you think that our modeling method provides useful assistance for capturing and analyzing variability?"
• "Would you see added value in adopting our modeling method?"
[Bar charts: response counts on a four-point scale: Very probably, Probably, Probably not, Surely not.]
37. Questionnaire Results: Positive Aspects
• The extensions:
- are simple enough to facilitate communication between analysts and customers
- provide enough expressiveness to conveniently capture variability
• The effort required for learning PUM is reasonable
• The tool provides useful assistance for minimizing inconsistencies in artifacts
38. Automated Configuration
[Diagram: Methodology overview repeated, highlighting the interactive and automated configuration of PS use case and domain models.]
39. Related Work
• Relating feature models to use case artifacts [Eriksson et al., 2009; Czarnecki et al., 2005; Alferez et al., 2009]
• Using a new artifact: decision model [John and Muthig, 2004; Faulk, 2001]
40. Limitations of Existing Configurators
• Most configurators rely on feature models and require variability to be expressed in a generic notation or language
• Generic configurators require considerable effort and tool-specific internal knowledge to be customized for use case models
• Most use case configurators do not provide any automated decision-making support
- for example, automated detection and explanation of contradicting decisions
41. Overview of Configuration Approach
[Diagram: Step 1 elicits configuration decisions with consistency checking over the PL use case diagram, PL use case specifications, and PL domain model; if decisions contradict each other, a list of contradicting decisions is reported until decisions are consistent and complete. Step 2 then generates the product-specific (PS) use case diagram, use case specifications, and domain model.]
42. Elicitation of Decisions with Consistency Checking
[Diagram: A loop over variation points: (1) filter the variation points, (2) collect a decision for a variation point, (3) check decision consistency; inconsistent decisions produce a list of contradicting decisions, otherwise the next variation point is processed.]
43. Filtering and Ordering Example
[Diagram: Excerpt of the STO diagram. Deciding on the variation point Clearing Error Status (0..1) and its variant use case Clear Error Status comes before the included variation point Method of Clearing Error Status (1..1), whose variants are Clear Error Status via Diagnostic Mode and Clear Error Status via IEE QC Mode.]
44. Checking Decisions Consistency
[Diagram: Same elicitation loop as above, highlighting the decision consistency checking step.]
45. Checking Decisions Consistency Example (1)
[Diagram: The STO product line use case diagram excerpt used to illustrate a consistency check over the require dependency and the variation point cardinalities.]
47. Checking Decisions Consistency Example (3)
• Variability relations and multiplicities are mapped to a set of propositional logic formulas
• In our example, "A requires B" becomes "A implies B"
• We infer the root cause of a given conflict from the initial elements that were used to derive the propositional formula
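As an illustration, here is a minimal Python sketch of this encoding (an assumed formulation, not the paper's tool): each variant selection is a boolean, "A requires B" becomes an implication, the 1..1 cardinality becomes an exactly-one formula, and a brute-force search over completions of the current decisions returns the labels of violated formulas as an explanation:

from itertools import product

VARIANTS = ["StoreErrorStatus", "ClearErrorStatus",
            "ClearViaDiagnosticMode", "ClearViaIEEQCMode"]

# Each constraint: (label used to explain conflicts, propositional formula).
CONSTRAINTS = [
    # "Clear Error Status requires Store Error Status" -> A implies B
    ("ClearErrorStatus requires StoreErrorStatus",
     lambda s: (not s["ClearErrorStatus"]) or s["StoreErrorStatus"]),
    # VP "Method of Clearing Error Status" has cardinality 1..1
    ("Method of Clearing Error Status must have exactly one variant",
     lambda s: (not s["ClearErrorStatus"]) or
               (s["ClearViaDiagnosticMode"] + s["ClearViaIEEQCMode"] == 1)),
]

def check(decisions):
    # Return [] if some completion of the decisions satisfies everything,
    # otherwise the labels of a smallest set of violated constraints.
    undecided = [v for v in VARIANTS if v not in decisions]
    best = None
    for values in product([False, True], repeat=len(undecided)):
        assignment = dict(decisions, **dict(zip(undecided, values)))
        violated = [label for label, holds in CONSTRAINTS
                    if not holds(assignment)]
        if not violated:
            return []
        if best is None or len(violated) < len(best):
            best = violated
    return best

# Selecting "clear" while unselecting "store" contradicts the require relation:
print(check({"ClearErrorStatus": True, "StoreErrorStatus": False}))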
48. Generation of Product Specific Use Case and Domain Models
[Diagram: Configuration approach overview repeated, highlighting step 2, the generation of the PS use case diagram, PS use case specifications, and PS domain model.]
49. Generation of PS Use Case Diagram and Specifications
[Diagram: Given a PL model where use case UC includes variation point X (1..n) with variant use cases UC1 ... UCn, the decision "select UC1 and UCn, no optional step selected" yields a PS model in which UC directly includes UC1 and UCn. In the specifications, the INCLUDE VARIATION POINT X step is rewritten into a step validating the precondition of UC1, an INCLUDE UC1 step, and a specific alternative flow that includes UCn; unselected optional steps are dropped.]
51. Example Questions from the Questionnaire
• "Do you think that the configurator provides useful assistance for identifying and resolving inconsistent decisions in the PL use case diagram?"
• "Would you see added value in adopting our configurator?"
[Bar charts: response counts on a four-point scale: Very probably, Probably, Probably not, Surely not.]
52. Questionnaire Results
• The configurator:
- provides useful assistance for configuring PS use case models compared to IEE's current practice
- facilitates communication between analysts and stakeholders during configuration
• The effort required to learn and apply our configurator is reasonable
53. Change Impact Analysis for Configuration Decision Changes
[Diagram: Methodology overview repeated, highlighting change impact analysis for configuration decision changes.]
54. Related Work
• Impact analysis approaches for product lines using feature models [Thüm et al., 2009; Seidl et al., 2012; Dintzner et al., 2014]
• Reasoning approaches for product lines [Benavides et al., 2010; Durán et al., 2017; White et al., 2008, 2010]
55. Main Limitation of Existing Work
Existing approaches identify only the impacted decisions; they do not provide any explanation regarding the cause of the impact of decision changes.
56. Motivation
• Identify the cause of the impact of changing decisions for PL use case diagrams:
- violation of dependency relations (i.e., requires and conflicts)
- unsatisfiability of cardinality constraints of variation points
- restrictions on subsequent decisions
• Improve the decision-making process by informing analysts about the causes of change impacts on configuration decisions
57. Overview of Change Impact Analysis Approach
[Diagram: (1) Propose a change for a decision; (2) identify the change impact on other decisions, producing the list of impacted decisions; (3) if the user chooses to apply the proposed change, record the added, removed, and updated decisions; the loop repeats while the user wants to propose changes for other decisions.]
58. Overview of Change Impact Analysis Approach
[Diagram: Same overview, highlighting step 2, the identification of the change impact on other decisions.]
59. Identification of Change Impact on Other Decisions
• Step 1: check contradictions with prior decisions
• Step 2: infer restrictions on subsequent decisions
→ delimits the future selection of variant use cases in still undecided variation points
• Step 3: check whether the inferred restrictions bring new contradictions
60. Inferring Decision Restrictions for Subsequent Decisions
• Proposed decision change for VP1: select UC1 and unselect UC2
• Inferred (future) restrictions by (recursive) traversal of dependencies:
- UC3 must be selected
- UC5 should not be selected
- UC7 should not be selected
→ Contradiction: the cardinality of VP3 (2..3) cannot be satisfied
[Diagram: Variation points VP1 (0..1, variants UC1 and UC2), VP2 (1..1, variants UC3 and UC4), and VP3 (2..3, variants UC5, UC6, and UC7), connected by require and conflict relations.]
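A minimal Python sketch of this traversal (the require/conflict edges and data structures are illustrative assumptions chosen to reproduce the UC1/UC2 example; this is not the paper's implementation):

# Hypothetical dependency edges, chosen so the example above is reproduced.
REQUIRES = {"UC1": ["UC3"], "UC7": ["UC5"]}   # A requires B
CONFLICTS = {"UC1": ["UC5"]}                  # A conflicts with B
VPS = {"VP3": (2, 3, ["UC5", "UC6", "UC7"])}  # vp -> (min, max, variants)

def infer(selected, unselected):
    # Recursively derive forced selections/unselections (a fixed point).
    forced_sel, forced_unsel = set(selected), set(unselected)
    changed = True
    while changed:
        changed = False
        for uc in list(forced_sel):
            for r in REQUIRES.get(uc, []):    # selecting A forces required B
                if r not in forced_sel:
                    forced_sel.add(r); changed = True
            for c in CONFLICTS.get(uc, []):   # selecting A excludes conflicts
                if c not in forced_unsel:
                    forced_unsel.add(c); changed = True
        for uc, reqs in REQUIRES.items():     # A requiring an excluded B is excluded
            if uc not in forced_unsel and any(r in forced_unsel for r in reqs):
                forced_unsel.add(uc); changed = True
    return forced_sel, forced_unsel

def cardinality_conflicts(forced_sel, forced_unsel):
    # A variation point whose remaining variants cannot reach its lower
    # bound (or whose forced selections exceed its upper bound) conflicts.
    messages = []
    for vp, (lo, hi, variants) in VPS.items():
        selectable = [v for v in variants if v not in forced_unsel]
        chosen = [v for v in variants if v in forced_sel]
        if len(selectable) < lo or len(chosen) > hi:
            messages.append(f"cardinality {lo}..{hi} of {vp} cannot be satisfied")
    return messages

sel, unsel = infer({"UC1"}, {"UC2"})
print(sorted(sel), sorted(unsel))         # UC3 forced in; UC5, UC7 forced out
print(cardinality_conflicts(sel, unsel))  # VP3's 2..3 is unsatisfiable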
61. Generation of Impact Report
Colors used in the report to explain impacted decisions:
- Violet: cardinality constraints that can no longer be satisfied
- Brown: variant use cases that must be selected due to a requires relation and the decision change
- Red: variant use cases that are selected in the decision change
- Orange: variant use cases that are unselected in the decision change
- Green: variant use cases that must be unselected due to a conflict relation and the decision change
[Diagram: The VP1/VP2/VP3 example annotated with these colors, showing the impacted subsequent decisions for VP2 and VP3.]
63. Example Questions from the Questionnaire
• "Do you think that the steps in our change impact analysis method are easy to follow, given appropriate training?"
• "Do you think that the effort required to learn how to apply the change impact analysis method is reasonable?"
[Bar charts: response counts on a four-point scale: Strongly agree, Agree, Disagree, Strongly disagree.]
64. Questionnaire Results
• Our approach is sufficient to determine and explain the impact of decision changes for PL use case diagrams
• Strong agreement among participants about the value of adopting our change impact analysis approach
• Very positive feedback about the approach and the impact analysis reports provided by the tool
65. Incremental Reconfiguration of PS Models
[Diagram: Methodology overview repeated, highlighting the incremental reconfiguration of PS models.]
66. Motivation
• Analysts manually assign traces from the generated PS models to other external documents
• Each time a configuration decision changes, analysts must manually re-assign all the traces
• We aim to reduce the effort of manually assigning traceability links between PS models and external documents
Solution: incrementally regenerate PS use case models by focusing only on impacted decisions.
67. Model Differencing and Regeneration Pipeline
[Diagram: Inputs are the decision model before changes (M1) and after changes (M2). Step 1 matches decision model elements, producing correspondences; step 2 calculates decision-level changes; step 3 reconfigures the PS models, taking the current PS use case diagram and specifications and producing the reconfigured PS use case diagram and specifications plus an impact report.]
68. Model Differencing and Regeneration Pipeline
[Diagram: Same pipeline, highlighting step 1, matching decision model elements.]
69. Decision Model Example
[Object diagram: A DecisionModel contains the EssentialUseCase "Provide System User Data", which has the MandatoryVariationPoint "Method of Providing Data". Its variants are the VariantUseCases "Provide System User Data via Standard Mode", "Provide System User Data via Diagnostic Mode", and "Provide System User Data via IEE QC Mode", all with isSelected = True. The Standard Mode variant has a BasicFlow (number = 1) with a VariantOrder (name = "V") and three OptionalSteps: (orderNumber = 4, variantOrderNumber = 1, isSelected = True), (orderNumber = 1, variantOrderNumber = 2, isSelected = True), and (orderNumber = 0, variantOrderNumber = 3, isSelected = False).]
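For illustration, the object diagram above can be encoded with plain dataclasses; a minimal Python sketch (the shapes and attribute names are assumptions following the diagram):

from dataclasses import dataclass, field
from typing import List

@dataclass
class OptionalStep:
    order_number: int
    variant_order_number: int
    is_selected: bool

@dataclass
class BasicFlow:
    number: int
    steps: List[OptionalStep] = field(default_factory=list)

@dataclass
class VariantUseCase:
    name: str
    is_selected: bool
    flows: List[BasicFlow] = field(default_factory=list)

@dataclass
class MandatoryVariationPoint:
    name: str
    variants: List[VariantUseCase] = field(default_factory=list)

# The "Provide System User Data" example from the object diagram above.
standard_mode = VariantUseCase(
    "Provide System User Data via Standard Mode", True,
    [BasicFlow(1, [OptionalStep(4, 1, True),
                   OptionalStep(1, 2, True),
                   OptionalStep(0, 3, False)])])
vp = MandatoryVariationPoint(
    "Method of Providing Data",
    [standard_mode,
     VariantUseCase("Provide System User Data via Diagnostic Mode", True),
     VariantUseCase("Provide System User Data via IEE QC Mode", True)])
print(len(vp.variants), standard_mode.flows[0].steps[2].is_selected)  # 3 False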
70. Matching Decision Model Elements
• The structural differencing of M1 and M2 is done by searching for correspondences between M1 and M2
• A correspondence between two elements E1 and E2 denotes that E1 and E2 represent decisions for the same variation in M1 and M2 (a sketch follows below)
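A minimal sketch of one plausible matching criterion (an assumption, not the paper's algorithm): elements are paired by kind and qualified position, using variantOrderNumber for steps since, as the next slide shows, it stays stable while orderNumber may change:

def element_keys(model):
    # Flatten a decision model into {matching key: element}.
    out = {}
    for uc in model["use_cases"]:
        out[("use_case", uc["name"])] = uc
        for flow in uc.get("flows", []):
            out[("flow", uc["name"], flow["number"])] = flow
            for step in flow.get("steps", []):
                out[("step", uc["name"], flow["number"],
                     step["variantOrderNumber"])] = step
    return out

def match(m1, m2):
    # Correspondences plus elements present in only one model.
    k1, k2 = element_keys(m1), element_keys(m2)
    shared = set(k1) & set(k2)
    return ([(k1[k], k2[k]) for k in sorted(shared)],
            [k1[k] for k in set(k1) - shared],   # only in M1
            [k2[k] for k in set(k2) - shared])   # only in M2

# The excerpt from the next slide: B11/B12/B14 in M1 match C11/C12/C14 in M2.
m1 = {"use_cases": [
    {"name": "Provide System User Data via Standard Mode", "isSelected": True,
     "flows": [{"number": 1,
                "steps": [{"orderNumber": 0, "variantOrderNumber": 2,
                           "isSelected": False}]}]}]}
m2 = {"use_cases": [
    {"name": "Provide System User Data via Standard Mode", "isSelected": True,
     "flows": [{"number": 1,
                "steps": [{"orderNumber": 1, "variantOrderNumber": 2,
                           "isSelected": True}]}]},
    {"name": "Provide System User Data via Diagnostic Mode",
     "isSelected": True}]}
pairs, only_m1, only_m2 = match(m1, m2)
print(len(pairs), [e["name"] for e in only_m2])  # 3 correspondences, 1 addition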
71. Matching Decision Model Elements Example
[Object diagrams: In M1 (before the change), the triplet (use case, flow, step) B11:VariantUseCase "Provide System User Data via Standard Mode" (isSelected = True), B12:BasicFlow (number = 1), and B14:OptionalStep (orderNumber = 0, variantOrderNumber = 2, isSelected = False) corresponds to the triplet C11, C12, C14 in M2 (after the change), where C14 has orderNumber = 1, variantOrderNumber = 2, isSelected = True. M2 also contains C9:VariantUseCase "Provide System User Data via Diagnostic Mode" (isSelected = True).]
72. Matching Decision Model Elements Example
[Object diagrams: Same example, highlighting the correspondences between the matched elements.]
73. Model Differencing and Regeneration Pipeline
[Diagram: Same pipeline, highlighting step 2, change calculation.]
74. Change Calculation
• Identifies decision-level changes from the corresponding model elements
• Identifies deleted, added, and updated decisions for the use case diagram and specifications (see the sketch below)
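A minimal sketch of this step under the same assumed representation (illustrative, not the paper's implementation): matched pairs whose isSelected flags differ become updated decisions, while unmatched elements become deleted or added decisions:

def calculate_changes(correspondences, only_in_m1, only_in_m2):
    # Matched pairs whose selection flags differ are updated decisions;
    # unmatched elements are deleted (M1 only) or added (M2 only) decisions.
    updated = [(e1, e2) for e1, e2 in correspondences
               if e1.get("isSelected") != e2.get("isSelected")]
    return {"deleted": only_in_m1, "added": only_in_m2, "updated": updated}

# The optional step from the matching example flips from unselected to
# selected, so it is reported as an updated decision.
b14 = {"kind": "step", "variantOrderNumber": 2, "isSelected": False}
c14 = {"kind": "step", "variantOrderNumber": 2, "isSelected": True}
print(calculate_changes([(b14, c14)], [], []))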
75. Change Calculation Example
[Object diagrams: Comparing M1 and M2 from the matching example, the optional step decision changes from unselected (B14, isSelected = False) to selected (C14, isSelected = True), so it is reported as an updated decision.]
76. Reconfiguration of Product Specific Use Case Models
[Diagram: Same pipeline, highlighting step 3, the reconfiguration of PS models.]
77. Reconfiguration of Product Specific Use Case Models
• Regenerates the product-specific use case diagram and specifications only for the parts impacted by at least one added, deleted, or updated decision
• Generates a report on the impacted and regenerated parts of the product-specific models
78. Reconfiguration Example (1)
Variant Use Case: Provide System User Data via Standard Mode
Basic Flow
1. OPTIONAL STEP: The system SENDS the calibration data TO the Tester.
2. OPTIONAL STEP: The system SENDS the traceability data TO the Tester.
3. OPTIONAL STEP: The system SENDS the error traceability data TO the Tester.
4. OPTIONAL STEP: The system SENDS the traceability of calibration data TO the Tester.
5. OPTIONAL STEP: The system SENDS the measurement data TO the Tester.
Old decision: steps 1 to 3 selected; steps 4 and 5 unselected.
New decision: steps 1 to 4 selected; step 5 unselected.
80. Reconfiguration of PS Models Example
[Figure: The regenerated specification with traceability links preserved; only one step is inserted in the regenerated model.]
81. Evaluation: Industrial Case Study
• We configured product-specific models for four customers
• We chose the most recent product to be used for reconfiguration
• The selected product includes:
- 36 traces in the PS use case diagram
- 278 traces in the PS use case specifications
• We considered 8 diverse change scenarios
82. Research Questions
• RQ1: To what extent can our approach preserve trace links?
• RQ2: Does our approach significantly reduce manual effort?
83. Improving Trace Reuse
[Bar chart: Percentage of preserved traces for the PS use case diagram and the PS use case specifications across the eight decision change scenarios S1-S8.]
On average, 96% of the use case diagram and specification traces were preserved.
84. Reducing Manual Effort
[Bar chart: Number of manually assigned traces in use case specifications with and without our approach, across scenarios S1-S8.]
On average, only 4% of the use case specification traces need to be manually assigned when using our approach.
85. Test Case Classification and Prioritization
[Diagram: Methodology overview repeated, highlighting automated test case classification and prioritization.]
86. Current Practice
[Figure: for each customer (C1, C2, C3), a requirements analyst evolves the STO requirements by copy and modify; a test engineer derives each customer's STO test suite from those requirements, and each suite evolves from the previous one by select, prioritize, and modify.]
87. Related Work
• Regression testing techniques using system design artifacts or feature
models [Wang et al., 2016; Runeson and Engstrom, 2012; Muccini et al.,
2006]
• Test case selection approaches based on test cases generation for the
product family [Lity et al., 2012, 2016; Lochau et al., 2014]
• Search-based approaches for multi-objective test case prioritization in
product lines [Parejo et al., 2016; Wang et al., 2014; Pradhan et al., 2011]
• Test case prioritization techniques using user knowledge through machine
learning algorithms [Lachmann et al., 2016; Tonella et al., 2006]
88. Limitations of Existing Work
• Some existing approaches require:
- all test cases of the product line to be derived upfront, even if some of them may never be executed
- detailed system design artifacts (e.g., finite state machines and UML sequence diagrams), rather than requirements in natural language
89. Objective
Support the definition and the prioritization of the test
suite for a new product by maximizing the reuse of test
suites of existing products in the product line
90. Main Challenge
Avoid relying on behavioral system models and
early generation of test cases for the product
family when testing new products in product lines
91. Overview of Test Case Classification and Prioritization
[Figure: three-step process.
1. Classify System Test Cases for the New Product — inputs: the PS models and decision model for the new product, plus the test cases, PS models and their traces, and decision models for previous product(s); outputs: a partial test suite for the new product and guidance to update test cases.
2. Create New Test Cases Using Guidance — outputs the test suite for the new product.
3. Prioritize System Test Cases for the New Product — uses the test execution history, variability information, size of use case scenarios, and classification of test cases; outputs the prioritized test suite for the new product.]
92. Approach Overview — [repeats the three-step overview figure from slide 91]
93. Test Case Classification Approach
[Figure: a pipeline iterated while there is another previous product:
1. Matching Decision Model Elements — matches the decision models of the previous products against the decision models of the new product, producing correspondences.
2. Change Calculation — derives decision changes from the correspondences.
3. Impact Report Generation — produces an impact report.
4. Test Case Classification — uses the impact report together with the test cases, trace links, and PS use cases of the previous products to produce the classified test cases for the new product.]
94. Test Case Classification Approach
[Repeats the pipeline figure from slide 93, noting that the first two steps, Matching Decision Model Elements and Change Calculation, are also used for incremental reconfiguration of PS use case models.]
95. Test Case Classification Approach — [repeats the pipeline figure from slide 93]
96. Test Case Classification
• To test a new product, we classify system test cases of
previous product(s) into reusable, retestable, and obsolete
• The classification is based on:
- Decision changes
- Trace links between the system test cases and the PS use
case specifications
97. Use Case Scenario Model (1)
[Figure: use case scenario model with nodes A to H:
A. The system requests the move capacitance from the sensors
B. INCLUDE USE CASE Identify System Operating Status
C. The system VALIDATES THAT the operating status is valid (true/false branch)
D. The system VALIDATES THAT the movement is a valid kick (true/false branch)
E. The system SENDS the valid kick status TO the STO Controller
F. ABORT (taken when C is false)
G. The system increments the OveruseCounter by the increment step
H. ABORT (taken after G when D is false)]
This use case scenario model covers 3 scenarios:
1. A->B->C->D->E
2. A->B->C->D->G->H
3. A->B->C->F
98. Use Case Scenario Model (2)
[Figure: the scenario model from slide 97, annotated with the test cases covering each scenario]

Test Case Id | Covered scenario
TC1 | A->B->C->D->E (Basic Flow)
TC2 | A->B->C->D->G->H (Basic Flow + SAF2)
TC3 | A->B->C->F (Basic Flow + SAF1)

TC1 description (4.1.1.1.1, from the traceability table):
Test case: To check: Recognize gesture - Basic Flow - Valid kick detected - Success
Objective:
- To check if the operating status is OK.
- To check if the ECU can recognize a valid kick gesture.
Method:
- Trigger a valid kick gesture.
- Check if the operating status is OK and the overuse protection status is not active.
- Check if the valid kick gesture can be recognized.
99. Test Case Classification for New Product (1)
How to classify TC1 for a new product?
[Figure: the old and new products have identical scenario models (nodes A to H)]
No decision change: TC1 is classified as reusable, since it exercises an execution sequence of use case steps that has remained valid in the new product.
100. Test Case Classification for New Product (2)
How to classify TC1 for a new product?
[Figure: in the new product a decision change deletes step H, which is not part of TC1's scenario A->B->C->D->E]
The decision change does not impact the tested scenario: TC1 is classified as reusable.
101. Test Case Classification for New Product (3)
How to classify TC1 for a new product?
[Figure: in the new product a decision change impacts an input step (B) of the tested scenario]
The decision change impacts the tested scenario and the impacted step is an input step: TC1 is classified as obsolete, since it exercises an execution sequence of use case steps that is invalid in the new product.
102. Test Case Classification for New Product (4)
How to classify TC1 for a new product?
[Figure: in the new product a decision change impacts an internal step (B) of the tested scenario]
The decision change impacts the tested scenario but the impacted steps are internal: TC1 is classified as retestable, since it exercises an execution sequence of use case steps that has remained valid in the new product except for internal steps. (The sketch below illustrates these classification rules.)
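Taken together, slides 99 to 102 suggest a simple decision rule. The following is a minimal Python sketch of that rule, not the authors' implementation: it assumes each test case is represented by the step IDs it exercises, that decision changes yield a set of deleted step IDs, and that each step's kind (input vs. internal) is known; all names and the example data are hypothetical.

```python
def classify(test_steps, deleted_steps, step_kind):
    """Classify a test case of a previous product for the new product.

    test_steps   : list of step IDs the test case exercises
    deleted_steps: set of step IDs removed by the decision changes
    step_kind    : dict mapping step ID -> "input" or "internal"
    """
    impacted = [s for s in test_steps if s in deleted_steps]
    if not impacted:
        return "reusable"     # the exercised sequence is still valid
    if all(step_kind[s] == "internal" for s in impacted):
        return "retestable"   # valid except for internal steps
    return "obsolete"         # an input step changed: invalid sequence

# Illustrative step kinds (invented); TC1 covers A->B->C->D->E.
kinds = {"A": "input", "B": "input", "C": "internal",
         "D": "internal", "E": "internal"}
print(classify(list("ABCDE"), set(), kinds))   # -> reusable
print(classify(list("ABCDE"), {"C"}, kinds))   # -> retestable
print(classify(list("ABCDE"), {"B"}, kinds))   # -> obsolete
```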
103. Create New Test Cases Using Guidance
[Repeats the three-step overview figure from slide 91, highlighting step 2, Create New Test Cases Using Guidance.]
104. Identification of New Scenarios
The identification of new scenarios for the new product is a two-fold process:
1. A new scenario is derived for each retestable and obsolete test case (from the old product(s))
2. Scenarios that were never covered by any test case are identified
105. Identification of New Scenarios (1)
[Figure: scenario model of the new product, including the new steps X and Z]

Id | Classification | Covered scenario | New scenario to cover
TC1 | Reusable | A->B->C->D->E | -
TC2 | Retestable | A->B->C->D->G->H | A->B->C->D->H
TC3 | Obsolete | A->B->C->F | A->B->C->X->F
106. Identification of New Scenarios (2)
[Figure: the new product's scenario model; at C, the true (TC1) and false (TC3) paths are already covered; at D, the true (TC1) and false (TC2) paths are already covered; at the new conditional step X, only the true path is covered by the new scenario extracted from TC3]
1. We keep track of already covered steps (blue nodes)
2. For each conditional step, we also store the (true and false) paths that were already visited
3. We traverse the use case scenario model and use the stored information to extract uncovered scenarios (sketched below)
• Here, A->B->C->X->Z is uncovered
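As a rough illustration of step 3, here is a minimal Python sketch, not the authors' algorithm: it enumerates complete root-to-leaf paths of the scenario model and reports the ones no test case covers, rather than maintaining per-branch coverage flags as the slides describe. The graph encoding and all names are hypothetical; the example mirrors the new product above, where A->B->C->X->F is covered by TC3's updated scenario.

```python
def extract_uncovered(graph, start, covered_paths):
    """Enumerate root-to-leaf scenarios not yet exercised by any test case.

    graph        : dict node -> dict(branch label -> next node), or None at a leaf
    covered_paths: set of tuples of nodes already covered by test cases
    """
    scenarios, stack = [], [(start, (start,))]
    while stack:
        node, path = stack.pop()
        succ = graph.get(node)
        if not succ:                       # leaf: a complete scenario
            if path not in covered_paths:
                scenarios.append(path)
            continue
        for nxt in succ.values():          # visit both true and false branches
            stack.append((nxt, path + (nxt,)))
    return scenarios

# Illustrative new-product model: X is a new conditional step whose
# true branch (F) is covered and whose false branch (Z) is not.
graph = {"A": {"": "B"}, "B": {"": "C"},
         "C": {"true": "D", "false": "X"},
         "D": {"true": "E", "false": "H"},
         "X": {"true": "F", "false": "Z"},
         "E": None, "F": None, "H": None, "Z": None}
covered = {tuple("ABCDE"), tuple("ABCDH"), tuple("ABCXF")}
print(extract_uncovered(graph, "A", covered))  # [('A', 'B', 'C', 'X', 'Z')]
```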
107. Guidance Example
[Figure: the scenario covered by TC2 in the new product, with the deleted step shown in red]
Guidance message: "Please update the existing test case 'TC2' to account for the fact that the step in red was deleted from the scenario of the use case specifications of the previous product."
108. Prioritize System Test Cases for the New Product
[Repeats the three-step overview figure from slide 91, highlighting step 3, Prioritize System Test Cases for the New Product.]
109. Test Case Prioritization
• Sort test cases so as to maximize the likelihood of executing failing test cases first
• Sorting is based on a set of factors that correlate with the presence of faults:
- Size of the scenario
- Degree of variability in the use case scenario
- Number of products and versions in which the test case failed
- Classification of the test case, i.e., reusable or retestable
110. Test Case Prioritization Approach
[Figure: two steps, relying on logistic regression.
1. Identifying Significant Factors — inputs: size of use case scenarios and variability information, classification of test cases, and test execution history; output: the list of significant factors.
2. Prioritizing Test Cases Based on Significant Factors — takes the selected and modified test cases and produces the prioritized test suite.]
111. Logistic Regression
• A predictive analysis aimed at determining the relationship between one dependent binary variable and one or more independent variables
- Dependent binary variable: the failure of a given test case
- Independent variables: the chosen predictors, e.g., the number of products in which the test case failed
112. Logistic Regression Model
Ln(P/(1-P)) = B0 + B1*S + B2*V + B3*FP + B4*FV + B5*R
where S is the size of the scenario, V the degree of variability, FP the number of failing products, FV the number of failing versions, and R whether the test case is retestable; B0 is the intercept and B1 to B5 are the estimated coefficients.
The logistic regression model estimates the logarithm of the odds that a test case fails (a numeric sketch follows below).
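To make the formula concrete, here is a minimal Python sketch of how a fitted model turns factor values into a failure probability. The coefficient values are invented purely for illustration; in the approach they are estimated from the test execution history of previous products.

```python
import math

# Hypothetical coefficients (B0 = intercept, B1..B5 = estimated coefficients).
B0, B1, B2, B3, B4, B5 = -2.0, 0.05, 0.8, 0.6, 0.4, 1.1

def failure_probability(S, V, FP, FV, R):
    """P(test case fails), from ln(P/(1-P)) = B0 + B1*S + B2*V + B3*FP + B4*FV + B5*R."""
    log_odds = B0 + B1 * S + B2 * V + B3 * FP + B4 * FV + B5 * R
    return 1.0 / (1.0 + math.exp(-log_odds))  # invert the logit

# Example: a retestable test case (R=1) covering a 12-step scenario with
# 2 variation points that failed in 1 previous product and 2 versions.
print(round(failure_probability(S=12, V=2, FP=1, FV=2, R=1), 3))
```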
113. Logistic Regression Model
The generated logistic regression model is a predictive model that returns the probability of failure of a test case
114. Test Case Prioritization Approach — [repeats the two-step figure from slide 110, highlighting step 1, Identifying Significant Factors]
115. Identifying Significant Factors
• We rely on the p-values computed by the Wald test on the logistic regression model trained by including all the factors
• We keep the factors whose p-value is smaller than the threshold (0.05), as sketched below
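A minimal sketch of this selection step, assuming Python with statsmodels and a simulated execution history; the column names, data, and effect sizes are invented for illustration.

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 200
# Hypothetical per-test-case factors (names are illustrative).
df = pd.DataFrame({
    "size": rng.integers(3, 25, n),
    "variability": rng.integers(0, 4, n),
    "failing_products": rng.integers(0, 3, n),
    "failing_versions": rng.integers(0, 4, n),
    "retestable": rng.integers(0, 2, n),
})
# Simulate failures so that some factors genuinely matter.
log_odds = -3 + 0.6 * df["failing_products"] + 1.2 * df["retestable"]
df["failed"] = rng.random(n) < 1 / (1 + np.exp(-log_odds))

X = sm.add_constant(df.drop(columns="failed").astype(float))
model = sm.Logit(df["failed"].astype(float), X).fit(disp=False)

# model.pvalues holds the Wald-test p-value of each coefficient;
# keep the factors below the 0.05 threshold.
pvals = model.pvalues.drop("const")
significant = pvals[pvals < 0.05].index.tolist()
print(significant)  # e.g., ['failing_products', 'retestable'] (data-dependent)
```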
116. Test Case Prioritization Approach — [repeats the two-step figure from slide 110, highlighting step 2, Prioritizing Test Cases Based on Significant Factors]
117. Prioritizing Test Cases
1. We derive a logistic regression model that includes only the significant factors
2. Probabilities of failure are calculated from the regression model
3. Test cases are sorted in descending order of probability (see the sketch below)
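The sketch below illustrates these three steps end to end with scikit-learn, assuming the significant factors turned out to be the number of failing products and the retestable flag; all data and names are invented for illustration.

```python
import pandas as pd
from sklearn.linear_model import LogisticRegression

# Hypothetical execution history restricted to the significant factors.
history = pd.DataFrame({
    "failing_products": [0, 2, 1, 0, 3, 1],
    "retestable":       [0, 1, 1, 0, 1, 0],
    "failed":           [0, 1, 1, 0, 1, 0],
})
# Selected and modified test cases for the new product.
new_suite = pd.DataFrame({
    "test_case":        ["TC1", "TC2", "TC3"],
    "failing_products": [0, 2, 1],
    "retestable":       [0, 1, 1],
})

features = ["failing_products", "retestable"]
clf = LogisticRegression().fit(history[features], history["failed"])
new_suite["p_fail"] = clf.predict_proba(new_suite[features])[:, 1]

# Execute the most failure-prone test cases first.
print(new_suite.sort_values("p_fail", ascending=False))
```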
119. Empirical Evaluation
• RQ1: Does the proposed approach provide correct test case
classification results?
• RQ2: Does the proposed approach accurately identify new
scenarios that are relevant for testing a new product?
• RQ3: Does the proposed approach successfully prioritize test
cases?
• RQ4: Can the proposed approach significantly reduce testing
costs compared to current industrial practice?
121. RQ3: Effectiveness of Test Case Prioritization
Our approach identifies more than 80% of the failures by executing less than 50% of the test cases.

Classified Test Suites | Product to be Tested | % Test Cases Executed to Identify All Failures | % Test Cases Executed to Identify 80% of Failures | % Failures Detected with 50% of Test Cases
P1 | P2 | 72.09 | 38.37 | 97.43
P1, P2 | P3 | 41.66 | 22.91 | 100
P1, P2, P3 | P4 | 51.80 | 22.89 | 95
P1, P2, P3, P4 | P5 | 26.54 | 18.58 | 100
122. RQ3: Effectiveness of Test Case Prioritization
• We compared results from our approach with the ideal situation where all failing test cases are executed first
• For both cases (illustrated in the sketch below):
- We computed the Area Under Curve (AUC) for the cumulative percentage of failures triggered by executed test cases
- We computed the AUC ratio
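A minimal Python sketch of this AUC-ratio computation, with invented pass/fail outcomes: the AUC is the area under the curve of cumulative failures found vs. test cases executed, computed here with the trapezoidal rule, and the ratio compares the prioritized ordering against the ideal ordering.

```python
import numpy as np

# failures[i] = 1 if the i-th executed test case fails, in prioritized order
# (illustrative outcomes, not data from the study).
prioritized = np.array([1, 1, 0, 1, 0, 0, 1, 0, 0, 0])
ideal = np.sort(prioritized)[::-1]               # all failing test cases first

def cumulative_failure_auc(order):
    cum = np.cumsum(order) / order.sum()         # cumulative % of failures found
    x = np.arange(1, len(order) + 1) / len(order)  # % of test cases executed
    # trapezoidal rule over the cumulative-failure curve
    return float(np.sum((cum[1:] + cum[:-1]) / 2 * np.diff(x)))

ratio = cumulative_failure_auc(prioritized) / cumulative_failure_auc(ideal)
print(f"AUC ratio: {ratio:.2f}")                 # 1.0 would mean ideal ordering
```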
124. References
• [Hajri et al., 2019] "Automating Test Case Classification and Prioritization for Use Case-Driven Testing in Product Lines", arXiv:1905.11699, 2019.
• [Hajri et al., 2018] "Change impact analysis for evolving configuration decisions in product line use case models", Journal of Systems and Software (JSS).
• [Hajri et al., 2018] "Configuring use case models in product families", Software and Systems Modeling (SoSyM).
125. References
• [Hajri et al., 2017] "Incremental reconfiguration of product specific use case models for evolving configuration decisions", REFSQ 2017.
• [Hajri et al., 2016] "PUMConf: A tool to configure product specific use case and domain models in a product line", FSE 2016.
• [Hajri et al., 2015] "Applying product line use case modeling in an industrial automotive embedded system: Lessons learned and a refined approach", MoDELS 2015.
126. Supporting Change in Product Lines within the Context of Use Case-driven Development and Testing
Lionel Briand
SnT Centre for Security, Reliability and Trust
University of Luxembourg, Luxembourg