Evaluating software vulnerabilities using fuzzing methods Victor Ionel
The document discusses a method for combining whitebox and blackbox fuzzing techniques to improve the discovery of software vulnerabilities. It proposes an architecture with four components: 1) a path predicates collector that uses symbolic execution to generate constraints, 2) an input data generator that uses these constraints to generate test cases, 3) a delivery mechanism that tests the software, and 4) a monitoring system to detect crashes or errors. The method uses the KLEE symbolic execution tool to generate constraints, the PPL library to generate test cases from these constraints, and the ZZuf fuzzer to deliver tests and monitor for issues.
This paper analyzes the effects of different Network-on-Chip (NoC) modeling styles in SystemC on simulation speed compared to a reference VHDL model. Two approximately timed (AT) and loosely timed (LT) transaction level (TL) models achieved 13-40x and 20-30x speedups respectively over the VHDL model with less than 10% error. The AT model offered a notable speedup with modest error and is recommended over the LT model which did not provide significant additional speedup despite larger estimation errors, especially under higher loads. Increasing transfer size and raising the abstraction level to transaction-level modeling were found to be effective methods to significantly improve simulation performance for evaluating NoC designs.
Model Based Software Timing Analysis Using Sequence Diagram for Commercial Ap...iosrjce
This document presents a framework for software timing analysis using UML sequence diagrams. The framework involves first gathering requirements and creating a sequence diagram. The sequence diagram is then converted to a label transition graph. Algorithms are applied to reduce the graph to a path expression and determine the minimum and maximum path lengths, representing the minimum and maximum timings. A case study applying this process to the timing analysis of a purchasing process in a mall is presented as an example. The advantage of this approach is that timing requirements can be identified early in the requirements stage from the UML models.
This document contains the table of contents for a course on Object Oriented Programming in C++, Data Structures, Database Management Systems, Boolean Algebra, and Networking. The course is divided into 5 units covering these topics. Unit I focuses on Object Oriented Programming in C++ and covers chapters on classes, objects, inheritance, and file handling. Unit II covers data structures like arrays, stacks, queues and linked lists. Unit III is about database management systems and SQL. Unit IV covers Boolean algebra. Unit V is about networking and communication technology.
Synthesizing specifications for real time applications that involve distributed communication protocol
entities from a service specification, which is modeled in the UML state machine with composite states, is a
time-consuming and labor-intensive task. Existing synthesis techniques for UML-based service
specifications do not account for timing constraints and, therefore, cannot be used in real time applications
for which the timing constraints are crucial and must be considered. In this paper, we address the problem
of time assignment to the events defined in the service specification modeled in UML state machine. In
addition, we show how to extend a technique that automatically synthesizes UML-based protocol
specifications from a service specification to consider the timing constraints given in the service
specification. The resulting synthesized protocol is guaranteed to conform to the timing constraints given in
the service specification.
Comparison of the Formal Specification Languages Based Upon Various ParametersIOSR Journals
This document compares various formal specification languages based on different parameters. It describes Z notation, OCL, VDM, SDL and Larch languages. Z notation uses set theory and logic to model state using schemas. OCL uses constraints to describe UML models. VDM uses basic types and functions to formally specify models. SDL specifies systems as communicating finite state machines. Larch uses an interface language and shared language to specify behaviors. The languages differ based on whether they are process-oriented, sequential-oriented, model-oriented or property-oriented and the underlying mathematics used like set theory, logic or algebra.
Simple Obfuscation Tool for Software ProtectionQUESTJOURNAL
ABSTRACT: This paper discusses the issue of source code obfuscation and the creation of a tool for automatic obfuscation of source code written in the C language. The result is a tool that performs both data flow and control flow obfuscation and allows the user to configure the applied transformation algorithm. For easier and better usability, the tool provides a graphical user interface, which makes it possible to control and configure the transformation process.
Open Problems in Automatically Refactoring Legacy Java Software to use New Fe...Raffi Khatchadourian
Java 8 is one of the largest upgrades to the popular language and framework in over a decade. In this talk, I will first overview several new, key features of Java 8 that can help make programs easier to read, write, and maintain, especially in regards to collections. These features include Lambda Expressions, the Stream API, and enhanced interfaces, many of which help bridge the gap between functional and imperative programming paradigms and allow for succinct concurrency implementations. Next, I will discuss several open issues related to automatically migrating (refactoring) legacy Java software to use such features correctly, efficiently, and as completely as possible. Solving these problems will help developers to maximally understand and adopt these new features thus improving their software.
The document provides an overview of parallel programming using MPI and OpenMP. It discusses key concepts of MPI including message passing, blocking and non-blocking communication, and collective communication operations. It also covers OpenMP parallel programming model including shared memory model, fork/join parallelism, parallel for loops, and shared/private variables. The document is intended as lecture material for an introduction to high performance computing using MPI and OpenMP.
The document discusses model comparison approaches for delta-compression. It describes comparing models at the element level by matching elements between models and identifying differences. It also discusses representation of differences for compression purposes and experiments comparing EMF Compare and EMF Compress on reverse engineered models from Git repositories.
Kroening et al, v2c a verilog to c translatorsce,bhopal
The document describes v2c, a tool that translates Verilog to C. v2c accepts synthesizable Verilog as input and generates equivalent C code called a "software netlist". The translation is based on Verilog's synthesis semantics and preserves cycle accuracy and bit precision. The generated C code can then be used for hardware property verification, co-verification, simulation, and equivalence checking by leveraging software verification techniques.
De-virtualizing virtual Function Calls using various Type Analysis Technique...IOSR Journals
This document discusses techniques for optimizing virtual function calls in object-oriented programming languages. Virtual function calls are indirect calls that involve lookup through a virtual function table (VFT) at runtime, which has performance overhead compared to direct calls. Various static analysis techniques like Class Hierarchy Analysis (CHA) and Rapid Type Analysis (RTA) aim to resolve some virtual calls by determining the possible target types and replacing indirect calls with direct calls if a single target is possible. CHA uses the class hierarchy and declared types to determine possible target types, while RTA also considers instantiated types in the program to further reduce possible targets. The document analyzes examples to demonstrate how CHA and RTA can optimize some virtual calls.
Java 8 is one of the largest upgrades to the popular language and framework in over a decade. This talk will detail several new key features of Java 8 that can help make programs easier to read, write, and maintain. Java 8 comes with many features, especially related to collection libraries. We will cover such new features as Lambda Expressions, the Stream API, enhanced interfaces, and more.
A function pointer points to executable code in memory rather than data values. When dereferenced, a function pointer can invoke the function it points to and pass it arguments like a normal function call. Function pointers allow selecting a function to execute at runtime based on variable values. In C, a function pointer variable contains the address of the function. C++ function pointers can also refer to class member functions. Function pointers provide a way to pass functions as arguments to other functions.
Automated Refactoring of Legacy Java Software to Default Methods Talk at GMURaffi Khatchadourian
Java 8 default methods, which allow interfaces to contain (instance) method implementations, are useful for the skeletal implementation software design pattern. However, it is not easy to transform existing software to exploit default methods. In this talk, I discuss an efficient, fully-automated, type constraint-based refactoring approach that assists developers in taking advantage of enhanced interfaces for their legacy Java software.
Ece iv-fundamentals of hdl [10 ec45]-notessiddu kadiwal
This document outlines the syllabus for a course on fundamentals of hardware description languages (HDL). It covers 8 units: 1) Introduction to HDLs including VHDL and Verilog, 2) Data flow descriptions, 3) Behavioral descriptions, 4) Structural descriptions, 5) Procedures, tasks and functions, 6) Mixed-type descriptions, 7) Mixed-language descriptions, and 8) Synthesis basics. Each unit covers different HDL modeling concepts and techniques over 6-7 hours. The introduction unit provides an overview of HDLs, different levels of abstraction, basic VHDL structure including entities and architectures, and behavioral/structural modeling styles.
This document contains a 10 question quiz about VB.Net. It asks multiple choice questions about basic VB.Net concepts like data types, type conversion methods, access modifiers, and common statements. The questions cover topics such as basic data types, type conversion, access modifiers like Public and Private, and statements to declare variables, constants, enums, classes and structures.
A SYSTEMC/SIMULINK CO-SIMULATION ENVIRONMENT OF THE JPEG ALGORITHMVLSICS Design
In the past decades, the functionality of embedded systems and the time-to-market pressure have both been continuously increasing. Simulation of an entire system, including both hardware and software, from early design stages is one of the effective approaches to improve design productivity. A large number of research efforts on hardware/software (HW/SW) co-simulation have been made so far. Real-time operating systems have become one of the important components of embedded systems. However, in order to validate the function of the entire system, the system has to be simulated together with application software and hardware. Indeed, traditional methods of verification have proven to be insufficient for complex digital systems. Register transfer level test-benches have become too complex to manage and too slow to execute. New methods and verification techniques began to emerge over the past few years. High-level test-benches, assertion-based verification, formal methods, and hardware verification languages are just a few examples of the intense research activities driving the verification domain.
Multi-dimensional exploration of API usage - ICPC13 - 21-05-13Coen De Roover
Presented at the 21st IEEE International Conference on Program Comprehension (ICPC 2013), San Francisco (USA). Website of the paper: http://softlang.uni-koblenz.de/explore-API-usage/
This document proposes using parallel finite automata (PFA) to model the execution of processors and instructions. PFA allow modeling processors with long pipelines, parallelism, and out-of-order execution in a compact way compared to traditional finite state automata. The approach partitions a processor's resources into sub-automata that can execute transitions independently, while synchronization ensures coordinated execution. This partitioning reduces states while still capturing pipeline effects, hazards, and parallelism seen in modern processors.
FAULT MODELING OF COMBINATIONAL AND SEQUENTIAL CIRCUITS AT REGISTER TRANSFER ...VLSICS Design
This document summarizes research on modeling faults at the register transfer level (RTL) for digital circuit testing. It proposes a new RTL fault model that models stuck-at faults by inserting buffers for each bit in the variables of the RTL code. Fault simulation is performed on faulty circuits generated from the RTL code to determine fault coverage. Results on combinational and sequential circuits show the RTL fault coverage obtained matches closely with gate-level fault coverage obtained through logic synthesis and gate-level fault simulation. The proposed RTL fault model provides a way to estimate fault coverage earlier in the design cycle compared to traditional gate-level fault simulation.
Pattern-based Definition and Generation of Components for a Synchronous React...ijeukens
This document discusses a method for specifying components in a synchronous reactive actor-oriented language and automatically generating code from those specifications to guarantee the components are correct with respect to the language's semantics. The method enhances component interfaces with patterns of required input data that capture possible conditions for output generation. Algorithms are described for generating code from interfaces with patterns to ensure the code is consistent with the synchronous reactive semantics. A case study demonstrates the usefulness of the approach.
This document compares different techniques for software architecture recovery using include dependencies and symbol dependencies. It finds that symbol dependencies provide more accurate results than include dependencies. The document analyzes several large, open source projects using various recovery techniques and different dependency methods. It measures the accuracy of the recovered architectures against ground truths. The results show that the quality of the recovered architecture is improved when using symbol dependencies as input compared to include dependencies. The best performing technique also sometimes changes based on which dependency method is used. In conclusion, the quality of the input affects the quality of the output for software architecture recovery.
This document discusses components in real-time systems. It defines real-time systems as those with tight timing constraints where responses must occur within strict deadlines. It describes the components of real-time systems as modular and cohesive software packages that communicate via interfaces. The document outlines a process for developing component-based real-time systems, including top-level design, detailed design, scheduling, worst-case execution time verification, and system implementation and testing. It provides examples of real-time components from the Rubus operating system.
ToxOtis: A Java Interface to the OpenTox Predictive Toxicology NetworkPantelis Sopasakis
The ToxOtis suite serves a double purpose in the quest for painless integration: First off, it is a Java interface to any OpenTox compliant web service and facilitates access control (Authentication and Authorization), the parsing of RDF (Resource Description Framework) documents that are exchanged with the web services, and the consumption of Model Building, Toxicity Prediction and other ancillary web services (e.g. computation of molecular similarity). Second, it facilitates the database management, the serialization of resources in RDF and provides all that is necessary to a web service provider to join the OpenTox network and offer predictive toxicology web services.
The document introduces two MATLAB-based LTE simulators for link and system level simulations. The source codes are available under an academic license, allowing researchers to reproduce wireless communications research. The link level simulator models physical layer aspects like channel estimation and MIMO detection. The system level simulator focuses on network issues like scheduling and interference. Together the simulators enable the investigation and comparison of algorithms in a standardized LTE environment.
Formal Abstraction & Interface Layer for Application Development in Automatio...ijcncjournal019
This paper presents a novel formal language semantics and an abstraction layer for developing application code focused on running on agents or nodes of a multi-node distributed system aimed at providing any IoT service, automation, control or monitoring in the physical environment. The proposed semantics are rigorously validated by K-Framework alongside a simulation with code produced using the said semantics. Furthermore, the paper proposes a clocking strategy for systems built on the framework, potential conflict resolution designs and their trade-offs, adherence to the CAP Theorem and verification of the atomic semantic using Fischer’s Protocol. A negative test-case experiment is also included to verify the correctness of the atomic semantic.
The document is an assignment submission that describes the 7 layers of the OSI model. It begins with assignment details such as the name, course, and student submitting it. It then provides a 3 paragraph overview of the OSI model, describing it as a conceptual framework for networking systems that characterizes functions into 7 abstraction layers: Physical, Data Link, Network, Transport, Session, Presentation, and Application. Each layer is then briefly defined in 1-2 sentences explaining its basic function.
Designing Run-Time Environments to have Predefined Global DynamicsIJCNCJournal
The stability and the predictability of a computer network algorithm's performance are as important as the
main functional purpose of networking software. However, asserting or deriving such properties from the
finite state machine implementations of protocols is hard and, except for singular cases like TCP, is not
done today. In this paper, we propose to design and study run-time environments for networking protocols
which inherently enforce desirable, predictable global dynamics. To this end we merge two complementary
design approaches: (i) A design-time and bottom up approach that enables us to engineer algorithms based
on an analyzable (reaction) flow model. (ii) A run-time and top-down approach based on an autonomous
stack composition framework, which switches among implementation alternatives to find optimal operation
configurations. We demonstrate the feasibility of our self-optimizing system in both simulations and real-world Internet setups.
The AgentMatcher system matches learners and learning objects (LOs) using a tree-structured representation of metadata. It extracts metadata from LOs using LOMGen and stores it in a database. Learners can enter query parameters as a weighted tree, which is compared to LO metadata trees to find similar LOs. Top matches above a similarity threshold are returned to the learner. LOMGen semi-automatically generates metadata using keywords and allows an administrator to refine selections. This enhances precision over simple keyword searches.
Automatic Synthesis and Formal Verification of Interfaces Between Incompatibl...IDES Editor
In this work, we are concerned with automatic
synthesis and formal verification of interfaces between
incompatible soft intellectual properties (IPs) for System On
Chip (SOC) design. IPs' structural and dynamic aspects are
modeled via UML2.x diagrams such as structural, timing and
Statecharts diagrams. From these diagrams, interfaces are
generated automatically between incompatible IPs following
an interface synthesis algorithm. Interfaces behaviors
verification is performed by the model checker that is
integrated in Maude language. A Maude specification
including interface specification and properties for verification
are generated automatically from UML diagrams.
The document discusses an approach to automatically recover pointcut expressions (PCEs) in evolving aspect-oriented software. It presents an algorithm that derives structural patterns from program elements selected by an original PCE. These patterns capture commonalities between the elements. The patterns are then applied to later versions to suggest new elements that may need to be included in an updated PCE. An evaluation of the approach on three programs found it was able to accurately infer 90% of new elements selected by PCEs in subsequent versions. The approach aims to assist developers in maintaining PCEs as software evolves.
Over time, Machine Learning inference workloads became more and more demanding in terms of latency and throughput, with multiple models being deployed in the system. This scenario leaves ample room for runtime and memory optimizations, which current systems fall short of exploring because they employ a black-box model of ML models and tasks.
On the opposite side, Pretzel adopts a white-box description of ML models, which allows the framework to perform optimizations over deployed models and running tasks, saving memory and increasing the overall system performance. In this talk we will show the motivations behind Pretzel, its current design and possible future developments.
This document discusses the process of lexical analysis in compiling or interpreting a program. It begins with an abstract discussing how lexical analysis involves turning a string of letters into tokens like keywords, identifiers, constants, and operators. It then provides background on lexical analysis, explaining that it reads input characters one by one and groups them into tokens that are passed to a parser. Key techniques for lexical analysis mentioned include using regular expressions and finite automata to identify tokens. The document also reviews related work on parallelizing lexical analysis and includes diagrams of the lexical analysis process and sample output tokens. It concludes by discussing limitations and opportunities for future work improving lexical analysis.
The document describes an extended method for synthesizing distributed protocol specifications from UML-based service specifications that include timing constraints. The method first assigns timing intervals to transitions in the service specification. It then extends an existing UML-based protocol synthesis technique to consider channel delays between communicating protocol entities when deriving transition events and timing constraints. The resulting synthesized protocol specifications are guaranteed to conform to the timing constraints of the original service specification.
Truly dependable software systems should be built with structuring techniques able to decompose the software complexity without
hiding important hypotheses and assumptions such as those regarding
their target execution environment and the expected fault- and system
models. A judicious assessment of what can be made transparent and
what should be translucent is necessary. This paper discusses a practical
example of a structuring technique built with these principles in mind:
Reflective and refractive variables. We show that our technique offers
an acceptable degree of separation of the design concerns, with limited
code intrusion; at the same time, by construction, it separates but does
not hide the complexity required for managing fault-tolerance. In particular, our technique offers access to collected system-wide information
and the knowledge extracted from that information. This can be used
to devise architectures that minimize the hazard of a mismatch between
dependable software and the target execution environments.
Transaction handling in com, ejb and .netijseajournal
The technology evolution has shown a very impressive performance in the last years by introducing several
technologies that are based on the concept of component. As time passes, new versions of Component-
Based technologies are released in order to improve services provided by previous ones. One important
issue that regards these technologies is transactional activity. Transactions are important because they
consist in sending different small amounts of information collected properly in a single combined unit
which makes the process simpler, less expensive and also improves the reliability of the whole system,
reducing its chances to go through possible failures. Different Component-Based technologies offer
different ways of handling transactions. In this paper, we will review and discuss how transactions are
handled in three of them: COM, EJB and .NET. It can be expected that .NET offers more efficient
mechanisms because it was released later than the other two technologies. Nevertheless, COM and
EJB are still present in the market and their services are still widely used. Comparing transaction handling
in these technologies will be helpful to analyze the advantages and disadvantages of each of them. This
comparison and evaluation will be seen in two main perspectives: performance and security.
The role of the lexical analyzer
Specification of tokens
Finite state machines
From a regular expression to an NFA
Convert NFA to DFA
Transforming grammars and regular expressions
Transforming automata to grammars
Language for specifying lexical analyzers
Exploring Models of Computation through Static Analysis
Ivan Jeukens1
Department of Electronic Systems
University of São Paulo
[email protected]
Marius Strum
Department of Electronic Systems
University of São Paulo
[email protected]
Abstract
In this work we present the first version of a tool designed for determining the valid models of computation of an executable specification. This is done by parsing the source code of the specification and checking if the restrictions imposed by a given model of computation are respected. We have successfully analyzed 62 library components out of 103. We also successfully analyzed a benchmark specification.
1 Introduction
The traditional approach for creating an executable specification of a system is based on selecting a suitable specification language. The suitability of a language is determined by a match between characteristics of the system under specification (SuS) and the language's model of computation (MoC). With the necessity of designing heterogeneous systems, novel specification languages and frameworks have appeared that support modeling with multiple MoCs. Among others, Ptolemy II [BHA02] is an example of such a framework. It provides the designer with twelve different MoCs, and allows for easy addition of new ones.

A typical approach for heterogeneous modeling is based on a general decomposition of the SuS into subsystems, followed by the choice of a MoC for describing each subsystem. This choice is made by eliminating candidate MoCs based on general characteristics, such as the model of time or the type of interaction with other subsystems and the environment. For example, a subsystem requiring a continuous time model rules out all untimed models, as well as all models that are unable to interact with a continuous one. This solution is an extension of the traditional approach of using a language with a single MoC.
We believe that the choice of MoC should be considered in a design project, since more than one MoC might be suitable for capturing a part of the system. Different MoCs have different characteristics (expressiveness, synthesizability, verifiability, etc.), therefore affecting the final quality of the design and its design time. This selection should be done by means of a systematic exploration of different solutions. In fact, there are two interrelated tasks: the decomposition of a specification into different subsystems and the selection of a MoC for describing each subsystem. Depending on the decomposition, the use of a particular MoC might be impossible. In the same way, a MoC can restrict the possible decompositions of a system.
In this paper, we describe an approach for selecting models of computation based on static analysis of the specification's source code. This is done by checking whether the restrictions imposed by a certain MoC are met by the specification. We are currently contemplating two models of computation: synchronous data flow (SDF) and synchronous reactive (SR). Section 2 presents a brief description of the Ptolemy II framework. Section 3 comments on a related study. Section 4 addresses the major steps of the analysis tool that we have developed. The algorithms for determining the validity of the SDF and SR models for a specification are given in sections 5 and 6, respectively. Section 7 addresses the analysis of heterogeneous specifications. Finally, results and conclusions are presented.

1 Funded by the Fundação de Amparo a Pesquisa do Estado de São Paulo (FAPESP), under contract 00/12146-0.
2 Background
This work employs the Ptolemy II framework as a tool for creating executable specifications.
It is based on the coordinated interaction between components.
At its lowest level, it provides the means for describing a clustered graph. The main class is the Entity. Entities can have Ports, which can be interconnected. Each interconnection creates a channel between the ports. A hierarchic version of the Entity class is available2.
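As a small illustration of this abstract syntax (a sketch only, not taken from the paper; it assumes the Ptolemy II kernel classes CompositeEntity, ComponentEntity and ComponentPort and the connect() convenience method, whose exact signatures may differ between Ptolemy II releases), the following code builds a two-entity topology with a single channel:

    import ptolemy.kernel.ComponentEntity;
    import ptolemy.kernel.ComponentPort;
    import ptolemy.kernel.CompositeEntity;

    public class TinyTopology {
        public static void main(String[] args) throws Exception {
            // One level of hierarchy (a topology) containing two entities.
            CompositeEntity top = new CompositeEntity();
            ComponentEntity producer = new ComponentEntity(top, "producer");
            ComponentEntity consumer = new ComponentEntity(top, "consumer");
            // Ports attached to the entities.
            ComponentPort out = new ComponentPort(producer, "out");
            ComponentPort in = new ComponentPort(consumer, "in");
            // Interconnecting the two ports creates a channel between them.
            top.connect(out, in);
        }
    }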
On top of this abstract syntax, an abstract semantics is implemented providing an
infrastructure for data exchange and execution. For communication, an interface called Receiver
contains 12 methods used for data transfer between components, such as:
get(): returns a token from the receiver;
put(): puts a given token into the receiver;
hasToken(): tests for the presence of data in the receiver;
hasRoom(): tests for the availability of storage room in the receiver;
isKnown(): tests whether the presence or absence of data in the receiver is known.
The Port class is specialized into the IOPort class. An IOPort specifies the direction of data
transfer. A Receiver is associated with every input IOPort.
The Ptolemy II framework divides execution into a number of iterations. An interface called
Executable provides 9 methods that are called in a certain order, such as:
initialize(): called once at the beginning of execution;
prefire(): called once in an iteration, before the fire() and postfire() methods;
fire(): called once in an iteration, before the postfire() method;
postfire(): last method called during an iteration;
wrapup(): called once at the end of execution.
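The sketch below illustrates how a component typically splits its behavior across these execution methods. It is purely illustrative: the class and its fields are hypothetical, and a real component would extend a Ptolemy II actor class rather than stand alone.

    public class CounterSketch {
        private int count;               // state carried across iterations

        public void initialize() {       // called once at the beginning of execution
            count = 0;
        }

        public boolean prefire() {       // called once per iteration, before fire()
            return true;                 // true means the component is ready to fire
        }

        public void fire() {             // the computation of one iteration
            System.out.println("count = " + count);
        }

        public boolean postfire() {      // last method of the iteration; updates state
            count++;
            return count < 10;           // false asks the director not to fire this component again
        }

        public void wrapup() {           // called once at the end of execution
            System.out.println("done after " + count + " iterations");
        }
    }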
The Executable interface is implemented by two different classes: the Director and the AtomicActor. The Director is responsible for controlling the execution. Typically, a Director contains a scheduler and information shared3 between components, such as a model of time. A Director can be associated only with a hierarchic actor. Different hierarchic actors might be associated with different directors.
The AtomicActor class is a specialization of the Entity class, and provides generic
implementations for the execution methods. A designer implements the behavior of a component by
overriding some execution methods of the AtomicActor class. The following code shows a fire()
method that computes the sum of the data available on all channels of an input port:

    fire() {
        Token sum = new Token();
        // Visit every channel of the input port.
        for (int i = 0; i < input.getWidth(); i++) {
            if (input.hasToken(i)) {      // read a channel only if it holds data
                Token t = input.get(i);
                sum = sum.add(t);
            }
        }
        // Send the accumulated sum on all channels of the output port.
        output.broadcast(sum);
    }

2 One level of hierarchy is also called a topology.
3 There are no shared variables in Ptolemy.
The getWidth() method of a port returns its number of channels. A Token is an object that stores a data value to be exchanged. The Ptolemy II framework provides several different types of Tokens,
carrying different data types. The above code operates on all possible values, since it uses a
polymorphic method for addition.
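For instance (a small illustration, not from the paper), the polymorphic add() operation works across concrete token types, with the result type resolved at run time:

    import ptolemy.data.DoubleToken;
    import ptolemy.data.IntToken;
    import ptolemy.data.Token;

    public class TokenAddDemo {
        public static void main(String[] args) throws Exception {
            Token a = new IntToken(2);
            Token b = new DoubleToken(3.5);
            // add() converts the operands as needed; here the result is a DoubleToken.
            Token sum = a.add(b);
            System.out.println(sum);     // prints 5.5
        }
    }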
In order to implement a MoC in Ptolemy II, one has to specialize the Director class and
implement the Receiver interface.
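As a rough sketch of the receiver side (covering only the five methods listed earlier, not the full twelve-method ptolemy.actor.Receiver interface that an actual domain receiver would implement), a FIFO receiver could look as follows:

    import java.util.ArrayDeque;
    import java.util.Deque;

    import ptolemy.data.Token;

    public class FifoReceiverSketch {
        private final Deque<Token> queue = new ArrayDeque<Token>();

        public Token get() {             // returns (and removes) the oldest token
            return queue.removeFirst();
        }

        public void put(Token token) {   // appends a token to the FIFO
            queue.addLast(token);
        }

        public boolean hasToken() {      // is there data to read?
            return !queue.isEmpty();
        }

        public boolean hasRoom() {       // an unbounded FIFO always has room
            return true;
        }

        public boolean isKnown() {       // this simple sketch always knows its contents
            return true;
        }
    }

The isKnown() method matters mostly for the SR model, where a receiver must distinguish a value that is absent from one that has simply not been computed yet.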
3 Related Work
In [LEE02], a technique is presented for determining if a component is compatible with a given MoC. Their approach is based on extending the concept of a type system. An automata-based formalism is employed to model the calls to methods of the Ptolemy API by a component, a Director, and a Receiver. Different automata may be constructed depending on the desired level of detail. Compatibility is then checked by composing the obtained automata.
Their work shares some similarity with ours, since we also check the validity of a component by looking at the calls to the Ptolemy API. However, when possible, we try to precisely identify a component as valid or invalid for a MoC. For instance, a valid SDF component requires a constant number of data values to be exchanged per firing. It is not clear whether it is possible to model this type of information using the automata formalism. Also, we deal with whole specifications, i.e., atomic components and hierarchic components, whereas in [LEE02] only atomic components are addressed. It is mentioned that it is possible to use the same technique for checking properties of a specification, but no detail is given.
4 MoC Validation
Our objective is, given an executable specification4 written for the Ptolemy II framework, to determine which MoCs are valid for it. A valid specification is one that respects the restrictions imposed by a MoC5. A specification may violate the rules of a MoC at three locations: 1) within a component's source code; 2) at one level of hierarchy; 3) between levels of hierarchy. This naturally leads to a bottom-up verification of the specification.
We have developed a tool for automatically performing such validation. The input to the
tool is composed of each component's source code and a file containing the interconnection
information. The result produced by the tool is a set of messages to the designer and possibly
a modified version of the input specification. The generated messages indicate warnings and
errors in the specification relative to a MoC. Figure 1 presents the main subtasks of the
validation process.
⁴ We are considering only static and untimed specifications.
⁵ The restrictions come from the MoC semantics and from Ptolemy's implementation of the MoC.
Figure 1 – The subtasks of the MoC validation.
The first subtask parses the input data (Java classes for the components and an XML [BHA02] file
for the interconnection information) and constructs the internal data representations. For hierarchic
components, a list of contained atomic and hierarchic components is created. For atomic components,
data structures [APP98] such as a control-flow graph (CFG) and a symbol table are generated.
Once the internal representation is created, a preprocessing phase is performed. It is divided
into two steps: 1) determining whether an atomic component is written specifically for a MoC; 2) identifying
whether the designer has marked a component as valid or invalid for a MoC. The first step searches the
component's code for the presence of methods and constructs specific to a MoC. For example, a
special type of atomic component of the SR model is identified by the presence of a marker attribute.
The second step looks for the presence of two attributes: one identifies the component as valid for a set of
MoCs and the other as invalid. At the end of this subtask, the components are classified as generic or
specific to a MoC, and a list of components to be further analyzed is constructed.
The third subtask is the simplification of the internal representation of the atomic components.
This is necessary in order to provide more precise data to the following subtasks. Currently, two
algorithms are employed: constant propagation and dead code elimination. These algorithms run on
the static single assignment (SSA) form [APP98] of each CFG. For some MoCs, it is possible to perform
additional substitutions; for instance, under SDF the hasToken() method always evaluates to the boolean value
true. These MoC-specific substitutions are performed right before the simplification subtask.
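A hypothetical illustration of such a substitution pass (the CfgNode interface below is purely illustrative; it is not the tool's internal representation):

import java.util.List;

class SubstitutionSketch {
    // Minimal view of a CFG node for the purposes of this sketch.
    interface CfgNode {
        boolean isCallTo(String methodName);
        void replaceWithBooleanConstant(boolean value);
    }

    // Replace every hasToken() call by the constant true, so that constant
    // propagation and dead code elimination can remove the guarded branches under SDF.
    static void substituteHasTokenForSdf(List<CfgNode> cfgNodes) {
        for (CfgNode node : cfgNodes) {
            if (node.isCallTo("hasToken")) {
                node.replaceWithBooleanConstant(true);
            }
        }
    }
}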
After simplification is completed, each component has two simplified SSA CFGs (one for SDF,
one for SR) for each of its methods. For every simplified CFG, a new graph is generated,
called the Method Graph (MG). The nodes of an MG are the CFG nodes that call the Ptolemy II
API. There is an edge between two nodes in the MG if there is a path between the respective nodes in
the CFG. Auxiliary nodes are used in the MG to indicate the beginning of a loop and a call to another
method of the source code. All analyses of atomic components described in sections 5 and 6 are based
on method graphs. Figure 2 shows the MG under the SDF and SR models for the fire() method depicted in
section 2.
Figure 2 – Method graphs under SDF (left hand side) and SR (right hand side).
[Figure 2 content: under SDF the method graph contains Source, Loop, Get, Broadcast and Sink nodes; under SR it additionally contains a HasToken node.]
[Figure 1 content: .java and .xml inputs pass through parsing and DB construction, preprocessing, code simplification, MoC independent checks, component validation and hierarchy validation, producing messages and a possibly modified .java/.xml specification.]
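As a rough sketch of how a method graph could be derived from a CFG (all names below are illustrative, not the tool's; the auxiliary loop and method-call nodes are omitted): every CFG node that calls the Ptolemy II API becomes an MG node, and an edge is added whenever a CFG path connects two such nodes.

import java.util.*;

class MethodGraphSketch {
    // cfg: successor lists of the control-flow graph, keyed by node id.
    // apiCallNodes: the CFG nodes that call the Ptolemy II API.
    static Map<Integer, Set<Integer>> buildMethodGraph(Map<Integer, List<Integer>> cfg,
                                                       Set<Integer> apiCallNodes) {
        Map<Integer, Set<Integer>> mg = new HashMap<>();
        for (int start : apiCallNodes) {
            Set<Integer> reachableApiNodes = new HashSet<>();
            Deque<Integer> work = new ArrayDeque<>(cfg.getOrDefault(start, List.of()));
            Set<Integer> visited = new HashSet<>();
            while (!work.isEmpty()) {
                int n = work.pop();
                if (!visited.add(n)) continue;
                if (apiCallNodes.contains(n)) reachableApiNodes.add(n);
                work.addAll(cfg.getOrDefault(n, List.of()));
            }
            mg.put(start, reachableApiNodes);
        }
        return mg;
    }
}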
After the simplification subtask, three restrictions on the source code are checked. These
restrictions are imposed by our tool so that it is able to analyze the source code. They are:
data exchange methods may be employed only within countable loops;
the argument of a data exchange method that specifies the port's channel must be a constant;
when a vector of data is transmitted between components, its length must be specified by a
constant.
If all the above restrictions are met, the MoC validation algorithms are executed.
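As an illustration, the following fire() method satisfies all three restrictions (a hedged sketch in the style of the example in section 2; input and output are assumed to be the actor's ports, and the vectorized send(channel, tokenArray, length) variant of the Ptolemy II port API is assumed):

public void fire() throws IllegalActionException {
    final int CHANNEL = 0;            // channel index is a constant
    Token[] buffer = new Token[4];    // vector length is a constant
    for (int i = 0; i < 4; i++) {     // countable loop: the bound is a constant
        buffer[i] = input.get(CHANNEL);
    }
    output.send(CHANNEL, buffer, 4);  // vectorized send with a constant length
}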
5 Synchronous Data Flow
The synchronous data flow (SDF) model [LEE87] is an extension of the data flow model of
computation. In this model, the system is considered to be a graph where a node represents the function being
computed by a component and an edge indicates the data dependency between functions. The execution
of each component is divided into a series of atomic activations. A component is activated when
data is available on all its inputs. In the SDF model, integer values (sample rates) are used to specify
the amount of data required by each input and the amount of data produced by each output at the end
of one activation. Conceptually, a well-defined SDF specification should execute on infinite input
streams without terminating. It has been shown that an SDF specification admits a static schedule, i.e., the
number of times each component must be activated and the order of activations can be
determined before execution. In Ptolemy II, a component activation calls the prefire(), fire() and post-
fire() methods, in that order.
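As a small worked example of the balance equations of [LEE87]: if component A produces p tokens per activation on an edge and component B consumes c tokens per activation from it, consistency requires repetition counts r_A and r_B with

    r_A * p = r_B * c

With p = 2 and c = 3, the smallest positive solution is r_A = 3 and r_B = 2, so one iteration of the schedule activates A three times and B twice; if no positive integer solution exists for the whole graph, the sample rates are inconsistent.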
Following our strategy for validating a specification, we have to ensure that three conditions
are met:
1. A component must produce and consume a constant number of data items;
2. The sample rates of all components must be consistent [LEE87];
3. The specification must be deadlock free.
Our tool checks the first condition by applying an algorithm that computes the minimum and
maximum values of the sample rates of each component's port. The algorithm traverses the method
graphs of the initialize(), prefire(), fire() and postfire() methods. For each node, the values are
computed from the values of all its input nodes and the method being called by the node. When a
graph link node is encountered, the algorithm is applied to the method graph associated with that node.
When a loop head node is found, the algorithm first processes the loop body. The result is obtained at
the sink node of the graph. For a constant sample rate, the minimum and maximum values have to be
equal.
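The way this interval could be propagated is illustrated by the following fragment (an illustrative sketch, not the tool's actual data structure): sequential composition adds the intervals of two fragments, alternative paths join them, a countable loop multiplies the interval of its body by the iteration count, and a constant rate is one whose bounds coincide.

record Rate(int min, int max) {
    // sequential composition of two graph fragments
    Rate plus(Rate other) { return new Rate(min + other.min, max + other.max); }
    // alternative paths joining at the same node
    Rate join(Rate other) { return new Rate(Math.min(min, other.min), Math.max(max, other.max)); }
    // a countable loop whose body has this rate, executed 'iterations' times
    Rate times(int iterations) { return new Rate(min * iterations, max * iterations); }
    // a constant sample rate: minimum and maximum coincide
    boolean isConstant() { return min == max; }
}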
This and other algorithms employed in our tool are based on traversing the paths of method
graphs. In doing so, we assume that all paths are executable. Since the MG is generated from a CFG,
there may be false paths. It is easy to show that applying an algorithm that removes false paths from
the CFG can only improve the result of the tool: a component detected as SDF-valid will not become
invalid, nor will its sample rates change. What may happen is that a component previously reported as
invalid by the tool is discovered to be valid after the removal of some false paths.
We check the second and third conditions only if all components have constant sample rates. This
is done by trying to compute a schedule for the topology. If the sample rates are not consistent, it will
not be possible to compute the number of required activations of each actor. If there is a deadlock
situation⁶, the scheduler will fail to compute the order of activations. Both faults are reported by the
scheduler. We use the SDF scheduler available in the Ptolemy II framework.
6 Synchronous Reactive
The synchronous reactive (SR) model [EDW97] is based on the synchrony hypothesis: given a
set of external stimuli, the system can compute an answer infinitely fast. Each reaction of the system
is instantaneous and atomic. Time is divided into a series of discrete instants.
Ptolemy's semantics for the SR model treats the components and their interconnections as
a system of equations. At each instant, one has to find the least fixed point solution of this system. In
order to guarantee the existence of such a solution and ensure a deterministic execution, two
characteristics are imposed: 1) a signal should carry values from a flat⁷ pointed complete partial order
(CPO); 2) a component should compute a monotonic function.
The first condition means that a signal has two states: undefined and defined. At the beginning
of each instant, all signals except some system inputs are in the undefined state. A defined state
is one where the value, or the absence of a value, is known. The second condition requires that, once
an output signal is set to the defined state, it cannot return to the undefined state or have its value
changed when given a more defined set of inputs.
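Restated in the usual CPO notation (our paraphrase, not a quotation from [EDW97]): the flat pointed CPO adds a bottom element ⊥ (undefined) below every ordinary value, with distinct defined values left incomparable, and a component computing a function f must be monotonic with respect to this order:

    x ⊑ y  implies  f(x) ⊑ f(y)

which is exactly the requirement that an output, once defined, never reverts to undefined or changes its value when the inputs become more defined.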
Within Ptolemy's implementation of the SR model, each instant is composed of a series of
component activations. An activation is atomic and executes the prefire() and fire() methods. The
order of component activations can be computed statically. At the end of each instant, the postfire()
method of each component is executed. A component is classified as one of two types: strict or non-strict. A
strict component requires all inputs to be defined prior to activation, and a non-strict component does
not. Therefore, a strict component is activated at most once during an instant, whereas a non-strict
component may be activated several times.
From the above characteristics of the SR model, we have to check two restrictions on an atomic
component:
1. a component must generate at most one data element per output during an instant;
2. a component must implement a monotonic function.
The first restriction is checked with the sample-rate algorithm developed for the SDF model.
Here, the acceptable (minimum, maximum) values for an output are (0,0), (0,1) or (1,1). For outputs,
values greater than one generate an error message and the component is rejected. For inputs, a value
greater than one prompts a warning message, indicating that the same value will be read repeatedly.
Determining whether a component implements a monotonic function just by looking at its source
code is a difficult task, because one has to find all conditions under which an output is defined,
which may depend on the presence of signals and on their values. For a non-strict component, these
conditions have to be determined considering a finite number of activations.
We have adopted a more restricted version of the SR model, where an output of a component
requires the known state of all inputs that it depends on. A component is activated only when it is
possible to define some new output. We no longer classify a component as strict or non-strict. Also, it is
not allowed to define an output based on the unknown state of an input signal, i.e., an output definition
that depends on the false value of the method isKnown(). It is easy to show that, with such restrictions,
only monotonic functions can be captured.
⁶ The interconnection of components contains at least one loop without initialization values.
⁷ Flat means that the signals carry only scalar values. This is a restriction imposed by Ptolemy's
implementation, not by the semantics of the model.
An output depends on an input under two circumstances:
1. the data being generated requires a known state of the input signal;
2. a conditional expression on the control flow path that leads to the definition of an output
requires a known state of an input signal.
The two circumstances of dependency are determined by searching the use-definition chains
[APP98] created during the simplification phase of the analysis. The following pseudo-code
shows a simplified version of the algorithm that determines the dependencies:
FindDependencies(MethodGraph MG, FlowGraph FG) {
    forall nodes N in MG do
        if N is an output signal definition
            // circumstance 1: the data being generated uses some value
            forall uses USE of a variable V in N do
                get the definition site DEF of USE
                searchDataDependencies(DEF)
            // circumstance 2: a conditional on the path leading to N uses some value
            forall expressions EXPR on the path leading to N do
                forall uses USE of a variable V in EXPR do
                    get the definition site DEF of USE
                    searchDataDependencies(DEF)
}

searchDataDependencies(DefinitionSite DEF) {
    if DEF is a method call on an input signal IN
        add IN to the dependency list
    else if DEF is a call to another method MC of the source code
        set MG to the Method Graph of MC
        set FG to the Flow Graph of MC
        FindDependencies(MG, FG)
    else
        forall uses of a variable V in the definition statement DEF
            get the definition site DEF' of V
            searchDataDependencies(DEF')
}
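For instance, applied to the fire() method of section 2, this algorithm would record that the value broadcast on output depends on the input port: input.get(i) is the definition site of the data being summed (the first circumstance), and input.hasToken(i) is a conditional expression on the path leading to the broadcast (the second circumstance).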
Once all components are validated, the topology is checked by trying to compute a
schedule. A graph representing the dependencies between input and output signals is created. A node
in this graph is a component's port. There is an edge between two nodes when: 1) both nodes belong to
the same component and there is an input/output dependency; 2) the nodes belong to different
components and there is a link in the topology connecting them. A valid SR topology is one where the obtained
graph is acyclic. A topological order of the graph gives an activation schedule.
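The acyclicity check and the schedule extraction can be sketched with a standard topological sort (illustrative code, not the tool's; ports are identified by strings here):

import java.util.*;

class SrScheduleSketch {
    // deps maps a port to the ports that depend on it (the edges described above).
    // Returns a topological order (a valid activation schedule), or empty if the
    // dependency graph contains a cycle, i.e. the SR topology is invalid.
    static Optional<List<String>> schedule(Map<String, List<String>> deps) {
        Map<String, Integer> inDegree = new HashMap<>();
        deps.forEach((node, successors) -> {
            inDegree.putIfAbsent(node, 0);
            for (String s : successors) inDegree.merge(s, 1, Integer::sum);
        });
        Deque<String> ready = new ArrayDeque<>();
        inDegree.forEach((node, degree) -> { if (degree == 0) ready.add(node); });
        List<String> order = new ArrayList<>();
        while (!ready.isEmpty()) {
            String node = ready.poll();
            order.add(node);
            for (String s : deps.getOrDefault(node, List.of()))
                if (inDegree.merge(s, -1, Integer::sum) == 0) ready.add(s);
        }
        return order.size() == inDegree.size() ? Optional.of(order) : Optional.empty();
    }
}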
7 Heterogeneous specifications
The validation process is performed bottom-up. First, all atomic components are analyzed.
Then, each topology is validated, starting from the leaf ones, i.e., those that do not contain any
hierarchic component. Once a hierarchic component is validated for a group of MoCs, it will be seen as an
atomic component at the next level of hierarchy.
For each hierarchic component contained in a topology, we have to validate its interaction with
the enclosing topology. For the SDF and SR models, we have:
An SR hierarchic component contained in an SDF topology: the SR component must be
homogeneous, i.e., all sample rates of its ports must be equal to 1.
An SR hierarchic component contained in an SR topology: all output ports of the SR component
are considered dependent on all its inputs.
An SDF hierarchic component contained in an SR topology: the SDF component must be
homogeneous, and all output ports of the SDF component are considered dependent on all its
inputs.
An SDF hierarchic component contained in an SDF topology: the hierarchic component is
scheduled with the computed sample rates. These rates are obtained from the connected
ports of the inside topology.
8 Experimental results
The first experiment we have conducted tested the analysis of atomic components. The
Ptolemy II framework comes with libraries of such components. Most of them are designed not to a
particular model of computation8
, i.e., they are written only with methods from the abstract semantics
layer. We obtained the following results:
Library           | Total Nº of Components | Success | Unable | Not possible
Array             |  6                     |  2      | 2      |  2
Conversion        | 17                     | 16      | 1      |  0
Flow Control      | 16                     |  3      | 3      | 10
Logic             |  5                     |  1      | 1      |  3
Math              | 16                     |  9      | 6      |  1
Random            |  4                     |  3      | 0      |  1
Signal Processing | 16                     |  8      | 7      |  1
Sink              | 12                     | 12      | 0      |  0
Source            | 11                     |  8      | 1      |  2
⁸ This is called a domain polymorphic component [BHA02].
The first column specifies a component library from Ptolemy II, and the second the total number
of components in it. The third column gives the number of components correctly analyzed. The fourth
column indicates the number of components that the tool was unable to analyze, or for which it did not
produce the correct output, due to its limitations. The last column shows the number of components
that cannot be analyzed.
For each component for which the tool was unable to produce the correct result, we inspected
the reason. There were only two situations:
1. the consumption/production depended on the value of a component's parameter;
2. the constant propagation algorithm was not powerful enough.
Both situations can be fixed by a simple improvement of the tool: currently we do not
analyze parameters, but their values can be treated as constants.
Looking at the components that were impossible to analyze, we identified three
situations:
1. the component's source code does not follow the restrictions imposed by our tool;
2. exceptions are thrown in the component's execution methods;
3. the component's ports are specified by the user through parameters.
The first situation can be addressed by using more powerful algorithms to analyze the source
code, trying to identify the possible values of a variable. The second and third cases can only be
removed by rewriting the source code of the component.
The second experiment is the analysis of a hierarchic specification. We selected a
Ptolemy II demo of the SR model that implements a cyclic token-ring arbiter. Figure 3 shows the
topology of the specification.
Figure 3 – Cyclic token-ring arbiter.
[Figure 3 content: three arbiter cells, each with ports R, G, PI, TI, PO and TO, connected at the top level; inside a cell there are NAND, OR and two AND components and a Delay component.]
The specification contains two levels of hierarchy. Three leaf hierarchic components are
connected at the top level. Each leaf hierarchic component contains four components computing logic
functions and a component implementing a delay. The delay component produces the value from the
previous iteration.
The version available from Ptolemy II is designed for the SR model. Each logic function is
implemented by a non-strict component. For instance, a logic OR function can define its output as soon
as one input has a known value equal to true. Under our restriction of the SR model, this is not
possible: all logic function components require the known state of all input signals.
All the atomic components were correctly identified as valid, for both SDF and SR. The leaf
hierarchic component was also identified as valid for both MoCs. The top-level hierarchic component
was identified as invalid, because of the presence of a cycle that goes through the PI and PO
signals. The original specification did not have this problem, since the logic function components were
non-strict. In order to fix the problem, an extra delay actor has to be inserted before a PI signal, with its
initial value set to true. After this modification, the tool correctly identifies the top-level
hierarchic component as valid for both models.
9 Conclusions and Future work
In this paper, we have described a tool that, given an executable specification, tries to identify
the models of computation that are valid for it. This is done by static analysis of the specification's code.
Since the approach is based on static analysis of source code, it is not always possible to produce a
definitive answer; hence, our algorithms were designed to be conservative.
Despite the intrinsic difficulty of the problem, our initial results indicate the usefulness of the
tool. Out of 103 components, 62 were correctly analyzed. This number can easily be raised to 83
components. We have also successfully analyzed some toy specifications.
The most immediate remaining task is to improve the implementation of
the tool, addressing the situations described in section 8. The analysis of specifications of actual
systems will also be performed.
Another important piece of future work is the use of parameters indicating semantic characteristics of a
component or signal, without implying a particular model of computation. This will prune the set of
candidate MoCs.
10 References
[APP98] A. W. Appel, Modern Compiler Implementation in ML, Cambridge University
Press, ISBN 0-521-58274-1, 1998.
[BHA02] S. S. Bhattacharyya, et al., Heterogeneous Concurrent Modeling and Design in
Java, Memorandum UCB/ERL M02/23, University of California, Berkeley, August
2002.
[EDW97] S. A. Edwards, The Specification and Execution of Synchronous Reactive Systems,
PhD Dissertation, Report ERL, No. M97/31, Dept. EECS, University of California,
Berkeley, 1997.
[LEE87] E. A. Lee, D. G. Messerschmitt, Synchronous Data Flow, Proceedings of the IEEE,
Vol. 75, No. 9, September, 1987.
[LEE02] E. A. Lee, Y. Xiong, Behavioral Types for Component-Based Design, Memorandum
UCB/ERL M02/29, University of California, Berkeley, September 2002.