Analysis Model

Software Engineering-Specification Principles

There is no doubt that the mode of specification has much to do with the quality of solution. Software engineers who
have been forced to work with incomplete, inconsistent, or misleading specifications have experienced the
frustration and confusion that invariably results. The quality, timeliness, and completeness of the software suffers as
a consequence.

Specification Principles
Specification, regardless of the mode through which we accomplish it, may be viewed as a representation process.
Requirements are represented in a manner that ultimately leads to successful software implementation. A number of
specification principles, adapted from the work of Balzer and Goldman, can be proposed:

1. Separate functionality from implementation.

2. Develop a model of the desired behavior of a system that encompasses data and the functional responses of a
system to various stimuli from the environment.

3. Establish the context in which software operates by specifying the manner in which other system components
interact with software.

4. Define the environment in which the system operates and indicate how “a highly intertwined collection of agents
react to stimuli in the environment (changes to objects) produced by those agents".

5. Create a cognitive model rather than a design or implementation model. The cognitive model describes a system
as perceived by its user community.

6. Recognize that “the specifications must be tolerant of incompleteness and augmentable.” A specification is
always a model—an abstraction—of some real (or envisioned) situation that is normally quite complex. Hence, it
will be incomplete and will exist at many levels of detail.

7. Establish the content and structure of a specification in a way that will enable it to be amenable to change.

This list of basic specification principles provides a basis for representing software requirements. However,
principles must be translated into realization.

Representation
We have already seen that software requirements may be specified in a variety of ways. However, if requirements
are committed to paper or an electronic presentation medium (and they almost always should be!) a simple set of
guidelines is well worth following:

Representation format and content should be relevant to the problem. A general outline for the contents of a
Software Requirements Specification can be developed. However, the representation forms contained within the
specification are likely to vary with the application area. For example, a specification for a manufacturing
automation system might use different symbology, diagrams and language than the specification for a programming
language compiler.

Information contained within the specification should be nested. Representations should reveal layers of
information so that a reader can move to the level of detail required. Paragraph and diagram numbering schemes
should indicate the level of detail that is being presented. It is sometimes worthwhile to present the same information
at different levels of abstraction to aid in understanding.
Diagrams and other notational forms should be restricted in number and consistent in use. Confusing or
inconsistent notation, whether graphical or symbolic, degrades understanding and fosters errors.

Representations should be revisable. The content of a specification will change. Ideally, CASE tools should be
available to update all representations that are affected by each change.

Investigators have conducted numerous studies on human factors associated with specification. There appears to be
little doubt that symbology and arrangement affect understanding. However, software engineers appear to have
individual preferences for specific symbolic and diagrammatic forms. Familiarity often lies at the root of a person's
preference, but other more tangible factors such as spatial arrangement, easily recognizable patterns, and degree of
formality often dictate an individual's choice.

What Does Process Specification Mean?


A process specification is a method used to document, analyze and explain the decision-making logic and formulas
used to create output data from process input data. Its objective is to flow down and specify regulatory/engineering
requirements and procedures. High-quality, consistent data requires clear and complete process specifications.
A process specification reduces ambiguity, allowing an individual or organization to obtain a precise description of
executed tasks and accomplishments and validate system design, including the data dictionary and data flow
diagrams.

Process Specification
Process specifications (sometimes called minispecs) are created for primitive processes and for some
higher-level data flow diagram processes. Process logic is best represented through structured English, decision tables, decision trees, or specified
formulas or algorithms and is used to communicate engineering requirements and procedures to businesses involved
in the creation of a process. Process descriptions may exist on a form or in a computer aided software engineering
(CASE) tool repository.
Process specifications are not created for processes requiring physical input or output, processes representing simple
data validation or processes with preexisting and prewritten code.
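A decision table of the kind mentioned above can be written directly as a lookup structure. The sketch below is a minimal, hypothetical example (an order-discount rule invented for illustration, not drawn from any system in this text) showing how a primitive process's logic maps conditions to actions:

```python
# A decision table for a hypothetical "apply discount" primitive process.
# The two conditions (membership, order total) index the table; each entry
# is the action (the discount rate to apply). All names are illustrative.

def discount_rate(is_member: bool, order_total: float) -> float:
    """Decision table:
      member? | total >= 100 | rate
      --------+--------------+-----
        yes   |     yes      | 0.15
        yes   |     no       | 0.05
        no    |     yes      | 0.10
        no    |     no       | 0.00
    """
    table = {
        (True, True): 0.15,
        (True, False): 0.05,
        (False, True): 0.10,
        (False, False): 0.00,
    }
    return table[(is_member, order_total >= 100)]
```

Because every combination of conditions has exactly one entry, the table makes completeness easy to check during review, which is the main reason decision tables are favored for process specifications.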

The control specification (CSPEC) represents the behavior of the system (at the level from which it has been
referenced) in two different ways. The CSPEC contains a state transition diagram that is a sequential specification of
behavior. It can also contain a program activation table—a combinatorial specification of behavior. It is now time to
consider an example of this important modeling notation for structured analysis.

Figure below depicts a state transition diagram for the level 1 control flow model for SafeHome. The labeled
transition arrows indicate how the system responds to events as it traverses the four states defined at this level. By
studying the STD, a software engineer can determine the behavior of the system and, more important, can ascertain
whether there are "holes" in the specified behavior. For example, the STD indicates that the only transition from the
reading user input state occurs when the start/stop switch is encountered and a transition to the monitoring system
status state occurs. Yet there appears to be no way, other than the occurrence of a sensor event, for the
system to return to reading user input. This is an error in specification that would, we hope, be uncovered during
review and corrected.
Examine the STD to determine whether there are any other anomalies. A somewhat different mode of
behavioral representation is the process activation table. The PAT represents information contained in the
STD in the context of processes, not states. That is, the table indicates which processes (bubbles) in the
flow model will be invoked when an event occurs. The PAT can be used as a guide for a designer who
must build an executive that controls the processes represented at this level. A PAT for the level 1 flow
model of SafeHome software is shown in figure below. The CSPEC describes the behavior of the system,
but it gives us no information about the inner working of the processes that are activated as a result of this
behavior.
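A process activation table can itself be sketched as a simple mapping from events to the processes they activate. The event and process names below are illustrative stand-ins, since the SafeHome figures are not reproduced here; the point is the shape of the table an executive would consult:

```python
# A sketch of a process activation table (PAT): for each event, the list of
# processes (bubbles) in the level 1 flow model that should be activated.
# Event and process names are assumptions standing in for the SafeHome PAT.

PAT = {
    "start/stop switch": ["monitor sensors", "display messages and status"],
    "sensor event":      ["interact with user", "display messages and status"],
    "alarm condition":   ["activate alarm", "display messages and status"],
}

def processes_for(event: str) -> list[str]:
    # Return the processes an executive should invoke for this event;
    # an unrecognized event activates nothing.
    return PAT.get(event, [])
```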

The Process Specification

The process specification (PSPEC) is used to describe all flow model processes that appear at the final level of
refinement. The content of the process specification can include narrative text, a program design language (PDL)
description of the process algorithm, mathematical equations, tables, diagrams, or charts. By providing a PSPEC to
accompany each bubble in the flow model, the software engineer creates a "minispec" that can serve as a first step in
the creation of the Software Requirements Specification and as a guide for design of the software component that
will implement the process.

To illustrate the use of the PSPEC, consider the process password transform represented in the flow model for
SafeHome. The PSPEC for this function might take the form:

PSPEC: process password

The process password transform performs all password validation for the SafeHome system. Process password
receives a four-digit password from the interact with user function. The password is first compared to the master
password stored within the system. If the master password matches, <valid id message = true> is passed to the
message and status display function. If the master password does not match, the four digits are compared to a table
of secondary passwords (these may be assigned to house guests and/or workers who require entry to the home when
the owner is not present). If the password matches an entry within the table, <valid id message = true> is passed to
the message and status display function. If there is no match, <valid id message = false> is passed to the message
and status display function.
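The narrative PSPEC above translates almost line for line into code. The sketch below follows the stated logic exactly; the password values are invented placeholders (a real system would store credentials securely, not as literals):

```python
# Sketch of the "process password" transform described in the PSPEC above.
# The actual password values are illustrative placeholders.

MASTER_PASSWORD = "1234"
SECONDARY_PASSWORDS = {"5678", "4321"}   # guest/worker passwords

def process_password(password: str) -> bool:
    """Return the <valid id message> flag passed to message and status display."""
    # First compare against the master password stored within the system.
    if password == MASTER_PASSWORD:
        return True
    # Otherwise compare against the table of secondary passwords.
    return password in SECONDARY_PASSWORDS
```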

The Software Requirements Specification and Reviews: Analysis Modeling


Analysis Model is a technical representation of the system. It acts as a link between system description
and design model. In Analysis Modelling, information, behavior, and functions of the system are
defined and translated into the architecture, component, and interface level design in the design
modeling.
The analysis model is the first technical representation of a system. Analysis modeling uses a
combination of text and diagrams to represent software requirements (data, function, and
behavior) in an understandable way. Two types of analysis modeling are commonly used:
structured analysis and object-oriented analysis. Data modeling uses entity-relationship diagrams
to define data objects, attributes, and relationships. Functional modeling uses data flow diagrams
to show how data are transformed inside the system. Behavioral modeling uses state transition
diagrams to show the impact of events. Analysis work products must be reviewed for
completeness, correctness, and consistency.
Objectives of Analysis Modelling:
 It must establish a way of creating software design.
 It must describe the requirements of the customer.
 It must define a set of requirements that can be validated, once the software is built.

Elements of Analysis Model:


 Data Dictionary:
It is a repository that consists of a description of all data objects used or produced by the software.
It stores the collection of data present in the software. It is a very crucial element of the analysis
model. It acts as a centralized repository and also helps in modeling data objects defined during
software requirements.
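One way to picture a data dictionary entry is as a structured record keyed by the data object's name. The entry below is a hypothetical sketch for the `password` object from the SafeHome example; the field names are assumptions about what such an entry typically records:

```python
# A single, hypothetical data dictionary entry modeled as a plain dict.
# Field names (description, type, length, used_by, produced_by) are
# illustrative of what a CASE tool repository might store.

data_dictionary = {
    "password": {
        "description": "four-digit code entered by the user",
        "type": "string",
        "length": 4,
        "used_by": ["process password"],
        "produced_by": ["interact with user"],
    }
}

def lookup(name: str) -> dict:
    # Centralized lookup: every model element refers back to one definition.
    return data_dictionary.get(name, {})
```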

 Entity Relationship Diagram (ERD):


It depicts the relationship between data objects and is used in conducting data modeling activities.
The attributes of each object in the Entity-Relationship Diagram can be described using Data object
description. It provides the basis for activity related to data design.

 Data Flow Diagram (DFD):


It depicts the functions that transform data flow and it also shows how data is transformed when
moving from input to output. It provides the additional information which is used during the
analysis of the information domain and serves as a basis for the modeling of function. It also
enables the engineer to develop models of functional and information domains at the same time.

 State Transition Diagram:


It shows various modes of behavior (states) of the system and also shows the transitions from one
state to another state in the system. It also provides the details of how the system behaves due to the
consequences of external events. It represents the behavior of a system by presenting its states and
the events that cause the system to change state. It also describes what actions are taken due to the
occurrence of a particular event.

 Process Specification:
It stores the description of each function present in the data flow diagram. It describes the input to a
function, the algorithm that is applied for the transformation of input, and the output that is
produced. It also shows regulations and barriers imposed on the performance characteristics that are
applicable to the process and layout constraints that could influence the way in which the process
will be implemented.

 Control Specification:
It stores additional information about the control aspects of the software. It is used to indicate how
the software behaves when an event occurs and which processes are invoked due to the occurrence
of the event. It also provides the details of the processes which are executed to manage events.

 Data Object Description:


It stores and provides complete knowledge about a data object present and used in the software. It
also gives us the details of attributes of the data object present in the Entity Relationship Diagram.
Hence, it incorporates all the data objects and their attributes.

1. What is data modeling?


Data modeling is the process of creating a visual representation of either a whole information
system or parts of it to communicate connections between data points and structures. The goal is
to illustrate the types of data used and stored within the system, the relationships among these
data types, the ways the data can be grouped and organized and its formats and attributes.
Data models are built around business needs. Rules and requirements are defined upfront through
feedback from business stakeholders so they can be incorporated into the design of a new system
or adapted in the iteration of an existing one.

Data can be modeled at various levels of abstraction. The process begins by collecting
information about business requirements from stakeholders and end users. These business rules
are then translated into data structures to formulate a concrete database design. A data model can
be compared to a roadmap, an architect’s blueprint or any formal diagram that facilitates a
deeper understanding of what is being designed.

Data modeling employs standardized schemas and formal techniques. This provides a common,
consistent, and predictable way of defining and managing data resources across an organization,
or even beyond.

Ideally, data models are living documents that evolve along with changing business needs. They
play an important role in supporting business processes and planning IT architecture and
strategy. Data models can be shared with vendors, partners, and/or industry peers.

1.1 Types of data models

Like any design process, database and information system design begins at a high level of
abstraction and becomes increasingly more concrete and specific. Data models can generally be
divided into three categories, which vary according to their degree of abstraction. The process
will start with a conceptual model, progress to a logical model and conclude with a physical
model. Each type of data model is discussed in more detail below:

 Conceptual data models. They are also referred to as domain models and offer a big-
picture view of what the system will contain, how it will be organized, and which
business rules are involved. Conceptual models are usually created as part of the process
of gathering initial project requirements. Typically, they include entity classes (defining
the types of things that are important for the business to represent in the data model),
their characteristics and constraints, the relationships between them and relevant security
and data integrity requirements. Any notation is typically simple.

 Logical data models. They are less abstract and provide greater detail about the concepts
and relationships in the domain under consideration. One of several formal data modeling
notation systems is followed. These indicate data attributes, such as data types and their
corresponding lengths, and show the relationships among entities. Logical data models
don’t specify any technical system requirements. This stage is frequently omitted in agile
or DevOps practices. Logical data models can be useful in highly procedural
implementation environments, or for projects that are data-oriented by nature, such
as data warehouse design or reporting system development.

 Physical data models. They provide a schema for how the data will be physically stored
within a database. As such, they’re the least abstract of all. They offer a finalized design
that can be implemented as a relational database, including associative tables that
illustrate the relationships among entities as well as the primary keys and foreign keys
that will be used to maintain those relationships. Physical data models can include
database management system (DBMS)-specific properties, including performance tuning.

1.2 Types of data modeling

Data modeling has evolved alongside database management systems, with model types
increasing in complexity as businesses' data storage needs have grown. Here are several model
types:

 Hierarchical data models represent one-to-many relationships in a treelike format. In


this type of model, each record has a single root or parent which maps to one or more
child tables. This model was implemented in the IBM Information Management System
(IMS), which was introduced in 1966 and rapidly found widespread use, especially in
banking. Though this approach is less efficient than more recently developed database
models, it’s still used in Extensible Markup Language (XML) systems and geographic
information systems (GISs).
 Relational data models were initially proposed by IBM researcher E.F. Codd in 1970.
They are still implemented today in the many different relational databases commonly
used in enterprise computing. Relational data modeling doesn’t require a detailed
understanding of the physical properties of the data storage being used. In it, data
segments are explicitly joined through the use of tables, reducing database complexity.

Relational databases frequently employ structured query language (SQL) for data management.
These databases work well for maintaining data integrity and minimizing redundancy. They’re
often used in point-of-sale systems, as well as for other types of transaction processing.

 Entity-relationship (ER) data models use formal diagrams to represent the relationships
between entities in a database. Several ER modeling tools are used by data architects to
create visual maps that convey database design objectives.
 Object-oriented data models gained traction as object-oriented programming became
popular in the mid-1990s. The “objects” involved are abstractions of real-world
entities. Objects are grouped in class hierarchies, and have associated features. Object-
oriented databases can incorporate tables, but can also support more complex data
relationships. This approach is employed in multimedia and hypertext databases as well
as other use cases.
 Dimensional data models were developed by Ralph Kimball, and they were designed to
optimize data retrieval speeds for analytic purposes in a data warehouse. While relational
and ER models emphasize efficient storage, dimensional models increase redundancy in
order to make it easier to locate information for reporting and retrieval. This modeling is
typically used across OLAP systems.
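The relational model described above, where data segments are explicitly joined through tables, can be demonstrated with Python's built-in sqlite3 module. The table and column names are illustrative, not taken from any system in this text:

```python
import sqlite3

# A minimal relational sketch: two tables related through a foreign key and
# joined with SQL. Table names, columns, and values are all illustrative.
con = sqlite3.connect(":memory:")
con.executescript("""
    CREATE TABLE customer (id INTEGER PRIMARY KEY, name TEXT);
    CREATE TABLE orders (id INTEGER PRIMARY KEY,
                         customer_id INTEGER REFERENCES customer(id),
                         total REAL);
    INSERT INTO customer VALUES (1, 'Ada');
    INSERT INTO orders VALUES (10, 1, 99.50);
""")

# Joining the tables recovers the relationship without duplicating data.
rows = con.execute("""
    SELECT customer.name, orders.total
    FROM orders JOIN customer ON orders.customer_id = customer.id
""").fetchall()
# rows == [('Ada', 99.5)]
```

Note that nothing in the query depends on how the rows are physically stored, which is the point made above: relational modeling does not require a detailed understanding of the physical storage properties.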
1.3 Data modeling process

As a discipline, data modeling invites stakeholders to evaluate data processing and storage in
painstaking detail. Data modeling techniques have different conventions that dictate which
symbols are used to represent the data, how models are laid out, and how business requirements
are conveyed. All approaches provide formalized workflows that include a sequence of tasks to
be performed in an iterative manner. Those workflows generally look like this:

1. Identify the entities. The process of data modeling begins with the identification of the
things, events or concepts that are represented in the data set that is to be modeled. Each
entity should be cohesive and logically discrete from all others.
2. Identify key properties of each entity. Each entity type can be differentiated from all
others because it has one or more unique properties, called attributes. For instance, an
entity called “customer” might possess such attributes as a first name, last name,
telephone number and salutation, while an entity called “address” might include a street
name and number, a city, state, country and zip code.
3. Identify relationships among entities. The earliest draft of a data model will specify the
nature of the relationships each entity has with the others. In the above example, each
customer “lives at” an address. If that model were expanded to include an entity called
“orders,” each order would be shipped to and billed to an address as well. These
relationships are usually documented via unified modeling language (UML).
4. Map attributes to entities completely. This will ensure the model reflects how the
business will use the data. Several formal data modeling patterns are in widespread use.
Object-oriented developers often apply analysis patterns or design patterns, while
stakeholders from other business domains may turn to other patterns.
5. Assign keys as needed, and decide on a degree of normalization that balances the
need to reduce redundancy with performance requirements. Normalization is a
technique for organizing data models (and the databases they represent) in which
numerical identifiers, called keys, are assigned to groups of data to represent relationships
between them without repeating the data. For instance, if customers are each assigned a
key, that key can be linked to both their address and their order history without having to
repeat this information in the table of customer names. Normalization tends to reduce the
amount of storage space a database will require, but it can come at a cost to query performance.
6. Finalize and validate the data model. Data modeling is an iterative process that should
be repeated and refined as business needs change.
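The key-based normalization described in step 5 can be sketched with plain dictionaries. The customer, address, and order data below are invented examples following the customer/address illustration used earlier in this section:

```python
# Normalization sketch: rather than repeating customer details in every
# order row, a numeric key links the tables. All values are illustrative.

customers = {1: {"first": "Ada", "last": "Lovelace"}}
addresses = {1: {"customer_id": 1, "city": "London", "zip": "N1 9GU"}}
orders    = [
    {"id": 100, "customer_id": 1},
    {"id": 101, "customer_id": 1},
]

def orders_for(customer_id: int) -> list[int]:
    # The customer's name is stored once; each order carries only the key.
    return [o["id"] for o in orders if o["customer_id"] == customer_id]
```

Storing the key instead of the full customer record saves space and avoids update anomalies, but answering "whose order is this?" now requires a lookup, which is the performance trade-off step 5 asks the modeler to balance.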

1.4 Benefits of data modeling

Data modeling makes it easier for developers, data architects, business analysts, and other
stakeholders to view and understand relationships among the data in a database or data
warehouse. In addition, it can:

 Reduce errors in software and database development.


 Increase consistency in documentation and system design across the enterprise.
 Improve application and database performance.
 Ease data mapping throughout the organization.
 Improve communication between developers and business intelligence teams.
 Ease and speed the process of database design at the conceptual, logical and physical
levels.
1.5 Data modeling tools

Numerous commercial and open source computer-aided software engineering (CASE) solutions
are widely used today, including multiple data modeling, diagramming and visualization tools.
Here are several examples:

 erwin Data Modeler is a data modeling tool based on the Integration DEFinition for
information modeling (IDEF1X) data modeling language that now supports other
notation methodologies, including a dimensional approach.
 Enterprise Architect is a visual modeling and design tool that supports the modeling of
enterprise information systems and architectures as well as software applications and
databases. It’s based on object-oriented languages and standards.
 ER/Studio is database design software that’s compatible with several of today’s most
popular database management systems. It supports both relational and dimensional data
modeling.
 Free data modeling tools include open source solutions such as Open ModelSphere.

2. Functional modeling and Information Flow modeling


In the functional model, software transforms information, and to accomplish this it must perform
at least three generic tasks: input, processing, and output. When functional models of an
application are created, the software engineer emphasizes problem-specific tasks. The functional
model begins with a single context-level model (i.e., the system to be built). In a series of iterations,
more and more functional detail is added, until all system functionality is fully represented.

Information is transformed as it flows through a computer-based system. The system accepts input
in a variety of forms; applies hardware, software, and human elements to transform it; and
produces output in a variety of forms. The transformation(s), or function, may comprise a single
logical comparison, a complex numerical algorithm, or a rule-inference approach of an expert
system. The output might light a single LED or produce a 200-page report. Either way, we can
create a flow model for any computer-based system, regardless of size and complexity.
Structured analysis began as an information flow modeling technique. A computer-based
system can be modeled as an information transform function as shown in figure.
A rectangle represents an external entity. That is, a system element, such as a piece of hardware, a person,
or another system, that provides information for transformation by the software or receives
information provided by the software. A circle is used to represent a process or transform or a
function that is applied to data and changes it in some way. An arrow is used to represent one or
more data items.
All arrows should be labeled in a DFD. The double line is used to represent data store. There
may be implicit procedure or sequence in the diagram but explicit logical details are generally
delayed until software design.

Function Oriented Design


Function Oriented design is a method to software design where the model is decomposed into a
set of interacting units or modules where each unit or module has a clearly defined function.
Thus, the system is designed from a functional viewpoint.

Design Notations
Design Notations are primarily meant to be used during the process of design and are used to
represent design or design decisions. For a function-oriented design, the design can be
represented graphically or mathematically by the following:

2.1 Data Flow Diagram

Data-flow design is concerned with designing a series of functional transformations that convert
system inputs into the required outputs. The design is described as data-flow diagrams. These
diagrams show how data flows through a system and how the output is derived from the input
through a series of functional transformations.

Data-flow diagrams are a useful and intuitive way of describing a system. They are generally
understandable without specialized training, notably if control information is excluded. They
show end-to-end processing. That is the flow of processing from when data enters the system to
where it leaves the system can be traced.

Data-flow design is an integral part of several design methods, and most CASE tools support
data-flow diagram creation. Different methods may use different icons to represent data-flow
diagram entities, but their meanings are similar.

The notation used is based on the following symbols:


The report generator produces a report which describes all of the named entities in a data-flow
diagram. The user inputs the name of the design represented by the diagram. The report
generator then finds all the names used in the data-flow diagram. It looks up a data dictionary
and retrieves information about each name. This is then collated into a report which is output by
the system.
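The report generator described above can be sketched as a short function: collect the names used in a data-flow diagram, look each one up in a data dictionary, and collate the results into a report. The data structures here (a set of names and a dict-based dictionary) are assumptions about how such a tool might hold its inputs:

```python
# Sketch of the report generator: look up each name used in a data-flow
# diagram in a data dictionary and collate the results into a report.
# The input structures are illustrative assumptions.

def generate_report(design_name, dfd_names, data_dictionary):
    lines = [f"Report for design: {design_name}"]
    for name in sorted(dfd_names):
        # A name missing from the dictionary is itself useful information
        # for a reviewer, so it is flagged rather than skipped.
        entry = data_dictionary.get(name, "NOT IN DICTIONARY")
        lines.append(f"{name}: {entry}")
    return "\n".join(lines)
```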

3. Behavioral Modeling
It indicates how software will respond to external events or stimuli. In the behavioral model, the behavior of
the system is represented as a function of specific events and time.
To create the behavioral model, the following steps can be considered:

 Evaluation of all use-cases to fully understand the sequence of interaction within the system.
 Identification of events that drive the interaction sequence and understand how these events relate
to specific classes.
 Creating sequence for each use case.
 Building state diagram for the system.
 Reviewing the behavioral model to verify accuracy and consistency.

It describes interactions between objects and shows how individual objects collaborate to achieve the
behavior of the system as a whole. In UML, the behavior of a system is shown with the help of use case
diagrams, sequence diagrams, and activity diagrams:
 A use case focuses on the functionality of a system, i.e., what the system does for its users. It shows
the interaction between the system and outside actors. For example, in a library system, students
and librarians are actors and "issue book" is a use case.
 A sequence diagram shows the objects that interact and the time sequence of their
interactions. For example, a student enquires about a book and the system checks its
availability, with the messages ordered in time.
 An activity diagram specifies important processing steps and the operations required for
each step. For example, "issue book" and "check availability" appear as activities; objects
are not shown.

Behavioral Model is specially designed to make us understand behavior and factors that influence
behavior of a System. Behavior of a system is explained and represented with the help of a diagram.
This diagram is known as State Transition Diagram. It is a collection of states and events. It usually
describes overall states that a system can have and events which are responsible for a change in state of
a system.
State Transition Diagram
 State transition diagrams represent the system states and events that trigger state transitions
 STD's indicate actions (e.g. process activation) taken as a consequence of a particular event
 A state is any observable mode of behavior
 Hatley and Pirbhai control flow diagrams (CFD) can also be used for behavioral modeling
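A state transition diagram reduces naturally to a transition table keyed by (state, event) pairs. The sketch below approximates the level 1 SafeHome STD discussed earlier; since the figure is not reproduced here, the state and event labels are assumptions:

```python
# A state transition diagram as a transition table. State and event names
# approximate the level 1 SafeHome STD; exact labels are assumptions.

TRANSITIONS = {
    ("reading user input", "start/stop switch"): "monitoring system status",
    ("monitoring system status", "sensor event"): "acting on sensor event",
    ("acting on sensor event", "alarm dealt with"): "reading user input",
}

def next_state(state: str, event: str) -> str:
    # A (state, event) pair with no entry leaves the state unchanged.
    # Such missing transitions are exactly the "holes" in specified
    # behavior that a review of the STD should uncover.
    return TRANSITIONS.get((state, event), state)
```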

Creating Entity Relationship Diagrams


 Customer asked to list "things" that application addresses, these things evolve into input objects,
output objects, and external entities
 Analyst and customer define connections between the objects
 One or more object-relationship pairs is created for each connection
 The cardinality and modality are determined for an object-relationship pair
 Attributes of each entity are defined
 The entity diagram is reviewed and refined
Data Flow Diagram
 Level 0 data flow diagram should depict the system as a single bubble
 Primary input and output should be carefully noted
 Refinement should begin by consolidating candidate processes, data objects, and data stores to be
represented at the next level
 Label all arrows with meaningful names
 Information flow continuity must be maintained from one level to the next
 Refine one bubble at a time
 Write a PSPEC (a "mini-spec" written using English or another natural language or a program
design language) for each bubble in the final DFD
A Data Flow Diagram (DFD) is a classic visual representation of a system's information flows. It can be
manual, automatic, or a hybrid of the two. It demonstrates how data enters and exits the system, what
alters the data, and where data is stored. A DFD's goal is to represent the scope and boundaries of a system
as a whole. It can be used as a method of communication between a systems analyst and developers. The
data flow diagram (DFD) is also known as a data flow graph or bubble chart.

DFD is the abbreviation for Data Flow Diagram. A DFD represents the flow of data of a system or a
process. It also gives insight into the inputs and outputs of each entity and of the process itself. A DFD
has no control flow: no loops or decision rules are present. Specific operations that depend on the type
of data can be explained by a flowchart.
It is a graphical tool, useful for communicating with users, managers, and other personnel. It is useful
for analyzing existing as well as proposed systems. It provides an overview of what data the system
processes, what transformations are performed, what data are stored, and what results are produced.

A Data Flow Diagram can be represented in several ways. The DFD belongs to the structured-analysis
modeling tools. Data flow diagrams are very popular because they help us visualize the major steps
and data involved in software-system processes.

Components of DFD
The Data Flow Diagram has 4 components:
Process The transformation of input to output in a system takes place because of a process function.
The symbol of a process is a rectangle with rounded corners, an oval, a rectangle, or a circle. The
process is named in one word, a short sentence, or a phrase that expresses its essence.
Data Flow Data flow describes the information transferred between different parts of the system. The
arrow is the symbol of data flow. A recognizable name should be given to the flow to identify the
information being moved. Data flow can also represent material, not just information; material flows
are modeled in systems that are not purely informational. A given flow should only transfer a single
type of information. The direction of flow is represented by the arrow, which can also be bi-directional.
Warehouse Data is stored in the warehouse for later use. Two horizontal lines represent the symbol of
the store. The warehouse is not restricted to being a data file; it can be anything, such as a folder of
documents, an optical disc, or a filing cabinet. The data warehouse can be viewed independently of its
implementation. When data flows from the warehouse it is considered reading, and when data flows to
the warehouse it is called data entry or updating.
Terminator A terminator is an external entity that stands outside of the system and communicates with
it. It can be, for example, an organization such as a bank, a group of people such as customers, or a
department of the same organization that is not part of the modeled system. Modeled systems also
communicate with terminators.
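The four components above can be sketched as a small data structure. This is purely an illustrative model (the class and flow names are my own, not from the text): processes, warehouses, and terminators become nodes, and data flows become labeled edges.

```python
from dataclasses import dataclass

@dataclass
class Process:          # circle / rounded rectangle in a DFD
    name: str

@dataclass
class Warehouse:        # data store, drawn as two horizontal lines
    name: str

@dataclass
class Terminator:       # external entity outside the system boundary
    name: str

@dataclass
class DataFlow:         # labeled arrow between two components
    label: str
    source: object
    target: object

# A tiny DFD fragment: a customer places an order, which is stored.
customer = Terminator("Customer")
take_order = Process("Take Order")
orders = Warehouse("Orders")

flows = [
    DataFlow("order details", customer, take_order),
    DataFlow("validated order", take_order, orders),
]

for f in flows:
    print(f"{f.source.name} --[{f.label}]--> {f.target.name}")
```

Note that each flow carries a single, named kind of information, as the rules above require.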
Rules for creating DFD
The name of an entity should be easy to understand without any extra assistance (like comments).
The processes should be numbered or put in an ordered list so they can be referred to easily.
The DFD should maintain consistency across all DFD levels.
A single DFD can have a maximum of nine processes and a minimum of three processes.

Symbols Used in DFD


Square Box: A square box defines a source or destination of the system. It is also called an entity and is
represented by a rectangle.
Arrow or Line: An arrow identifies data flow, i.e. data that is in motion.
Circle or Bubble: A circle represents a process that transforms information. It is also called a processing box.
Open Rectangle: An open rectangle is a data store, in which data is stored either temporarily or permanently.

Levels of DFD
DFDs use hierarchy to maintain transparency, so multilevel DFDs can be created. The levels of DFD are as
follows:
0-level DFD: It represents the entire system as a single bubble and provides an overall picture of the
system.
1-level DFD: It represents the main functions of the system and how they interact with each other.
2-level DFD: It represents the processes within each function of the system and how they interact with
each other.
3-level DFD: It represents the data flow within each process and how the data is transformed and stored.

Advantages of DFD
It helps us to understand the functioning and the limits of a system.
It is a graphical representation that is very easy to understand, as it helps visualize contents.
A Data Flow Diagram provides a detailed and well-explained diagram of the system components.
It is used as part of the system documentation.
Data Flow Diagrams can be understood by both technical and nontechnical people, because they are very
easy to understand.

Disadvantages of DFD
At times a DFD can confuse programmers regarding the system.
A Data Flow Diagram takes a long time to be generated, and for this reason analysts are sometimes
denied permission to work on it.
Levels of Data Flow Diagrams
In software engineering, DFDs (data flow diagrams) can be used to illustrate systems at various degrees
of abstraction. Higher-level DFDs are subdivided into lower-level DFDs containing additional information
and functional features. DFD levels are numbered 0, 1, 2, and higher.
0-Level DFD
It's also referred to as a context diagram. It is intended to be an abstract view, depicting the system as a
single process with external elements. It represents the complete system as a single bubble with
incoming and outgoing arrows indicating input and output data. Here's an example of a 0-level DFD.

1-Level DFD
In 1-level DFD, the context diagram is broken down into many bubbles and processes. At this level, we
highlight the system's essential functions and divide the high-level process of 0-level DFD into
subprocesses. Here's an example of a 1-level DFD.

2-Level DFD
2-level DFD delves deeper into aspects of 1-level DFD. It can be used to design or record
specific/necessary details about how the system works. Here's an example of a 2-level DFD.
Control Flow Diagrams
Begin by stripping all the data flow arrows from the DFD
Events (solid arrows) and control items (dashed arrows) are added to the diagram
Add a window to the CSPEC (which contains an STD that is a sequential specification of the behavior) for
each bubble in the final CFD
A Control Flow Graph (CFG) is the graphical representation of control flow or computation during
the execution of programs or applications. Control flow graphs are mostly used in static analysis as
well as compiler applications, as they can accurately represent the flow inside of a program unit. The
control flow graph was originally developed by Frances E. Allen.

Characteristics of Control Flow Graph:


A control flow graph is process oriented.
A control flow graph shows all the paths that can be traversed during a program's execution.
A control flow graph is a directed graph.
Edges in a CFG portray control flow paths and the nodes portray basic blocks.

There exist two designated blocks in a Control Flow Graph:


Entry Block:
The entry block allows control to enter the control flow graph.
Exit Block:
Control flow leaves through the exit block.
Hence, the control flow graph comprises all the building blocks involved in a flow diagram, such as
the start node, the end node, and the flows between the nodes.
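A CFG with designated entry and exit blocks can be sketched as an adjacency list. This is an illustrative encoding (the block names and the helper function are my own, not from the text); the helper enumerates all simple entry-to-exit paths, which is what the "shows all the paths" characteristic above refers to.

```python
def all_paths(cfg, entry, exit_block, path=None):
    """Depth-first enumeration of simple paths from entry to exit."""
    path = (path or []) + [entry]
    if entry == exit_block:
        return [path]
    paths = []
    for succ in cfg.get(entry, []):
        if succ not in path:          # skip already-visited blocks (no cycles)
            paths.extend(all_paths(cfg, succ, exit_block, path))
    return paths

# CFG of a simple if-else: entry branches to a then-block or an
# else-block, and both rejoin at the exit block.
cfg = {
    "entry": ["then", "else"],
    "then": ["exit"],
    "else": ["exit"],
    "exit": [],
}

print(all_paths(cfg, "entry", "exit"))
# two paths: entry -> then -> exit, and entry -> else -> exit
```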
General Control Flow Graphs:
A Control Flow Graph is represented differently for each statement and loop type. The following images
describe them:
1. if-else:

2. while:

3. do-while:

4. for:

Example:
if A = 10 then
   if B > C then
      A = B
   else
      A = C
   endif
endif
print A, B, C
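A direct, runnable translation of this pseudocode (Python is used purely for illustration and is not part of the original example):

```python
def example(A, B, C):
    if A == 10:
        if B > C:
            A = B      # inner then-branch
        else:
            A = C      # inner else-branch
    print(A, B, C)
    return A, B, C

example(10, 7, 3)   # inner then-branch taken: A becomes B
example(10, 2, 5)   # inner else-branch taken: A becomes C
example(4, 2, 5)    # outer condition false: A unchanged
```

Each of the three calls exercises a different path through the control flow graph below.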
Flowchart of above example will be:

Control Flow Graph of above example will be:

Advantages of CFG:
A control flow graph has many advantages. It can easily encapsulate the information for each basic
block. It can easily locate unreachable code in a program, and syntactic structures such as loops are
easy to find in a control flow graph.

Data Dictionary :
A data dictionary is a file or a set of files that includes a database's metadata. The data dictionary hold
records about other objects in the database, such as data ownership, data relationships to other objects,
and other data. The data dictionary is an essential component of any relational database. Ironically,
because of its importance, it is invisible to most database users. Typically, only database administrators
interact with the data dictionary.
The data dictionary, in general, includes information about the following:
Name of the data item
Aliases
Description/purpose
Related data items
Range of values
Data structure definition/Forms
The name of the data item is self-explanatory.
Aliases are other names by which the data item is called, e.g., DEO for Data Entry Operator and DR for
Deputy Registrar.
Description/purpose is a textual description of what the data item is used for or why it exists.
Related data items capture relationships between data items, e.g., total_marks must always equal
internal_marks plus external_marks.
Range of values records all possible values, e.g., total_marks must be positive and between 0 and 100.
Data structure definition/Forms: Data flows capture the names of the processes that generate or receive
the data item. If the data item is primitive, then the data structure form captures the physical structure
of the data item. If the data item is itself an aggregate, then the data structure form captures its
composition in terms of other data items.
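A dictionary entry with these fields can be sketched as a plain record. The entry below is hypothetical (the field values and the `check` helper are my own, built from the total_marks example above, not from a real system).

```python
# Hypothetical data dictionary entry for "total_marks".
total_marks_entry = {
    "name": "total_marks",
    "aliases": ["TM"],
    "description": "Total marks obtained by a student in a course",
    "related_items": "total_marks = internal_marks + external_marks",
    "range": (0, 100),
    "structure": "integer",
}

def check(internal_marks, external_marks):
    """Apply the entry's rules: the related-items equation and the range."""
    total = internal_marks + external_marks   # related-items rule
    lo, hi = total_marks_entry["range"]
    return lo <= total <= hi                  # range-of-values rule

print(check(40, 45))   # 85 lies within 0..100
print(check(70, 60))   # 130 violates the range
```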
The mathematical operators used within the data dictionary are defined in the table:
Notation Meaning

x = a + b x consists of data elements a and b.

x = [a | b] x consists of either data element a or data element b.

x = (a) x consists of an optional data element a.

x = y{a} x consists of y or more occurrences of data element a.

x = {a}z x consists of z or fewer occurrences of data element a.

x = y{a}z x consists of between y and z occurrences of data element a.
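One way to read these notations is as rules for building a concrete instance of a composite data item. The sketch below is my own encoding of a hypothetical item, order = order_id + [cash | card] + (gift_note) + 1{item}3, and is not from the text:

```python
import random
random.seed(0)  # fixed seed so the sketch is repeatable

def instance():
    """Build one value of: order = order_id + [cash|card] + (gift_note) + 1{item}3"""
    order = {
        "order_id": 42,                               # composition (+): always present
        "payment": random.choice(["cash", "card"]),   # selection [a | b]: exactly one
        "items": ["item"] * random.randint(1, 3),     # iteration 1{item}3: one to three
    }
    if random.random() < 0.5:                         # option (a): may be absent
        order["gift_note"] = "Happy birthday!"
    return order

o = instance()
assert 1 <= len(o["items"]) <= 3
print(o)
```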

Advantages:
 The behavior and working of a system can easily be understood without much effort.
 Results are more accurate when this model is used.
 This model requires less development cost, as the cost of resources can be minimal.
 It focuses on the behavior of a system rather than on theories.

Disadvantages:
 This model does not have any theory behind it, so a trainee is not able to fully understand the basic
principles and major concepts of modeling.
 This modeling cannot be fully automated.
 Sometimes it is not easy to understand the overall result.
 It does not achieve maximum productivity, due to technical issues or errors.

Other classical analysis methods:


Structured Charts: A structured chart partitions a system into black boxes. A black box is a system
whose functionality is known to the user without knowledge of its internal design.

Structured Chart is a graphical representation which shows:

o System partitions into modules


o Hierarchy of component modules
o The relation between processing modules
o Interaction between modules
o Information passed between modules

The following notations are used in structured chart:


Pseudo-code

Pseudo-code notations can be used in both the preliminary and detailed design phases. Using
pseudo-code, the designer describes system characteristics using short, concise, English-language
phrases that are structured by keywords such as If-Then-Else, While-Do, and End.
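A short example in this style (a hypothetical inventory check, written only to illustrate the keyword structure):

```
If stock_level < reorder_point Then
    create purchase order
    While unconfirmed orders remain Do
        notify supplier
    End While-Do
Else
    record stock_level as sufficient
End If-Then-Else
```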

What Does Data Flow Model Mean?


A data flow model is a diagrammatic representation of the flow and exchange of information within a system. Data flow
models are used to graphically represent the flow of data in an information system by describing the processes
involved in transferring data from input to file storage and report generation.
As information moves through software, it is modified by a series of transformations. These transformations
are depicted through a graphical representation of processes that transform the input data flow they receive
into an output data flow.
A data flow diagram takes business processes and activities and uses them to create a clear illustration of how data
flows through a system. DFDs represent the flow of data from external entities into a single system by moving and
storing data from one process to another.
Through the use of data flow diagrams, a system can be decomposed into subsystems, and subsystems can be further
decomposed into lower-level subsystems. Each subsystem represents a process or activity in which data is
processed. Once the lowest level is reached, processes can no longer be decomposed.
Data flow modeling can be used to identify a variety of different things, such as:

 Information that is received from or sent to other individuals, organizations, or other computer systems.
 Areas within a system where information is stored and the flows of information within the system are being
modeled.
 The processes of a system that act upon information received and produce the resulting outputs.
Analysis Modeling

Structured Analysis (DeMarco)


 Analysis products must be highly maintainable, especially the software requirements
specification.
 Problems of size must be dealt with using an effective method of partitioning.
 Graphics should be used whenever possible.
 Differentiate between the logical (essential) and physical (implementation) considerations.
 Find something to help with requirements partitioning and document the partitioning before
specification.
 Devise a way to track and evaluate user interfaces.
 Devise tools that describe logic and policy better than narrative text.
Analysis Model Elements
 Data dictionary - contains the descriptions of all data objects consumed or produced by the
software
 Entity relationship diagram (ERD) - depicts relationships between data objects
 Data flow diagram (DFD) - provides an indication of how data are transformed as they move
through the system; also depicts functions that transform the data flow (a function is represented
in a DFD using a process specification or PSPEC)
 State transition diagram (STD) - indicates how the system behaves as a consequence of external
events; states are used to represent behavior modes, and arcs are labeled with the events triggering
the transitions from one state to another (control information is contained in a control specification
or CSPEC)
Data Modeling Elements (ERD)
 Data object - any person, organization, device, or software product that produces or consumes
information
 Attributes - name a data object instance, describe its characteristics, or make reference to another
data object
 Relationships - indicate the manner in which data objects are connected to one another

Cardinality and Modality (ERD)


 Cardinality - in data modeling, cardinality specifies how the number of occurrences of one object
are related to the number of occurrences of another object (1:1, 1:N, M:N)
 Modality - zero (0) for an optional object relationship and one (1) for a mandatory relationship
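A 1:N relationship with these properties can be sketched in code. The entities below are illustrative (Customer and Order are my own example, not from the text): modality 1 on the customer side means every order belongs to a customer, while modality 0 on the order side means a customer may validly have no orders.

```python
from dataclasses import dataclass, field

@dataclass
class Order:
    order_id: int

@dataclass
class Customer:
    name: str
    # 1:N cardinality, modality 0: the list may legitimately be empty.
    orders: list = field(default_factory=list)

alice = Customer("Alice")        # valid with zero orders (optional side)
alice.orders.append(Order(1))    # one customer ...
alice.orders.append(Order(2))    # ... may own many orders (1:N)

print(alice.name, [o.order_id for o in alice.orders])
```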

Functional Modeling and Information Flow (DFD)


 Shows the relationships of external entities, processes or transforms, data items, and data stores
 DFDs cannot show procedural detail (e.g. conditionals or loops), only the flow of data through the
software
 Refinement from one DFD level to the next should follow approximately a 1:5 ratio (this ratio
will reduce as the refinement proceeds)
 To model real-time systems, structured analysis notation must be available for time-continuous
data and event processing (e.g. Ward and Mellor or Hatley and Pirbhai)
