Software Engineering Notes I - V (Rejinpaul)
YEAR / SEMESTER : II / IV
SUB NAME : SOFTWARE ENGINEERING SUB CODE : CS6403
PART A
1. What is software engineering?
Software engineering is a discipline in which theories, methods and tools are applied to develop professional
software.
2. What is Software?
Software is a collection of computer programs and related documents that are intended to provide the
desired features, functionality and better performance.
Part - B
The first published model of the software development process was derived from other engineering processes.
This is illustrated in the following figure. Because of the cascade from one phase to another, this model is
known as the "waterfall model" or "software life cycle model". The principal stages of the model map onto
fundamental development activities.
(i) Requirement analysis and definition: The system's services, constraints and goals are established by
consultation with system users. They are then defined in detail and serve as a system specification.
(ii) System and Software Design: The system design process partitions the requirements to either
hardware or software systems. It establishes an overall system architecture. Software design involves
identifying and describing the fundamental software system abstractions and their relationships.
(iii) Implementation and unit testing: During this stage the software design is realized as a set of
programs or program units. Unit testing involves verifying that each unit meets its specification.
(iv) Integration and system testing: The individual program units or programs are integrated and tested
as a complete system to ensure that the software requirements have been met. After testing, the software
system is delivered to the customer.
A generic software process can be used to build any kind of product. Some points to remember are:
(i) Specification - set out the requirements and constraints
(ii) Design - produce a paper model of the system
Prototyping strategies
Throw-away prototyping: The objective is to understand the system requirements. It starts with requirements
that are poorly understood or difficult to capture accurately.
Advantages
(i) Faster than the waterfall model
(ii) High level of user involvement from the start
(iii) Technical or other problems discovered early - risk reduced
Evolutionary prototyping: The objective is to deliver a working system to the customer, starting with the
requirements that are best understood.
Advantages
(i) Effort spent on the prototype is not wasted
(ii) Faster than the waterfall model
(iii) High level of user involvement from the start
(iv) Technical or other problems discovered early - risk reduced
Incremental model. It has the same phases as the waterfall model:
Analysis. Design. Code. Test.
The incremental model delivers a series of releases to the customer, called increments.
The first increment is called the core product; for example, only the basic document-processing facilities are available.
In the second increment, more sophisticated document producing and processing facilities are added.
In the next increment, spelling and grammar checking facilities are provided.
Merits
This model can be adopted when fewer people are involved in the project.
Technical risks can be managed with each increment.
Within a very short time span, at least the core product can be delivered to the customer.
RAD Model
The Rapid Application Development (RAD) model is a type of incremental model. It emphasizes a very short
development cycle achieved through component-based construction.
Phases
Business modeling
Data modeling
Process modeling
Application generation
Testing and turnover
Waterfall Model. The first published model of the software development process was derived from other
engineering processes. This is illustrated in the following figure. Because of the cascade from one phase to
another, this model is known as the "waterfall model" or "software life cycle model". The principal stages of the
model map onto fundamental development activities.
(i) Requirement analysis and definition: The system's services, constraints and goals are established by
consultation with system users. They are then defined in detail and serve as a system specification.
(ii) System and Software Design: The system design process partitions the requirements to either hardware or
software systems. It establishes an overall system architecture. Software design involves identifying and
describing the fundamental software system abstractions and their relationships.
(iii) Implementation and unit testing: During this stage the software design is realized as a set of programs or
program units. Unit testing involves verifying that each unit meets its specification.
(iv) Integration and system testing: The individual program units or programs are integrated and tested as a
complete system to ensure that the software requirements have been met. After testing, the software system is
delivered to the customer.
(v) Operation and maintenance: The system is installed and put into practical use. Maintenance involves
correcting errors which were not discovered in earlier stages of the life cycle.
SPIRAL MODEL
The spiral model is divided into a number of framework activities, denoted by task regions.
Usually there are six task regions. In the spiral model, a project entry point axis is defined.
The task regions are: customer communication, planning, risk analysis, engineering,
construction and release, and customer evaluation.
Drawbacks
It is based on customer communication.
It demands considerable risk assessment expertise.
The system engineering process follows a waterfall model. System design activities include:
Partition requirements.
Identify sub-systems.
Assign requirements to sub-systems.
Specify sub-system functionality.
Define sub-system interfaces.
The phases of the system engineering process are: requirements definition, system design, sub-system design,
system integration, system installation, system evolution and system decommissioning.
The umbrella activities in a software development life cycle include software project tracking and control, risk
management, software quality assurance, formal technical reviews, measurement, software configuration
management, reusability management, and work product preparation and production.
Framework Activities
An effective process model should define a small set of framework activities that are always applicable,
regardless of project type. The APM defines the following set of framework activities:
project definition - tasks required to establish effective communication between developer and
customer(s) and to define requirements for the work to be performed
planning - tasks required to define resources, timelines and other project related information and assess
both technical and management risks
engineering and construction - tasks required to create one or more representations of the software (can
include the development of executable models, i.e., prototypes or simulations) and to generate code and
conduct thorough testing
release - tasks required to install the software in its target environment, and provide customer support
(e.g., documentation and training)
customer use - tasks required to obtain customer feedback based on use and evaluation of the
deliverables produced during the release activity
Each of the above framework activities will occur for every project. However, the set of tasks (we call this
a task set) that is defined for each framework activity will vary depending upon the project type (e.g.,
Concept Development Projects will have a different task set than Application Enhancement Projects) and
the degree of rigor selected for the project.
PART A
ii. Throw-away prototyping: Using this approach, a rough practical implementation of the system is
produced. Requirement problems can be identified from this implementation, which is then discarded. The
system is then developed using some other engineering paradigm.
PART B
In its simplest possible form, the requirements elicitation process might be considered as a conversation
between the client and the software engineer that results in an understanding by the software engineer of what
the customer wants. Life is seldom that simple, however, and experience has shown that eliciting
requirements is easiest when there is a good understanding of the application domain. It is essential that the
requirements engineer understands the application domain, but how do you know that you understand it? You
might think that you know what the user means when they use domain-specific terms, but how can you be
sure?
The answer is to develop an explicit conceptual model of the domain and to present that back to the client for
verification
Requirements Elicitation might be described as eliciting a specification of what is required by
allowing experts in the problem domain to describe the goals to be reached during the problem resolution. This
being so, we may be limited to having a vague desire at the beginning of the process, such as "We want a new
aircraft carrier", and at the end of the process having a detailed description of the goals and some clear idea of
the steps necessary to reach the goals
There are several things wrong with this description. Where does the logical model reside, in people's
heads? Is there an expert with sufficient breadth and depth of domain knowledge to ensure the goal and all its
sub-goals are consistent and achievable? If there is not, are we merely leaving to the design stage the process
of systematizing the sub-goals? It is very likely that many goals will be inconsistent, even deliberately
contradictory. Can we say the requirements elicitation stage is complete while this is so? We certainly can if
our methods for supporting the elicitation have no means of establishing the consistency of the goals. How
precise do we need to be in specifying our goals?
Requirement validation ensures captured requirements reflect the functionality desired by the customer
and other stakeholders. Although requirement validation is not the focus of requirement testability
analysis, it is supported. Requirement validation involves an engineer, user or customer judging the
validity (i.e. correctness) of each requirement. Models provide a means for stakeholders to precisely
understand the requirements and assist in recognizing omissions. Tests automatically derived from the
model support requirement validation through manual inspection or execution within simulation or host
environments.
Requirement validation is the final stage of requirements engineering.
The aim of requirement validation is to validate the requirements, i.e., check the requirements to certify
that they represent an acceptable description of the system which is to be implemented.
Distinction Between Requirement Analysis and Requirement Validation
Requirement analysis is concerned with requirements as elicited from system stakeholders. These requirements
are usually incomplete and are expressed in an informal and unstructured way. Requirement validation is
concerned with checking a final draft of a requirements document which includes all system requirements and
from which known incompleteness and inconsistency have been removed.
Different Concerns Between Requirement Analysis and Requirement Validation
Requirements analysis is mostly concerned with answering the question "Have we got the right
requirements?", while requirements validation is mostly concerned with answering the question "Have we got
the requirements right?"
Requirements Validation Process
The main problem of requirements validation is that there is no existing document
which can serve as a basis for the validation. A design or a program may be validated against the specification.
However, there is no way to demonstrate that a requirements specification is correct with respect to some
other system representation. Specification validation, therefore, really means ensuring that the
requirements document represents a clear description of the system for design and implementation and is a
final check that the requirements meet stakeholder needs.
To manage the relationships between requirements, and to manage the dependencies between the
requirements document and other documents produced during the systems engineering process.
The data dictionary, also called the data repository, documents specific facts about the system.
The following are the important divisions of the data dictionary:
(i) Data flows
(ii) Data stores
(iii) Processes
(iv) External entities
(v) Data structures (records)
(vi) Data elements (data items, fields)
Documenting the elements
All of the items in a DFD are to be documented in the data dictionary, also called the data repository. All
major characteristics of the items must be recorded and described. The key objective is to provide clear,
comprehensive documentation of every item.
During the documentation process, paper-based standard forms or a CASE tool can be used. Various tools
are available, and Visible Analyst is a popular example.
Every data element must be documented. Example attributes of a data element are:
(i)Name or label
(ii)Alternate name(s)
(iii)Type and length
(iv)Output format
(v)Default value
(vi)Prompt, header or field caption
(vii)Source
(viii)Security
(ix)Responsible user(s)
(x)Acceptable values and data validation
(xi)Derivation formula
(xii)Description and comments
Similarly, each data flow or data store is documented with attributes such as:
(i) Name or label
(ii) Alternate name(s)
(iii) Description
(iv) Origin
(v) Record (group of data elements)
(vi) Volume and frequency
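For illustration only (the element and all of its values below are invented, not taken from these notes), a data-element entry recording the attributes listed above could be kept as a simple record such as:

# Hypothetical data dictionary entry for one data element.
customer_number = {
    "name": "CUSTOMER-NUMBER",
    "alternate_names": ["CUST-NO"],
    "type_and_length": "numeric, 6 digits",
    "output_format": "999999",
    "default_value": None,
    "prompt": "Customer No.",
    "source": "Customer master file",
    "security": "read-only for order-entry clerks",
    "responsible_users": ["Sales administration"],
    "acceptable_values": "100000-999999",
    "derivation_formula": None,
    "description": "Unique identifier assigned to each customer",
}

In practice such entries are maintained in a CASE tool rather than by hand, but the attributes recorded are the same.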
Development Managers
Use the requirements document to plan a bid for the system and to plan the system development process.
Test programmers
Use the requirements to develop validation tests for the system
Maintenance programmers
Use the requirements to help understand the system and the relationships between its parts
Organizational requirements
Requirements which are a consequence of organizational policies and procedures, e.g. process standards
used, implementation requirements, etc.
External requirements
Requirements which arise from factors which are external to the system and its development process,
e.g. interoperability requirements, legislative requirements, etc.
PART A
5.What is Coupling?
Coupling is the measure of interconnection among modules in a program structure. It depends on the interface
complexity between modules.
PART B
This model describes the computations that take place within a system. This model is useful when the
transformation from the inputs to outputs is complex. The functional model of a system can be represented
by a data flow diagram (DFD).
Structural Modeling.
The structural model includes a detailed refinement of the ERD, the data flow model and the control flow
model. The steps are:
Creating an ERD.
Developing relationships and cardinality/modality.
Creating a data flow model using the guidelines.
Creating a control flow model which describes the structural connection of:
Processes
Control flows
Control stores
State automaton (state transition diagram)
Process activation table.
Software architecture is the fundamental organization of a system embodied in its components, their
relationships to each other and to the environment, and the principles guiding its design and evolution. It is
the set of significant design decisions about the organization of a software system, which encompass:
(i) the selection of structural elements, their interfaces, and their collaborative behavior;
(ii) the composition of elements into progressively larger subsystems;
(iii) the architectural style that guides this organization;
(iv) system-level properties concerning usage, functionality, performance, resilience, reuse, constraints,
trade-offs, and aesthetics.
Roles of Architecture Description
(i) Shows the big picture of the software solution structures.
(ii) Captures significant, early design decisions of a software system.
(iii) Enables characterizing or evaluating externally visible properties of the software system.
(iv) Defines constraints on the selection of implementation alternatives.
(v) Serves as a skeleton around which full-fledged systems can be fleshed out.
Architectural styles.
Categories:
1. Components
2. Constraints
3. Connectors
4. Semantic models
Architectural patterns
1. Concurrency
2. Persistence
3. Distribution
Architectural designs
Representing system in context
Target system
Superordinate systems
Subordinate systems
Actors
Peer-level systems
Archetypes
Point or node
Control unit or controller
Indicator or output
Transform flow
Transaction flow
Transform mapping
Step.1:Review fundamental system model to identify information flow
Step.2:Review and refine the data flow diagram for software
Step.3:Determine if the DFD has the transform or transaction flow characteristics
Step.4:Isolate the transform center by specifying incoming and outgoing flow boundaries
Step.5:Perform first level factoring
Step.6:Perform second level factoring
Step.7:Refine the first iteration architecture using design heuristics for improved software quality
Transaction mapping
Step.1:Review fundamental system model to identify information flow
Step.2:Review and refine the data flow diagram for software
Step.3:Determine if the DFD has the transform or transaction flow characteristics
Step.4:Identify the transaction center and flow characteristics along each of the action paths
Step.5:Map DFD into transaction processing structure
Step.6:Factor and refine the transaction structure and the structure of each action path
Step.7:Refine the first iteration architecture using design heuristics for improved software quality
Coupling
Types
Content coupling - a module depends on the internal working of another module
Common coupling - two modules share the same global data
Control coupling - one module controls the flow of another, by passing it information on what to do
Stamp coupling - modules share a composite data structure and use only part of it
Data coupling - modules share data through parameters
External coupling - modules communicate through an externally imposed medium or format, e.g. a shared
file, device interface or communication protocol
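As a rough Python sketch of some of these coupling types (all function and variable names below are invented for illustration):

# Data coupling: modules communicate only through simple parameters.
def compute_net_pay(gross_pay, tax_rate):
    return gross_pay * (1 - tax_rate)

# Control coupling: the caller passes a flag telling the callee what to do.
def print_report(data, fmt_flag):
    if fmt_flag == "summary":
        print(len(data), "records")
    else:
        print(data)

# Common coupling: two modules share the same global data.
PAYROLL_CACHE = {}                      # global shared by the functions below

def load_employee(emp_id):
    PAYROLL_CACHE[emp_id] = {"gross": 1000.0}

def pay_employee(emp_id):
    record = PAYROLL_CACHE[emp_id]      # depends on the shared global
    return compute_net_pay(record["gross"], 0.2)

Of the three shown here, data coupling is the most desirable because each function's dependencies are visible in its parameter list.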
Traditional components
-Structured programming
Graphical design notations
Tabular design notations
Program design language(PDL)
The design concepts provide the software designer with a foundation from which more sophisticated
methods can be applied. A set of fundamental design concepts has evolved. They are:
1. Abstraction - Abstraction is the process or result of generalization by reducing the information content
of a concept, typically in order to retain only the information which is relevant for a particular purpose.
8. Design Heuristic
Heuristic evaluations are one of the most informal methods of usability inspection in the field of
human-computer interaction. There are many sets of usability design heuristics; they are not mutually
exclusive and cover many of the same aspects of user interface design.
Error prevention:
Even better than good error messages is a careful design which prevents a problem from occurring in the
first place. Either eliminate error-prone conditions or check for them and present users with a
confirmation option before they commit to the action.
Flexibility and efficiency of use:
Accelerators, unseen by the novice user, may often speed up the interaction for the expert user such
that the system can cater to both inexperienced and experienced users. Allow users to tailor frequent
actions.
Data-centered architectures
Data flow architectures
Call and return architectures
Object-oriented architectures
Layered architectures
Data-Centered Architecture
A data store resides at the center of this architecture and is accessed frequently by other components that
update, add, delete, or otherwise modify data within the store.
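As a rough sketch of this style (the class and component names below are hypothetical, not from these notes), independent components coordinate only through the shared store:

# Hypothetical sketch of a data-centered (repository) style.
class DataStore:
    def __init__(self):
        self._records = {}

    def put(self, key, value):
        self._records[key] = value

    def get(self, key):
        return self._records.get(key)

# Client components operate only on the shared store, never on each other.
def sensor_component(store):
    store.put("temperature", 21.5)

def display_component(store):
    print("Current temperature:", store.get("temperature"))

store = DataStore()
sensor_component(store)
display_component(store)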
Layered Architecture
A number of different layers are defined, each performing operations that progressively become closer to the
machine instruction set; the outer layers provide user-interface and application services.
Vertical Partitioning:
Define separate branches of the module hierarchy for each major function. Factoring distributes control
(decision making) into the upper modules of the hierarchy and work into the lower modules.
PART A
1.Define software testing?
Software testing is a critical element of software quality assurance and represents the ultimate review of
specification, design, and coding.
6.Define debugging.
Debugging is defined as the process of removal of defect. It occurs as a consequence of successful testing.
Alpha test: Alpha testing is testing in which a version of the complete software is tested by the customer
under the supervision of the developer. This testing is performed at the developer's site.
Beta test: Beta testing is testing in which a version of the software is tested by the customer without
the developer being present. This testing is performed at the customer's site. Based on the problems reported
during beta testing, the product is improved.
PART B
Validation refers to a different set of activities that ensure that the software that has been built is traceable
to the customer requirements. According to Boehm, verification asks "Are we building the product right?"
while validation asks "Are we building the right product?"
Top-down integration
It is an incremental approach.
Modules are integrated by moving downward through the control hierarchy, beginning with the main control
module (main program).
Subordinate modules are incorporated in a depth-first or breadth-first manner.
Bottom-up integration
This testing begins construction and testing with the components at the lowest levels in the program structure.
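As a hedged illustration of both strategies (the module names below are invented): in top-down integration a stub stands in for a subordinate module that is not yet integrated, while in bottom-up integration a simple driver exercises the low-level module directly.

# Top-down integration: 'calculate_tax' is not ready yet, so a stub replaces it.
def calculate_tax_stub(amount):
    return 0.0                        # simplified canned answer

def billing_module(amount, tax_fn=calculate_tax_stub):
    return amount + tax_fn(amount)

# Bottom-up integration: a simple driver exercises the lowest-level module.
def calculate_tax(amount):
    return amount * 0.18

def tax_driver():
    assert abs(calculate_tax(100) - 18.0) < 1e-9
    print("calculate_tax passed its driver checks")

tax_driver()
print(billing_module(100))            # 100.0 while the stub is still in place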
Regression testing
It is the re-execution of some subset of tests that have already been conducted, to ensure that changes have
not propagated unintended side effects.
Smoke testing
It is an integration testing approach in which the software is rebuilt frequently and exercised with a set of
tests designed to expose show-stopper errors.
It minimizes the integration risk.
Error diagnosis and correction are simplified.
3.Explain in detail about black box testing and white box testing?
White box testing:
1.condition testing
2.loop testing
-simple loop
-nested loop
-concatenated loop
-unstructured loop
3.basis path testing
Advantages:
Each procedure can be tested thoroughly.
It helps in optimizing the code
White box testing can be easily automated
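To illustrate basis path testing from the list above (the classify function is hypothetical, not from these notes): the cyclomatic complexity V(G) = number of simple decisions + 1 gives the number of independent paths, and one test case is written per path.

# Hypothetical example for basis path testing.
def classify(score):
    if score < 0:            # decision 1
        return "invalid"
    if score >= 50:          # decision 2
        return "pass"
    return "fail"

# V(G) = 2 decisions + 1 = 3 independent paths, so three tests cover them:
assert classify(-5) == "invalid"   # path taken when decision 1 is true
assert classify(75) == "pass"      # path taken when decision 2 is true
assert classify(30) == "fail"      # path taken when both decisions are false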
Black box testing - Internal system design is not considered in this type of testing. Tests are based on
requirements and functionality.
White box testing - This testing is based on knowledge of the internal logic of an application's code. Also
known as glass box testing. Internal software and code working should be known for this type of testing.
Tests are based on coverage of code statements, branches, paths and conditions.
Unit testing - Testing of individual software components or modules. Typically done by the programmer
and not by testers, as it requires detailed knowledge of the internal program design and code. May require
developing test driver modules or test harnesses.
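For instance, a minimal unit-test sketch using Python's standard unittest module (the discount function is a hypothetical unit under test, not from these notes):

import unittest

# Hypothetical unit under test.
def discount(price, rate):
    if not 0 <= rate <= 1:
        raise ValueError("rate must be between 0 and 1")
    return price * (1 - rate)

class DiscountTest(unittest.TestCase):
    def test_normal_rate(self):
        self.assertAlmostEqual(discount(200.0, 0.1), 180.0)

    def test_invalid_rate(self):
        with self.assertRaises(ValueError):
            discount(200.0, 1.5)

if __name__ == "__main__":
    unittest.main()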
Incremental integration testing - Bottom-up approach to testing, i.e., continuous testing of an application as
new functionality is added. Application functionality and modules should be independent enough to be tested
separately. Done by programmers or by testers.
Integration testing - Testing of integrated modules to verify combined functionality after integration.
Modules are typically code modules, individual applications, client and server applications on a network, etc.
This type of testing is especially relevant to client/server and distributed systems.
Functional testing - This type of testing ignores the internal parts and focuses on whether the output is as per
requirements. Black-box type testing geared to the functional requirements of an application.
System testing - The entire system is tested as per the requirements. Black-box type testing that is based on
overall requirements specifications; covers all combined parts of a system.
End-to-end testing - Similar to system testing; involves testing of a complete application environment in a
situation that mimics real-world use, such as interacting with a database, using network communications, or
interacting with other hardware, applications, or systems if appropriate.
Sanity testing - Testing to determine if a new software version is performing well enough to accept it for a
major testing effort. If the application crashes during initial use, then the system is not stable enough for
further testing, and the build or application is sent back to be fixed.
Regression testing - Testing the application as a whole for modifications in any module or functionality.
It is difficult to cover the whole system in regression testing, so automation tools are typically used for these
testing types.
Acceptance testing - Normally this type of testing is done to verify whether the system meets the customer-
specified requirements. The user or customer does this testing to determine whether to accept the application.
Load testing - Performance testing to check system behavior under load: testing an application under
heavy loads, such as testing a web site under a range of loads to determine at what point the system's
response time degrades or fails.
Stress testing - The system is stressed beyond its specifications to check how and when it fails. Performed
under heavy load, such as entering data beyond storage capacity, complex database queries, or continuous
input to the system or database.
Performance testing - A term often used interchangeably with stress and load testing; checks whether the
system meets performance requirements. Different performance and load tools are used for this.
Usability testing - A user-friendliness check. The application flow is tested: can a new user understand the
application easily, and is proper help documented wherever the user gets stuck? Basically, system navigation
is checked in this testing.
Install/uninstall testing - Tested for full, partial, or upgrade install/uninstall processes on different operating
systems under different hardware and software environments.
Recovery testing - Testing how well a system recovers from crashes, hardware failures, or other
catastrophic problems.
Security testing - Checks whether the system can be penetrated by hacking. Testing how well the system
protects against unauthorized internal or external access, and whether the system and database are safe from
external attacks.
Comparison testing - Comparison of product strengths and weaknesses with previous versions or other
similar products.
Alpha testing - An in-house virtual user environment can be created for this type of testing. Testing is done at
the end of development. Minor design changes may still be made as a result of such testing.
Beta testing - Testing typically done by end-users or others. Final testing before releasing the application for
commercial purposes.
1. Define measure.
Measure is defined as a quantitative indication of the extent, amount, dimension, or size of some attribute of a
product or process.
2. Define metrics.
Metrics is defined as the degree to which a system, component, or process possesses a given attribute.
6. What is EVA?
Earned Value Analysis is a technique for performing quantitative analysis of a software project. It provides
a common value scale for every task of the software project and acts as a measure of software project progress.
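As a hedged worked sketch using the usual earned-value quantities (BCWS, BCWP, ACWP; the figures below are invented): the schedule performance index is SPI = BCWP / BCWS and the cost performance index is CPI = BCWP / ACWP.

# Invented figures for illustration only.
bcws = 100.0   # budgeted cost of work scheduled (planned value)
bcwp = 90.0    # budgeted cost of work performed (earned value)
acwp = 95.0    # actual cost of work performed

spi = bcwp / bcws      # < 1 indicates the project is behind schedule
cpi = bcwp / acwp      # < 1 indicates the project is over budget
sv = bcwp - bcws       # schedule variance
cv = bcwp - acwp       # cost variance
print(spi, cpi, sv, cv)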
8. Define maintenance.
Maintenance is defined as the process in which changes are implemented by either
modifying the existing system's architecture or by adding new components to the system.
PART B
1. Write a note on i)Cocomo model ii)SOFTWARE METRICS
i)COCOMO estimation criteria
Types
Three classes
Merits
Limitations
Example
ii)Software metrics
Size oriented approach
Function oriented approach
Advantages
Disadvantages
Example of LOC based estimation
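As an illustrative sketch of the estimation arithmetic outlined above (the 32 KLOC size is an invented figure), the basic COCOMO model computes effort E = a * (KLOC)^b person-months and duration D = c * E^d months, with the coefficients depending on the project class (organic, semi-detached or embedded):

# Basic COCOMO coefficients (a, b, c, d) per project class.
COCOMO = {
    "organic":       (2.4, 1.05, 2.5, 0.38),
    "semi-detached": (3.0, 1.12, 2.5, 0.35),
    "embedded":      (3.6, 1.20, 2.5, 0.32),
}

def basic_cocomo(kloc, project_class="organic"):
    a, b, c, d = COCOMO[project_class]
    effort = a * kloc ** b          # person-months
    duration = c * effort ** d      # months
    return effort, duration

# Invented size of 32 KLOC for an organic project.
effort, duration = basic_cocomo(32)
print(round(effort, 1), "person-months over", round(duration, 1), "months")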
Function Point Analysis should be performed by trained and experienced personnel. If Function Point Analysis
is conducted by untrained personnel, it is reasonable to assume the analysis will be done incorrectly. The
personnel counting function points should use the most current version of the Function Point Counting
Practices Manual.
The Five Major Components: external inputs, external outputs, external inquiries, internal logical files, and
external interface files.
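A minimal sketch of the counting arithmetic, assuming the standard average complexity weights and invented counts: the unadjusted function points (UFP) obtained from the five components are scaled by the value adjustment factor 0.65 + 0.01 * (sum of the 14 general system characteristic ratings).

# Average complexity weights for the five components; the counts are invented.
WEIGHTS = {"external_inputs": 4, "external_outputs": 5, "external_inquiries": 4,
           "internal_files": 10, "external_interfaces": 7}
counts  = {"external_inputs": 20, "external_outputs": 12, "external_inquiries": 8,
           "internal_files": 4, "external_interfaces": 2}

ufp = sum(WEIGHTS[k] * counts[k] for k in WEIGHTS)   # unadjusted function points
gsc_total = 42                                       # invented sum of the 14 GSC ratings (0-5 each)
fp = ufp * (0.65 + 0.01 * gsc_total)                 # adjusted function points
print(ufp, round(fp, 1))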
Software risks:
What can go wrong?
What is the likelihood?
What will be the damage?
What can be done about it?
Risk analysis and management are a set of activities that help a software team to understand and manage
uncertainty about a project.
Risk is the uncertainty associated with the outcome of a future event and has a number of attributes:
Uncertainty (probability)
Time (future event)
Potential for loss (or gain)
Multiple perspectives, e.g.,
Process perspective (development process breaks)
Project perspective (critical objectives are missed)
Product perspective (loss of code integrity)
Risk management is the process by which a course of action is selected that balances the potential impact of a
risk, weighted by its probability of occurrence, against the benefits of avoiding (or controlling) the risk.
Risk Projection
Estimate the probability of occurrence
Estimate the impact on the project on a particular scale, e.g.,
low impact (negligible)
medium impact (marginal)
high impact (critical)
very high impact (catastrophic)
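A small illustrative calculation (the probability and cost figures are invented) combines these two estimates into the risk exposure, RE = probability x cost of the loss should the risk occur:

# Invented figures for one risk, e.g. planned component reuse falling through.
probability = 0.4          # 40% likelihood the risk occurs
loss_cost   = 25000.0      # estimated cost (e.g., rework) if it does

risk_exposure = probability * loss_cost   # RE = P x C
print(risk_exposure)                      # used to rank risks and size contingency

Ranking risks by their exposure helps decide which ones justify an explicit mitigation, monitoring and management plan.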
Definition
Software risks
Types of risks
Project risks
Technical risks
Business risks
Predictable risks
Unpredictable risks
Reactive and proactive risks
Risk identification
Risk projection
When working on the product or documentation, the staff member should always be aware of the stability of
the computing environment they are working in. Any changes in the stability of the environment should be
recognized and taken seriously.
Management
The lack of a stable computing environment is extremely hazardous to a software development team. In the
event that the computing environment is found to be unstable, the development team should cease work on that
system until the environment is made stable again, or should move to a system that is stable and continue
working there.