
Software Design- • Design is a highly significant phase in software development, in which the designer plans "how" a software system should be produced so that it is functional, reliable, and reasonably easy to understand, modify, and maintain. • The SRS tells us what a system does and becomes the input to the design process, which tells us "how" the software system works. • Software design involves identifying the components of the software, their inner workings, and their interfaces, starting from the SRS. The principal work product of this activity is the software design document (SDD), also referred to as the software design description. • Software design deals with transforming the customer requirements, as described in the SRS document, into a form (a set of documents) called the software design document that is suitable for implementation in a programming language.

Conceptual and Technical design • Conceptual design describes the system in language understandable to the customer. It contains no technical jargon and is independent of implementation. • By contrast, technical design describes the hardware configuration, software needs, communication interfaces, inputs and outputs of the system, and the network architecture that translate the requirements into a solution to the customer's problem.

Characteristics and objectives of a good software design- Good design is the key to a successful product. • Correctness: A good design should correctly implement all the functionalities identified in the SRS document. • Understandability: A good design is easily understandable. • Efficiency: It should be efficient. • Maintainability: It should be easily amenable to change.
Features of a design document- • It should use consistent and meaningful names for various design components. • The design
should be modular. The term modularity means that it should use a cleanly decomposed set of modules. • It should neatly
arrange the modules in a hierarchy, e.g. in a tree-like diagram.

Software Design process- • Architectural Design (top-level design): describes how the software is decomposed and organized into components. • Detailed Design (low-level design): describes the specific behavior of these components. The output of this process is a set of models that record the major decisions that have been taken.

Software Design principles- 1. Abstraction: a tool that permits a designer to consider a component at an abstract level, without worrying about the details of the component's implementation. 2. Encapsulation/Information hiding: the idea is to hide the implementation details of shared information and processing items by specifying dedicated modules called information-hiding modules. Design decisions that are likely to change in the future should be identified, and modules should be designed in such a way that those design decisions are hidden from other modules (see the sketch below). 3. Coupling and cohesion.
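A minimal Python sketch of information hiding (the Stack class and its names are illustrative, not from the source): callers use only push and pop, so the hidden list representation can later be replaced without touching any other module.

    # Illustrative sketch: the underlying representation (_items, a
    # Python list) is a design decision hidden behind a small interface.
    class Stack:
        def __init__(self):
            self._items = []  # hidden detail; could become a linked list later

        def push(self, value):
            self._items.append(value)

        def pop(self):
            return self._items.pop()  # callers depend only on push/pop

    s = Stack()
    s.push(10)
    print(s.pop())  # prints 10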

Coupling- It is the measure of the degree of interdependence between modules. Coupling is high between components if they depend heavily on one another (e.g., there is a lot of communication between them). Types of coupling:- 1. Data coupling: communication between modules is accomplished through well-defined parameter lists consisting of data items. 2. Stamp coupling: occurs between modules A and B when a complete data structure is passed from one module to another. 3. Control coupling: a module controls the flow of control or the logic of another module; this is accomplished by passing control items as arguments in the argument list. 4. Common coupling: modules share common or global data or file structures; both modules depend on the details of the common structure. 5. Content coupling: a module is allowed to access or modify the contents of another, e.g., modify its local or private data items. This is the strongest form of coupling.
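The two ends of this scale can be illustrated with a small, hypothetical Python sketch (the function names and the global variable are assumptions made for the example, not from the source):

    # Data coupling (weak, desirable): modules communicate only through
    # a well-defined parameter list of data items.
    def compute_interest(principal, rate, years):
        return principal * rate * years

    # Common coupling (strong, undesirable): both functions read and write
    # the same global data, so each depends on the details of that structure.
    balance = 1000

    def deposit(amount):
        global balance
        balance += amount

    def apply_fee():
        global balance
        balance -= 50  # interacts with deposit() invisibly via the global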

Pseudocode:- A tool for planning or documenting the content of a program routine or module; as the name implies, it resembles real code. Pseudocode notation can be used in both the preliminary and detailed design phases. Using pseudocode, the designer describes system characteristics in short, concise, English-language phrases that are structured by keywords such as If-Then-Else, While-Do, and End. Examples:- COUNT = 0, STOCK = STOCK + QUANTITY, READ THE DATA FROM SOURCE, WRITE THE DATA TO DESTINATION. A slightly larger sketch using these keywords appears below.
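For instance, the statements above might be combined into a routine such as the following sketch (the stock-update logic is invented for illustration, written in the keyword notation this section describes):

    READ THE DATA FROM SOURCE
    COUNT = 0
    WHILE MORE RECORDS DO
        READ QUANTITY
        IF QUANTITY > 0 THEN
            STOCK = STOCK + QUANTITY
        ELSE
            WRITE ERROR MESSAGE
        ENDIF
        COUNT = COUNT + 1
    END
    WRITE THE DATA TO DESTINATION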

Flowcharts- A flowchart is a picture of the separate steps of a process in sequential order. It is a generic tool that can be adapted
for a wide variety of purposes, and can be used to describe various processes, such as a manufacturing process, an
administrative or service process, or a project plan.

Cohesion- It is a measure of the degree to which the elements of a module are functionally related. Cohesion is weak if elements are bundled simply because they perform similar or related functions; it is strong if all parts are needed for the functioning of the other parts. An important design objective is to maximize module cohesion and minimize module coupling.
Types of Cohesion:- 1. Functional cohesion: A and B are part of a single functional task; this is a very good reason for them to be contained in the same procedure. It is achieved when the components of a module cooperate in performing exactly one function, e.g., POLL_SENSORS, GENERATE_ALARM, etc. 2. Sequential cohesion: module A outputs some data that forms the input to B; this is the reason for them to be contained in the same procedure. 3. Communicational cohesion: achieved when software units or components of a module that share common information or a common data structure are grouped into one module. 4. Procedural cohesion: the form of cohesion obtained when software components are grouped in a module to perform a series of functions following a certain procedure specified by the application requirements. 5. Temporal cohesion: a module exhibits temporal cohesion when it contains tasks that are related by the fact that all of them must be executed in the same time span; examples are functions required to be activated for a particular input event, or during the same state of operation. 6. Logical cohesion: refers to modules designed from functions that are logically related, such as input/output functions or communication-type functions (such as send and receive). 7. Coincidental cohesion: exists in modules that contain instructions that have little or no relationship to one another.
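An illustrative Python contrast (both functions are invented for the example): the first module is functionally cohesive, while the second shows coincidental cohesion.

    # Functional cohesion: every statement contributes to exactly one
    # task, computing the net pay.
    def compute_net_pay(gross, tax_rate, deductions):
        tax = gross * tax_rate
        return gross - tax - deductions

    # Coincidental cohesion: unrelated instructions bundled together
    # for no design reason.
    def misc_utils(text, numbers):
        print(text.upper())   # text formatting
        return sum(numbers)   # unrelated arithmetic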

OBJECT-ORIENTED DESIGN- 1. The basic abstractions are not real-world functions but data abstractions in which the real-world entities are represented. 2. Functions are grouped together on the basis of the data they operate on, since classes are associated with their methods. 3. Carried out using UML. 4. It is a bottom-up approach. 5. Begins by identifying objects and classes. 6. In this approach the state information is not represented in a centralized memory but is distributed among the objects of the system. 7. This approach is mainly used for evolving systems that mimic a business or business case.

FUNCTION-ORIENTED DESIGN- 1. Carried out using structured analysis and structured design, i.e., data flow diagrams. 2. It is a top-down approach. 3. In this approach the state information is often represented in a centralized shared memory. 4. In function-oriented design we decompose at the function/procedure level. 5. Begins by considering the use case diagrams and the scenarios. 6. Functions are grouped together so that a higher-level function is obtained. 7. The basic abstractions, which are given to the user, are real-world functions.

Top-down approach (also known as stepwise design) is essentially the breaking down of a system to gain insight into its compositional sub-systems. In a top-down approach an overview of the system is formulated, specifying but not detailing any first-level subsystems. Each subsystem is then refined in yet greater detail, sometimes over many additional subsystem levels, until the entire specification is reduced to base elements. A top-down model is often specified with the assistance of "black boxes", which make it easier to manipulate. However, black boxes may fail to elucidate elementary mechanisms or may not be detailed enough to realistically validate the model.

Bottom-up approach- is the piecing together of systems to give rise to grander systems, thus making the original systems sub-systems of the emergent system. In a bottom-up approach the individual base elements of the system are first specified in great detail. These elements are then linked together to form larger subsystems, which in turn are linked, sometimes over many levels, until a complete top-level system is formed. This strategy often resembles a "seed" model, whereby the beginnings are small but eventually grow in complexity and completeness.

Halstead’s Software science- • Halstead proposed metrics for the length and volume of a program based on the number of operators and operands. • Tokens are classified as either operators or operands; all software science measures are functions of the counts of these tokens. • Any symbol or keyword in a program that specifies an algorithmic action is considered an operator, while a symbol used to represent data is considered an operand. • Variables, constants and even labels are operands. • Operators consist of arithmetic symbols such as +, -, /, *, command names such as while, for, printf, and special characters such as :=, braces, parentheses, etc. • In a program we define the following measurable quantities. Program length: N = N1 + N2, where N1 is the total occurrences of operators and N2 is the total occurrences of operands. Vocabulary: η = η1 + η2, where η1 is the number of distinct operators and η2 the number of distinct operands. Volume: the unit of measurement of volume is the common unit for size, "bits"; it is the actual size of a program if a uniform binary encoding for the vocabulary is used: V = N × log2 η. Program level: L = V*/V, where V* is the potential (minimal) volume; the value of L ranges between zero and one, with L = 1 representing a program written at the highest possible level (i.e., with minimum size). Estimated program length: N̂ = η1 log2 η1 + η2 log2 η2.

Function Point Metric • Advantages: – Reflects the user's point of view: what the user requests and receives from the system. – Independent of technology, language, tools and methods. – Can be estimated from the SRS or design specification document. – Since it comes directly from first-phase documents, re-estimation on expansion or modification is easy. • Disadvantages: – Difficult to estimate. – Experience-based/subjective.

Cyclomatic Complexity Measures- Cyclomatic complexity is a source code complexity measurement that correlates with the number of coding errors. It is calculated by developing a control flow graph of the code and measures the number of linearly independent paths through a program module. The lower a program's cyclomatic complexity, the lower the risk in modifying it and the easier it is to understand. It can be represented using the formula below:

Cyclomatic complexity = E - N + 2*P, where E = number of edges in the flow graph, N = number of nodes in the flow graph, P = number of connected components (1 for a single program or module).
Example:
    IF A = 10 THEN
        IF B > C THEN
            A = B
        ELSE
            A = C
        ENDIF
    ENDIF
    Print A
    Print B
    Print C
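For this fragment, collapsing the three sequential print statements into one node gives a flow graph with N = 7 nodes and E = 8 edges in a single connected component (P = 1), so V(G) = 8 - 7 + 2(1) = 3. The predicate-counting shortcut agrees: two decisions (IF A = 10 and IF B > C) plus one gives 3.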

Software Testing- • Software testing is the process of testing the software product. • A testing strategy provides a framework or set of activities essential for the success of the project; these may include planning, designing of test cases, execution of the program with test cases, interpretation of the outcomes, and finally collection and management of data. • Testing is the process of executing a program with the intent of finding errors. • Effective software testing contributes to the delivery of a high-quality software product, more satisfied users, lower maintenance costs, and more accurate and reliable results. • Hence software testing is a necessary and important activity of the software development process.

Error, Mistake, Bug, Fault and Failure • People make errors; a good synonym is mistake. This may be a syntax error or a misunderstanding of specifications; sometimes there are logical errors. • When developers make mistakes while coding, we call these mistakes "bugs". • A fault is the representation of an error, where representation is the mode of expression, such as narrative text, data flow diagrams, ER diagrams, source code, etc. Defect is a good synonym for fault. • A failure occurs when a fault executes. A particular fault may cause different failures, depending on how it is exercised.

SOFTWARE TESTING TECHNIQUES Fundamental principles of testing: the objective of testing is to provide a quality product to the customer. 1. The goal of testing is to find defects before customers find them. 2. Exhaustive testing is not possible; program testing can only show the presence of defects, never their absence. 3. Testing applies all through the software life cycle and is not an end-of-cycle activity. 4. Understand the reason behind the test. 5. Test the tests first. Testing objectives: 1. Testing is a process of executing a program with the intent of finding an error. 2. A good test case is one that has a high probability of finding an as-yet-undiscovered error. 3. A successful test is one that uncovers an as-yet-undiscovered error.

Verification refers to the set of activities that ensure that software correctly implements a specific function. Verification: "Are we building the product right?" Validation refers to a different set of activities that ensure that the software that has been built is traceable to customer requirements. Validation: "Are we building the right product?"

There are 3 levels of testing: i. Unit Testing ii. Integration Testing iii. System Testing

Unit testing is undertaken after a module has been coded and successfully reviewed. • Unit testing (or module testing) is the testing of different units (or modules) of a system in isolation. • In order to test a single module, a complete environment is needed to provide all that is necessary for execution of the module. That is, besides the module under test itself, a driver (a routine that calls the module with test inputs and checks its outputs) and stubs (dummy routines standing in for the modules it calls) are typically needed in order to be able to test the module.
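A minimal Python illustration (the function names and the tax-rate dependency are hypothetical): the test class acts as the driver, and a stub stands in for a module that the unit under test calls but that may not exist yet.

    import unittest

    # Stub: a dummy replacement for a not-yet-available dependency.
    def tax_rate_stub(region):
        return 0.10  # fixed, predictable answer for testing

    # Module under test, written so its dependency can be injected.
    def compute_total(price, region, tax_rate=tax_rate_stub):
        return price * (1 + tax_rate(region))

    # Driver: calls the module with chosen inputs and checks outputs.
    class ComputeTotalTest(unittest.TestCase):
        def test_adds_tax(self):
            self.assertAlmostEqual(compute_total(100, "EU"), 110.0)

    if __name__ == "__main__":
        unittest.main()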

Integration Testing: Integration is the process by which components are aggregated to create larger components. Integration testing is done to show that, even though the components were individually satisfactory (after passing component testing), the combination of components may still be incorrect or inconsistent. • The purpose of unit testing is to determine that each independent module is correctly implemented. This gives little chance to determine that the interfaces between modules are also correct, and for this reason integration testing must be performed. • It focuses on the interaction of modules in a subsystem. • Unit-tested modules are combined to form subsystems. • Test cases "exercise" the interaction of modules in different ways. Goal: test all interfaces between subsystems and the interaction of subsystems. The integration testing strategy determines the order in which the subsystems are selected for testing and integration.

TOP-DOWN INTEGRATION: Top-down integration testing is an incremental approach to the construction of program structure. Modules are integrated by moving downward through the control hierarchy, beginning with the main control module. BOTTOM-UP INTEGRATION: Bottom-up integration testing begins construction and testing with atomic modules. Because components are integrated from the bottom up, the processing required for components subordinate to a given level is always available, and the need for stubs is eliminated. REGRESSION TESTING: Each time a new module is added as part of integration testing, the software changes. These changes may cause problems. In the context of an integration test strategy, regression testing is the re-execution of some subset of tests that have already been conducted.

System Testing System tests are designed to validate a fully developed system to assure that it meets its requirements. There are
essentially three main kinds of system testing: Alpha Testing. Alpha testing refers to the system testing carried out by the test
team within the developing organization. Beta testing. Beta testing is the system testing performed by a select group of friendly
customers. Acceptance Testing. Acceptance testing is the system testing performed by the customer to determine whether he
should accept the delivery of the system.

Audit: An audit is an independent examination of a software product or processes to assess compliance with specifications, standards, contractual agreements, or other criteria. In the field of software, an audit can relate to any of the following: • A software quality assurance audit, where the software is audited for quality. • A software licensing audit, where a user of software is audited for licence compliance. • A Physical Configuration Audit (PCA), the formal examination that verifies the configuration item's product baseline.
Characteristics of Inspection: • Inspection is usually led by a trained moderator, who is not the author; the moderator's role is to conduct a peer examination of a document. • Inspection is the most formal review and is driven by checklists and rules. • This review process makes use of entry and exit criteria. • Pre-meeting preparation is essential. • An inspection report is prepared and shared with the author for appropriate action. • After inspection, a formal follow-up process is used to ensure timely and prompt corrective action. • The aim of inspection is not only to identify defects but also to bring about process improvement.

DevOps Development: DevOps has become a widely accepted solution for organizations that want to shorten software life cycles from development to delivery and operation.

Test Automation: There is a trend toward automatic verification of various processes and code, as every enterprise wants to get maximum benefit from the product. Test automation solutions help attain high-quality outputs.

IoT Testing: Another challenge for IoT testers in the coming years lies in strategy. According to the World Quality Report, 34% of respondents said their products have IoT functionality, but their teams still cannot identify the most appropriate testing strategy.

Block-chain Testing: Block-chain is improving financial transactions considerably, but it also brings a few security challenges. To address these security issues, next-generation block-chain testing services will be carried out that are specifically designed for more robust block-chain apps.

Cyber Security Testing: With digital evolution on the rise come many security threats. As cyber threats can take any form and arise at any moment, security testing will remain a prominent topic.

Black box testing is a software testing style whose objective is to examine whether software works for end users as intended, without concern for the internal system. The tester observes the behavior of a system entirely through its inputs and outputs.

Types of Black Box Testing: Black box testing mainly comprises the following types of testing. Functional testing: testing specific functions or features of the software under test; it includes unit testing, smoke testing, sanity testing, integration testing, and user acceptance testing. Non-functional testing: testing aspects of the software beyond its features and functionality; it helps check how well a system performs under high load and in different environments, and includes performance testing, load testing, stress testing, volume testing, and security testing.

Advantages of Black Box Testing • It can easily be performed by testers with no technical background or programming knowledge. • It can start as soon as the functional specifications are complete. • Both the testers and developers work independently, so the testing is balanced and unbiased. • It helps identify the defects and inconsistencies in the early stages of testing.

Disadvantages of Black Box Testing • There is a high chance of not achieving any result at the end of the test. • Writing test cases is slow and difficult, as identifying all possible inputs in a limited time is challenging. • It is not ideal for large and complicated applications, as complete test coverage is not possible. • As it is specification-dependent, building test cases without specifications becomes difficult.

White box testing is a type of software testing in which the internal structure and design of the item being tested are well known to the tester. It helps developers find internal flaws and security weaknesses.

Types of White Box Testing- Conditional testing: checks the logical conditions for both true and false values. Path testing: an approach that uses the source code of a program to find every possible executable path; it helps testers achieve maximum path coverage with the least number of test cases. Unit testing: a method in which individual units of software are tested; it helps ensure that each component of the software works as intended. Integration testing: a process in which individual software modules or components are tested as a group; it helps ensure that the modules work correctly when merged. Loop testing: focuses entirely on validating the loop constructs used in the algorithms. A small illustration of conditional and loop testing follows.
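As a sketch of conditional and loop testing in Python (count_positives is invented for the example): the cases force the condition to both outcomes and execute the loop zero, one, and many times.

    def count_positives(values):
        count = 0
        for v in values:      # loop construct under test
            if v > 0:         # condition under test
                count += 1
        return count

    # Conditional testing: drive v > 0 to both true and false.
    assert count_positives([5]) == 1       # condition true
    assert count_positives([-5]) == 0      # condition false

    # Loop testing: execute the loop 0, 1, and many times.
    assert count_positives([]) == 0                # zero iterations
    assert count_positives([7]) == 1               # one iteration
    assert count_positives([1, -2, 3, 4]) == 3     # many iterations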

Advantages of White Box Testing- • White box testing helps find hidden errors in an application, as it checks the internal workings. • It is much more thorough than traditional black-box testing. • It achieves high test coverage while writing test scenarios, as the tester has programming knowledge.

Disadvantages of White Box Testing • White box testing is an exhaustive method, as it takes a significant amount of time to develop the test cases. • It might miss certain functionalities, as only the available code is tested. • It requires skilled testers with programming knowledge. • It is costly compared to black box testing.
Various types of maintenance: Corrective: corrective maintenance of a software product is necessary to rectify bugs observed while the system is in use. Adaptive: a software product might need maintenance when customers need the product to run on new platforms or new operating systems, or when they need the product to interface with new hardware or software. Perfective: a software product needs maintenance to support new features that users want it to support, to change different functionalities of the system according to customer demands, or to enhance the performance of the system. Preventive: preventive maintenance is the act of performing regularly scheduled maintenance activities to help prevent unexpected failures in the future; put simply, it is about fixing things before they break.

Software Configuration Management (SCM): Configuration management (CM) is the process of controlling and documenting change to a developing system. As the size of an effort increases, so does the necessity of implementing effective CM. Software configuration management is a set of activities designed to control change by identifying the work products that are likely to change and establishing relationships among them. The process by which software development and maintenance are controlled is called configuration management. Configuration management differs between the development and maintenance phases of the life cycle due to the different environments.

Functions of SCM • Identification: identifies those items whose configuration needs to be controlled, usually consisting of hardware, software, and documentation. • Change Control: establishes procedures for proposing or requesting changes, evaluating those changes for desirability, obtaining authorization for changes, publishing and tracking changes, and implementing changes. This function also identifies the people and organizations who have authority to make changes at various levels. • Status Accounting: the documentation function of CM; its primary purpose is to maintain formal records of established configurations and make regular reports of configuration status. These records should accurately describe the product and are used to verify the configuration of the system for testing, delivery, and other activities. • Auditing: effective CM requires regular evaluation of the configuration. This is done through the auditing function, where the physical and functional configurations are compared to the documented configuration. The purpose of auditing is to maintain the integrity of the baseline and release configurations for all controlled products.

Constructive Cost Model (COCOMO) • COCOMO is one of the most widely used software estimation models in the world. • It was developed by Barry Boehm in 1981. • COCOMO predicts the effort and schedule for a software product development based on inputs relating to the size of the software and a number of cost drivers that affect productivity. COCOMO has three models of increasing complexity: 1. Basic Model 2. Intermediate Model 3. Detailed Model. A sketch of the Basic model appears below.
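A minimal sketch of the Basic model in Python, using the standard Basic COCOMO coefficient table (effort = a * KLOC^b person-months, development time = c * effort^d months); the 32 KLOC input is an assumed example value:

    # Standard Basic COCOMO coefficients (a, b, c, d) per project mode.
    COEFFS = {
        "organic":       (2.4, 1.05, 2.5, 0.38),
        "semi-detached": (3.0, 1.12, 2.5, 0.35),
        "embedded":      (3.6, 1.20, 2.5, 0.32),
    }

    def basic_cocomo(kloc, mode="organic"):
        a, b, c, d = COEFFS[mode]
        effort = a * kloc ** b   # estimated effort in person-months
        tdev = c * effort ** d   # estimated development time in months
        return effort, tdev

    effort, tdev = basic_cocomo(32, "organic")
    print(f"effort = {effort:.1f} PM, schedule = {tdev:.1f} months")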

Project Scheduling: Project-task scheduling is a significant project planning activity; it involves deciding which tasks will be taken up when. To schedule the project plan, a software project manager needs to do the following: 1. Identify all the tasks required to complete the project. 2. Break down large tasks into small activities. 3. Determine the dependencies among the various activities. 4. Establish the most likely estimate for the time duration required to complete each activity. 5. Allocate resources to activities. 6. Plan the beginning and ending dates for the different activities. 7. Determine the critical path: the chain of activities that decides the duration of the project (see the sketch below).
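A minimal Python sketch of step 7 (the task network is invented for illustration): each activity's earliest finish is its duration plus the latest finish among its prerequisites, and the project duration is the maximum over all activities.

    # Each task maps to (duration in days, list of prerequisite tasks),
    # listed so that prerequisites appear before their dependents.
    tasks = {
        "spec":   (4,  []),
        "design": (6,  ["spec"]),
        "code":   (10, ["design"]),
        "docs":   (3,  ["design"]),
        "test":   (5,  ["code"]),
    }

    finish = {}
    for name, (duration, deps) in tasks.items():
        finish[name] = duration + max((finish[d] for d in deps), default=0)

    # Project duration = length of the critical path:
    # spec -> design -> code -> test = 4 + 6 + 10 + 5 = 25 days.
    print(max(finish.values()))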

Resource Allocation Models • Resource allocation is the process of assigning and scheduling resources to project tasks. • Resources are the lifeblood of project management. • Resources are used to carry out the project, and are returned to their owners if not consumed by the project. • A resource allocation model (RAM) is a methodology for determining where resources should be allocated within an organisation. • Resources may include financial, technological and human resources. • Strategic investment decisions may, of course, impact the actual allocation in any single year.

Risk management is the identification, assessment, and prioritization of risks, followed by the coordinated and economical application of resources to minimize, monitor, and control the probability and/or impact of unfortunate events. The risk management process describes the steps you need to take to identify, monitor and control risk. Within this process, a risk is defined as any future event that may prevent you from meeting your team goals.

Risk identification: determining what risks or hazards exist or are anticipated, their characteristics, remoteness in time, duration, and possible outcomes. • Risk analysis is the process of defining and analyzing the dangers to individuals, businesses and government agencies posed by potential natural and human-caused adverse events. In IT, a risk analysis report can be used to align technology-related objectives with a company's business objectives; a risk analysis report can be either quantitative or qualitative. • Risk planning: developing and documenting organized, comprehensive, and interactive strategies and methods for identifying risks. It is also used for performing risk assessments to establish risk-handling priorities, developing risk-handling plans, monitoring the status of risk-handling actions, and determining and obtaining the resources needed to implement the risk management strategies.

A data flow diagram is a graphical representation of the flow of data in an information system. It is capable of depicting incoming data flow, outgoing data flow and stored data. The DFD does not say anything about how data is processed as it flows through the system. There is a prominent difference between a DFD and a flowchart: the flowchart depicts flow of control in program modules, while DFDs depict flow of data in the system at various levels. A DFD does not contain any control or branch elements.

Types of DFD: Data flow diagrams are either logical or physical. Logical DFD - concentrates on the system process and the flow of data in the system; for example, in a banking software system, how data moves between different entities. Physical DFD - shows how the data flow is actually implemented in the system; it is more specific and closer to the implementation.

DFD Components: Entities - sources and destinations of information data, represented by rectangles with their respective names. · Process - activities and actions taken on the data, represented by circles or round-edged rectangles. · Data Storage - there are two variants of data storage: it can be represented either as a rectangle with both smaller sides absent or as an open-sided rectangle with only one side missing. · Data Flow - movement of data is shown by pointed arrows, from the base of the arrow (the source) toward the head of the arrow (the destination).

Top-Down Approach: In this approach, the problem is divided into smaller and more manageable sub-problems. Each sub-
problem is then analyzed in detail, and a solution is developed. The solutions to the sub-problems are then combined to form
the solution to the original problem. This approach is useful for complex problems that can be decomposed into smaller and
more manageable sub-problems.

Bottom-Up Approach: In this approach, the problem is broken down into smaller parts and the solutions to these smaller parts
are developed first. The solutions to the smaller parts are then combined to form the solution to the original problem. This
approach is useful for problems where a solution to the smaller parts can be developed independently, and then integrated to
form the solution to the original problem.

Hybrid Approach: In this approach, a combination of the top-down and bottom-up approaches is used. This approach is useful
for problems that cannot be solved using either of the two approaches alone. The problem is analyzed using a combination of
both approaches, depending on the nature of the problem. This approach is often used in complex software development
projects where a combination of top-down and bottom-up approaches is needed to solve the problem effectively.

Software consists of (1) instructions (computer programs) that, when executed, provide desired function and performance, (2) data structures that enable the programs to adequately manipulate information, and (3) documents that describe the operation and use of the programs. But these are only the concrete parts of software that may be seen; there is also an invisible part which is more important: software is the dynamic behavior of programs on real computers and auxiliary equipment. • "... a software product is a model of the real world, and the real world is constantly changing." Software is a digital form of knowledge. ("Software Engineering," 6th ed., Sommerville, Addison-Wesley, 2000) • Computer programs and associated documentation such as requirements, design models and user manuals. • Software products may be developed for a particular customer or may be developed for a general market.

Difference between a Program and Software:
Program | Software
1. Usually small in size | 1. Large
2. Author himself is the sole user | 2. Large number of users
3. Single developer | 3. Team of developers
4. Lacks proper user interface | 4. Well-designed interface
5. Lacks proper documentation | 5. Well documented, user manual prepared
6. Ad hoc development | 6. Systematic development

Software Crisis: Software fails because it crashes frequently, is expensive, is difficult to alter, debug and enhance, is often delivered late, and uses resources non-optimally; moreover, software professionals often lack engineering training.

Software Engineering: Software engineering is concerned with the theories, methods and tools for developing, managing and evolving software products. • "The systematic application of tools and techniques in the development of computer-based applications." (Sue Conger in The New Software Engineering) • "Software Engineering is about designing and developing high-quality software." (Shari Lawrence Pfleeger in Software Engineering -- The Production of Quality Software) • "A systematic approach to the analysis, design, implementation and maintenance of software." (The Free On-Line Dictionary of Computing) • "The technological and managerial discipline concerned with systematic production and maintenance of software products that are developed and modified on time and within cost constraints." (R. Fairley)

Software Programming vs Software Engineering:
Software Programming | Software Engineering
1. Single developer | 1. Teams of developers with multiple roles
2. "Toy" applications | 2. Complex systems
3. Short lifespan | 3. Indefinite lifespan
4. Single or few stakeholders (Architect = Developer = Manager = Tester = Customer = User) | 4. Numerous stakeholders (Architect ≠ Developer ≠ Manager ≠ Tester ≠ Customer ≠ User)
5. One-of-a-kind systems | 5. System families

Difference between software engineering and computer science • Computer science is concerned with theory and fundamentals; software engineering is concerned with the practicalities of developing and delivering useful software. • Computer science theories are still insufficient to act as a complete underpinning for software engineering (unlike, e.g., physics and electrical engineering). Difference between software engineering and system engineering • System engineering is concerned with all aspects of computer-based systems development, including hardware, software and process engineering. Software engineering is the part of this process concerned with developing the software infrastructure, control, applications and databases in the system. System engineers are involved in system specification, architectural design, integration and deployment. • Software engineering is based on computer science, information science and discrete mathematics, whereas traditional engineering is based on mathematics, science and empirical knowledge. • Traditional engineers construct real artifacts, whereas software engineers construct non-real (abstract) artifacts. • In traditional engineering the two main concerns for a product are cost of production and reliability measured by time to failure, whereas in software engineering the two main concerns are cost of development and reliability measured by the number of errors per thousand lines of source code.

Software Development Life Cycle (SDLC) Models: The software development life cycle is used to facilitate the development of a large software product in a systematic, well-defined, and cost-effective way. An information system goes through a series of phases from conception to implementation; this process is called the software development life cycle. Various reasons for using a life cycle model include: it helps to understand the entire process; it enforces a structured approach to development; it enables planning of resources in advance; it enables subsequent control of them; and it aids management in tracking the progress of the system.
Activities undertaken during feasibility study: the main aim of the feasibility study is to determine whether it would be financially and technically feasible to develop the product.
Activities undertaken during requirements analysis and specification: the aim of this phase is to understand the exact requirements of the customer and to document them properly. It consists of two distinct activities, requirements gathering and analysis, and requirements specification; the phase ends with the preparation of the Software Requirement Specification (SRS).
Activities undertaken during design: the goal of the design phase is to transform the requirements specified in the SRS document into a structure that is suitable for implementation in some programming language. The design specification document is the outcome of this phase.
Activities undertaken during coding and unit testing: the purpose of the coding and unit testing phase (sometimes called the implementation phase) is to translate the software design into source code. Each component of the design is implemented as a program module; the end product of this phase is a set of program modules that have been individually tested. Code listings are generated after this phase.
Activities undertaken during integration and system testing: integration of the different modules is undertaken once they have been coded and unit tested. During each integration step, the partially integrated system is tested and a set of previously planned modules is added to it. Finally, when all the modules have been successfully integrated and tested, system testing is carried out. The goal of system testing is to ensure that the developed system conforms to the requirements laid out in the SRS document. Test reports are generated after this phase.

Different software life cycle models: Many life cycle models have been proposed so far; each has some advantages as well as some disadvantages. A few important and commonly used life cycle models are: 1. Classical Waterfall Model 2. Iterative Waterfall Model 3. Prototyping Model 4. Evolutionary Model 5. Spiral Model 6. Rapid Application Development (RAD) Model

The classical waterfall model is intuitively the most obvious way to develop software. Though it is elegant and intuitively obvious, it is not a practical model in the sense that it cannot be used in actual software development projects; thus, it can be considered a theoretical way of developing software. But all other life cycle models are essentially derived from the classical waterfall model, so in order to appreciate other life cycle models it is necessary to learn the classical waterfall model first. The classical waterfall model divides the life cycle into the following phases: 1. Feasibility Study 2. Requirements Analysis and Specification 3. Design 4. Coding and Unit Testing 5. Integration and System Testing 6. Maintenance

Waterfall model vs Spiral model:
Waterfall model | Spiral model
Separate and distinct phases of specification and development. | Process is represented as a spiral rather than as a sequence of activities with backtracking; after every cycle a usable product is given to the customer, and each loop in the spiral represents a phase in the process.
Effective in situations where requirements are defined precisely and there is no confusion about the functionality of the final product. | No fixed phases such as specification or design; loops in the spiral are chosen depending on what is required.
Risks are never explicitly assessed and resolved throughout the process. | Risks are explicitly assessed and resolved throughout the process.

Iterative Waterfall Model • The waterfall model assumes in its design that no error will occur during the design phase. • The iterative waterfall model introduces feedback paths to the previous phases for each process phase. • It is still preferable to detect errors in the same phase in which they occur.
Advantages of Waterfall Model • It is a linear model. • It is a segmental model. • It is systematic and sequential. • It is a simple one. • It has proper documentation. Disadvantages of Waterfall Model • It is difficult to define all requirements at the beginning of a project. • The model is not suitable for accommodating change. • It does not scale up well to large projects. • Inflexible partitioning of the project into distinct stages makes it difficult to respond to changing customer requirements.


Data Dictionary: A data dictionary is a collection of descriptions of the data objects or items in a data model, for the benefit of programmers and others who need to refer to them. Often a data dictionary is a centralized metadata repository. Types of data dictionary: 1. Active Data Dictionary 2. Passive Data Dictionary. Active data dictionary: automatically updated by the database management system when any changes are made in the database; it is known as active because it is self-updating. Passive data dictionary: has to be manually updated to match the database; this needs careful handling, or else the database and the data dictionary go out of sync.

Relationship: Relationships are represented by a diamond-shaped box, with the name of the relationship written inside the diamond. All the entities (rectangles) participating in a relationship are connected to it by a line. Binary relationship and cardinality: a relationship in which two entities participate is called a binary relationship. Cardinality is the number of instances of an entity that can be associated with an instance of the related entity.

Software Requirement Specification (SRS): The product of the requirements stage of the software development process is the Software Requirements Specification (SRS), also called the requirements document. This report lays a foundation for software engineering activities and is constructed once the entire set of requirements has been elicited and analyzed. The SRS is a formal report which acts as a representation of the software and enables the customers to review whether it meets their requirements. It comprises user requirements for the system as well as detailed specifications of the system requirements.

Following are the features of a good SRS document: 1. Correctness: user review is used to ensure the accuracy of the requirements stated in the SRS. The SRS is said to be correct if it covers all the needs that are truly expected from the system. 2. Completeness: the SRS is complete if, and only if, it includes the following elements: (1) all essential requirements, whether relating to functionality, performance, design, constraints, attributes, or external interfaces; (2) definition of the responses of the software to all realizable classes of input data in all available categories of situations (note: it is essential to specify the responses to both valid and invalid values); (3) full labels and references to all figures, tables, and diagrams in the SRS, and definitions of all terms and units of measure. 3. Consistency: the SRS is consistent if, and only if, no subset of the individual requirements described in it conflict. There are three types of possible conflict in the SRS: (1) the specified characteristics of real-world objects may conflict; for example, (a) the format of an output report may be described in one requirement as tabular but in another as textual, or (b) one requirement may state that all lights shall be green while another states that all lights shall be blue. (2) There may be a logical or temporal conflict between two specified actions; for example, (a) one requirement may determine that the program will add two inputs while another determines that the program will multiply them, or (b) one requirement may state that "A" must always follow "B," while another requires that "A" and "B" occur simultaneously. (3) Two or more requirements may describe the same real-world object but use different terms for it; for example, a program's request for user input may be called a "prompt" in one requirement and a "cue" in another. The use of standard terminology and descriptions promotes consistency. 4. Unambiguousness: the SRS is unambiguous when every stated requirement has only one interpretation; each element must be uniquely interpreted. If a term is used with multiple meanings, the requirements document should define its intended meaning so that the SRS is clear and simple to understand. 5. Ranking for importance and stability: each requirement should be ranked to indicate how important it is to the customer and how stable (i.e., how unlikely to change) it is expected to be.

Software Engineering Institute Capability Maturity Model (SEI CMM): The Capability Maturity Model (CMM) is a procedure used to develop and refine an organization's software development process. The model defines a five-level evolutionary path of increasingly organized and consistently more mature processes. CMM was developed and is promoted by the Software Engineering Institute (SEI), a research and development center sponsored by the U.S. Department of Defense (DoD). The Capability Maturity Model is used as a benchmark to measure the maturity of an organization's software process.

Methods of SEI-CMM: There are two methods of SEI-CMM. Capability Evaluation: provides a way to assess the software process capability of an organization; the results indicate the likely contractor performance if the contractor is awarded the work, and can therefore be used to select a contractor. Software Process Assessment: used by an organization to improve its own process capability; this type of evaluation is for purely internal use. SEI CMM categorizes software development organizations into five maturity levels, designed so that it is easy for an organization to build up its quality system gradually, starting from scratch.

ISO 9000 Models- ISO 9000 is defined as a set of international standards on quality management and quality
assurance developed to help companies effectively document the quality system elements needed to maintain an
efficient quality system. They are not specific to any one industry and can be applied to organizations of any size.

Verification and Validation: Verification and validation begin by reviewing the requirements and cover the design and the analysis of the code, up to product testing. Verification demands checking that the program meets the specified requirements (this includes formal proof of program correctness), while validation requires examining whether the software product meets the client's expectations.
