Software Engineer Assignment 1 to 5

The document outlines key concepts in software engineering, including layered technology, characteristics of software, differences between generic and customized products, and the software development life cycle (SDLC). It describes the advantages and disadvantages of various software models, such as the waterfall and spiral models, and categorizes software requirements into functional, non-functional, and domain types. Additionally, it emphasizes the importance of planning, requirement analysis, and maintenance in software development.

Uploaded by tecnomindankit07

Unit - 1

1. Draw a neat and labelled diagram of software engineering layered technology.
ANS:-
Software engineering is a layered technology: to develop software we move from one layer to the next. All the layers are connected, and each layer builds on the fulfilment of the layer beneath it.
Layered technology is divided into four parts:

1. A quality focus:
It defines the continuous process-improvement principles of software engineering. It provides integrity, meaning security for the software, so that data can be accessed only by authorized persons and no outsider can reach it. It also focuses on maintainability and usability.
2. Process:
It is the foundation (base) layer of software engineering. It is the key that binds all the layers together, enabling the development of software on time. The process defines a framework that must be established for the effective delivery of software engineering technology, and it covers all the activities, actions, and tasks required for software development.
3. Method: During software development, the method layer answers the "how to do" questions. It provides the technical know-how for all the tasks involved, including communication, requirements analysis, design modelling, program construction, testing, and support.

4. Tools: Software engineering tools provide automated or semi-automated support for the process and the methods. The tools are integrated, meaning that information created by one tool can be used by another.

Components of a Layered Architecture

 Presentation Layer – responsible for user interactions with the software system
 Application/Business Layer – handles aspects related to accomplishing functional requirements
 Domain Layer – responsible for algorithms and programming components
 Infrastructure/Persistence/Database Layer – responsible for handling data and databases

Advantages
 Improves productivity
 Generates small parts of code automatically
 Improves software quality
 Can be integrated with other tools, e.g., with a code editor for coding
Disadvantages
 Scalability is difficult because the structure of the framework does not allow for growth.
 Layered systems can be difficult to maintain: a change in a single layer can affect the entire system because it operates as a single unit.
 There is interdependence between layers, since a layer depends on the layer above it to receive data.
 Parallel processing is not possible.

2. List the characteristics of software.


ANS:-
Software is defined as a collection of computer programs, procedures, rules, and data. Software characteristics are classified into six major components.

The characteristics of software include:
 It is intangible, meaning it cannot be seen or touched.
 It is non-perishable, meaning it does not degrade over time.
 It is easy to replicate, meaning it can be copied and distributed easily.
 It can be complex, meaning it can have many interrelated parts and features.
 It can be difficult to understand and modify, especially for large and complex systems.
 It can be affected by changing requirements, meaning it may need to be updated or modified as the needs of users change.
 It can be impacted by bugs and other issues, meaning it may need to be tested and debugged to ensure it works as intended.

Components of Software Characteristics:


There are basically 6 components of Software Characteristics that
are discussed here. We will discuss each one of them in detail.

Functionality:
- It refers to the degree of performance of the software
against its intended purpose.
- Functionality refers to the set of features and capabilities
that a software program or system provides to its users.
- Examples of functionality in software include:
 Data storage and retrieval
 Data processing and manipulation
 User interface and navigation

 Communication and networking
 Security and access control
 Reporting and visualization
 Automation and scripting


Reliability:
- A set of attributes that bears on the capability of software to
maintain its level of performance under the given condition
for a stated period of time.
- Examples of factors that can affect the reliability of software
include:
1. Bugs and errors in the code
2. Lack of testing and validation
3. Poorly designed algorithms and data structures
4. Inadequate error handling and recovery
5. Incompatibilities with other software or hardware
- To improve the reliability of software, various techniques and methodologies can be used, such as testing and validation, formal verification, and fault tolerance.


Efficiency:
- It refers to the ability of the software to use system resources in the most effective and efficient manner. The software should make effective use of storage space and execute commands as per the desired timing requirements.
- Examples of factors that can affect the efficiency of the
software include:
1. Poorly designed algorithms and data structures
2. Inefficient use of memory and processing power
3. High network latency or bandwidth usage
4. Unnecessary processing or computation

Usability:
It refers to the extent to which the software can be used with ease, i.e., the amount of effort or time required to learn how to use the software.

Maintainability:
It refers to the ease with which modifications can be made in a software system to extend its functionality, improve its performance, or correct errors.

Portability:
A set of attributes that bears on the ability of software to be transferred from one environment to another with minimum changes.
3. Difference between Generic Products and
Customized Products
ANS:
S.No. | Generic software development | Custom software development

1. | Done for developing general-purpose software. | Done to develop a software product as per the needs of a particular customer.

2. | The software developers have to anticipate the end-users' specifications themselves. | The end-user requirements can be gathered by communicating with the users.

3. | From a design and marketing perspective, this type of development is very difficult. | This development does not require marketing, because it is developed for a specific group of users.

4. | A large number of users may be using this kind of software. | This type of software is used by a limited number of users.

5. | Quality of the product is not the first preference in generic software. | Quality is the main criterion in custom software; the best possible quality is delivered for the customer or company.

6. | The development team controls the process of generic software development. | The customer determines the process of software development for this type of product.

7. | Generic software is generally cheaper than custom software, though additional costs may occur during installation or implementation. | Custom software is more expensive than generic software, but it is built according to the specifications provided by the client.

8. | Example: word-processing software. | Example: an inventory control and management system.

4. Write a note on SDLC


ANS:
A software life cycle model (also termed process model) is a pictorial
and diagrammatic representation of the software life cycle. A life cycle
model represents all the methods required to make a software product
transit through its life cycle stages. It also captures the structure in
which these methods are to be undertaken.

SDLC Cycle
The SDLC cycle represents the process of developing software. The stages of SDLC are as follows:


Stage 1: Planning and Requirement Analysis
Requirement analysis is the most important and fundamental stage in SDLC. The senior members of the team perform it with inputs from all the stakeholders and domain experts (SMEs) in the industry. Planning for the quality-assurance requirements and identification of the risks associated with the project is also done at this stage.

For example, a client wants an application that handles money transactions. Here the requirements have to be precise: what kind of operations will be supported, how they will be performed, in which currency they will be done, and so on.

Stage 2: Defining Requirements
Once the requirement analysis is done, the next stage is to clearly define and document the software requirements and get them approved by the project stakeholders. This is accomplished through the SRS (Software Requirement Specification) document, which contains all the product requirements to be designed and developed during the project life cycle.

Stage 3: Designing the Software
This phase consolidates all the knowledge of requirements and analysis into the design of the software project. It builds on the outputs of the previous two stages, such as customer inputs and the gathered requirements.

Stage 4: Developing the Project
In this phase of SDLC, the actual development begins and the product is built. The design is implemented in code. Developers have to follow the coding guidelines defined by their organization, and programming tools such as compilers, interpreters, and debuggers are used to develop and implement the code.

Stage 5: Testing
After the code is generated, it is tested against the requirements to make sure that the product solves the needs addressed and gathered during the requirements stage. During this stage, unit testing, integration testing, system testing, and acceptance testing are performed.

Stage 6: Deployment
Once the software is certified and no bugs or errors remain, it is deployed. Based on the assessment, the software may be released as-is or with suggested enhancements for the target market segment. After the software is deployed, its maintenance begins.

Stage 7: Maintenance
Once the client starts using the developed system, real issues come up and need to be resolved from time to time. This procedure, in which care is taken of the developed product, is known as maintenance.

Advantages
o Efficient with regard to costs
o Efficacious in terms of time
o Enhances teamwork and coordination, defines suitable roles for
employees and increases workplace transparency.
o Lower risk during project implementation

Disadvantages
o The project may take longer and cost more if the planning is not
done properly.
o If there are many defects, correcting them can take a long time and cause deadlines to be missed.
5. What are the different types of
requirements?
ANS:-
1. Requirements are specifications that define
the functions, features, and characteristics of a
software system.
2. They serve as the foundation for designing
and developing a system that meets the needs of
its users.
3. Requirements can be categorized into
different types based on various criteria.

Software requirements are mainly of three types:


 Functional requirements
 Non-functional requirements
 Domain requirements

1. Functional requirements:-
These specify the functions and features that the
software system must provide. They describe
what the system is supposed to do in terms of
input, processing, and output.

2. Non-functional requirements:-
These define the quality attributes and constraints
of the system, such as performance, reliability,
scalability, usability, and security. Non-functional
requirements are often as critical as functional
requirements for the success of the system.
3. Domain requirements:-
These are requirements derived from the application domain of the system, such as domain-specific regulations, standards, and computations that the software must respect.

Other common classifications of software requirements are:
4. User requirements:
Also known as stakeholder requirements, these
capture the needs and expectations of the end-
users or stakeholders who will interact with the
system. User requirements are typically expressed
in natural language and may include scenarios, use
cases, and user stories.
5. System requirements:
These describe the environment in which the
software will operate, including hardware, software,
network, and other external interfaces. System
requirements help ensure that the software can
function properly within its intended context.
6. Business requirements: These requirements
describe the business goals and
objectives that the software system is expected
to achieve. Business requirements are usually
expressed in terms of revenue, market share,
customer satisfaction, or other business metrics.
7. Interface requirements: These requirements
specify the interactions between the software
system and external systems or components,
such as databases, web services, or other
software applications.
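A quick way to see the value of these categories is to group requirement entries by type; the requirement texts and category labels below are invented examples, not drawn from any real SRS:

```python
from collections import defaultdict

# Hypothetical requirement entries: (category, text). The categories follow
# the classification above; the texts are invented for illustration.
requirements = [
    ("functional",     "The system shall let users transfer money."),
    ("non-functional", "Transfers shall complete within 2 seconds."),
    ("domain",         "Interest must be computed per banking regulations."),
    ("interface",      "The system shall expose a REST API for partners."),
]

# Group the entries so each category can be managed and tracked separately.
by_type = defaultdict(list)
for category, text in requirements:
    by_type[category].append(text)

for category in sorted(by_type):
    print(f"{category}: {len(by_type[category])} requirement(s)")
```

Grouping like this is what makes prioritization and traceability practical on larger requirement sets.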

Advantages:-
1. Better organization: Classifying software
requirements helps organize them into groups that
are easier to manage, prioritize, and track
throughout the development process.
2. Improved communication: Clear classification of
requirements makes it easier to communicate them
to stakeholders, developers, and other team
members. It also ensures that everyone is on the
same page about what is required.
3. Increased quality: By classifying requirements,
potential conflicts or gaps can be identified early in
the development process. This reduces the risk of
errors, omissions, or misunderstandings, leading to
higher quality software.

Disadvantages:-

1. Complexity: Classifying software requirements can


be complex, especially if there are many
stakeholders with different needs or requirements. It
can also be time-consuming to identify and classify all
the requirements.
2. Rigid structure: A rigid classification structure may
limit the ability to accommodate changes or evolving
needs during the development process. It can also lead
to a siloed approach that prevents the integration of
new ideas or insights.
3. Misclassification: Misclassifying requirements can
lead to errors or misunderstandings that can be costly
to correct later in the development process.

6. Explain the various phases of the waterfall


model.
ANS:-

1. The waterfall model is a model used during the


Software Development Life Cycle (SDLC) to create
software systems.
2. It consists of various stages that follow one after
the other. After one stage is completed, it is not
possible to go back to the previous stage.
3. The phases of the waterfall model are requirement gathering and analysis, system design, implementation, testing, deployment, and maintenance.
Here are the various phases of the Waterfall Model:
1. Requirement Gathering and analysis − All possible
requirements of the system to be developed are
captured in this phase and documented in a
requirement specification document. This involves
understanding the needs of the end-users,
business objectives, and system constraints.
2. System Design- The gathered requirements are
used to create a detailed system design. This
phase involves defining the architecture,
components, modules, data storage, and
interfaces. The output is a blueprint that serves as
a guide for the next phases.
3. Implementation- In this phase, the actual code for
the software system is written based on the
design specifications. Developers use the
programming languages and tools defined during
the design phase to create the software product.
4. Testing- The testing phase involves verifying that
the developed software meets the specified
requirements. Different types of testing are
performed, including unit testing, integration
testing, system testing, and user acceptance
testing. Defects are identified, reported, and
addressed during this phase.
5. Deployment (Installation) of the system- Once the functional and non-functional testing is done, the product is deployed in the customer environment or released into the market.
6. Maintenance- The maintenance phase involves
ongoing support and updates to the software. It
includes fixing bugs, addressing issues, and
incorporating changes based on user feedback.
Maintenance may continue for an extended period,
depending on the lifespan and requirements of the
software.

Advantages:-
1. Simple and easy to understand and use
2. Phases are processed and completed one at a time.
3. Works well for smaller projects where
requirements are very well understood.
4. Clearly defined stages.
Disadvantages:-
1. No working software is produced until late during
the life cycle.
2. High amounts of risk and uncertainty.
3. It is difficult to measure progress within stages.
4. Not a good model for complex and object-oriented
projects.
7. Write a short note of spiral model.
ANS:
o The Spiral Model is a software development
methodology that combines elements of both the
iterative and waterfall models.
o It involves a series of iterations where the project
progresses through planning, risk analysis,
development, and evaluation.
o It's like a spiral because each iteration builds upon
the previous one, allowing for continuous
improvement and refinement.
o It's a flexible approach that helps manage risks
and adapt to changing requirements.
1. Planning:
The first phase of the Spiral Model is the planning
phase, where the scope of the project is determined
and a plan is created for the next iteration of the spiral.
2. Risk Analysis:
In the risk analysis phase, the risks associated with the
project are identified and evaluated.
3. Engineering:
In the engineering phase, the software is developed
based on the requirements gathered in the previous
iteration.
4. Evaluation:
In the evaluation phase, the software is evaluated to
determine if it meets the customer’s requirements and
if it is of high quality.
5. Review and Planning:
At the end of each spiral, a review is conducted to
assess the progress and identify areas for improvement.
Based on the review, plans for the next iteration are
refined or modified.

When to use Spiral Model?


o When the project is large
o When requirements are unclear and complex
o When changes may be required at any time
o For large and high-budget projects
Advantages
o High amount of risk analysis
o Useful for large and mission-critical projects.
Disadvantages
o It can be a costly model to use.
o Risk analysis requires highly specific expertise.
o It does not work well for smaller projects.

8. Write a note on Agile methodology.


ANS:-
o The meaning of Agile is swift or versatile. "Agile process model" refers to a software development approach based on iterative development.
o Agile methods break tasks into smaller iterations, or parts, and do not directly involve long-term planning.
o Unlike traditional waterfall models, Agile emphasizes delivering small, functional increments of software quickly and frequently, allowing for continuous improvement based on feedback throughout the development process.

When to use the Agile Model?


1. When frequent changes are required.
2. When a highly qualified and experienced team is
available.
3. When the customer is ready to meet with the software team frequently.
4. When project size is small.

Principles of Agile:
 The highest priority is to satisfy the customer
through early and continuous delivery of valuable
software.
 It welcomes changing requirements, even late in
development.
 Deliver working software frequently, from a couple
of weeks to a couple of months, with a preference
for the shortest timescale.
 Build projects around motivated individuals. Give
them the environment and the support they need
and trust them to get the job done.
 Working software is the primary measure of
progress.
 Simplicity, the art of maximizing the amount of work not done, is essential.

Software Process Model:


 Scrum model
 XP –(Extreme Programming)
 DSDM
 Crystal
Advantages:
 Frequent Delivery
 Face-to-Face Communication with clients.
 Efficient design that fulfils the business requirements.
 Anytime changes are acceptable.
 It reduces total development time.
Disadvantages:
 Not suitable for large systems.
 Costly for stable development environments.
9. Draw and Explain RAD Model.
ANS:-
 RAD stands for Rapid Application Development
Model.
 The Rapid Application Development Model was first
proposed by IBM in the 1980s.
 The RAD model is a type of incremental process model with an extremely short development cycle.
 In the RAD model, multiple teams work on developing the software system in parallel.
 It is a high-speed adaptation of the waterfall model.
 It is an example of an incremental software process model.
When to use RAD Model?
o When the system needs to be modularized and delivered in a short span of time (2-3 months).
o When the requirements are well known.
o When the technical risk is limited.
RAD Model:-
The various phases of RAD are as follows:
1.Business Modelling: The information flow among
business functions is defined by answering questions
like what data drives the business process, what data is
generated, who generates it, where does the
information go, who process it and so on.
2. Data Modelling: The data collected from business
modeling is refined into a set of data objects (entities)
that are needed to support the business. The attributes
(character of each entity) are identified, and the relation
between these data objects (entities) is defined.
3. Process Modelling: The information objects defined in the data modelling phase are transformed to achieve the data flow necessary to implement a business function. Processing descriptions are created for adding, modifying, deleting, or retrieving a data object.
4. Application Generation: Automated tools are used to facilitate construction of the software, often using fourth-generation (4GL) techniques.
5. Testing & Turnover: Many of the programming components have already been tested, since RAD emphasizes reuse. This reduces the overall testing time. However, the new components must be tested, and all interfaces must be fully exercised.

Advantages:
 The use of reusable components helps to
reduce the cycle time of the project.
 Feedback from the customer is available at
the initial stages.
 This model is flexible for change.
 It reduced development time.
 It increases the reusability of features.
Disadvantages:
 It requires highly skilled designers.
 Not every application is compatible with RAD.
 It cannot be used for smaller projects.
 It requires user involvement.
10. Explain Prototype Model.
ANS:-
 In this model, we collect the requirements from the customer, prepare a prototype (sample), and get it reviewed and approved by the customer.
 The prototype is just a sample or dummy of the required software product.
 Only when all the mentioned modules are present do the developer and tester perform prototype testing.
When we use the Prototype model
 When the customer is new to the software industry or does not know how to give requirements to the company.
 When the developers are new to the domain.

Prototype model process


o Requirement analysis
o feasibility study
o Create a prototype
o Prototype testing
o Customer review and approval
o Design
o Coding
o Testing
Advantages
o We can easily detect missing functionality.
o Customer satisfaction is higher.
o We can reuse the prototype in the design phase and for similar applications.
o Issues can be identified in the early phases.
o In this model, customer rejection is lower as compared to other models.
Disadvantages
o There is no requirement review, only a prototype review.
o There are no parallel deliverables, which means that two teams cannot work together.
o Problem analysis may be insufficient or partial.
o We may also lose the customer's attention if they are not happy with the final product or the original prototype.
Unit 2
Q1. Write a short note on system engineering.
Ans:
Systems engineering is an interdisciplinary field of engineering
and engineering management that focuses on how to design,
integrate, and manage complex systems over their life cycles.
At its core, systems engineering utilizes systems thinking
principles to organize this body of knowledge.
Systems engineering evaluates individual elements and
determines how they can work together as a system that
contributes to and accomplishes specific goals.
A system engineer working in construction might develop a
better technical system for allocating resources to project sites.
Another example is an engineer working in manufacturing
who's designing a system for the assembly of automobile
engines . Engineered systems often include:
 Software and Hardware components
 Technical equipment
 Facilities
 Procedures
 Policies
 Natural elements
 Sensors
 Instrumentation
The role of a systems engineer:
A systems engineer is tasked with looking at the entire integrated system and evaluating it against its desired outcomes. In that role, the systems engineer must know a little about everything and be able to see the whole picture, weighing factors such as:

 Design compatibility

 Definition of requirements

 Management of projects

 Cost analysis

 Scheduling

 Possible maintenance needs

 Ease of operations

 Future systems upgrades

Advantages of systems engineering:
 Reduced errors in production or delivery
 Reduced introduction costs
 Better traceability of decision making
 More able to manage and afford change
 Management of risk

Disadvantages of systems engineering:
 Project timelines that are difficult to meet
 Testing and evaluation are difficult
 Health problems due to longer working periods
Q2. Short note on availability and reliability.
Ans:-
Availability:
Availability is a measure of the percentage of time that an IT service or
component is in an operable state.
The mathematical formula for availability is:
Percentage of availability = (total elapsed time - sum of downtime) / total elapsed time
Availability measures how well a machine or system functions at any given moment. High availability minimizes downtime and improves on-time performance for repairable and continuously operating equipment.

For example, if a company schedules a machine for 10 hours a day and it is down for maintenance 2 hours a day, the equipment availability would be (10 - 2) / 10 = 80%. This is important for businesses to track, as it impacts production and can affect the bottom line.
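The availability formula translates directly into code; the 80% figure corresponds to a total elapsed (scheduled) time of 10 hours with 2 hours of downtime:

```python
def availability(total_elapsed_hours, downtime_hours):
    """Percentage of time the equipment is in an operable state:
    (total elapsed time - sum of downtime) / total elapsed time."""
    uptime = total_elapsed_hours - downtime_hours
    return 100.0 * uptime / total_elapsed_hours

# 10 scheduled hours a day with 2 hours of maintenance downtime:
print(availability(10, 2))  # 80.0
```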

Reliability:

Reliability is a measure of the probability that the system will meet defined
performance standards in performing its intended function during a specified
interval.

An example of reliability engineering is system testing, usually an internal method in which engineers run test instances of the software to determine whether the system remains consistent through repeated trials.
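Reliability over a stated interval is often quantified with the constant-failure-rate (exponential) model R(t) = exp(-t / MTBF); this model is an illustrative assumption added here, not something the definition above prescribes:

```python
import math

def reliability(mtbf_hours, mission_hours):
    """Probability of operating without failure for the whole mission,
    assuming a constant failure rate: R(t) = exp(-t / MTBF)."""
    return math.exp(-mission_hours / mtbf_hours)

# A system with a mean time between failures of 1000 hours,
# on a 100-hour mission:
print(round(reliability(1000, 100), 3))  # 0.905
```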
Advantage
The first benefit of reliability that comes to mind is its impact on the bottom line: it increases the productivity of machines and decreases downtime. That is a significant benefit, but it is not the only one.
Disadvantage
The main disadvantages of reliability measurement are the practice effect (practice may produce varying amounts of improvement in the retest scores of different individuals) and the interval effect (if the interval between retests is fairly short, test takers may recall many of their former responses).

Q3 .Explain the simple critical system with suitable example.


Ans:
A critical system is a system which must be highly reliable and retain
this reliability as it evolves without incurring prohibitive costs.[1]
Critical systems in software engineering refer to software applications or
systems that are essential for the proper functioning of an organization or
for ensuring public safety.

These systems are characterized by their high level of reliability, availability,


and safety. Examples of critical systems include those used in aviation,
healthcare, transportation, and nuclear power plants.

There are four types of critical systems:
1. safety critical,
2. mission critical,
3. business critical, and
4. security critical.
1. Safety critical
Safety-critical systems deal with scenarios that may lead to loss of life, serious personal injury, or damage to the natural environment. Examples of safety-critical systems are the control system for a chemical manufacturing plant, aircraft, the controller of an unmanned metro train system, the controller of a nuclear plant, etc.

2. Mission critical
Mission-critical systems are made to avoid failure to complete the overall system or project objectives, or one of the goals for which the system was designed. Examples of mission-critical systems are the navigational system for a spacecraft, software controlling the baggage handling system of an airport, etc.

3. Business critical
Business critical systems are programmed to avoid significant tangible or
intangible economic costs; e.g., loss of business or damage to reputation.
This is often due to the interruption of service caused by the system
being unusable. Examples of a business-critical systems are the customer
accounting system in a bank, stock-trading system, ERP system of a
company, Internet search engine, etc.

4. Security critical
Security critical systems deal with the loss of sensitive data through theft
or accidental loss.
Example
Automatic braking, cruise control, lane control, computer vision, obstacle
recognition, electronic engine control modules, etc. Every one of these is a
life-critical system, where a failure can be fatal.
 Advantages of Critical Path Method (CPM):
It has the following advantages:
 It figures out the activities which can run parallel to each other.
 It helps the project manager in identifying the most critical
elements of the project.
 It gives a practical and disciplined base which helps in determining
how to reach the objectives.
 CPM is effective in new project management.
 CPM can strengthen a team's perception if it is applied properly.

 Disadvantages of Critical Path Method (CPM):


It has the following disadvantages:
 The scheduling of personnel is not handled by the CPM.
 In CPM, it is difficult to estimate the completion time of an activity.
 The critical path is not always clear in CPM.
 For bigger projects, CPM networks can be complicated too.
 It also does not handle the scheduling of the resource allocation.
 In CPM, the critical path needs to be calculated precisely.
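The core CPM computation can be sketched in a few lines: a forward pass computes each activity's earliest finish time, and the project duration is the longest chain. The activity names, durations, and dependencies below are hypothetical, invented for illustration.

```python
from functools import lru_cache

# Hypothetical activity network: name -> (duration, list of predecessors)
activities = {
    "A": (3, []),
    "B": (2, ["A"]),
    "C": (4, ["A"]),
    "D": (1, ["B", "C"]),
}

@lru_cache(maxsize=None)
def earliest_finish(act):
    # Forward pass: an activity finishes after its own duration plus the
    # latest finish among all of its predecessors.
    duration, preds = activities[act]
    return duration + max((earliest_finish(p) for p in preds), default=0)

# Project duration = the latest earliest-finish time in the network.
project_duration = max(earliest_finish(a) for a in activities)
print(project_duration)  # 8: the critical path here is A -> C -> D
```

An activity is on the critical path exactly when it has zero slack; in this sketch A, C, and D form that chain, so delaying any of them delays the whole project.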
Q4. WRITE A SHORT NOTE ON REQUIREMENT ENGINEERING
PROCESSES

Ans:
Requirements engineering (RE) refers to the process of defining,
documenting, and maintaining requirements in the engineering design
process. Requirement engineering provides the appropriate mechanism
to understand what the customer desires, analyzing the need, and
assessing feasibility, negotiating a reasonable solution, specifying the
solution clearly, validating the specifications and managing the
requirements as they are transformed into a working system. Thus,
requirement engineering is the disciplined application of proven
principles, methods, tools, and notation to describe a proposed system's
intended behavior and its associated constraints.
Requirement Engineering Process
It is a four-step process, which includes -
1. Feasibility Study
2. Requirement Elicitation and Analysis
3. Software Requirement Specification
4. Software Requirement Validation

1. Feasibility Study:
The objective behind the feasibility study is to create the reasons for
developing the software that is acceptable to users, flexible to change
and conformable to established standards.
Types of Feasibility:
1. Technical Feasibility - Technical feasibility evaluates the current
technologies, which are needed to accomplish customer
requirements within the time and budget.
2. Operational Feasibility - Operational feasibility assesses the range
in which the required software performs a series of levels to solve
business problems and customer requirements.
3. Economic Feasibility - Economic feasibility decides whether the
necessary software can generate financial profits for an
organization.

2. Requirement Elicitation and Analysis:


This is also known as the gathering of requirements. Here, requirements
are identified with the help of customers and existing systems processes,
if available.
Analysis of requirements starts with requirement elicitation. The
requirements are analyzed to identify inconsistencies, defects, omissions,
etc. We describe requirements in terms of relationships and also resolve
conflicts, if any.
Problems of Elicitation and Analysis
o Getting all, and only, the right people involved.
o Stakeholders often don't know what they want
o Stakeholders express requirements in their terms.
o Stakeholders may have conflicting requirements.
o Requirement change during the analysis process.
o Organizational and political factors may influence system
requirements.

3. Software Requirement Specification:


A software requirement specification is a document created by a software
analyst after the requirements have been collected from various sources -
the requirements received from the customer are written in ordinary
language. It is the job of the analyst to write the requirements in
technical language so that they can be understood by, and be useful to,
the development team.
The models used at this stage include ER diagrams, data flow diagrams
(DFDs), function decomposition diagrams (FDDs), data dictionaries, etc.
o Data Flow Diagrams: Data Flow Diagrams (DFDs) are used widely
for modeling the requirements. DFD shows the flow of data
through a system. The system may be a company, an organization,
a set of procedures, a computer hardware system, a software
system, or any combination of the preceding. The DFD is also
known as a data flow graph or bubble chart.
o Data Dictionaries: Data Dictionaries are simply repositories to store
information about all data items defined in DFDs. At the
requirements stage, the data dictionary should at least define
customer data items, to ensure that the customer and developers
use the same definition and terminologies.
o Entity-Relationship Diagrams: Another tool for requirement
specification is the entity-relationship diagram, often called an "E-R
diagram." It is a detailed logical representation of the data for the
organization and uses three main constructs i.e. data entities,
relationships, and their associated attributes.
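As a concrete illustration of the data dictionary idea, the repository can be as simple as a keyed table of item definitions that both customer and developers consult. The entries and field names below are hypothetical.

```python
# Hypothetical data dictionary: one shared definition per DFD data item,
# so the customer and the developers use the same terminology.
data_dictionary = {
    "customer_id": {"type": "string", "format": "C-#####"},
    "order_total": {"type": "decimal", "unit": "USD"},
}

def define(item):
    """Render a data item's definition for a requirements document."""
    entry = data_dictionary[item]
    detail = entry.get("unit", entry.get("format"))
    return f"{item}: {entry['type']} ({detail})"

print(define("order_total"))  # order_total: decimal (USD)
```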
4. Software Requirement Validation:
After the requirement specification has been developed, the requirements
discussed in this document are validated. The user might demand an illegal
or impossible solution, or experts may misinterpret the needs. Requirements
can be checked against the following conditions -
o If they can be practically implemented
o If they are correct and as per the functionality and specification of
the software
o If there are any ambiguities
o If they are complete
o If they can be demonstrated
Requirements Validation Techniques
o Requirements reviews/inspections: systematic manual analysis of
the requirements.
o Prototyping: Using an executable model of the system to check
requirements.
o Test-case generation: Developing tests for requirements to check
testability.
o Automated consistency analysis: checking for the consistency of
structured requirements descriptions.
Q5. Explain the principles of requirement management

Ans:-

1. The requirement management process is the process of managing


changing requirements during the requirements engineering process and
system development where the new requirements emerge as a system
is being developed and after it has gone into use.

2. During this process, one must keep track of individual requirements


and maintain links between dependent requirements so that one can
assess the impact of requirements changes along with establishing a
formal process for making change proposals and linking these to system
requirements.

3. It belongs to one of the phases of the Requirement Engineering


Process.
Now during this phase, there needs to be a certain level of requirement
management details which will help to make Requirement Management
decisions.

4.To accumulate the details for taking that decision one can follow the
following processes:
 Requirements Identification: In this, the requirement must be
uniquely identified so that it can be cross-referenced with
other requirements. Here, one can learn what is important and
required and what is not and it also helps to establish a
foundation for product vision, scope, cost, and schedule.
 Requirement change management process: This is the set of
activities that assess the impact and cost of changes.
 Traceability policies: The main purpose of this policy is to keep
a record of the defined relationships between each
requirement and the system designs which will help to
minimize the risks.
 Tool support: Tools like MS Excel, spreadsheets, or a simple
database system can be used.

Now, after the details have been gathered for requirement management,
it's time to see whether the change needs to be implemented or not. For
this, we use the Requirement Change Management process. In this, the
three basic steps that we follow are:
 Problem analysis and change specification
 Change analysis and costing
 Change implementation
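Consistent with the note above that MS Excel or a simple database is enough tool support, the traceability links can be kept in something as small as a dictionary; change analysis then amounts to following the links. The requirement IDs and component names below are hypothetical.

```python
# Hypothetical traceability links: requirement -> design components it maps to.
trace = {
    "R1": ["LoginModule", "SessionManager"],
    "R2": ["LoginModule"],
    "R3": ["AuditLog"],
}

def impacted_components(changed_requirements):
    """Change analysis step: which design components are affected
    if these requirements change?"""
    return sorted({c for r in changed_requirements for c in trace.get(r, [])})

print(impacted_components(["R1", "R2"]))  # ['LoginModule', 'SessionManager']
```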

Advantages of the Requirement Management Process:


1. Recognizing the need for change in the requirements.
2. Improved team communication.
3. It helps to minimize errors at the early stage of the
development cycle.

Q6. Explain the different types of system model


Ans:-
System Model: A system model represents a real-world system or process
using a combination of physical components, entities, and their
interactions. It can include various elements like hardware, software,
people, and their relationships within a specific context. System models
are often used in engineering, architecture, and other fields to
understand and design complex systems, such as computer systems,
transportation networks, or ecosystems. They may involve graphical
representations, block diagrams, flowcharts, or even physical prototypes.
The main types of systems modeled are:
 Flow control, arbitration, credit-based and behavior modeling
 Performance Modeling with stochastic components
 Architecture Modeling with cycle-accurate components
 Signal Algorithmic Modeling
 Control and Mixed Signal Modeling
 Software design and verification
1. Flow Control and Behavior Modeling:-

 Queue management, flow control, arbitration and scheduling are


design trade-offs that are based on the following variables:

 1. Number of input streams


2. Data rates
3. Queue depths
4. Scheduling or scanning or polling logic
5. Credit policy
6. External flags

 This is a stochastic model running Monte Carlo simulation to explore
the quality of service and determine the required throughput. This
model requires knowledge of buffer state and usage at multiple
locations, before making a decision on data transfer. The ingress and
egress can have a large number of channels and virtual connections.
Components are modeled using Traffic, ExpressionList, Queues (FIFO)
and Servers (FIFO + Processing Delay). The logic is constructed using
the Script or Finite State Machine. Reports generated will be latency,
buffer occupancy, and throughput.
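A minimal version of such a stochastic model can be sketched as a single FIFO queue with one server, driven by random arrivals and service times; the simulation reports the mean queueing latency. The arrival and service rates below are hypothetical.

```python
import random

# Monte Carlo sketch of one FIFO queue + server; the rates are hypothetical.
random.seed(42)
arrival_rate = 0.8   # mean arrivals per time unit
service_rate = 1.0   # mean service completions per time unit

arrival_time = 0.0
server_free_at = 0.0
waits = []

for _ in range(10_000):
    arrival_time += random.expovariate(arrival_rate)   # next job arrives
    start = max(arrival_time, server_free_at)          # queue if server busy
    server_free_at = start + random.expovariate(service_rate)
    waits.append(start - arrival_time)                 # time spent queueing

mean_wait = sum(waits) / len(waits)
print(f"mean queueing latency = {mean_wait:.2f} time units")
```

For these rates, M/M/1 queueing theory predicts a mean queueing delay of rho/(mu - lambda) = 0.8/0.2 = 4 time units, so the simulated value should land near 4; the same loop structure extends to multiple queues, credit policies, and occupancy reports.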

2. Performance Modeling

 Mixed signal and control system modeling requires the knowledge


of two different time domains- one where the time changes in a
continuous manner (Continuous) and the other where the time
moves in discrete but random distances (Discrete Event). Good
examples of control system design are MEMS accelerometers and
evaluating the impact on the engine control in a noisy car tracking
situation. For mixed signal, the Sigma-Delta A/D converter in Figure
15 is a good design target. Here the evaluation is to look at the
impact of frequency and signal changes over time. Another aspect
is the loss of data when moving from one time domain to another.

Architecture Exploration is a detailed and accurate exploration of a system


platform. The system platform can be a SoC, software or a network of
systems containing hardware and software.

The focus is on sizing the individual components, distribution of tasks on


to the distributed system connected by networks, partition into hardware
and software. The evaluation is for both Power and Performance.

 Disadvantages

The disadvantage of a system model can vary depending on the specific


type of system being modeled. However, some common disadvantages
of system models include the potential for oversimplification, which may
lead to inaccurate predictions or conclusions. Additionally, system models
may not account for all variables or interactions within a complex system,
leading to limitations in their predictive or explanatory power. Finally, the
construction and maintenance of a system model can be time-consuming
and resource-intensive.

 Advantages

The advantage of using a model is that it allows prediction and
simplification of complex systems. On the other hand, the disadvantage
of a model is that it can be misleading and open to misinterpretation.

Models come in a variety of forms, sizes, and styles.

Q7. Draw the context diagram of a bank system and explain
Ans:
1.A context diagram is drawn in order to define and clarify the boundaries
of the software system.

2.The bank system context diagram, sometimes called a level 0 data-flow


diagram, identifies the flows of information between the system and
external entities.

3.Bank System level 0 data-flow has elaborated the high-level process of


banking.

4.The bank system context diagram is a basic overview of the whole


banking system and is designed to be an at-a-glance view of bank
managers, customers, sales agents, and relations with other banks.

5.The context diagram shows how bank managers send open and close
account requests to the bank systems.

6.Also, the level 0 data-flow diagram shows how third parties initiate the
money transfer to the bank system.

Q8.Explain object model


Ans:
1.An object model is a logical interface, software or system that is modeled
through the use of object-oriented techniques.

2.It enables the creation of an architectural software or system model


prior to development or programming.

3.An object model is part of the object-oriented programming (OOP)


lifecycle.

4.An object model helps describe or define a software/system in terms of


objects and classes.

5. It defines the interfaces or interactions between different models,


inheritance, encapsulation and other object-oriented interfaces and
features.

6. The object model is a key aspect of object-oriented programming (OOP),


where software is organized around objects that encapsulate data and
behavior. Object models are commonly used in the Unified Modeling
Language (UML) to illustrate the design of a software system.

7. Object model examples include:

a. Document Object Model (DOM): A set of objects that provides a modeled
representation of dynamic HTML and XHTML-based Web pages

b. Component Object Model (COM): A proprietary Microsoft software
architecture used to create software components

8. The advantages of the object-oriented model are as follows:

 Complex data sets can be saved and retrieved quickly and easily.

 Object IDs are assigned automatically.

 Semantic content is added.


 Support for complex objects.
 Inheritance promotes data integrity.
 Visual representation includes semantic content.

9. The disadvantages of the object-oriented model are as follows:

 Object databases are not widely adopted.


 In some situations, the high complexity can cause performance problems.

 It is a complex navigational system.


 Slow development of standards.
 High system overheads.
 Slow transactions.
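The ideas above (objects, classes, encapsulation, inheritance) can be illustrated with a short sketch; the class names and figures are hypothetical.

```python
# Minimal object-model sketch: encapsulation (the balance is kept behind
# methods) and inheritance (SavingsAccount extends Account).
class Account:
    def __init__(self, owner, balance=0):
        self.owner = owner
        self._balance = balance        # encapsulated state

    def deposit(self, amount):
        self._balance += amount

    def balance(self):
        return self._balance

class SavingsAccount(Account):         # inheritance: reuses Account's interface
    def __init__(self, owner, rate):
        super().__init__(owner)
        self.rate = rate

    def add_interest(self):
        self._balance += self._balance * self.rate

acct = SavingsAccount("Asha", rate=0.05)
acct.deposit(100)
acct.add_interest()
print(acct.balance())  # 105.0
```

A UML class diagram of this sketch would show `SavingsAccount` with a generalization arrow pointing at `Account`, which is exactly the relationship the object model is meant to capture.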

Q9. Example: use case diagram for online shopping.

Ans:-
Use case diagrams are intended to provide all stakeholders, including
clients and project managers as well as developers and engineers, with a
high-level view of the subject system and communicate the highest level
system requirements in non-technical terms.
The Online Shopping System use case diagram is a visual representation
of the key functionalities and interactions within an online shopping
platform. The diagram outlines the various actions and processes involved
in the system, enabling users to understand its overall structure and
behavior.
Use-case diagrams describe the high-level functions and scope of a
system.

These diagrams also identify the interactions between the system and its
actors. The use cases and actors in use-case diagrams describe what the
system does and how the actors use it, but not how the system operates
internally.
The purpose of use case diagrams is to model what the system should do
(What) without considering how it should be done at this stage (How) and
to view the use of the system from the user's perspective (external view)
rather than internally (implementation of these features).
Use Case diagrams have only 4 major elements:
1. The actors that the system you are describing interacts with
2. The system itself (system boundary - the rectangle)
3. The use cases, or services, that the system knows how to perform,
and
4. The lines (links) that represent relationships between these elements

Advantages of use case diagram:


The greatest advantage of a use case diagram is that it helps software
developers and businesses design processes from a user's perspective. As
a result, the system functions more efficiently and serves the user's goals.
Disadvantages of use case diagram:
One of the drawbacks of use case diagrams is that they can become too
complex and cluttered if they try to capture too many details or too many
use cases. They can also be ambiguous or inconsistent if they are not
well-defined or aligned with the system specification.

Q10. Draw and explain DFD diagram with a suitable example:-


Ans:-
A data flow diagram (DFD) maps out the flow of information for any
process or system. It uses defined symbols like rectangles, circles and
arrows, plus short text labels, to show data inputs, outputs, storage points
and the routes between each destination.Data flowcharts can range from
simple, even hand-drawn process overviews, to in-depth, multi-level DFDs
that dig progressively deeper into how the data is handled.
Also known as DFD, Data flow diagrams are used to graphically represent
the flow of data in a business information system.
A Data Flow Diagram (DFD) is a traditional visual representation of the
information flows within a system. A neat and clear DFD can depict the
right amount of the system requirement graphically. It can be manual,
automated, or a combination of both.
It shows how data enters and leaves the system, what changes the
information, and where data is stored.
The objective of a DFD is to show the scope and boundaries of a system
as a whole. It may be used as a communication tool between a system
analyst and any person who plays a part in the order that acts as a starting
point for redesigning a system. The DFD is also called as a data flow graph
or bubble chart.

symbols for Data Flow Diagram:


1. Circle: A circle (bubble) shows a process that transforms data inputs
into data outputs.
2. Data Flow: A curved line shows the flow of data into or out of a
process or data store.
3. Data Store: A set of parallel lines shows a place for the collection of
data items. A data store indicates that the data is stored which can
be used at a later stage or by the other processes in a different
order. The data store can have an element or group of elements.
4. Source or Sink: Source or Sink is an external entity and acts as a
source of system inputs or sink of system outputs.
There are two types of DFDs:
 Logical: Logical diagrams display the theoretical process of
moving information through a system, like where the data comes
from, where it goes, how it changes, and where it ends up.
 Physical: Physical diagrams show the practical process of
moving information through a system.
Advantages of data flow diagram:
1. It aids in describing the boundaries of the system.
2. It is beneficial for communicating existing system knowledge to the
users.
3. A straightforward graphical technique which is easy to recognise.
4. DFDs can provide a detailed representation of system components.
5. It is used as part of the system documentation file.
6. DFDs are easier to understand by technical and nontechnical
audiences.
7. It supports the logic behind the data flow within the system.
Disadvantages of data flow diagram:
1. It can make the system a little confusing to programmers.
2. The biggest drawback of the DFD is that it simply takes a long time
to create, so long that the analyst may not receive support from
management to complete it.
3. Physical considerations are left out.
Difference Between Flowchart and Data Flow Diagram (DFD)

1. Objective: A flow chart's main objective is to represent the flow of
control in the program; a DFD's main objective is to represent the
processes and the data flow between them.
2. Notation: A flow chart uses only a single type of arrow, showing the
control flow; a DFD defines the flow and process of data input, data
output, and data storage.
3. Level: A flow chart is a view of the system at a lower level; a DFD is
a view of the system at a high level.
4. Symbols: Three symbols represent a flow chart; five symbols represent
a DFD.
5. Aspect: A flow chart deals with the physical aspect of the action; a
DFD deals with the logical aspect of the action.
6. Purpose: A flow chart shows how to make the system function; a DFD
defines the functionality of the system.
7. Complexity: A flow chart is not very suitable for a complex system; a
DFD is used for complex systems.
8. Types: Flow chart types are system flowchart, data flowchart, document
flowchart, and program flowchart; DFD types are logical DFD and
physical DFD.

Architectural Design: This phase primarily deals with the high-level
structure and organization of the software system. It focuses on major
components, their interactions, and the system's overall design
philosophy.

High-Level Design: Encompassing architectural design, high-level design
involves making decisions at an abstract level. It includes architectural
decisions along with other high-level decisions related to data
structures, algorithms, and system-wide design principles.

2. System wide perspective:-
Architectural Design: It provides a system-wide perspective, defining the
system's major components, their responsibilities, and how they interact
with each other.
High-Level Design: By nature, high-level design is concerned with the
overall structure and behavior of the entire system, making it inclusive
of architectural considerations.

Architectural Design: It creates a blueprint or a foundational framework
for the software system. This includes decisions about the system's
components, interfaces, and their relationships.
High-Level Design: As the overarching design phase, high-level design
creates a blueprint for the entire system, which encompasses
architectural design decisions along with other high-level design
aspects.

Architectural Design: Involves key decisions related to the system's
structure, such as the choice of architectural patterns, communication
protocols, and major component interactions.

High-Level Design: Encompasses architectural decisions and other design
choices made at a higher level of abstraction. It sets the direction for
the detailed design and implementation phases.

Architectural design works as a tool for stakeholders. It is used as a
support or roadmap in discussions with system stakeholders.
It is used for system analysis. Architectural design is used to analyze
whether the system will be able to meet its non-functional requirements
or not.
It facilitates large-scale re-use. The software architecture that is the
output of the architectural design process can be reused across a range
of systems.

Architectural design re-uses components; the use of redundant
components improves the availability but makes the security of the
system harder to ensure.
Use of large components may improve the performance, but it reduces
the maintainability as it becomes difficult to modify and replace a
large component.
Splitting critical features across small components leads to more
communication among the components.

TYPES OF CONTROL MODELS


1 Centralized Model - Centralized model is a formulation of
centralized control in which one subsystem has overall
responsibility for control and starts and stops other
subsystems. It is a control sub-system that takes
responsibility for managing the execution of other
subsystems.
2 Event-based Model - Event-based models are those in which
each sub-system can respond to externally generated events
from other subsystems or the system's environment. The system
is driven by externally generated events where the timing of
the events is outside the control of the subsystems which
process them.

TYPES OF CENTRALIZED MODELS

1 Call-return Model
In the call-return model, control starts at the top of a subroutine
hierarchy and moves downwards; it is applicable to sequential systems.
This familiar model is embedded in programming languages such as C,
Ada and Pascal. Control passes from a higher-level routine in the
hierarchy to a lower-level routine. This call-return model may be used
at the module level to control functions or objects.

Fig. 1: Call-Return Model

The call-return model is illustrated in Figure 1. The main program calls
Routines 1, 2 and 3, whilst Routine 1 can call Routines 1.1 or 1.2. This
is a model of program dynamics. It is not a structural model; there is no
need for Routine 1.1, for example, to be a part of Routine 1.
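The hierarchy described for Figure 1 can be sketched as plain functions, with control passing down from main to subroutines and returning back up; the routine names mirror the figure and the return values are illustrative only.

```python
# Call-return sketch of Figure 1: main calls Routines 1, 2 and 3;
# Routine 1 in turn calls Routines 1.1 and 1.2. Control always returns
# to the caller, which is the defining property of the model.
def routine_1_1(): return "1.1"
def routine_1_2(): return "1.2"

def routine_1():
    # a higher-level routine passing control to lower-level routines
    return [routine_1_1(), routine_1_2()]

def routine_2(): return "2"
def routine_3(): return "3"

def main():
    return [routine_1(), routine_2(), routine_3()]

print(main())  # [['1.1', '1.2'], '2', '3']
```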

2 Manager Model
Manager model is applicable to concurrent systems. One
system component controls the stopping, starting and
co-ordination of other system processes. It can be implemented
in sequential systems as a case statement.

Fig. 2: Manager Model

Figure 2 is an illustration of the centralized management model for
a concurrent system. It is often used in real-time systems which do
not have very tight constraints. The central controller manages the
execution of a set of processes associated with sensors and
actuators. The system controller process decides when processes
should be started or stopped depending on system state variables.
The controller usually loops continuously, polling sensors and
other processes for events or state changes. For this reason,
this model is called an event-loop model.
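One iteration of such an event loop can be sketched as a controller polling sensor readings and deciding which processes to start or stop; the sensor names and threshold are hypothetical.

```python
# Sketch of the centralized manager (event-loop) model: the controller
# polls sensor state and issues start/stop commands to actuator processes.
def poll_sensors(readings):
    """One controller iteration: map each sensor reading to a command."""
    commands = []
    for name, value in readings.items():
        if value > 100:                  # hypothetical safety threshold
            commands.append(f"stop:{name}")
        else:
            commands.append(f"run:{name}")
    return commands

# A real controller would call this inside an endless loop;
# a single iteration is shown here.
print(poll_sensors({"pump": 95, "valve": 120}))  # ['run:pump', 'stop:valve']
```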

TYPES OF EVENT-BASED MODELS


1 Broadcast Model
It is a model in which an event is broadcast to all subsystems.
Any subsystem which can handle the event may do so. Broadcast
models are effective in integrating components distributed
across different computers on a network. The advantage of this
model is that evolution is simple, and the distribution is
transparent to other components. The disadvantage is that the
components don't know if the event will be handled.

Fig. 3: Broadcast Model


In Figure 3, components register an interest in specific events.
When these events occur, control is transferred to the
component that can handle the event.
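The register-then-broadcast behaviour of Figure 3 can be sketched as a tiny publish/subscribe registry; the event names and handlers are hypothetical.

```python
# Broadcast-model sketch: subsystems register interest in named events;
# when an event occurs it is delivered to every registered handler.
handlers = {}

def register(event, handler):
    """A subsystem declares interest in a specific event."""
    handlers.setdefault(event, []).append(handler)

def broadcast(event, payload):
    # Note the model's weakness: the broadcaster gets no guarantee
    # that anyone handled the event (an unknown event returns []).
    return [h(payload) for h in handlers.get(event, [])]

register("temp_high", lambda p: f"fan on at {p}")
register("temp_high", lambda p: f"alarm logged: {p}")
print(broadcast("temp_high", 80))
```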

2 Interrupt-driven Model
Interrupt-driven model is used in real-time systems where
interrupts are detected by an interrupt handler and passed to
some other component for processing. This model is used in
real-time systems where immediate response to some event is
necessary. The advantage is that it allows very fast responses
to events to be implemented. The disadvantage is that it is
complex to program and difficult to validate.

Fig. 4: Interrupt-Driven Model

In Figure 4, there are a known number of interrupt types, with a
handler for each type. Each type of interrupt is associated with
the memory location where its handler's address is stored. The
interrupt handler may start or stop the processes in response
to the event signaled by the interrupt.
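The interrupt-type-to-handler table of Figure 4 can be sketched as a dispatch dictionary standing in for the interrupt vector; the interrupt types and handler behaviour are hypothetical.

```python
# Interrupt-driven sketch: the interrupt vector maps each interrupt type
# to its handler, mirroring the memory-location table of Figure 4.
def timer_handler():
    return "tick process resumed"

def io_handler():
    return "io process started"

interrupt_vector = {0: timer_handler, 1: io_handler}  # type -> handler

def dispatch(interrupt_type):
    # jump directly to the handler registered for this interrupt type
    return interrupt_vector[interrupt_type]()

print(dispatch(1))  # io process started
```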

User interface (UI) design is the process
designers use to build interfaces in software
or computerized devices, focusing on looks or
style. Designers aim to create interfaces
which users find easy to use and
pleasurable. UI design refers to graphical user
interfaces and other forms, e.g.,
voice-controlled interfaces.

Here are some key elements of User


Interface Design:

1.Visual Design:

The use of images, icons, and other visual
elements to enhance the aesthetic appeal of the
interface. Choosing an appropriate color palette
for the interface to convey information and
create a visually pleasing experience. Selecting
fonts and text styles for optimal readability and
alignment with the overall design theme.

2.Interaction Design -
Designing intuitive navigation structures to help
users easily move through the application.
Providing feedback to users through visual cues,
animations, or messages to inform them about the
status of their actions. Ensuring the interface
responds promptly to user interactions, creating
a seamless and interactive experience.

3.Usability:
Conducting usability testing to identify and
address any issues related to user interaction,
navigation, and comprehension. Designing
interfaces that are accessible to users with
disabilities, ensuring inclusivity.

Issues Facing UI Design in Software Engineering:

1.Inconsistency:
Problem:Inconsistent design elements and
patterns
across different parts of the application
can confuse users and hinder the overall
user experience.
Solution:Establish and adhere to a
consistent design language and pattern
library.

2.Overcrowded Interfaces:
Problem:Cluttered interfaces with too many
elements can overwhelm users and make it
challenging for them to focus on essential tasks.
Solution:Prioritize content, declutter the
interface, and use whitespace effectively to
create a visually balanced layout.

3.Poor Navigation:
Problem:Complex or unclear navigation
structures can lead to user confusion and
frustration.
Solution:Design intuitive navigation paths,
provide clear labels, and offer visual cues for
navigation elements.

4.Lack of Responsiveness:
Problem:Interfaces that are slow or
unresponsive can lead to a negative user
experience.
Solution:Optimize performance, implement
responsive design principles, and ensure
smooth interactions.

5.Inadequate User Feedback:


Problem:Lack of feedback on user actions can
leave users uncertain about the outcome of
their interactions.
Solution:Provide visual feedback, confirmation
messages, or animations to inform users about
the status of their actions.
Q.4 List and explain various principles of UI.
Here are some key UI design principles:

1.Clarity:
Explanation:The interface should be clear and easy
to understand. Users should be able to quickly
grasp the purpose and functionality of each
element.
Application:Use straightforward language, intuitive
icons, and logical layout to enhance clarity.

2.Consistency:
Explanation:Maintain a consistent design across the
entire application. Consistency in layout, terminology,
and visual elements helps users build a mental model
of the system.
Application:Use consistent navigation patterns,
color schemes, and typography throughout the
application.

3.Feedback:
Explanation:Provide immediate and informative
feedback to users for their actions. Feedback helps
users
understand the system's response and ensures a sense
of control.
Application:Use visual cues, animations, and messages
to confirm or inform users about the outcome of
their
interactions

4.Efficiency:
Explanation:Design interfaces that allow users to
complete tasks quickly and with minimal effort.
Reduce the number of steps required to accomplish
common tasks.
Application:Streamline workflows, provide shortcuts,
and optimize the placement of frequently used
features.

5.Flexibility:
Explanation:Design interfaces that cater to a diverse
range of users and usage scenarios. Allow users to
customize settings and adapt the interface to
their preferences.
Application:Provide user preferences,
customizable themes, and adjustable font sizes
to accommodate different user needs.

6.Hierarchy:
Explanation:Establish a clear visual hierarchy to guide
users through the information and actions in the
interface. Prioritize content based on importance.
Application:Use size, color, contrast, and spacing to
emphasize key elements and create a visual hierarchy

7.Simplicity:
Explanation:A simple design makes the website
easier to use. Achieve it by using user-friendly
design elements that people can handle without
instructions, simplifying their way through
every stage of the buying cycle.
Application:Leverage simple and consistent design
concepts people already know, like clear visuals
and navigation structures, to ensure that your
website is used in the ways that you intend.

Q.5 What is software project management?


Here are key components and activities involved
in software project management:

1.Project Planning:
Definition:The process of defining the project
scope, objectives, timelines, and resource
requirements.
Activities:Develop a project plan, create a work
breakdown structure (WBS), estimate effort and costs,
identify risks and define milestones.

2.Risk Management:
Definition:Identifying potential risks that could impact
the project's success and developing strategies to
mitigate or manage those risks.
Activities:Risk identification, risk analysis, risk response
planning, and ongoing monitoring and control.

3.Resource Management:
Definition:Ensuring that the necessary resources,
including personnel, equipment, and tools, are available
and
effectively utilized throughout the project.
Activities:Resource allocation, task assignments,
tracking resource usage, and managing team
dynamics.

4.Scheduling:
Definition:Creating a timeline for project activities
and tasks, including start and end dates for each
phase.
Activities:Develop project schedules, set
milestones, allocate time for tasks, and establish
dependencies between tasks.

5.Communication Management:
Definition:Establishing effective communication
channels and mechanisms to ensure that information
is shared among team members, stakeholders, and
other relevant parties.
Activities:Develop a communication plan, hold
regular status meetings, provide updates, and
address issues promptly.

6.Quality Management:
Definition:Ensuring that the software product
meets the specified quality standards and
requirements.
Activities:Define quality metrics, establish
testing processes, conduct reviews and
inspections, and implement quality assurance
practices.

7.Change Management:
Definition:Managing changes to project scope,
requirements, and deliverables in a controlled
and systematic manner
Activities:Define change control processes, assess
the impact of changes, obtain approvals, and
update project documentation

8.Monitoring and Control:


Definition:Continuously tracking project performance
against the plan, identifying deviations, and
implementing corrective actions.
Activities:Regularly review project status, track
progress, compare actual vs. planned performance,
and make
adjustments as needed.

9.Documentation:
Definition:Creating and maintaining project
documentation,
including requirements specifications, design
documents, and project plans.
Activities:Document project processes, decisions, and
outcomes to ensure transparency and provide a
reference for future phases.

10.Closure and Evaluation:


Definition:Concluding the project, delivering the final
product, and assessing the project's overall success
and lessons learned.
Activities:Conduct project reviews, document lessons
learned, archive project documentation, and obtain
formal project closure.
Risk management follows a systematic process:

1.Risk Identification:
Process:Identify and document potential risks that
could impact the project. Risks can be related to
technology, requirements, scope, resources,
schedule, or external
factors.
Methods:Brainstorming sessions, documentation
reviews, historical data analysis, and expert interviews
can help in identifying risks.

2.Risk Analysis:
Process:Assess the likelihood and impact of each
identified risk. This involves assigning a probability of
occurrence and determining the potential
consequences on project objectives.
Methods:Qualitative analysis (using probability and
impact scales), quantitative analysis (using
mathematical
models), and expert judgment.
3.Risk Prioritization:
Process:Prioritize risks based on their severity and
potential impact on the project. Focus on addressing
high- priority risks first.
Methods:Use risk matrices, risk heat maps, or
other prioritization techniques to categorize
risks.
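The prioritization step above can be sketched in a few lines: score each risk as probability × impact and sort descending. The risk names, probabilities, and impact values below are invented purely for illustration:

```python
# Sketch of risk prioritization: score = probability (0-1) x impact (1-5),
# then sort so high-priority risks get response plans first.
risks = [
    {"name": "Key developer leaves", "probability": 0.2, "impact": 5},
    {"name": "Requirements change late", "probability": 0.6, "impact": 4},
    {"name": "Third-party API delay", "probability": 0.4, "impact": 3},
]

for r in risks:
    r["score"] = r["probability"] * r["impact"]

# Highest score first: these risks are addressed before the others.
prioritized = sorted(risks, key=lambda r: r["score"], reverse=True)
for r in prioritized:
    print(f'{r["name"]}: {r["score"]:.1f}')
```

A real risk matrix would also bucket scores into categories (e.g. low/medium/high) rather than ranking raw numbers.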
4.Risk Response Planning:
Process:Develop strategies to respond to each
identified risk. There are four main response strategies:
Avoidance, Mitigation, Transfer, and Acceptance
(AMTA).
Methods:Develop contingency plans, establish risk
budgets, and define specific actions to be taken for
each identified risk.

5.Risk Mitigation:
Process:Implement proactive measures to reduce
the likelihood or impact of identified risks.
Mitigation
strategies aim to address the root causes of risks.
Methods:Implementing early prototyping,
conducting thorough testing, diversifying
resources, or improving communication can be
examples of risk mitigation strategies.
6.Risk Monitoring and Control:
Process:Regularly monitor identified risks throughout
the project lifecycle. Assess the effectiveness of risk
response strategies and update the risk management
plan as needed.
Methods:Conduct regular risk reviews, track key risk
indicators, and update risk registers. Ensure that the
team remains vigilant for new risks that may emerge.

Here's an explanation of how product quality can be planned, controlled, and reviewed:

1.Quality Planning:
Definition:Quality planning involves defining the quality
standards, processes, and metrics that will be used to
ensure the product meets the specified requirements
and user expectations.

Activities:Identify quality objectives and criteria. Define quality assurance processes and standards. Establish testing and validation procedures. Develop metrics to measure and assess quality.

2.Quality Control:
Definition:Quality control focuses on monitoring
and verifying that the processes are being
followed and the product is meeting the defined
quality standards.
Activities:Conduct inspections and reviews of work
products.Perform testing, including unit testing,
integration testing, and system testing.Use
automated testing tools to ensure consistent and
repeatable
testing.Implement continuous integration
practices to catch defects early.
3.Quality Reviews:
Definition:Quality reviews involve systematic
examinations of work products to identify and correct
defects, ensure
adherence to standards, and improve overall quality.
Activities:Conduct code reviews to ensure code quality
and maintainability.Perform design reviews to validate
the
architecture and design decisions.Hold walkthroughs
or inspections for documentation and other
artifacts.Use peer reviews for requirements and
other project
documents.

Software Metrics:
A metric is a measurement of the degree to which an attribute belongs to a system, product, or process.
Software metrics are quantifiable or countable assessments of the attributes of a software product.
There are 4 functions related to software metrics:
1.Planning

2.Organizing
3.Controlling
4.Improving

Advantages of Software Metrics :

1.Reduction in cost or budget.


2.It helps to identify the particular area for improvising.

3.It helps to increase the product quality.


4.Managing the workloads and teams.
5.Reduction in overall time to produce the product.
Disadvantages of Software Metrics :
1.It is expensive and difficult to implement the metrics
in
some cases.
2.Performance of the entire team or an individual from
the team can’t be determined. Only the performance
of the product is determined.
3.Sometimes the quality of the product is not met
with the expectation.

Software Measurement:
A measurement is a manifestation of the size, quantity, amount, or dimension of a particular attribute of a product or process. Software measurement is the quantified assessment of a characteristic of a software product or of the software process. The software measurement process is defined and governed by ISO standards.
Importance of Software Metrics to measure in
Software Engineering:

1.Quality Assurance:
Purpose:Metrics help in assessing and ensuring
the quality of the software product.
Example:Defect density and code coverage metrics
can indicate the effectiveness of testing efforts.

2.Performance Monitoring:
Purpose:Metrics provide a means to monitor and
evaluate the performance of the development team
and the project as a whole.
Example:Velocity in Agile development measures the
team's productivity and helps in planning future
iterations.

3.Resource Management:
Purpose:Metrics assist in resource allocation,
helping organizations optimize their use of time,
budget, and personnel.
Example:Effort estimation metrics provide insights
into resource requirements for project planning.
4.Decision Making:

Purpose:Metrics support informed decision-making


by providing data-driven insights into various
aspects of the software development process.
Example:Metrics related to code complexity and
maintainability can guide decisions about
refactoring or redesigning certain modules.

5.Process Improvement:
Purpose:Metrics help in identifying areas of
improvement in software development processes.
Example:Defect density and cycle time metrics can
highlight areas where process improvements are
needed.
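To make one of the metrics mentioned above concrete, defect density is commonly computed as defects found per thousand lines of code (KLOC). A minimal sketch, with counts invented for illustration:

```python
def defect_density(defects_found, lines_of_code):
    """Defects per thousand lines of code (KLOC)."""
    kloc = lines_of_code / 1000
    return defects_found / kloc

# Hypothetical module: 45 defects found in 18,000 lines of code.
density = defect_density(45, 18_000)
print(f"Defect density: {density:.1f} defects/KLOC")  # 2.5 defects/KLOC
```

Tracking this value release over release is one simple way the metric supports the process-improvement goal described above.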
Product:

In the context of software engineering, a product is any software manufactured based on the customer's request. This can be problem-solving software or a computer-based system. It can also be said that the product is the result of a project.

Process:
A process is a set of sequential steps that have to be followed to create a product. The main purpose of a process is to improve the quality of the project. The process serves as a template that is instantiated for each project and is used to direct the work.

The main difference between a process and a product is that the process is the set of steps that guides the project toward achieving a suitable product, while the product is the result of the project, manufactured by a wide variety of people.
Verification -
Verification in Software Testing is a
process of checking documents, design, code,
and program in order to check if the
software has been built according to the
requirements or not. The main goal of
verification process is to ensure
quality of software application, design,
architecture, etc. The verification process
involves activities like reviews, walkthroughs,
and inspections.
Validation -
Validation in Software Engineering is a
dynamic mechanism of testing and validating if
the software product actually meets the
exact needs of the customer or not. The
process helps to ensure that the software
fulfills the desired use in an
appropriate environment. The validation
process involves activities like unit
testing, integration testing, system testing
and user acceptance testing.
Unit 4
Q.1. Explain Verification and Validation.
Ans:-
A] Verification:-
1. Verification testing includes different activities such as business requirements, system
requirements, design review, and code walkthrough while developing a product.
2. It is also known as static testing, where we are ensuring that "we are building the product right". It checks that the application under development conforms to the specifications and requirements given by the client.

B] Validation:-
3. Validation testing is testing where tester performed functional and non-functional testing. Here
functional testing includes Unit Testing (UT), Integration Testing (IT) and System Testing (ST), and non
-functional testing includes User acceptance testing (UAT).
4. Validation testing is also known as dynamic testing, where we are ensuring that "we have built the right product". It also checks that the software meets the business needs of the client.

5. Verification and Validation process are done under the V model of the software development life
cycle.
Difference:-
1. Verification checks whether we are building the product right; validation checks whether we have built the right product.
2. Verification is also known as static testing; validation is also known as dynamic testing.
3. Quality assurance comes under verification testing; quality control comes under validation testing.
4. The execution of code does not happen in verification testing; in validation testing, the code is executed.
5. In verification testing, we can find bugs early in the development phase of the product; in validation testing, we can find the bugs that were not caught in the verification process.
6. Verification testing is executed by the quality assurance team to make sure that the product is developed according to the customers' requirements; validation testing is executed by the testing team to test the application.
7. Verification is done before validation testing; after verification testing, validation testing takes place.
8. In verification, we check intermediate work products such as documents, design, and code; in validation, we check whether the user accepts the final product or not.

Q.2. Explain Cleanroom software development.


Ans:- 1. Clean room software engineering is a software development approach to producing quality
software.
2. It is different from classical software engineering: in classical software engineering, QA (Quality Assurance) is the last phase of development, occurring after all development stages are complete, which risks a less reliable, lower-quality product full of bugs and errors, and an upset client.
3. But in clean room software engineering, an efficient and good quality software product is delivered
to the client as QA (Quality Assurance) is performed each and every phase of software development.
4. The cleanroom software engineering follows a quality approach to software development which
follows a set of principles and practices for gathering requirements, designing, coding, testing,
managing, etc. which not only improves the quality of the product but also increases productivity and
reduces development cost.

5. The clean room approach was developed by Dr. Harlan Mills of IBM’s Federal Systems Division, and it
was released in the year 1981 but got popularity after 1987 when IBM and other organizations started
using it.
6. Processes of Cleanroom development :
Clean room software development approaches consist of four key processes i.e.
Management –
It is persistent throughout the whole project lifetime which consists of project mission,
schedule, resources, risk analysis, training, configuration management, etc.
Specification –
It is considered the first process of each increment which consists of requirement
analysis, function specification, usage specification, increment planning, etc.
Development –
It is considered the second process of each increment which consists of software
reengineering, correctness verification, incremental design, etc.
Certification –
It is considered the final process of each increment which consists of usage modeling
and test planning, statistical training and certification process, etc.
7. Box structure in clean room process :
Box structure is a modeling approach that is used in clean room engineering. A box is like a container
that contains details about a system or aspects of a system. All boxes are independent of other boxes
to deliver the required information/details. It generally uses three types of boxes i.e.
Black box –
It identifies the behavior of the system.
State box –
It identifies state data or operations.
Clear box –
It identifies the transition function used by the state box.
8. Benefits of Clean Room Software engineering :
 Delivers high-quality products.
 Increases productivity.
 Reduces development cost.
 Errors are found early.
 Reduces the overall project time.
 Saves resources.
9. Clean room software engineering ensures good quality software with certified reliability and for
that only it has been incorporated into many new software practices.
10. Still, according to IT industry experts, it is not widely adopted, as it is very theoretical and too mathematical to use conveniently in the real world. But they consider it a promising technology for the future of the IT industry.
Q.3. What is Integration testing? Explain.
Ans:- 1. Integration testing is the process of testing the interface between two software units or
modules. It focuses on determining the correctness of the interface.
2. The purpose of integration testing is to expose faults in the interaction between integrated units.
Once all the modules have been unit-tested, integration testing is performed.
3. Integration testing is a software testing technique that focuses on verifying the interactions and
data exchange between different components or modules of a software application.
4. The goal of integration testing is to identify any problems or bugs that arise when different
components are combined and interact with each other. Integration testing is typically performed
after unit testing and before system testing.
5. It helps to identify and resolve integration issues early in the development cycle, reducing the risk
of more severe and costly problems later on.
6. Types of Integration Testing
Integration testing can be classified into two parts:
1. Incremental integration testing
Incremental integration testing is carried out by further methods:
1. Top-Down approach
2. Bottom-Up approach
2. Non-Incremental Integration testing i.e Big bang integration testing
8. Top-Down Approach
The top-down testing strategy deals with the process in which higher level modules are tested with
lower level modules until the successful completion of testing of all the modules. Major design flaws
can be detected and fixed early because critical modules tested first. In this type of method, we will
add the modules incrementally or one by one and check the data flow in the same order.
Advantages:
 An early prototype is possible.
 Critical modules are tested first, so there are fewer chances of defects remaining in them.
Disadvantages:
 Due to the high number of stubs, it gets quite complicated.
 Lower level modules are tested inadequately.
 Identification of defects is difficult.
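In top-down integration, lower-level modules that are not yet built are replaced by stubs. A minimal sketch of this idea in Python — the order-processing and payment module names are invented for illustration:

```python
# Top-down integration sketch: the high-level order-processing module is
# tested first; the not-yet-built lower-level payment module is a stub.

def payment_gateway_stub(amount):
    """Stub standing in for the real lower-level payment module."""
    return {"status": "approved", "amount": amount}

def process_order(amount, charge=payment_gateway_stub):
    """High-level module under test; `charge` is the lower-level dependency."""
    if amount <= 0:
        return "rejected"
    result = charge(amount)
    return "completed" if result["status"] == "approved" else "failed"

# Integration checks of the top-level module against the stub.
assert process_order(100) == "completed"
assert process_order(0) == "rejected"
print("top-down integration checks passed")
```

When the real payment module is ready, it replaces the stub and the same checks are re-run, which is the incremental step the strategy describes.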
9. Bottom-Up Method
The bottom to up testing strategy deals with the process in which lower level modules are tested
with higher level modules until the successful completion of testing of all the modules. Top level
critical modules are tested at last, so it may cause a defect. Or we can say that we will be adding the
modules from bottom to the top and check the data flow in the same order.
Advantages:-
 Identification of defect is easy.
 Do not need to wait for the development of all the modules as it saves time.
Disadvantages:-
 Critical modules are tested last due to which the defects can occur.
 There is no possibility of an early prototype.
10. Big Bang Method
In this approach, testing is done via integration of all modules at once. It is convenient for small
software systems, if used for large software systems identification of defects is difficult.
Since this testing can be done after completion of all modules due to that testing team has less time
for execution of this process so that internally linked interfaces and high-risk critical modules can be
missed easily.
Advantages:
 It is convenient for small size software systems.
Disadvantages:
 Identification of defects is difficult because finding the error where it came from is a problem,
and we don't know the source of the bug.
 Small modules missed easily.
 Time provided for testing is very less.
11. Mixed Integration Testing –
A mixed integration testing is also called sandwiched integration testing. A mixed integration testing
follows a combination of top down and bottom-up testing approaches. It is also called the hybrid
integration testing. also, stubs and drivers are used in mixed integration testing.
Advantages:
 Mixed approach is useful for very large projects having several sub projects.
 This Sandwich approach overcomes this shortcoming of the top-down and bottom-up
approaches.
 Parallel test can be performed in top and bottom layer tests.
Disadvantages:
 For mixed integration testing, it requires very high cost because one part has a Top-down
approach while another part has a bottom-up approach.
 This integration testing cannot be used for smaller systems with huge interdependence
between different modules.

Q.4. List the characteristics of good test case.


Ans:- A test case in software testing is a particular sequence of actions applied to a particular set of data elements, all of which exercise a specific function or feature of the system/application under test.
In software product development, outlining the qualities of a good test case is challenging in various ways and requires broad, strategic thinking.
Following are the characteristics of good test case:-
1. A clear objective with refined scope
What specifically is the intent and scope of the test? Is this a white or black box test, and is the
purpose regression or performance? When determining the test objective, start at a high-level
considering user context and then work down to thinking at a granular functional level. For example,
are we only testing that a single login button was clicked or are we testing that an existing user can
log in to the website? There is certainly overlap between the two, but both are not necessarily the
same test.
2. Obvious and meaningful pass/fail verifications
What constitutes a “pass” and “failure,” and how are both determined? Each should be clearly defined
as specifically as possible. The test case is only as accurate as what conditions you are verifying or
validating.
3. Clear and Concise documentation
Create a standardized template for your test cases recording things like a unique ID number,
description, any pre-conditions, related datasets, and expected results. This is especially important for
manual testing and/or if your test scripts are written in low-level code or are otherwise hard to read.
4. Traceability to requirements
All test cases for a system under test should be traceable to the business requirements. We want to
ensure that we have full test coverage of requirements and change requests while also ensuring that
we are not wasting our time testing irrelevant components.
5. Reusability
Test cases will probably change over time as the system under test evolves throughout its life, but
we certainly want to write automated tests that will be reusable for as long as possible. Write the test
case to be modular and easily maintainable.
6. Independence from other test cases while testing one thing
A single test case shouldn’t depend on other test cases for execution. Ideally, to create larger end-to-
end tests you should be able to combine your independent, modular test cases into test suites for
sequential or parallel execution.
7. Permutations taken into account by the test case designer
Is it necessary to test every permutation or just the critical path? What about negative testing
possibilities? In our web login example, we want to test valid username and password login
combinations – but do we also want to check for the possibility of various kinds of invalid login
credentials? What if users enter weird characters into the username and password fields? It might be
ok to only test the critical path, especially if a tight deadline is looming, but you should at least have
the debate where to draw the line with your testing.
8. Good test cases in software testing must be independent, i.e., you must be able to execute a test case in any order, with no dependence on other test cases; this applies to manual testing as well.
9. The title of the test case should quickly convey its purpose.
10. Every test case must be assigned a priority; as a rule, functional test cases are of medium or high priority, and some usability tests might be of low priority.
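Several of these characteristics — a clear objective, obvious pass/fail verification, a traceable ID, negative permutations, and independence — can be seen together in a small `unittest` sketch. The `login` function, credentials, and TC IDs below are invented for illustration:

```python
import unittest

def login(username, password):
    """Toy system under test: accepts one hard-coded credential pair."""
    return username == "alice" and password == "s3cret"

class TestLogin(unittest.TestCase):
    # TC-001: valid credentials log in (clear objective, traceable ID).
    def test_valid_credentials_succeed(self):
        self.assertTrue(login("alice", "s3cret"))

    # TC-002: invalid password is rejected (negative permutation).
    def test_invalid_password_fails(self):
        self.assertFalse(login("alice", "wrong"))

# Each test builds its own inputs, so the cases can run in any order.
suite = unittest.TestLoader().loadTestsFromTestCase(TestLogin)
result = unittest.TextTestRunner(verbosity=0).run(suite)
```

Because the two cases share nothing, they could also be combined freely into larger suites for sequential or parallel execution, as point 6 recommends.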

Q.5. Write a note on Size Oriented Metrics.


Ans:- 1. Size-oriented metrics are derived by normalizing quality and productivity measures by the size of the software that has been produced.
2. The organization builds a simple record of size measure for the software projects. It is built on
past experiences of organizations. It is a direct measure of software.
3. This metric measure is one of the simplest and earliest metrics that is used for computer programs
to measure size.
4. Size Oriented Metrics are also used for measuring and comparing the productivity of programmers.
5. It is a direct measure of a Software. The size measurement is based on lines of code computation.
The lines of code are defined as one line of text in a source file.
6. While counting lines of code, the simplest standard is:
 Don’t count blank lines
 Don’t count comments
 Count everything else
Note that the size-oriented measure is not a universally accepted method.
7. A simple set of size measures that can be developed is given below:
 Size = Kilo Lines of Code (KLOC)
 Effort = Person / month
 Productivity = KLOC / person-month
 Quality = Number of faults / KLOC
 Cost = $ / KLOC
 Documentation = Pages of documentation / KLOC
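The measures listed above can be computed directly from a project's raw records. A sketch, using numbers modeled on a hypothetical project record:

```python
def size_oriented_measures(loc, person_months, faults, cost_dollars):
    """Derive the basic size-oriented metrics from raw project records."""
    kloc = loc / 1000
    return {
        "size_kloc": kloc,
        "productivity_kloc_per_pm": kloc / person_months,   # KLOC / person-month
        "quality_faults_per_kloc": faults / kloc,           # faults / KLOC
        "cost_per_kloc": cost_dollars / kloc,               # $ / KLOC
    }

# Hypothetical record: 12,100 LOC, 24 person-months, 134 faults, $168,000.
m = size_oriented_measures(loc=12_100, person_months=24,
                           faults=134, cost_dollars=168_000)
for name, value in m.items():
    print(f"{name}: {value:.2f}")
```

Keeping such records per project is what makes the cross-project comparisons in the example below possible.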
8. Example of Size-Oriented Metrics
If a software organization maintains simple records, a table of size-oriented measures can be created. The table lists each software development project that has been completed over the past few years and the corresponding measures for that project. For example, for project alpha: 12,100 lines of code were developed with 24 person-months of effort at a cost of $168,000. It should be noted that the effort and cost recorded in the
table represent all software engineering activities (analysis, design, code, and test), not just coding.
Further information for project alpha indicates that 365 pages of documentation were developed, 134
errors were recorded before the software was released, and 29 defects were encountered after
release to the customer within the first year of operation. Three people worked on the development
of software for project alpha.
9. Advantages of Size Oriented metrics:-
 It is simple to measure, as lines of code can be counted easily and even automatically.
 It has been widely used for estimation, so much historical project data is available for comparison.
10. Disadvantages of Size-Oriented Metrics
 This measure is dependent upon programming language.
 Sometimes, it is very difficult to estimate LOC in early stage of development.
 Though it is simple to measure but it is very hard to understand it for users.
 It cannot measure size of specification as it is defined on code.

Q.6. Difference between function point and object point.


Ans:- A) Function points(FP):-
1. FP are a language independent way of expressing the functionality in a program.
2. Productivity is expressed as the number of function points that are implemented per person-
month.
3. A function point is computed by combining several different measurements or estimates (see
below)

4. You can then compute the so-called unadjusted function-point count (UFC) by multiplying each initial count by its estimated weight and summing all the values.
5. The unadjusted function-point count is then adjusted to yield the final function-point count for the overall system.
6. Problem:-
Function-point count in a program depends on the estimator. Different people have different notions
of complexity.
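The UFC computation described above can be sketched as follows. The weights shown are the commonly cited "average" complexity weights for the five standard element types; the element counts are invented for illustration:

```python
# Unadjusted function-point count (UFC): multiply each element count by
# its weight and sum. Weights below are the commonly used "average" weights.
AVERAGE_WEIGHTS = {
    "external_inputs": 4,
    "external_outputs": 5,
    "external_inquiries": 4,
    "internal_logical_files": 10,
    "external_interface_files": 7,
}

def unadjusted_fp(counts):
    return sum(counts[k] * w for k, w in AVERAGE_WEIGHTS.items())

# Invented counts for a small system.
counts = {
    "external_inputs": 10,
    "external_outputs": 8,
    "external_inquiries": 6,
    "internal_logical_files": 4,
    "external_interface_files": 2,
}
print("UFC =", unadjusted_fp(counts))  # 10*4 + 8*5 + 6*4 + 4*10 + 2*7 = 158
```

The estimator-dependence problem noted above shows up exactly here: a different estimator might classify some elements as "simple" or "complex" and so pick different weights, yielding a different UFC for the same system.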
B) Object Points (Application Points )
1. Application points are an alternative to function points.
2. Object points are only concerned with screens, reports and modules in conventional programming
languages.
3. The advantage of application points over function points is that they are easier to estimate
 The number of application points in a program is computed using:
 The number of separate screens that are displayed
 The number of reports that are produced.
 The number of modules in codes to be developed to supplement the database programming
code.
4. Note:-
Because object points are not concerned with implementation details, the complexity factor estimation is much simpler.

Q.7. Explain three models of COCOMO.


Ans:- The Cocomo Model is a procedural cost estimate model for software projects and is often used
as a process of reliably predicting the various parameters associated with making a project such as
size, effort, cost, time, and quality. It was proposed by Barry Boehm in 1981.
In COCOMO, projects are categorized into three types:
1. Organic:- A software project is said to be of organic type if the required team size is adequately small, the problem is well understood and has been solved in the past, and the team members have nominal experience with the problem. Example:- simple business systems, data processing systems, etc.
2. Semi-detached
The projects classified as Semi-Detached are comparatively less familiar and more difficult to develop than organic projects, and require more experience, better guidance, and creativity.
Example:- Developing a new OS, DBMS, etc
3. Embedded: A development project is treated as embedded type if the software being developed is strongly coupled to complex hardware, or if stringent regulations on the operational procedures exist. For Example: ATM, air traffic control.
The following are three models of COCOMO:-
1. Basic Model:-
 E = a(KLOC)^b
 Time = c(Effort)^d
 Person required = Effort/ time
The above formulas are used for cost estimation in the basic COCOMO model and are also used in the subsequent models. The constant values a, b, c, and d for the Basic Model for the different categories of systems are:
Software projects    a     b     c     d
Organic              2.4   1.05  2.5   0.38
Semi-detached        3.0   1.12  2.5   0.35
Embedded             3.6   1.20  2.5   0.32
i. The effort is measured in Person-Months and as evident from the formula is dependent on Kilo-
Lines of code. The development time is measured in months.
ii. These formulas are used as such in the Basic Model calculations, as not much consideration of
different factors such as reliability, and expertise is taken into account, henceforth the estimate is
rough.
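The Basic Model formulas and constants above can be applied directly. A sketch, where the 32 KLOC project size is invented for illustration:

```python
# Basic COCOMO: effort in person-months (PM), development time in months.
CONSTANTS = {            # (a, b, c, d) per project category
    "organic":       (2.4, 1.05, 2.5, 0.38),
    "semi-detached": (3.0, 1.12, 2.5, 0.35),
    "embedded":      (3.6, 1.20, 2.5, 0.32),
}

def basic_cocomo(kloc, category):
    a, b, c, d = CONSTANTS[category]
    effort = a * kloc ** b      # E = a(KLOC)^b, in person-months
    time = c * effort ** d      # Time = c(Effort)^d, in months
    people = effort / time      # average persons required
    return effort, time, people

# Hypothetical 32 KLOC organic project.
effort, time, people = basic_cocomo(32, "organic")
print(f"Effort: {effort:.1f} PM, Time: {time:.1f} months, People: {people:.1f}")
```

For this example the organic constants give roughly 91 person-months of effort over about 14 months, illustrating why the Basic Model estimate is considered rough: no cost drivers are applied yet.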
2. Intermediate Model: The basic Cocomo model considers that the effort is only a function of the
number of lines of code and some constants calculated according to the various software systems.
The intermediate COCOMO model recognizes these facts and refines the initial estimates obtained
through the basic COCOMO model by using a set of 15 cost drivers based on various attributes of
software engineering.
Classification of Cost Drivers and their attributes:
(i) Product attributes -
 Required software reliability extent
 Size of the application database
 The complexity of the product
(ii) Hardware attributes -
 Run-time performance constraints
 Memory constraints
 The volatility of the virtual machine environment
 Required turnabout time
(iii) Personnel attributes -
 Analyst capability
 Software engineering capability
 Applications experience
 Virtual machine experience
 Programming language experience
(iv) Project attributes -
 Use of software tools
 Application of software engineering methods
 Required development schedule
3. Detailed COCOMO Model: Detailed COCOMO incorporates all characteristics of the intermediate version with an assessment of the cost drivers' impact on each step (phase) of the software engineering process. The detailed model uses different effort multipliers for each cost driver attribute. In detailed COCOMO, the whole software is divided into multiple modules; COCOMO is then applied to each module to estimate effort, and the module efforts are summed.
The Six phases of detailed COCOMO are:
 Planning and requirements
 System structure
 Complete structure
 Module code and test
 Integration and test
 Cost Constructive model

Q.8. What is System Testing?


Ans:-
1. System testing is a type of software testing that evaluates the overall functionality and
performance of a complete and fully integrated software solution.
2. It tests if the system meets the specified requirements and if
it is suitable for delivery to the end-users.
3. This type of testing is performed after the integration testing and
before the acceptance testing.
4. System Testing is carried out on the whole system in the context of either system requirement
specifications or functional requirement specifications or in the context of both.
5. System testing tests the design and behavior of the system and also the expectations of the
customer.
6. System Testing is a black-box testing.Tools used for System Testing : JMeter, Gallen Framework,
Selenium, SoapUI, Appium,etc.
7. System Testing Process: System Testing is performed in the following steps:
a. Test Environment Setup: Create testing environment for the better quality testing.
b. Create Test Case: Generate test case for the testing process.
c. Create Test Data: Generate the data that is to be tested.
d. Execute Test Case: After the generation of the test case and the test data, test cases are
executed.
e. Defect Reporting: Defects detected in the system are reported.
f. Regression Testing: It is carried out to check for side effects of the fixes made during testing.
g. Log Defects: Detected defects are logged and fixed in this step.
h. Retest: If the test is not successful then again test is performed.
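The steps above can be sketched as a small script. The `Cart` class here is an invented in-memory stand-in for a real system under test (a real system test would drive the deployed system through its external interface, e.g. with Selenium or JMeter):

```python
class Cart:
    """Toy stand-in for the fully integrated system under test."""
    def __init__(self):
        self.items = {}
    def add(self, name, price):
        self.items[name] = price
    def total(self):
        return sum(self.items.values())

def run_system_test():
    # a. Test environment setup: start from a fresh system instance
    system = Cart()
    # b./c. Create test case and test data
    test_data = [("book", 250.0), ("pen", 20.0)]
    expected_total = 270.0
    # d. Execute test case
    for name, price in test_data:
        system.add(name, price)
    # e. Defect reporting: record a defect if behavior deviates from spec
    defects = []
    if system.total() != expected_total:
        defects.append(f"expected {expected_total}, got {system.total()}")
    return defects

# An empty defect list means the test passed; otherwise steps f-h
# (regression testing, logging, retest) would follow.
print("PASS" if not run_system_test() else "FAIL")
```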

8. Types of System Testing:


 Performance Testing: Performance Testing is a type of software testing that is carried out to
test the speed, scalability, stability and reliability of the software product or application.
 Load Testing: Load Testing is a type of software Testing which is carried out to determine the
behavior of a system or software product under extreme load.
 Stress Testing: Stress Testing is a type of software testing performed to check the
robustness of the system under the varying loads.
 Scalability Testing: Scalability Testing is a type of software testing which is carried out to
check the performance of a software application or system in terms of its capability to scale
up or scale down the number of user request load.
9. Advantages of System Testing :
 Verifies the overall functionality of the system.
 Improves system reliability and quality.
 Increases user confidence and reduces risks.
 Facilitates early detection and resolution of bugs and defects.
10. Disadvantages of System Testing :
 Can be time-consuming and expensive.
 Can be complex and challenging, especially for large and complex systems.
 May require multiple test cycles to achieve desired results.

Q.9. Short Note:- Component Testing


Ans:-
1. Component Testing is used to test each component separately, including usability
testing; interactive evaluation is also done for each specific component. It is also known as
Module Testing, Program Testing, or Unit Testing.
2. In order to implement component testing, all the components or modules must be in an
individual, manageable state, and all the related components of the software
should be understandable to the user.
3. This type of testing provides a way of finding defects that occur within individual modules,
and also helps in certifying that each component of the software works correctly.
4. Component testing is one of the most frequently performed types of black-box testing,
executed by the Quality Assurance team.
5. The primary purpose of executing component testing is to validate the input/output
behavior of the test object, and to make sure the test object's functionality
works as per the required specification.
6. Component testing follows the step-by-step process described below:

 Component Testing Process:-


Step1: Requirement Analysis
The first step of component testing is requirement analysis, where the user requirement associated
with each component is detected.
Step2: Test Planning
Once the requirement analysis phase is done, we move to the next step of the component testing
process, which is test planning. In this phase, tests are designed to evaluate the requirements given
by the users/clients.
Step3: Test Specification
Once the test planning phase is done, we will move to the next phase, known as test specification.
Here, we identify the test cases that need to be executed and those that can be skipped.
Step4: Test Implementation
The fourth step in the component testing process is test implementation. When the test cases are
identified as per the user requirements or the specification, then only we can implement the test
cases.
Step5: Test Recording
When all the above steps have been completed successfully, we will go to the next step that is, Test
Recording. In this step of the component testing process, we have the records of those defects/bugs
discovered during the implementation of component testing.
Step6: Test Verification
Once the bugs or defects have been recorded successfully, we will proceed to the test verification
phase. It is the process of verifying whether the product fulfils the specification or not.
Step7: Completion
After completing all the above steps successfully, we come to the last step of the component
testing process. In this particular step, the results will be evaluated in order to deliver a good quality
product.
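A sketch of component testing in isolation using Python's `unittest` module; the `apply_discount` component and its requirement are invented for illustration:

```python
import unittest

# Hypothetical component under test: a standalone pricing function
# with a well-defined input/output contract (steps 1-2: requirement
# analysis and test planning happen against this contract).
def apply_discount(price, percent):
    """Return `price` reduced by `percent`, rounded to 2 decimals."""
    if not 0 <= percent <= 100:
        raise ValueError("percent must be between 0 and 100")
    return round(price * (1 - percent / 100), 2)

class TestApplyDiscount(unittest.TestCase):
    # Steps 3-4: test cases specified and implemented from the requirement.
    def test_typical_discount(self):
        self.assertEqual(apply_discount(200.0, 25), 150.0)

    def test_zero_discount(self):
        self.assertEqual(apply_discount(99.99, 0), 99.99)

    def test_invalid_percent_rejected(self):
        with self.assertRaises(ValueError):
            apply_discount(100.0, 150)

if __name__ == "__main__":
    # Steps 5-6: executing the suite records and verifies the results.
    suite = unittest.defaultTestLoader.loadTestsFromTestCase(TestApplyDiscount)
    unittest.TextTestRunner(verbosity=2).run(suite)
```

Each test exercises the component alone, with no dependency on the rest of the system, which is the defining property of component testing.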

Q.10. Explain Software Cost Estimation techniques.


Ans:-
1. Cost estimation is a technique used to arrive at cost estimates. A cost
estimate is the projected financial expenditure on the effort to develop and test software
in Software Engineering.
2. Cost estimation models are some mathematical algorithms or parametric equations that are
used to estimate the cost of a product or a project.
3. Various techniques or models are available for cost estimation, also known as Cost Estimation
Models as shown below :

 Empirical Estimation Technique –


Empirical estimation is a technique or model in which empirically derived formulas are used
for predicting the data that are a required and essential part of the software project planning
step.
These techniques are usually based on the data that is collected previously from a project and
also based on some guesses, prior experience with the development of similar types of
projects, and assumptions.
It uses the size of the software to estimate the effort. In this technique, an educated guess
of project parameters is made. Hence, these models are based on common sense.
For example Delphi technique and Expert Judgement technique.
 Heuristic Technique –
The word "heuristic" is derived from a Greek word that means “to discover”. A heuristic
technique is a technique or model used for solving problems, learning, or discovery through
practical methods aimed at achieving immediate goals.
These techniques are flexible and simple, allowing quick decisions through shortcuts and
good-enough calculations, most often when working with complex data. However, the decisions
made using this technique are not necessarily optimal.
In this technique, the relationship among different project parameters is expressed using
mathematical equations.
The most popular heuristic technique is the Constructive Cost Model (COCOMO).
 Analytical Estimation Technique –
Analytical estimation is a type of technique that is used to measure work. In this technique,
firstly the task is divided or broken down into its basic component operations or elements for
analyzing.
Second, if the standard time is available from some other source, then these sources are
applied to each element or component of work.
Third, if there is no such time available, then the work is estimated based on the experience
of the work. In this technique, results are derived by making certain basic assumptions about
the project.
Hence, the analytical estimation technique has some scientific basis. Halstead’s software
science is an example of an analytical estimation model.
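Halstead’s software science, mentioned above, derives its measures from operator/operand counts. A small sketch using the standard formulas (vocabulary n = n1 + n2, length N = N1 + N2, volume V = N·log2 n, difficulty D = (n1/2)·(N2/n2), effort E = D·V); the sample counts are made-up inputs:

```python
import math

def halstead(n1, n2, N1, N2):
    """Halstead's software-science metrics from operator/operand counts.

    n1, n2: number of distinct operators and operands.
    N1, N2: total occurrences of operators and operands.
    """
    vocabulary = n1 + n2
    length = N1 + N2
    volume = length * math.log2(vocabulary)       # size of implementation
    difficulty = (n1 / 2) * (N2 / n2)             # error-proneness proxy
    effort = difficulty * volume                  # mental effort to develop
    return {"vocabulary": vocabulary, "length": length,
            "volume": volume, "difficulty": difficulty, "effort": effort}

m = halstead(n1=10, n2=15, N1=40, N2=60)
print(f"Volume: {m['volume']:.1f}, Difficulty: {m['difficulty']:.1f}")
```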
 Other Cost Estimation Models are:

1. Function Point Analysis (FPA): This technique counts the number and complexity of functions
that a piece of software can perform to determine how functional and sophisticated it is. The
effort needed for development, testing and maintenance can be estimated using this model.
2. Putnam Model: This model is a parametric estimation model that estimates effort, time and
faults by taking into account the size of the program, the expertise of the
development team and other project-specific characteristics.
3. Price-to-Win Estimation: Often utilized in competitive bidding, this model is concerned with
projecting the expenses associated with developing a particular software project in order to
secure a contract. It involves looking at market dynamics and competitors.
4. Models Based on Machine Learning: Custom cost estimating models can be built using
machine learning techniques including neural networks, regression analysis and decision trees.
These models are based on past project data. These models are flexible enough to adjust to
changing data and project-specific features.
5. Function Points Model (IFPUG): A standardized technique for gauging the functionality of
software using function points is offered by the International Function Point Users Group
(IFPUG). It is employed to calculate the effort required for software development and
maintenance.
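Function Point Analysis from point 1 can be sketched as follows, using the standard average-complexity weights from the FPA counting tables; the component counts and degree-of-influence value are made-up example inputs:

```python
# Average-complexity weights from the standard FPA weighting tables.
WEIGHTS = {"inputs": 4, "outputs": 5, "inquiries": 4,
           "internal_files": 10, "external_interfaces": 7}

def function_points(counts, total_degree_of_influence):
    """Adjusted function points.

    counts: mapping of component type -> number of such components.
    total_degree_of_influence: sum of the 14 general system
    characteristic ratings, each 0..5 (so the total is 0..70).
    """
    ufp = sum(WEIGHTS[k] * v for k, v in counts.items())   # unadjusted FP
    vaf = 0.65 + 0.01 * total_degree_of_influence          # value adjustment factor
    return ufp * vaf

fp = function_points({"inputs": 20, "outputs": 15, "inquiries": 10,
                      "internal_files": 4, "external_interfaces": 2},
                     total_degree_of_influence=35)
print(f"Adjusted function points: {fp:.1f}")
```

The resulting function-point count can then be multiplied by an organization's historical productivity figure (e.g. person-hours per FP) to estimate effort.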

UNIT 5
1.EXPLAIN PROCESS IMPROVEMENT AND PRODUCT
QUALITY IMPROVEMENT?
ANS:
 PROCESS IMPROVEMENT:
1. It is defined as a sequence of various tasks, tools, and
techniques that are needed to plan and implement all the
improvement activities.
2. It includes three factors: People, Technology and Product.
3. It also includes improvement planning, implementation,
and evaluation.
4. It reduces the cost, increases development speed by
installing tools that reduce the time and work done by humans
or automate the production process.
5. It increases product quality.
6. It is created to achieve specific goals such as increasing the
development speed, achieving higher product quality and
many more.
7. It improves team performance by hiring the best people.
 PRODUCT QUALITY IMPROVEMENT:
There are several ways to improve the quality of the product.
Portability: A software device is said to be portable, if it can be
freely made to work in various operating system environments,
in multiple machines, with other software products, etc.
Usability: A software product has better usability if various
categories of users can easily invoke the functions of the
product.
Reusability: A software product has excellent reusability if
different modules of the product can quickly be reused to
develop new products.
Correctness: A software product is correct if various
requirements as specified in the SRS document have been
correctly implemented.
Maintainability: A software product is maintainable if bugs can
be easily corrected as and when they show up, new tasks can
be easily added to the product, and the functionalities of the
product can be easily modified, etc.

2.EXPLAIN THE CMMI PROCESS IMPROVEMENT


FRAMEWORK?
ANS:
A] Capability Maturity Model Integration (CMMI) is a successor
of CMM and is a more evolved model that incorporates best
components of individual disciplines of CMM like Software
CMM, Systems Engineering CMM, People CMM, etc.
B] Since CMM is a reference model of matured practices in a
specific discipline, it becomes difficult to integrate these
disciplines as per the requirements. This is why CMMI is used, as
it allows the integration of multiple disciplines as and when
needed.
Objectives of CMMI:
1. Fulfilling customer needs and expectations.
2. Value creation for investors/stockholders.
3. Market growth is increased.
4. Improved quality of products and services.
5. Enhanced reputation in Industry.
CMMI Representation – Staged and Continuous:
A representation allows an organization to pursue a different
set of improvement objectives. There are two representations
for CMMI:
 Staged Representation:
 uses a pre-defined set of process areas to
define improvement path.
 an improved path is defined by maturity level.
 maturity level describes the maturity of
processes in organization.
 Staged CMMI representation allows comparison
between different organizations for multiple
maturity levels.
 Continuous Representation:
 allows selection of specific process areas.
 uses capability levels that measures
improvement of an individual process area.
 In this representation, order of improvement of
various processes can be selected which allows
the organizations to meet their objectives and
eliminate risks.
CMMI Model – Maturity Levels:
In CMMI with staged representation, there are five maturity
levels described as follows:
1. Maturity level 1: Initial
 processes are poorly managed or controlled.
 unpredictable outcomes of processes involved.
 ad hoc and chaotic approach used.
 Lowest quality and highest risk.
2. Maturity level 2: Managed
 requirements are managed.
 processes are planned and controlled.
 Quality is better than Initial level.
3. Maturity level 3: Defined
 processes are well characterized and described
using standards, proper procedures, and
methods, tools, etc.
 Medium quality and medium risk involved.
 Focus is process standardization.
4. Maturity level 4: Quantitatively managed
 quantitative objectives for process performance
and quality are set.
 higher quality of processes is achieved.
 lower risk
5. Maturity level 5: Optimizing
 continuous improvement in processes and their
performance.
 lowest risk in processes and their performance.
CMMI Model – Capability Levels
A capability level includes relevant specific and generic
practices for a specific process area that can improve the
organization’s processes associated with that process area. For
CMMI models with continuous representation, there are six
capability levels as described below:
1. Capability level 0: Incomplete
 incomplete process – partially or not performed.
 one or more specific goals of process area are
not met.
 No generic goals are specified for this level.
 this capability level is same as maturity level 1.
2. Capability level 1: Performed
 process performance may not be stable.
 objectives of quality, cost and schedule may not
be met.
 a capability level 1 process is expected to
perform all specific and generic practices for
this level.
 only a start-step for process improvement.
3. Capability level 2: Managed
 process is planned, monitored and controlled.
 managing the process by ensuring that
objectives are achieved.
 objectives are both model and other including
cost, quality, schedule.
 actively managing processing with the help of
metrics.
4. Capability level 3: Defined
 a defined process is managed and meets the
organization’s set of guidelines and standards.
 focus is process standardization.
5. Capability level 4: Quantitatively Managed
 process is controlled using statistical and
quantitative techniques.
 quantitative objectives for process quality and
performance are established.
6. Capability level 5: Optimizing
 focuses on continually improving process
performance.
 performance is improved in both ways –
incremental and innovation.

3.EXPLAIN SERVICES AS REUSABLE COMPONENT?


ANS:
1. Services are reusable components that are independent (they
have no "requires" interface) and are loosely coupled.
2. A web service is: A loosely coupled, reusable software
component that encapsulates discrete functionality, which
may be distributed and programmatically accessed.
3. Service reusability is typically measured by how much
extra functionality a service contains that could be reused
in future, and how much of the service’s functionality
goes beyond the current requirements.
4. This encourages services that contain extra capabilities
built around possible future service usage scenarios.
5. However, often little is done to design the service logic in a
manner that it can be reused to automate multiple
business processes.
6. This results in more focus on equipping services with extra
functionality than on making the core
service logic reusable, leading to gold-plated services
whose development requires increased time and effort.
7. This additional functionality may not even fall within the
original functional context of the service and might not
even be used at all, as it was built without establishing its
needs.
8. The resulting SOA would not be able to provide true
service reusability as promised.
9. Another misconception about service reuse is that the
reuse relates to the frequency of its usage.
10. Contrary to this, the actual reuse relates to when the
service is used to automate multiple business processes.
11. This is the true service reuse as such a service eliminates
the need for creating altogether a new service and
becomes a part of multiple business processes without
being part of any particular business process.
Application
In order to concentrate on the quality of the logic, the service
reusability requires exploring the business domain as well as
the current technologies in use. Some of the considerations
that help in designing services with reusable logic include:
 Analysing the functional contexts of the current
services.
 Current legacy systems and any future plans of
decommissioning such legacy systems.
 The current requirements that the service is
required to address.
 Details about the corresponding business domain(s).
By conducting this analysis, we can arrive at the right type of
reusable logic that needs to be included within the service.

4.EXPLAIN COTS PRODUCT REUSE?


ANS:
Commercial Off-the-Shelf Systems (COTS)
1. COTS systems are complete application systems (not
components of some larger system) that can be deployed
and run as independent systems.
2. Reuse of COTS systems involve adapting and configuring
these systems for a specific operational environment.
3. Examples:
 Developing Excel spreadsheets to support project costing
 Configuring a patient record system for a specific medical practice
Requirements issues:
4. The top-down process of identifying requirements and then
building a system to deliver these requirements does not
work for COTS reuse.
5. Rather, requirements engineering is an iterative process.
Two types of systems:-
 COTS-solution systems
Choose a generic system that has been developed to deliver
some business function and adapt that system to the needs of
a particular organisation.
 COTS-integrated systems
Develop a new system by choosing a number of COTS systems
and integrating these to deliver combined functionality.
Additional software (glueware) is required to make these
systems work together.

COTS-solution systems

Domain-specific applications:-
Application systems that are designed to support a particular
business application.
For example, an appointment management system for
dentists.
Generic applications:-
Generic systems that can be used with a range of other
applications
For example, clients or spreadsheets
Benefits
 Designed for a specific application area so provide
extensive, integrated functionality to support that area.
 A specific user community may be created to share
knowledge of problems and how to use the system
effectively.
 Usually designed to support information sharing.

Problems
 The assumptions made by the vendor may not hold for
the user of the system.
 We shall see later a system that had problems because it
assumed that it would be used under a particular legal
system.
 The process of use may not match user processes.
 Systems may only be available on limited platforms.

5.EXPLAIN BENEFITS OF SaaS?


ANS:
1. The SaaS (Software as a Service) model is a business model in
which a company profits by offering cloud-based programs to clients.
2. Customers can access SaaS applications over an internet
network or remotely from any device or place, which can
have more benefits than traditional software business
models.
3. In these situations, the software provider has the
responsibility of building, installing, configuring and
updating the app.
4. Part of the SaaS model is allowing consumers to rent
software on a subscription basis, making monthly or yearly
payments on the product over a period instead of
purchasing it outright.

Benefits of the SaaS model


The versatile SaaS framework has various benefits for users and
software providers, including:
1. Increases cost savings
The SaaS model operates on a pay-as-you-go structure,
allowing consumers to pay subscription fees over time and
based on usage. Often, SaaS business products are more
affordable and workable for clients because renting software
on a subscription basis involves less financial risk than buying it
in full.
2. Provides free trials
The SaaS model makes it easy for providers to offer free trials
of their software. These are typically seven-, 14- or 30-day
periods in which customers can test a program for free to
determine whether they want to sign up and pay for a
subscription.
While using a free trial, consumers can determine whether the
product is right for their needs. SaaS businesses that offer this
benefit can generate more leads and expand their customer
base.
3. Improves cash flow
SaaS models can increase cash flow for providers and
customers. Cash flow is the amount of cash going in and out of
a company. Companies can benefit from access to a significant
amount of liquid assets in cash so they can purchase supplies,
pay debts, make investments and remain in operation overall.
4. Offers flexibility
SaaS models provide significant flexibility for the customer,
allowing them to only pay for the product when they're using it.
Businesses may provide diverse and custom payment plans
with different tiers for clients with varying financial situations.
5. Provides convenience
Products from SaaS businesses can provide convenience for
customers because the providers manage IT development.
Customers can usually access up-to-date, functional
applications simply by connecting to the internet and signing in.
This instant, easy access saves time and effort and eliminates
the need for clients to have IT expertise.
6. Increases engagement
Using a SaaS business model can make products and services
more affordable and accessible, especially to small- and
medium-sized businesses, increasing customer engagement
rates and improving revenue levels.

7. Updates continuously
It's easier to update software continuously using the SaaS
model because these updates are the provider's responsibility.
Some vendors create new versions of products as frequently
as every week or every few months to ensure a product
remains useful to current user needs.

6.EXPLAIN DISTRIBUTED COMPONENT


ARCHITECTURE WITH DIAGRAM?
ANS:
1. A distributed system is a collection of computer programs
spread across multiple computational nodes.
2. Each node is a separate physical device or software process
but works towards a shared objective.
3. This setup is also known as distributed computing systems
or distributed databases.
4. The main goal of a distributed system is to avoid
bottlenecks and eliminate central points of failure by
allowing the nodes to communicate and coordinate through
a shared network.
5. The following are the characteristics of the distributed
systems:
 Error detection: Failures or errors within a distributed system
are readily identified and detected.
 Simultaneous processing: Multiple machines in a distributed
system can perform the same function or task
simultaneously.
 Scalability: The computing and processing capacity of a
distributed system can be expanded by adding more
machines as required.
 Resource sharing: In a distributed computing system,
resources like hardware, software, or data can be shared
among multiple nodes.
 Transparency: Each node within the system can access and
communicate with other nodes without being aware of the
underlying complexities or differences in their
implementation.
Benefits Of Distributed Systems
 Scalability: Distributed systems offer improved
scalability as they can add more nodes to easily
accommodate an increase in workload.
 Improved reliability: It eliminates central points of failure and
bottlenecks. The redundancy of nodes ensures that even if
one node fails, others can take over its tasks.
 Enhanced performance: These systems can easily scale
horizontally by adding more nodes or vertically by increasing
a node's capacity. This scalability results in enhanced
performance and optimum output.
Drawbacks & Risks of Distributed Systems
 Requirement for specialized tools: Management of multiple
repositories in a distributed system requires the use of
specialized tools.
 Development sprawl and complexity: As the system's
complexity grows, organizing, managing, and improving a
distributed system can become challenging.
 Security risks: A distributed system is more vulnerable to
cyberattacks, as data processing is distributed across multiple
nodes that communicate with each other.

Q.7 Explain all level of DFD diagram


Ans:-
In software engineering, a DFD (data flow diagram) can be drawn to represent
the system at different levels of abstraction. Higher-level DFDs are
partitioned into lower levels containing more information and functional
elements. Levels in a DFD are numbered 0, 1, 2 or beyond. Here, we will see
mainly 3 levels in the data flow diagram: 0-level DFD, 1-level DFD,
and 2-level DFD.
Data Flow Diagrams (DFD) are graphical representations of a system that
illustrate the flow of data within the system. DFDs can be divided into
different levels, which provide varying degrees of detail about the system.
The following are the four levels of DFDs:
1. Level 0 DFD: This is the highest-level DFD, which provides an overview
of the entire system. It shows the major processes, data flows, and data
stores in the system, without providing any details about the internal
workings of these processes.
2. Level 1 DFD: This level provides a more detailed view of the system by
breaking down the major processes identified in the level 0 DFD into sub-
processes. Each sub-process is depicted as a separate process on the level
1 DFD. The data flows and data stores associated with each sub-process
are also shown.
3. Level 2 DFD: This level provides an even more detailed view of the
system by breaking down the sub-processes identified in the level 1 DFD
into further sub-processes. Each sub-process is depicted as a separate
process on the level 2 DFD. The data flows and data stores associated
with each sub-process are also shown.
4. Level 3 DFD: This is the most detailed level of DFDs, which provides a
detailed view of the processes, data flows, and data stores in the system.
This level is typically used for complex systems, where a high level of
detail is required to understand the system. Each process on the level 3
DFD is depicted with a detailed description of its input, processing, and
output. The data flows and data stores associated with each process are
also shown.
The choice of DFD level depends on the complexity of the system and the
level of detail required to understand the system. Higher levels of DFD
provide a broad overview of the system, while lower levels provide more
detail about the system’s processes, data flows, and data stores. A
combination of different levels of DFD can provide a complete
understanding of the system.
 0-level DFD: It is also known as a context diagram. It’s designed to be
an abstraction view, showing the system as a single process with its
relationship to external entities. It represents the entire system as a single
bubble with input and output data indicated by incoming/outgoing
arrows.
 1-level DFD: In 1-level DFD, the context diagram is decomposed into
multiple bubbles/processes. In this level, we highlight the main functions
of the system and breakdown the high-level process of 0-level DFD into
subprocesses.

 2-level DFD: 2-level DFD goes one step deeper into parts of 1-level DFD.
It can be used to plan or record the specific/necessary detail about the
system’s functioning.

Q.8 Explain Peer to Peer Architecture


In the P2P network architecture, the computers connect with each other in a
workgroup to share files, and access to internet and printers.
 Each computer in the network has the same set of responsibilities and
capabilities.
 Each device in the network serves as both a client and server.
 The architecture is useful in residential areas, small offices, or small
companies where each computer acts as an independent workstation and
stores the data on its hard drive.
 Each computer in the network has the ability to share data with other
computers in the network.
 The architecture is usually composed of workgroups of 12 or fewer
computers.

 Peer to peer architecture is a type of computer networking


architecture in which there is no division or distinction of abilities
amidst the various workstations or nodes of a network.
 In P2P each computer can act as both the server and the client as
the need demands.
 Although P2P has a wide array of applications, its most important
one is the ability to distribute content efficiently.
 Things that facilitate on-demand delivery of content such as
software publication and distribution, streaming and peer casting
for multicasting streams, and content delivery networks, all come
under this.
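The dual client/server role described above can be sketched in a few lines. The `Peer` class and its lookup scheme are invented for illustration; a real P2P system would use sockets and a discovery protocol rather than direct object references:

```python
class Peer:
    """Each peer both serves content (server role) and requests it
    (client role) -- there is no central server."""
    def __init__(self, name):
        self.name = name
        self.files = {}          # locally stored content
        self.neighbors = []      # directly connected peers

    def connect(self, other):
        # Symmetric link: both ends have equal capabilities.
        self.neighbors.append(other)
        other.neighbors.append(self)

    def serve(self, filename):
        """Server role: answer a request for locally held content."""
        return self.files.get(filename)

    def request(self, filename):
        """Client role: ask each neighbor until the content is found."""
        for peer in self.neighbors:
            data = peer.serve(filename)
            if data is not None:
                return data
        return None

a, b = Peer("A"), Peer("B")
a.connect(b)
b.files["song.mp3"] = b"...bytes..."
print(a.request("song.mp3") is not None)   # A fetches content served by B
```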

Q 9 Explain class based modeling


Ans:-
Class-based modeling identifies the classes, attributes and relationships that
the system will use. In the airline application example, the traveler/user
and the boarding pass represent classes.
Class-based modeling represents the objects that the system manipulates
and the operations applied to them.
Here are some details about class-based modeling:
 Class: A class model shows all the classes present in the system. It
shows the attributes and the behavior associated with the objects.
 Attributes: An attribute is a characteristic of the class, something that
describes it. For example, the travel book's number of pages or retail
price are some of its attributes.
 Operations: Operations are the behaviors that define the class.
 Class diagram: A class diagram is used to show the class model.
Class Modeling focuses on static system structure in terms of classes
(Class, Data Type, Interface and Signal items), Associations and on
characteristics of Classes (Operations and Attributes).

Modeler provides Class Diagrams and Composite Structure Diagrams to


support the definition of the Class Model:

• The Class Diagram is the primary diagram for defining Classes and their
Attributes, Operations and relationships. The Class Diagram notation is
based on the Unified Modeling Language (UML).

• The Composite Structure Diagram defines the structure of Classes, in


particular, showing how Class parts and ports connect with each other.
The
Composite Structure Diagram notation is based on the UML 2.0 notation,
with the addition of SysML IO Flows.

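The class model above can be sketched directly in code: classes bundle attributes with operations, and object references express associations. The `Traveler` and `BoardingPass` names follow the airline example in the text, but their members are illustrative assumptions:

```python
class BoardingPass:
    def __init__(self, seat, gate):
        self.seat = seat            # attribute
        self.gate = gate            # attribute

class Traveler:
    def __init__(self, name):
        self.name = name            # attribute
        self.passes = []            # association: Traveler holds BoardingPass

    # Operations: the behaviors that define the class.
    def check_in(self, seat, gate):
        bp = BoardingPass(seat, gate)
        self.passes.append(bp)
        return bp

    def has_boarding_pass(self):
        return len(self.passes) > 0

t = Traveler("Asha")
t.check_in(seat="12C", gate="B4")
print(t.has_boarding_pass())   # True
```

A class diagram would show the same information graphically: two class boxes with their attribute and operation compartments, joined by an association line.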
Q.10. Explain Distributed system issues.
Ans:-
1. Distributed System is a collection of autonomous computer systems that are physically
separated but are connected by a centralized computer network that is equipped with
distributed system software.
2. These are used in numerous applications, such as online gaming, web applications, and cloud
computing.
3. However, creating a distributed system is not simple, and there are a number of design
considerations to take into account.
4. The following are some of the major design issues of distributed systems:
 Heterogeneity: Heterogeneity is applied to the network, computer hardware, operating
system, and implementation of different developers. A key component of the heterogeneous
distributed system client-server environment is middleware. Middleware is a set of services
that enables applications and end-user to interact with each other across a heterogeneous
distributed system.
 Openness: The openness of the distributed system is determined primarily by the degree to
which new resource-sharing services can be made available to the users. Open systems are
characterized by the fact that their key interfaces are published. It is based on a uniform
communication mechanism and published interface for access to shared resources. It can be
constructed from heterogeneous hardware and software.
 Scalability: The system should remain efficient even with a significant
increase in the number of users and resources connected. It shouldn’t matter if a program has
10 or 100 nodes; performance shouldn’t vary. Scaling a distributed system requires
consideration of a number of elements, including size, geography, and management.
 Security: The security of an information system has three components: confidentiality,
integrity, and availability. Encryption protects shared resources and keeps sensitive
information secret when transmitted.
 Failure Handling: When faults occur in the hardware or the software program, they may
produce incorrect results, or they may stop before completing the intended
computation, so corrective measures should be implemented to handle such cases. Failure
handling is difficult in distributed systems because failure is partial, i.e., some components
fail while others continue to function.
 Concurrency: There is a possibility that several clients will attempt to access a shared resource
at the same time. Multiple users make requests on the same resources, i.e. read, write, and
update. Each resource must be safe in a concurrent environment. Any object that represents
a shared resource in a distributed system must ensure that it operates correctly in a
concurrent environment.
 Transparency: Transparency ensures that the distributed system should be perceived as a
single entity by the users or the application programmers rather than a collection of
autonomous systems, which is cooperating. The user should be unaware of where the
services are located and the transfer from a local machine to a remote one should be
transparent.
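The concurrency issue above can be illustrated with a shared resource protected by a lock, so that each read-modify-write update is atomic; the bank-balance scenario is an invented example:

```python
import threading

# Shared resource accessed by several concurrent "clients"; the lock
# ensures each update happens atomically, so no increments are lost.
balance = 0
lock = threading.Lock()

def deposit(amount, times):
    global balance
    for _ in range(times):
        with lock:               # only one thread updates at a time
            balance += amount

threads = [threading.Thread(target=deposit, args=(1, 10_000)) for _ in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()

print(balance)   # 40000 -- without the lock, updates could be lost
```

In a distributed system the same correctness requirement holds, but the mutual exclusion must be provided by distributed mechanisms (e.g. distributed locks or transactions) rather than an in-process lock.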
