Software Engineer Assignment 1 to 5
previous layer.
Layered technology is divided into four parts:
1. A quality focus:
It defines the continuous process improvement principles of
software. It provides integrity, which means providing security to
the software so that data can be accessed only by authorized
persons and no outsider can access it. It also focuses on
maintainability and usability.
2. Process:
It is the foundation or base layer of software engineering. It is the
key that binds all the layers together and enables the development
of software on time. Process defines a
framework that must be established for the effective delivery of
software engineering technology. The software process covers all
the activities, actions, and tasks required to be carried out for
software development.
3. Method: During the process of software development, the
answers to all “how-to-do” questions are given by methods. They
cover all the tasks, including communication, requirement
analysis, design modelling, program construction, testing, and
support.
4. Tools: Tools provide automated or semi-automated support for
the process and the methods so that they can be applied
consistently.
Advantages
Improves the productivity
Functionality:
- It refers to the degree of performance of the software
against its intended purpose.
- Functionality refers to the set of features and capabilities
that a software program or system provides to its users.
- Examples of functionality in software include:
Data storage and retrieval
Data processing and manipulation
User interface and navigation
Usability:
It refers to the extent to which the software can be used with ease,
i.e., the amount of effort or time required to learn how to use the
software.
Maintainability:
It refers to the ease with which modifications can be made in a
software system to extend its functionality, improve its
performance, or correct errors.
Portability:
A set of attributes that bears on the ability of software to be
transferred from one environment to another with minimum
changes.
3. Difference between Generic Products and
Customized Products
ANS:
S.No. | Generic software product development | Custom software development
SDLC Cycle
SDLC Cycle represents the process of developing software. SDLC
framework includes the following steps:
The senior members of the team perform it with inputs from all the
stakeholders and domain experts or SMEs in the industry.
For Example,
The next phase brings together all the knowledge of requirements,
analysis, and design of the software project. This phase is the product of
the last two, i.e., inputs from the customer and requirement gathering.
Stage 5: Testing
Stage 6: Deployment
Once the software is certified and no bugs or errors are reported, it is
deployed.
Stage 7: Maintenance
Once the client starts using the developed system, the real issues come
up and need to be solved from time to time.
This procedure, in which care is taken of the developed product, is
known as maintenance.
Advantages
o Cost-effective
o Time-efficient
o Enhances teamwork and coordination, defines suitable roles for
employees, and increases workplace transparency.
o Low risk during project implementation
Disadvantages
o The project may take longer and cost more if the planning is not
done properly.
o Correcting problems in code can occasionally take a long time and
cause deadlines to be missed if there are many of them.
5. What are the different types of
requirements?
ANS:-
1. Requirements are specifications that define
the functions, features, and characteristics of a
software system.
2. They serve as the foundation for designing
and developing a system that meets the needs of
its users.
3. Requirements can be categorized into
different types based on various criteria.
1. Functional requirements:-
These specify the functions and features that the
software system must provide. They describe
what the system is supposed to do in terms of
input, processing, and output.
2. Non-functional requirements:-
These define the quality attributes and constraints
of the system, such as performance, reliability,
scalability, usability, and security. Non-functional
requirements are often as critical as functional
requirements for the success of the system.
3. Domain requirements:-
These arise from the application domain of the system and reflect
its characteristics, for example domain-specific rules for data
storage, retrieval, and processing.
Advantages:-
1. Better organization: Classifying software
requirements helps organize them into groups that
are easier to manage, prioritize, and track
throughout the development process.
2. Improved communication: Clear classification of
requirements makes it easier to communicate them
to stakeholders, developers, and other team
members. It also ensures that everyone is on the
same page about what is required.
3. Increased quality: By classifying requirements,
potential conflicts or gaps can be identified early in
the development process. This reduces the risk of
errors, omissions, or misunderstandings, leading to
higher quality software.
Disadvantages:-
Advantages:-
1. Simple and easy to understand and use
2. Phases are processed and completed one at a time.
3. Works well for smaller projects where
requirements are very well understood.
4. Clearly defined stages.
Disadvantages:-
1. No working software is produced until late during
the life cycle.
2. High amounts of risk and uncertainty.
3. It is difficult to measure progress within stages.
4. Not a good model for complex and object-oriented
projects.
7. Write a short note on the spiral model.
ANS:
o The Spiral Model is a software development
methodology that combines elements of both the
iterative and waterfall models.
o It involves a series of iterations where the project
progresses through planning, risk analysis,
development, and evaluation.
o It's like a spiral because each iteration builds upon
the previous one, allowing for continuous
improvement and refinement.
o It's a flexible approach that helps manage risks
and adapt to changing requirements.
1. Planning:
The first phase of the Spiral Model is the planning
phase, where the scope of the project is determined
and a plan is created for the next iteration of the spiral.
2. Risk Analysis:
In the risk analysis phase, the risks associated with the
project are identified and evaluated.
3. Engineering:
In the engineering phase, the software is developed
based on the requirements gathered in the previous
iteration.
4. Evaluation:
In the evaluation phase, the software is evaluated to
determine if it meets the customer’s requirements and
if it is of high quality.
5. Review and Planning:
At the end of each spiral, a review is conducted to
assess the progress and identify areas for improvement.
Based on the review, plans for the next iteration are
refined or modified.
Principles of Agile:
The highest priority is to satisfy the customer
through early and continuous delivery of valuable
software.
It welcomes changing requirements, even late in
development.
Deliver working software frequently, from a couple
of weeks to a couple of months, with a preference
for the shortest timescale.
Build projects around motivated individuals. Give
them the environment and the support they need
and trust them to get the job done.
Working software is the primary measure of
progress.
Simplicity, the art of maximizing the amount of work
not done, is essential.
Advantages:
The use of reusable components helps to
reduce the cycle time of the project.
Feedback from the customer is available at
the initial stages.
This model is flexible for change.
It reduces development time.
It increases the reusability of features.
Disadvantages:
It requires highly skilled designers.
Not every application is compatible with RAD.
We cannot use the RAD model for smaller projects.
It requires user involvement.
10. Explain Prototype Model.
ANS:-
In this, we will collect the requirements from the
customer and prepare a prototype (sample), and
get it reviewed and approved by the customer.
The prototype is just the sample or a dummy of
the required software product.
Only if all the mentioned modules are present will the
developers and testers perform prototype testing.
When we use the Prototype model
Whenever the customer is new to the software
industry or when he doesn't know how to give the
requirements to the company.
When the developers are new to the domain.
Design compatibility
Definition of requirements
Management of projects
Cost analysis
Scheduling
Ease of operations
For example, if a company has a machine that operates 8 hours a day and is
down for maintenance 2 hours a day, the equipment availability would be
80%. This is important for businesses to track as it impacts production and
can affect the bottom line.
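As a quick arithmetic check of the example above (a minimal sketch; the 8 operating hours and 2 downtime hours are the figures from the example):

# Availability = uptime / (uptime + downtime), using the example's figures.
uptime_hours = 8       # hours the machine operates per day
downtime_hours = 2     # hours it is down for maintenance per day
availability = uptime_hours / (uptime_hours + downtime_hours)
print(f"Equipment availability: {availability:.0%}")   # -> 80%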
Reliability:
Reliability is a measure of the probability that the system will meet defined
performance standards in performing its intended function during a specified
interval.
5. mission critical,
6. business critical
7. security critical.
1.Safety critical
Safety critical systems deal with scenarios that may lead to loss of life,
serious personal injury, or damage to the natural environment. Examples
of safety-critical systems are a control system for a chemical
manufacturing plant, aircraft, the controller of an unmanned train metro
system, a controller of a nuclear plant, etc.
2.Mission critical
A mission-critical system is one whose failure would prevent the
completion of the overall system or project objectives, or of one of the
goals for which the system was designed. Examples of mission-critical
systems are a navigational system
for a spacecraft, software controlling a baggage handling system of an
airport, etc.
3. Business critical
Business critical systems are programmed to avoid significant tangible or
intangible economic costs; e.g., loss of business or damage to reputation.
This is often due to the interruption of service caused by the system
being unusable. Examples of business-critical systems are the customer
accounting system in a bank, stock-trading system, ERP system of a
company, Internet search engine, etc.
4.Security critical
Security critical systems deal with the loss of sensitive data through theft
or accidental loss.
Example
Automatic braking, cruise control, lane control, computer vision, obstacle
recognition, electronic engine control modules, etc. Every one of these is a
life-critical system, where a failure can be fatal.
Advantages of Critical Path Method (CPM):
It has the following advantages:
It figures out the activities which can run parallel to each other.
It helps the project manager in identifying the most critical
elements of the project.
It gives a practical and disciplined base which helps in determining
how to reach the objectives.
CPM is effective in managing new projects.
CPM can strengthen team perception if it is applied properly.
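To make the idea of a critical path concrete, here is a minimal sketch of the forward/backward-pass computation on a small, hypothetical task network (task names, durations, and dependencies are assumed for illustration, not taken from the assignment):

# Minimal sketch of a critical-path computation on a hypothetical task graph.
durations = {"A": 3, "B": 2, "C": 4, "D": 2}                 # task durations (days)
preds = {"A": [], "B": ["A"], "C": ["A"], "D": ["B", "C"]}   # predecessors
order = ["A", "B", "C", "D"]                                 # topological order

# Forward pass: earliest start (ES) and earliest finish (EF).
ES, EF = {}, {}
for t in order:
    ES[t] = max((EF[p] for p in preds[t]), default=0)
    EF[t] = ES[t] + durations[t]
project_end = max(EF.values())

# Backward pass: latest finish (LF) and latest start (LS).
LS, LF = {}, {}
for t in reversed(order):
    succs = [s for s in order if t in preds[s]]
    LF[t] = min((LS[s] for s in succs), default=project_end)
    LS[t] = LF[t] - durations[t]

# Tasks with zero slack (ES == LS) lie on the critical path.
critical_path = [t for t in order if ES[t] == LS[t]]
print(critical_path, project_end)   # -> ['A', 'C', 'D'] 9

Here B runs in parallel with C, and the critical path A-C-D determines the 9-day project duration.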
Ans:
Requirements engineering (RE) refers to the process of defining,
documenting, and maintaining requirements in the engineering design
process. Requirement engineering provides the appropriate mechanism
to understand what the customer desires, analyze the need, assess
feasibility, negotiate a reasonable solution, specify the solution clearly,
validate the specification, and manage the requirements as they are
transformed into a working system. Thus,
requirement engineering is the disciplined application of proven
principles, methods, tools, and notation to describe a proposed system's
intended behavior and its associated constraints.
Requirement Engineering Process
It is a four-step process, which includes -
1. Feasibility Study
2. Requirement Elicitation and Analysis
3. Software Requirement Specification
4. Software Requirement Validation
1. Feasibility Study:
The objective behind the feasibility study is to create the reasons for
developing the software that is acceptable to users, flexible to change
and conformable to established standards.
Types of Feasibility:
1. Technical Feasibility - Technical feasibility evaluates the current
technologies, which are needed to accomplish customer
requirements within the time and budget.
2. Operational Feasibility - Operational feasibility assesses the range
in which the required software performs a series of levels to solve
business problems and customer requirements.
3. Economic Feasibility - Economic feasibility decides whether the
necessary software can generate financial profits for an
organization.
Ans:-
4. To accumulate the details needed for taking that decision, one can
follow these processes:
Requirements Identification: In this, the requirement must be
uniquely identified so that it can be cross-referenced with
other requirements. Here, one can learn what is important and
required and what is not and it also helps to establish a
foundation for product vision, scope, cost, and schedule.
Requirement change management process: This is the set of
activities that assess the impact and cost of changes.
Traceability policies: The main purpose of this policy is to keep
a record of the defined relationships between each
requirement and the system designs which will help to
minimize the risks.
Tool support: Tools like MS Excel, spreadsheets, or a simple
database system can be used.
Now, after the details have been gathered for requirement
management, it is time to see whether the change needs to be
implemented or not. For this, we use the Requirement Change
Management process. In this, the three basic steps that we follow are:
Problem analysis and change specification
Change analysis and costing
Change implementation
2. Performance Modeling
Disadvantages
Advantages
5.The context diagram shows how bank managers send open and close
account requests to the bank systems.
6.Also, the level 0 data-flow diagram shows how third parties initiate the
money transfer to the bank system.
Complex data sets can be saved and retrieved quickly and easily.
These diagrams also identify the interactions between the system and its
actors. The use cases and actors in use-case diagrams describe what the
system does and how the actors use it, but not how the system operates
internally.
Use case diagrams are intended to provide all stakeholders, including
clients and project managers as well as developers and engineers, with a
high-level view of the subject system and communicate the highest level
system requirements in non-technical terms.
The purpose of use case diagrams is to model what the system should do
(What) without considering how it should be done at this stage (How) and
to view the use of the system from the user's perspective (external view)
rather than internally (implementation of these features).
Use Case diagrams have only 4 major elements:
1. The actors that the system you are describing interacts with,
2. The system itself (system boundary - the rectangle),
3. The use cases, or services, that the system knows how to perform,
and
4. The lines (links) that represent relationships between these elements.
Flow Chart: It has only a single type of arrow, which is used to show the
control flow. DFD: It defines the flow and process of data input, data
output, and data storage.
Flow Chart: It deals with the physical aspect of the action. DFD: It deals
with the logical aspect of the action.
1 Call-return Model
The call-return model is a top-down subroutine architecture where
control starts at the top of a subroutine hierarchy and moves
downwards. It is applicable to sequential systems. This familiar
model is embedded in programming languages such as C, Ada and
Pascal. Control passes from a higher-level routine in the hierarchy
to a lower-level routine. This call-return model may be used at the
module level to control functions or objects.
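A tiny, hypothetical sketch of the call-return style: a higher-level routine calls lower-level routines, and control returns back up the hierarchy after each call.

# Hypothetical sketch of the call-return (top-down subroutine) style.
def read_input():                # lower-level routine
    return [3, 1, 2]

def sort_values(values):         # lower-level routine
    return sorted(values)

def produce_report(values):      # lower-level routine
    print("sorted:", values)

def main():                      # higher-level routine at the top of the hierarchy
    data = read_input()          # control passes down, then returns here
    ordered = sort_values(data)
    produce_report(ordered)

main()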
2 Manager Model
The manager model is applicable to concurrent systems. One
system component controls the stopping, starting and
coordination of other system processes. It can be implemented in
sequential systems as a case statement.
4 Interrupt-driven Model
The interrupt-driven model is used in real-time systems where
interrupts are detected by an interrupt handler and passed to some
other component for processing. This model is used in real-time
systems where an immediate response to some event is necessary.
The advantage is that it allows very fast responses to events to be
implemented. The disadvantage is that it is complex to program
and difficult to validate.
1.Visual Design:
2.Interaction Design:
Designing intuitive navigation structures to help
users easily move through the application.
Providing feedback to users through visual cues,
animations, or messages to inform them about the
status of their actions. Ensuring the interface
responds promptly to user interactions, creating a
seamless and interactive experience.
3.Usability:
Conducting usability testing to identify and
address any issues related to user interaction,
navigation, and comprehension. Designing
interfaces that are accessible to users with
disabilities, ensuring inclusivity.
1.Inconsistency:
Problem:Inconsistent design elements and
patterns
across different parts of the application
can confuse users and hinder the overall
user experience.
Solution:Establish and adhere to a
consistent design language and pattern
library.
2.Overcrowded Interfaces:
Problem:Cluttered interfaces with too many
elements can overwhelm users and make it
challenging for them to focus on essential tasks.
Solution:Prioritize content, declutter the
interface, and use whitespace effectively to
create a visually balanced layout.
3.Poor Navigation:
Problem:Complex or unclear navigation
structures can lead to user confusion and
frustration.
Solution:Design intuitive navigation paths,
provide clear labels, and offer visual cues for
navigation elements.
4.Lack of Responsiveness:
Problem:Interfaces that are slow or
unresponsive can lead to a negative user
experience.
Solution:Optimize performance, implement
responsive design principles, and ensure
smooth interactions.
1.Clarity:
Explanation:The interface should be clear and easy
to understand. Users should be able to quickly
grasp the purpose and functionality of each
element.
Application:Use straightforward language, intuitive
icons, and logical layout to enhance clarity.
2.Consistency:
Explanation:Maintain a consistent design across the
entire application. Consistency in layout, terminology,
and visual elements helps users build a mental model
of the system.
Application:Use consistent navigation patterns,
color schemes, and typography throughout the
application.
3.Feedback:
Explanation:Provide immediate and informative
feedback to users for their actions. Feedback helps
users
understand the system's response and ensures a sense
of control.
Application:Use visual cues, animations, and messages
to confirm or inform users about the outcome of
their
interactions
4.Efficiency:
Explanation:Design interfaces that allow users to
complete tasks quickly and with minimal effort.
Reduce the number of steps required to accomplish
common tasks.
Application:Streamline workflows, provide shortcuts,
and optimize the placement of frequently used
features.
5.Flexibility:
Explanation:Design interfaces that cater to a diverse
range of users and usage scenarios. Allow users to
customize settings and adapt the interface to
their preferences.
Application:Provide user preferences,
customizable themes, and adjustable font sizes
to accommodate different user needs.
6.Hierarchy:
Explanation:Establish a clear visual hierarchy to guide
users through the information and actions in the
interface. Prioritize content based on importance.
Application:Use size, color, contrast, and spacing to
emphasize key elements and create a visual hierarchy
7.Simplicity:
Explanation: A simple design is a user-friendly
interface that simplifies website usage. Achieve it by
using user-friendly design elements that people can
handle without instructions, simplifying their way
through every stage of the buying cycle.
Application: Leverage simple and consistent design
concepts people already know, like clear visuals and
navigation structures, to ensure that your website is
used in the ways that you want.
1.Project Planning:
Definition:The process of defining the project
scope, objectives, timelines, and resource
requirements.
Activities:Develop a project plan, create a work
breakdown structure (WBS), estimate effort and costs,
identify risks and define milestones.
2.Risk Management:
Definition:Identifying potential risks that could impact
the project's success and developing strategies to
mitigate or manage those risks.
Activities:Risk identification, risk analysis, risk response
planning, and ongoing monitoring and control.
3.Resource Management:
Definition:Ensuring that the necessary resources,
including personnel, equipment, and tools, are available
and
effectively utilized throughout the project.
Activities:Resource allocation, task assignments,
tracking resource usage, and managing team
dynamics.
4.Scheduling:
Definition:Creating a timeline for project activities
and tasks, including start and end dates for each
phase.
Activities:Develop project schedules, set
milestones, allocate time for tasks, and establish
dependencies between tasks.
5.Communication Management:
Definition:Establishing effective communication
channels and mechanisms to ensure that information
is shared among team members, stakeholders, and
other relevant parties.
Activities:Develop a communication plan, hold
regular status meetings, provide updates, and
address issues promptly.
6.Quality Management:
Definition:Ensuring that the software product
meets the specified quality standards and
requirements.
Activities:Define quality metrics, establish
testing processes, conduct reviews and
inspections, and implement quality assurance
practices.
7.Change Management:
Definition:Managing changes to project scope,
requirements, and deliverables in a controlled
and systematic manner
Activities:Define change control processes, assess
the impact of changes, obtain approvals, and
update project documentation
9.Documentation:
Definition:Creating and maintaining project
documentation,
including requirements specifications, design
documents, and project plans.
Activities:Document project processes, decisions, and
outcomes to ensure transparency and provide a
reference for future phases.
2.Risk Analysis:
Process:Assess the likelihood and impact of each
identified risk. This involves assigning a probability of
occurrence and determining the potential
consequences on project objectives.
Methods:Qualitative analysis (using probability and
impact scales), quantitative analysis (using
mathematical
models), and expert judgment.
3.Risk Prioritization:
Process:Prioritize risks based on their severity and
potential impact on the project. Focus on addressing
high- priority risks first.
Methods:Use risk matrices, risk heat maps, or
other prioritization techniques to categorize
risks.
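A small, illustrative sketch of exposure-based prioritization, where risk exposure is taken as probability times impact (the risks, probabilities, and impact scores below are hypothetical):

# Illustrative sketch: prioritize risks by exposure = probability * impact.
risks = [
    {"name": "Key developer leaves", "probability": 0.3, "impact": 8},
    {"name": "Requirements change late", "probability": 0.6, "impact": 5},
    {"name": "Third-party service unstable", "probability": 0.2, "impact": 9},
]
for risk in risks:
    risk["exposure"] = risk["probability"] * risk["impact"]

# Highest-exposure (high-priority) risks are addressed first.
for risk in sorted(risks, key=lambda r: r["exposure"], reverse=True):
    print(f'{risk["name"]}: exposure {risk["exposure"]:.1f}')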
4.Risk Response Planning:
Process:Develop strategies to respond to each
identified risk. There are four main response strategies:
Avoidance, Mitigation, Transfer, and Acceptance
(AMTA).
Methods:Develop contingency plans, establish risk
budgets, and define specific actions to be taken for
each identified risk.
5.Risk Mitigation:
Process:Implement proactive measures to reduce
the likelihood or impact of identified risks.
Mitigation
strategies aim to address the root causes of risks.
Methods:Implementing early prototyping,
conducting thorough testing, diversifying
resources, or improving communication can be
examples of risk mitigation strategies.
6.Risk Monitoring and Control:
Process:Regularly monitor identified risks throughout
the project lifecycle. Assess the effectiveness of risk
response strategies and update the risk management
plan as needed.
Methods:Conduct regular risk reviews, track key risk
indicators, and update risk registers. Ensure that the
team remains vigilant for new risks that may emerge.
1.Quality Planning:
Definition:Quality planning involves defining the quality
standards, processes, and metrics that will be used to
ensure the product meets the specified requirements
and user expectations.
2.Quality Control:
Definition:Quality control focuses on monitoring
and verifying that the processes are being
followed and the product is meeting the defined
quality standards.
Activities: Conduct inspections and reviews of work
products. Perform testing, including unit testing,
integration testing, and system testing. Use
automated testing tools to ensure consistent and
repeatable testing. Implement continuous
integration practices to catch defects early.
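As a minimal illustration of the unit-testing activity mentioned above (the function under test and the test cases are hypothetical, not part of the assignment):

# Minimal unit-test sketch using Python's built-in unittest module.
import unittest

def apply_discount(price, percent):
    """Return the price after applying a percentage discount."""
    return round(price * (1 - percent / 100), 2)

class ApplyDiscountTest(unittest.TestCase):
    def test_ten_percent_discount(self):
        self.assertEqual(apply_discount(200.0, 10), 180.0)

    def test_zero_discount(self):
        self.assertEqual(apply_discount(99.99, 0), 99.99)

if __name__ == "__main__":
    unittest.main()   # such tests can also run in a continuous-integration pipeline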
3.Quality Reviews:
Definition: Quality reviews involve systematic
examinations of work products to identify and
correct defects, ensure adherence to standards, and
improve overall quality.
Activities: Conduct code reviews to ensure code
quality and maintainability. Perform design reviews
to validate the architecture and design decisions.
Hold walkthroughs or inspections for documentation
and other artifacts. Use peer reviews for
requirements and other project documents.
Software Metrics:
A metric is a measurement of the level to which any
attribute belongs to a system, product or
process.
Software metrics is a quantifiable or countable
assessment of the attributes of a software product.
There are 4 functions related to software metrics:
1.Planning
2.Organizing
3.Controlling
4.Improving
Software Measurement:
A measurement is a manifestation of the size, quantity,
amount, or dimension of a particular attribute of a
product or process. Software measurement is a
quantified attribute of a characteristic of a software
product or the software process. It is a core activity
within software engineering. The software measurement
process is defined and governed by ISO standards.
Importance of Software Metrics to measure in
Software Engineering:
1.Quality Assurance:
Purpose:Metrics help in assessing and ensuring
the quality of the software product.
Example:Defect density and code coverage metrics
can indicate the effectiveness of testing efforts.
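A rough, hypothetical illustration of the defect-density metric (the defect and size figures are assumed, not real project data):

# Defect density = number of confirmed defects / size in KLOC.
defects_found = 45            # assumed number of confirmed defects
lines_of_code = 30_000        # assumed size of the code base
defect_density = defects_found / (lines_of_code / 1000)
print(f"Defect density: {defect_density:.2f} defects/KLOC")   # -> 1.50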
2.Performance Monitoring:
Purpose:Metrics provide a means to monitor and
evaluate the performance of the development team
and the project as a whole.
Example:Velocity in Agile development measures the
team's productivity and helps in planning future
iterations.
3.Resource Management:
Purpose:Metrics assist in resource allocation,
helping organizations optimize their use of time,
budget, and personnel.
Example:Effort estimation metrics provide insights
into resource requirements for project planning.
4.Decision Making:
5.Process Improvement:
Purpose:Metrics help in identifying areas of
improvement in software development processes.
Example:Defect density and cycle time metrics can
highlight areas where process improvements are
needed.
Product:
Process:
A process is a set of sequenced steps that have to be followed to create a project.
The main purpose of a process is to improve the quality of the project. The
process serves as a template that can be used through the creation of its
instances and is used to direct the project.
The main difference between a process and a product is that the process is a
set of steps that guides the project to achieve a suitable product, while the
product is the result of the project and may be built by a wide variety of
people.
Verification -
Verification in Software Testing is a
process of checking documents, design, code,
and program in order to check if the
software has been built according to the
requirements or not. The main goal of
the verification process is to ensure the
quality of the software application, design,
architecture, etc. The verification process
involves activities like reviews, walkthroughs,
and inspections.
Validation -
Validation in Software Engineering is a
dynamic mechanism of testing and validating if
the software product actually meets the
exact needs of the customer or not. The
process helps to ensure that the software
fulfills the desired use in an
appropriate environment. The validation
process involves activities like unit
testing, integration testing, system testing
and user acceptance testing.
Unit 4
Q.1. Explain Verification and Validation.
Ans:-
A] Verification:-
1. Verification testing includes different activities such as business requirements, system
requirements, design review, and code walkthrough while developing a product.
2. It is also known as static testing, where we ensure that "we are building the product right". It
also checks that the developed application fulfils all the requirements given by the
client.
B] Validation:-
3. Validation testing is testing where the tester performs functional and non-functional testing. Here,
functional testing includes Unit Testing (UT), Integration Testing (IT) and System Testing (ST), and non-
functional testing includes User Acceptance Testing (UAT).
4. Validation testing is also known as dynamic testing, where we ensure that "we have built the
right product". It also checks that the software meets the business needs of the
client.
5. The verification and validation processes are carried out under the V-model of the software
development life cycle.
Difference:-
Verification | Validation
We check whether we are building the product right (according to the specification). | We check whether we have built the right product (one that meets the customer's needs).
Verification is also known as static testing. | Validation is also known as dynamic testing.
Quality assurance comes under verification testing. | Quality control comes under validation testing.
The execution of code does not happen in verification testing. | In validation testing, the execution of code happens.
In verification testing, we can find bugs early in the development phase of the product. | In validation testing, we can find those bugs which are not caught in the verification process.
Verification testing is executed by the quality assurance team to make sure that the product is developed according to the customers' requirements. | Validation testing is executed by the testing team to test the application.
Verification is done before validation testing. | After verification testing, validation testing takes place.
In verification, we check whether the inputs follow the outputs or not. | In validation, we check whether the user accepts the product or not.
5. The clean room approach was developed by Dr. Harlan Mills of IBM’s Federal Systems Division, and it
was released in 1981 but gained popularity after 1987, when IBM and other organizations started
using it.
6. Processes of Cleanroom development :
Clean room software development approaches consist of four key processes i.e.
Management –
It is persistent throughout the whole project lifetime which consists of project mission,
schedule, resources, risk analysis, training, configuration management, etc.
Specification –
It is considered the first process of each increment which consists of requirement
analysis, function specification, usage specification, increment planning, etc.
Development –
It is considered the second process of each increment which consists of software
reengineering, correctness verification, incremental design, etc.
Certification –
It is considered the final process of each increment which consists of usage modeling
and test planning, statistical training and certification process, etc.
7. Box structure in clean room process :
Box structure is a modeling approach that is used in clean room engineering. A box is like a container
that contains details about a system or aspects of a system. All boxes are independent of other boxes
to deliver the required information/details. It generally uses three types of boxes i.e.
Black box –
It identifies the behavior of the system.
State box –
It identifies state data or operations.
Clear box –
It identifies the transition function used by the state box.
8. Benefits of Clean Room Software engineering :
Delivers high-quality products.
Increases productivity.
Reduces development cost.
Errors are found early.
Reduces the overall project time.
Saves resources.
9. Clean room software engineering ensures good quality software with certified reliability and for
that only it has been incorporated into many new software practices.
10. Still, according to IT industry experts, it is not widely adopted, as it is very theoretical and too
mathematical to use in the real world. But they consider it a technology for the future of the IT
industry.
Q.3. What is Integration testing? Explain.
Ans:- 1. Integration testing is the process of testing the interface between two software units or
modules. It focuses on determining the correctness of the interface.
2. The purpose of integration testing is to expose faults in the interaction between integrated units.
Once all the modules have been unit-tested, integration testing is performed.
3. Integration testing is a software testing technique that focuses on verifying the interactions and
data exchange between different components or modules of a software application.
4. The goal of integration testing is to identify any problems or bugs that arise when different
components are combined and interact with each other. Integration testing is typically performed
after unit testing and before system testing.
5. It helps to identify and resolve integration issues early in the development cycle, reducing the risk
of more severe and costly problems later on.
6. Types of Integration Testing
Integration testing can be classified into two parts:
1. Incremental integration testing
Incremental integration testing is carried out by further methods:
1. Top-Down approach
2. Bottom-Up approach
2. Non-Incremental Integration testing i.e Big bang integration testing
8. Top-Down Approach
The top-down testing strategy deals with the process in which higher level modules are tested with
lower level modules until the successful completion of testing of all the modules. Major design flaws
can be detected and fixed early because critical modules are tested first. In this type of method, we
add the modules incrementally, one by one, and check the data flow in the same order.
Advantages:
An early prototype is possible.
Critical modules are tested first, so there are fewer chances of defects.
Disadvantages:
Identification of defects is difficult.
Due to the high number of stubs, it gets quite complicated.
Lower level modules are tested inadequately.
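As an illustrative sketch of how a stub supports the top-down approach described above (module names are hypothetical), a stub stands in for a lower-level module that has not been integrated yet, so the higher-level module can still be exercised:

# Hypothetical stub used during top-down integration testing.
def discount_service_stub(order_total):
    # Stub: returns a fixed, predictable value instead of the real,
    # not-yet-integrated discount calculation module.
    return 0.10

def checkout(order_total, discount_service):
    # Higher-level module under test; the lower-level dependency is passed in.
    discount = discount_service(order_total)
    return round(order_total * (1 - discount), 2)

# Exercise the higher-level module with the stub in place of the real module.
assert checkout(200.0, discount_service_stub) == 180.0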
9. Bottom-Up Method
The bottom-up testing strategy deals with the process in which lower level modules are tested
with higher level modules until the successful completion of testing of all the modules. Top-level
critical modules are tested last, so defects in them may be found late. In other words, we add the
modules from the bottom to the top and check the data flow in the same order.
Advantages:-
Identification of defect is easy.
Do not need to wait for the development of all the modules as it saves time.
Disadvantages:-
Critical modules are tested last due to which the defects can occur.
There is no possibility of an early prototype.
10. Big Bang Method
In this approach, testing is done by integrating all the modules at once. It is convenient for small
software systems; if it is used for large software systems, identification of defects is difficult.
Since this testing can only be done after completion of all the modules, the testing team has less
time to execute this process, so internally linked interfaces and high-risk critical modules can be
missed easily.
Advantages:
It is convenient for small size software systems.
Disadvantages:
Identification of defects is difficult because finding the error where it came from is a problem,
and we don't know the source of the bug.
Small modules missed easily.
Time provided for testing is very less.
11. Mixed Integration Testing –
A mixed integration testing is also called sandwiched integration testing. A mixed integration testing
follows a combination of top down and bottom-up testing approaches. It is also called the hybrid
integration testing. also, stubs and drivers are used in mixed integration testing.
Advantages:
Mixed approach is useful for very large projects having several sub projects.
This Sandwich approach overcomes this shortcoming of the top-down and bottom-up
approaches.
Parallel test can be performed in top and bottom layer tests.
Disadvantages:
For mixed integration testing, it requires very high cost because one part has a Top-down
approach while another part has a bottom-up approach.
This integration testing cannot be used for smaller systems with huge interdependence
between different modules.
4. You can then compute the so-called unadjusted function-point count (UFC) by multiplying each
initial count by the estimated weight and summing all values.
5. The unadjusted function-point count is then adjusted to yield the final function-point count for
the overall system.
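A minimal sketch of the unadjusted function-point computation described above; the item counts are hypothetical, and the weights are typical average weights used here only for illustration:

# Unadjusted function-point count (UFC) = sum of (count * weight) over item types.
counts = {"inputs": 10, "outputs": 7, "inquiries": 5, "files": 4, "interfaces": 2}
weights = {"inputs": 4, "outputs": 5, "inquiries": 4, "files": 10, "interfaces": 7}
ufc = sum(counts[item] * weights[item] for item in counts)
print("Unadjusted function-point count:", ufc)   # -> 149
# The UFC would then be adjusted by a complexity factor to give the
# final function-point count for the overall system.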
6. Problem:-
Function-point count in a program depends on the estimator. Different people have different notions
of complexity.
B) Object Points (Application Points )
1. Application points are an alternative to function points.
2. Object points are only concerned with screens, reports and modules in conventional programming
languages.
3. The advantage of application points over function points is that they are easier to estimate
The number of application points in a program is computed using:
The number of separate screens that are displayed
The number of reports that are produced.
The number of modules in codes to be developed to supplement the database programming
code.
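A tiny sketch of this computation with hypothetical counts and assumed per-item weights (real application-point weights depend on the complexity of each screen and report):

# Application (object) points from counted screens, reports and modules.
screens, reports, modules = 6, 3, 4
weight_screen, weight_report, weight_module = 2, 5, 10   # assumed weights
application_points = (screens * weight_screen
                      + reports * weight_report
                      + modules * weight_module)
print("Application points:", application_points)   # -> 67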
4. They are not concerned with implementation details, and the complexity factor estimation is
much simpler than for function points.
1. Function Point Analysis (FPA): This technique counts the number and complexity of functions
that a piece of software can perform to determine how functional and sophisticated it is. The
effort needed for development, testing and maintenance can be estimated using this model.
2. Putnam Model: This model is a parametric estimation model that estimates effort, time and
faults by taking into account the size of the program, the expertise of the
development team and other project-specific characteristics.
3. Price-to-Win Estimation: Often utilized in competitive bidding, this model is concerned with
projecting the expenses associated with developing a particular software project in order to
secure a contract. It involves looking at market dynamics and competitors.
4. Models Based on Machine Learning: Custom cost estimating models can be built using
machine learning techniques including neural networks, regression analysis and decision trees.
These models are based on past project data. These models are flexible enough to adjust to
changing data and project-specific features.
5. Function Points Model (IFPUG): A standardized technique for gauging the functionality of
software using function points is offered by the International Function Point Users Group
(IFPUG). It is employed to calculate the effort required for software development and
maintenance.
UNIT 5
1.EXPLAIN PROCESS IMPROVEMENT AND PRODUCT
IMPROVEMENT QUALITY?
ANS:
PROCESS IMPROVEMENT:
1. It is defined as the sequence of various tasks, tools, and
techniques that are needed to plan and implement all the
improvement activities.
2. It includes three factors: People, Technology and Product.
3. It also includes improvement planning, implementation, and
evaluation.
4. It reduces the cost, increases development speed by
installing tools that reduce the time and work done by humans
or automate the production process.
5. It increases product quality.
6. It is created to achieve specific goals such as increasing the
development speed, achieving higher product quality and
many more.
7. It improves team performance by hiring the best people.
PRODUCT IMPROVEMENT QUALITY:
There are several ways to improve the quality of the product.
Portability: A software product is said to be portable if it can be
made to work in various operating system environments, on
multiple machines, and with other software products, etc.
Usability: A software product has better usability if various
categories of users can easily invoke the functions of the
product.
Reusability: A software product has excellent reusability if
different modules of the product can quickly be reused to
develop new products.
Correctness: A software product is correct if various
requirements as specified in the SRS document have been
correctly implemented.
Maintainability: A software product is maintainable if bugs can
be easily corrected as and when they show up, new tasks can
be easily added to the product, and the functionalities of the
product can be easily modified, etc.
Problems
The assumptions made by the vendor may not hold for
the user of the system.
We shall see later a system that had problems because it
assumed that it would be used under a particular legal
system.
The process of use may not match user processes.
Systems may only be available on limited platforms.
7. Updates continuously
It's easier to update software continuously using the SaaS
model because these updates are the provider's responsibility.
Some vendors create new versions of products as frequently
as every week or every few months to ensure a product
remains useful to current user needs.
2-level DFD: 2-level DFD goes one step deeper into parts of 1-level DFD.
It can be used to plan or record the specific/necessary detail about the
system’s functioning.
• The Class Diagram is the primary diagram for defining Classes and their
Attributes, Operations and relationships. The Class Diagram notation is
based on the Unified Modeling Language (UML).