OOSE_enggTree
o Reduces complexity
o To minimize software cost
o To decrease time
o Handling big projects
o Reliable software
o Effectiveness
Software Processes
The developer must complete every phase before the next phase begins.
This model is named "Waterfall Model", because its diagrammatic
representation resembles a cascade of waterfalls.
o In this model, the risk factor is high, so this model is not suitable for
larger and more complex projects.
o This model cannot accommodate changes in requirements during
development.
INCREMENTAL MODEL
Evolutionary Model
Disadvantages
SPIRAL MODEL
Objective setting: Each cycle in the spiral starts with the identification of
the purpose for that cycle, the various alternatives that are possible for
achieving the targets, and the constraints that exist.
Planning: Finally, the next step is planned. The project is reviewed, and a
decision is made whether to continue with a further cycle of the spiral. If it
is decided to continue, plans are drawn up for the next phase of the project.
Advantages
PROTOTYPE MODEL
The prototype model requires that before carrying out the development of
actual software, a working prototype of the system should be built. A
prototype is a toy implementation of the system. A prototype usually turns
out to be a very crude version of the actual system, possibly exhibiting
limited functional capabilities, low reliability, and inefficient performance as
compared to actual software.
In many instances, the client only has a general view of what is expected
from the software product. In such a scenario where there is an absence of
detailed information regarding the input to the system, the processing
needs, and the output requirement, the prototyping model may be
employed.
Disadvantages
• It needs better communication between the team members. This may not
be achieved all the time.
FORMAL METHODS
Analysis
Feasibility study
Design
Development
Testing
Deployment
Formal methods can help in validating that our software application meets
the client's requirements and needs and can ensure that the deployed
software matches the formally verified models.
Maintenance
AGILE MODEL/AGILITY
1. Requirements gathering
2. Design the requirements
3. Construction/ iteration
4. Testing/ Quality assurance
5. Deployment
6. Feedback
2. Design the requirements: Once you have identified the project, work
with stakeholders to define the requirements. You can use user flow
diagrams or high-level UML diagrams to show how new features will work
and how they will apply to your existing system.
5. Deployment: In this phase, the team issues a product for the user's
work environment.
6. Feedback: After releasing the product, the last step is feedback. In this,
the team receives feedback about the product and works through the
feedback.
The following 12 Principles are based on the Agile Manifesto.
Our highest priority is to satisfy the customer through the early and
continuous delivery of valuable software.
Business people and developers must work together daily throughout the
project.
Build projects around motivated individuals. Give them the environment and
support they need, and trust them to get the job done.
At regular intervals, the team reflects on how to become more effective, then
tunes and adjusts its behavior accordingly.
o Scrum
o Crystal
o Dynamic Software Development Method(DSDM)
o Feature Driven Development(FDD)
o Lean Software Development
o eXtreme Programming(XP)
Scrum
o Scrum Master: The Scrum Master sets up the team, arranges the
meetings, and removes obstacles from the process.
o Product owner: The product owner creates the product backlog,
prioritizes the backlog, and is responsible for the distribution of
functionality in each iteration.
o Scrum Team: The team manages its work and organizes the work to
complete the sprint or cycle.
Dynamic Software Development Method (DSDM)
1. Time Boxing
2. MoSCoW Rules
3. Prototyping
1. Pre-project
2. Feasibility Study
3. Business Study
4. Functional Model Iteration
5. Design and build Iteration
6. Implementation
7. Post-project
Advantages(Pros) of Agile Model:
1. Frequent Delivery
2. Face-to-Face Communication with clients.
3. Efficient design that fulfils the business requirements.
4. Anytime changes are acceptable.
5. It reduces total development time.
Disadvantages(Cons) of Agile Model:
Communication
Simplicity
Feedback
Courage
Respect
Extreme Programming is one of the Agile software development
methodologies. It provides values and principles to guide the team
behavior. The team is expected to self-organize. Extreme Programming
provides specific core practices where −
Code reviews are effective as the code is reviewed all the time.
Testing is effective as there is continuous regression testing.
Design is effective as everybody needs to do refactoring daily.
Integration testing is important as the code is integrated and tested several
times a day.
Short iterations are effective as the planning game is used for release
planning and iteration planning.
History of Extreme Programming
Some of the good practices that have been recognized in the extreme
programming model and suggested to maximize their use are given below:
1. REQUIREMENTS ANALYSIS
1. Identify key stakeholders
In any project, the key stakeholders, including end users, generally have final say on the
project scope. Project teams should identify them early and involve them in the
requirements gathering process from the beginning.
2. Understand the project goal
To capture all necessary requirements, project teams must first understand the project's
objective. What business need drives the project? What problem is the product meant to
solve? By understanding the desired end, the project team can define the problem
statement and have more productive discussions when gathering requirements.
3. Capture requirements
At this stage, all relevant stakeholders provide requirements. This can be done through
one-on-one interviews, focus groups or consideration of use cases. Project teams gather
stakeholder feedback and incorporate it into requirements.
4. Categorize requirements
1. Functional requirements.
2. Technical requirements.
3. Transitional requirements.
4. Operational requirements.
5. Interpret and analyze requirements
Post-categorization, the project team should analyze its set of requirements to determine
which ones are feasible. Interpretation and analysis are easier when requirements are well
defined and clearly worded. Each requirement should have a clear and understood impact
on the end product and the project plan. After all the requirements have been identified,
prioritized and analyzed, project teams should document them in the software
requirements specification (SRS).
6. Finalize SRS and get sign-off on requirements
The SRS should be shared with key stakeholders for sign-off to ensure that they agree
with the requirements. This helps prevent conflicts or disagreements later. Feedback, if
any, should be incorporated. The SRS can then be finalized and made available to the
entire development team. This document provides the foundation for the project's scope
and guides other steps during the software development lifecycle (SDLC), including
development and testing.
2. REQUIREMENT GATHERING
Extended Requirements
These are basically “nice to have” requirements that might be out of the scope of the
System.
Example:
Our system should record metrics and analytics.
Service health and performance monitoring.
Difference between Functional Requirements and Non-Functional Requirements

Functional Requirements:
o A functional requirement defines a system or its component.
o A functional requirement is specified by the user.

Non-Functional Requirements:
o A non-functional requirement defines a quality attribute of a software system.
o A non-functional requirement is specified by technical people, e.g. architects,
technical leaders, and software developers.
4. FEASIBILITY STUDY
A Feasibility Study in Software Engineering is a study to evaluate the feasibility of a
proposed project or system. The feasibility study is one of the four important stages of
the Software Project Management Process. As the name suggests, the feasibility study is
a feasibility analysis: a measure of how beneficial the development of the software
product will be for the organization from a practical point of view. A feasibility study is
carried out to analyze whether the software product will be right in terms of
development, implementation, and the contribution of the project to the organization.
TYPES OF FEASIBILITY STUDY
The feasibility study mainly concentrates on the areas mentioned below. Among these,
the Economic Feasibility Study is the most important part of the feasibility analysis,
and the Legal Feasibility Study is the least considered.
1. Technical Feasibility: The technical feasibility study reports whether the correct
resources and technologies required for project development exist. Along with this, it
also analyzes the technical skills and capabilities of the technical team and whether
the existing technology can be used or not.
2. Operational Feasibility: In Operational Feasibility, the degree to which the product
meets service requirements is analyzed, along with how easy the product will be to
operate and maintain after deployment.
3. Economic Feasibility: In an Economic Feasibility study, the cost and benefit of the
project are analyzed.
4. Legal Feasibility: In a Legal Feasibility study, the project is analyzed from a legality
point of view. Overall, Legal Feasibility checks whether the proposed project conforms
to legal and ethical requirements.
5. Schedule Feasibility: In a Schedule Feasibility study, the timelines and deadlines of
the proposed project are analyzed, including how much time the team will take to
complete the final project. This has a great impact on the organization, as the purpose
of the project may fail if it cannot be completed on time.
6. Cultural and Political Feasibility: This section assesses how the software project
will affect the political environment and organizational culture.
7. Market Feasibility: This refers to evaluating the market’s willingness and ability to
accept the suggested software system. Analyzing the target market, understanding
consumer wants, and assessing possible rivals are all part of this study.
8. Resource Feasibility: This evaluates whether the resources needed to complete the
software project successfully are available. It guarantees that sufficient hardware,
software, trained labor, and funding are available to complete the project successfully.
Aim of Feasibility Study
Whether the overall objectives of the organization are covered and contributed to by
the system or not.
Whether the implementation of the system can be done using current technology or not.
Whether the system can be integrated with the other systems which already exist.
2.2. Transition
We use a solid arrow to represent the transition or change of control from one state to
another. The arrow is labelled with the event which causes the change in state.
2.3. State
We use a rounded rectangle to represent a state. A state represents the conditions or
circumstances of an object of a class at an instant of time.
2.4. Fork
We use a rounded solid rectangular bar to represent a Fork notation with incoming
arrow from the parent state and outgoing arrows towards the newly created states. We
use the fork notation to represent a state splitting into two or more concurrent states.
2.5. Join
We use a rounded solid rectangular bar to represent a Join notation with incoming
arrows from the joining states and outgoing arrow towards the common goal state. We
use the join notation when two or more states concurrently converge into one on the
occurrence of an event or events.
The UML diagrams we draw depend on the system we aim to represent. Here is an
example of how an online ordering system might look:
1. On the event of an order being received, we transit from our initial state to
Unprocessed order state.
2. The unprocessed order is then checked.
3. If the order is rejected, we transit to the Rejected Order state.
4. If the order is accepted and we have the items available we transit to the fulfilled
order state.
5. However if the items are not available we transit to the Pending Order state.
6. After the order is fulfilled, we transit to the final state. In this example, we merge the
two states i.e. Fulfilled order and Rejected order into one final state.
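The transitions described above can be sketched as a small transition table. The state and event names follow the example; the dictionary-based layout is an illustrative assumption, not a prescribed design.

```python
# A minimal sketch of the order state machine described above.
# Each (state, event) pair maps to the next state, mirroring the
# labelled arrows of the state diagram.

TRANSITIONS = {
    ("Initial", "order received"): "Unprocessed",
    ("Unprocessed", "order rejected"): "Rejected",
    ("Unprocessed", "items available"): "Fulfilled",
    ("Unprocessed", "items unavailable"): "Pending",
    ("Pending", "items available"): "Fulfilled",
    ("Fulfilled", "done"): "Final",
    ("Rejected", "done"): "Final",
}

def next_state(state, event):
    """Return the next state, or raise if the event is invalid here."""
    try:
        return TRANSITIONS[(state, event)]
    except KeyError:
        raise ValueError(f"no transition from {state!r} on {event!r}")

# Walk the happy path of the example:
state = next_state("Initial", "order received")  # -> "Unprocessed"
```

Any event not listed for a state is rejected, which mirrors the idea that a state diagram only permits its labelled transitions.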
UNIT III
SOFTWARE DESIGN
Quality attributes
Functionality:
Usability:
Reliability:
Supportability:
Design concepts
1. Abstraction
6. Functional independence
7. Refinement
9. Design classes
The design process for software systems often has two levels. At the first
level the focus is on deciding which modules are needed for the system on
the basis of SRS (Software Requirement Specification) and how the
modules should be interconnected.
Function Oriented Design is an approach to software design where the
design is decomposed into a set of interacting units where each unit has a
clearly defined function.
Generic Procedure:
Start with a high level description of what the software / program does.
Refine each part of the description one by one by specifying in greater
details the functionality of each part. These points lead to Top-Down
Structure.
Problem in Top-Down design method:
Mostly each module is used by at most one other module
and that module is called its Parent module.
Solution to the problem:
Designing reusable modules, i.e., modules that can be used by several
other modules to perform their required functions.
There is another receiver that requests a service from the sender. The
sender is blocked since it hasn’t yet received any acknowledgment from
the first receiver.
The sender isn’t able to serve the second receiver which can create
problems. To solve this drawback, the Pub-Sub model was introduced.
What is Pub/Sub Architecture?
The Pub/Sub (Publisher/Subscriber) model is a messaging pattern used in
software architecture to facilitate asynchronous communication between
different components or systems. In this model, publishers produce
messages that are then consumed by subscribers.
Key points of the Pub/Sub model include:
Publishers: Entities that generate and send messages.
Subscribers: Entities that receive and consume messages.
Topics: Channels or categories to which messages are published.
Message Brokers: Intermediaries that manage the routing of messages
between publishers and subscribers.
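The four elements above can be sketched as a minimal in-memory example. The Broker class and its method names are illustrative assumptions; real systems delegate this role to dedicated message brokers such as Kafka, RabbitMQ, or Redis.

```python
# A minimal in-memory sketch of the Pub/Sub pattern described above.
from collections import defaultdict

class Broker:
    """Routes messages from publishers to subscribers by topic."""
    def __init__(self):
        self._subscribers = defaultdict(list)  # topic -> list of callbacks

    def subscribe(self, topic, callback):
        self._subscribers[topic].append(callback)

    def publish(self, topic, message):
        # The publisher does not know who (if anyone) receives the message.
        for callback in self._subscribers[topic]:
            callback(message)

broker = Broker()
received = []
broker.subscribe("orders", received.append)
broker.publish("orders", "order #1 placed")
broker.publish("payments", "not seen by the orders subscriber")
```

Note how publisher and subscriber never reference each other directly; the topic name is the only shared coupling, which is what makes the communication asynchronous and loosely coupled.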
The user interface is the front-end application view to which the user
interacts to use the software. The software becomes more popular if its
user interface is:
1. Attractive
2. Simple to use
3. Responsive in a short time
4. Clear to understand
5. Consistent on all interface screens
Types of User Interface
1. Command Line Interface: The Command Line Interface provides a
command prompt, where the user types the command and feeds it to
the system. The user needs to remember the syntax of the command
and its use.
2. Graphical User Interface: The Graphical User Interface provides a simple
interactive interface to interact with the system. A GUI can be a
combination of both hardware and software. Using a GUI, the user
interacts with the software.
User Interface Design Process
The analysis and design process of a user interface is iterative and can
be represented by a spiral model. The analysis and design process of
user interface consists of four framework activities.
1. User, Task, Environmental Analysis, and Modeling
Initially, the focus is on the profile of the users who will interact with the
system, i.e., their understanding, skill, knowledge, and type. Based on the
user profiles, users are grouped into categories, and requirements are
gathered from each category. Based on these requirements, the developer
understands how to develop the interface. Once all the requirements are
gathered, a detailed analysis is conducted. In the analysis part, the tasks
that the user performs to establish the goals of the system are identified,
described, and elaborated. The analysis of the user environment focuses
on the physical work environment. Among the questions to be asked are:
1. Where will the interface be located physically?
2. Will the user be sitting, standing, or performing other tasks unrelated to
the interface?
3. Does the interface hardware accommodate space, light, or noise
constraints?
4. Are there special human factors considerations driven by environmental
factors?
2. Interface Design
The goal of this phase is to define the set of interface objects and actions
i.e., control mechanisms that enable the user to perform desired tasks.
Indicate how these control mechanisms affect the system. Specify the
action sequence of tasks and subtasks, also called a user scenario.
Indicate the state of the system when the user performs a particular task.
Always follow the three golden rules stated by Theo Mandel. Design
issues such as response time, command and action structure, error
handling, and help facilities are considered as the design model is refined.
This phase serves as the foundation for the implementation phase.
UNIT IV
SOFTWARE TESTING
Software testing is a widely used technology because it is compulsory
to test each and every piece of software before deployment. Software testing
methods include Black Box Testing, White Box Testing, Visual Box
Testing, and Grey Box Testing.
Types of Testing
o Manual Testing
o Automation Testing
Types of Manual Testing
o White Box Testing
o Black Box Testing
o Grey Box Testing
UNIT TESTING
Unit testing uses modules for testing purposes, and these modules are
combined and tested in integration testing. The Software is developed with
a number of software modules that are coded by different coders or
programmers. The goal of integration testing is to check the correctness of
communication among all the modules.
INTEGRATION TESTING
Incremental Approach
o Top-Down approach
o Bottom-Up approach
Top-Down Approach
The top-down testing strategy deals with the process in which higher-level
modules are tested with lower-level modules until the successful
completion of testing of all the modules. Major design flaws can be
detected and fixed early because critical modules are tested first. In this
method, we add the modules incrementally, one by one, and check
the data flow in the same order.
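The incremental, top-down idea can be sketched as follows: the higher-level module is tested first, with a stub standing in for a lower-level module that has not been integrated yet. The module and function names here are hypothetical examples, not part of any particular system.

```python
# Sketch of top-down integration testing with a stub.
# The lower-level price-lookup module is not yet integrated, so a
# stub supplies fixed values while the higher-level module is tested.

def lower_module_stub(item):
    """Stub: returns a fixed price until the real module is integrated."""
    return 10.0

def compute_total(items, price_lookup):
    """Higher-level module under test; depends on a lower-level lookup."""
    return sum(price_lookup(item) for item in items)

# Test the top module with the stub in place of the real dependency.
total = compute_total(["a", "b", "c"], lower_module_stub)
```

Once the real lower-level module passes its own tests, it replaces the stub and the same higher-level test is run again, checking the data flow in the same order.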
Advantages:
Disadvantages:
The bottom-up testing strategy deals with the process in which lower-level
modules are tested with higher-level modules until the successful
completion of testing of all the modules. Top-level critical modules are
tested last, so defects in them may be found late. In other words, we
add the modules from the bottom to the top and check the data flow in the
same order.
Advantages
Disadvantages
o Critical modules are tested last due to which the defects can occur.
o There is no possibility of an early prototype.
We go for this method when the data flow is very complex and when it
is difficult to find who is a parent and who is a child. In such a case, we
create the data in any module, bang on all other existing modules, and
check if the data is present. Hence, it is also known as the Big Bang
method.
Since this testing can be done only after completion of all the modules, the
testing team has less time for execution of this process, so internally
linked interfaces and high-risk critical modules can be missed easily.
Advantages:
Disadvantages:
REGRESSION TESTING
This method is used to test the product for modifications or any updates
done. It ensures that any change in a product does not affect the existing
modules of the product. It verifies that the bugs that were fixed and the
newly added features have not created any problems in the previous
working version of the software.
Regression tests are also known as the Verification Method. Test cases are
often automated, because they are required to execute many times and
running the same test case again and again manually is time-consuming
and tedious.
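As a sketch of such automation, a small regression suite using Python's built-in unittest module might look like this. The function under test and its expected values are hypothetical examples.

```python
# Sketch of an automated regression suite with the stdlib unittest module.
import unittest

def apply_discount(price, percent):
    """Hypothetical function under regression test."""
    return round(price * (1 - percent / 100), 2)

class RegressionSuite(unittest.TestCase):
    # Rerun after every modification to catch breakage in old behavior.
    def test_existing_behavior(self):
        self.assertEqual(apply_discount(100.0, 10), 90.0)

    def test_bug_fix_stays_fixed(self):
        # Guards a previously fixed defect (zero-percent discount).
        self.assertEqual(apply_discount(100.0, 0), 100.0)

suite = unittest.defaultTestLoader.loadTestsFromTestCase(RegressionSuite)
result = unittest.TextTestRunner(verbosity=0).run(suite)
```

Because the whole suite runs with one command, it can be re-executed after every change or bug fix at essentially no manual cost.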
1. Unit Regression Testing
In this, we are going to test only the changed unit, not the impact area,
because the change may only affect components within the same module.
2. Regional Regression Testing
In this, we are going to test the modification along with the impact area or
regions; this is called Regional Regression Testing. Here, we are testing
the impact area because if there are dependent modules, the change will
affect the other modules also.
3. Full Regression Testing
During the second and the third release of the product, the client asks for
adding 3-4 new features, and also some defects need to be fixed from the
previous release. Then the testing team will do the Impact Analysis and
identify that the above modification will lead us to test the entire product.
SYSTEM TESTING
System testing is a type of software testing that evaluates the
overall functionality and performance of a complete and fully integrated
software solution. It tests if the system meets the specified requirements
and if it is suitable for delivery to the end-users. This type of testing is
performed after the integration testing and before the acceptance testing.
System Testing is a type of software testing that is performed on a
complete integrated system to evaluate the compliance of the system
with the corresponding requirements. In system testing, the components
that passed integration testing are taken as input. The goal of integration
testing is to detect any irregularity between the units that are integrated
together; system testing, in turn, detects defects within both the integrated
units and the whole system. The result of system testing is the observed
behavior of a component or a system when it is tested.

System Testing is carried out on the whole system in the context of the
system requirement specifications, the functional requirement
specifications, or both. It tests the design and behavior of the system as
well as the expectations of the customer, and it is performed to test the
system beyond the bounds mentioned in the software requirements
specification (SRS).

System Testing is basically performed by a testing team that is
independent of the development team, which helps to test the quality of
the system impartially. It includes both functional and non-functional
testing. System Testing is black-box testing, performed after integration
testing and before acceptance testing.
SMOKE TESTING
Smoke Testing comes into the picture at the time of receiving build
software from the development team. The purpose of smoke testing is to
determine whether the build software is testable or not. It is done at the
time of "building software." This process is also known as "Day 0".
Testing the basic & critical feature of an application before doing one round
of deep, rigorous testing (before checking all possible positive and negative
values) is known as smoke testing.
In smoke testing, we focus only on the positive flow of the application
and enter only valid data, not invalid data. In smoke testing, we verify
whether every build is testable or not; hence it is also known as Build
Verification Testing.
Validation Testing ensures that the product actually meets the client's
needs. It can also be defined as demonstrating that the product fulfills its
intended use when deployed in an appropriate environment.
Activities:
Unit Testing
Integration Testing
System Testing
User Acceptance Testing
Standard Definition of Acceptance Testing
It is formal testing according to user needs, requirements, and business
processes conducted to determine whether a system satisfies the
acceptance criteria or not and to enable the users, customers, or other
authorized entities to determine whether to accept the system or not.
Acceptance Testing is the last phase of software testing performed after
System Testing and before making the system available for actual use.
Types of Acceptance Testing
1. User Acceptance Testing (UAT)
User acceptance testing is used to determine whether the product is
working for the user correctly. Specific requirements which are quite often
used by the customers are primarily picked for testing purposes. This is
also termed as End-User Testing.
2. Business Acceptance Testing (BAT)
BAT is used to determine whether the product meets the business goals
and purposes or not. It mainly focuses on business profits, which are
quite challenging due to changing market conditions and new
technologies, so the current implementation may have to be changed,
which results in extra budget.
3. Contract Acceptance Testing (CAT)
CAT is a contract that specifies that once the product goes live, the
acceptance test must be performed within a predetermined period, and it
should pass all the acceptance use cases.
4. Regulations Acceptance Testing (RAT)
RAT is used to determine whether the product violates the rules and
regulations that are defined by the government of the country where it is
being released. This may be unintentional but will negatively impact
the business.
5. Operational Acceptance Testing (OAT)
OAT is used to determine the operational readiness of the product and is
non-functional testing. It mainly includes testing of recovery, compatibility,
maintainability, reliability, etc.
6. Alpha Testing
Alpha testing is used to assess the product in the development testing
environment by a specialized team of testers, usually called alpha testers.
7. Beta Testing
Beta testing is used to assess the product by exposing it to the real end-
users, typically called beta testers in their environment. Feedback is
collected from the users and the defects are fixed. Also, this helps in
enhancing the product to give a rich user experience.
The box testing approach to software testing consists of black box testing
and white box testing. We are discussing here white box testing, which is
also known as glass box testing, structural testing, clear box testing,
open box testing, and transparent box testing. It tests the internal coding
and infrastructure of a software product, focusing on checking predefined
inputs against expected and desired outputs. It is based on the inner
workings of an application and revolves around internal structure testing.
In this type of testing, programming skills are required to design test
cases. The primary goal of white box testing is to focus on the flow of
inputs and outputs through the software and to strengthen the security of
the software.
The term 'white box' is used because of the internal perspective of the
system. The clear box, white box, or transparent box name denotes the
ability to see through the software's outer shell into its inner workings.
Developers do white box testing. In this, the developer tests every line of
code of the program. The developers perform the white-box testing and
then send the application or the software to the testing team, which
performs the black box testing, verifies the application against the
requirements, identifies the bugs, and sends the application back to the
developer.
The developer fixes the bugs and does one round of white box testing and
sends it to the testing team. Here, fixing the bugs implies that the bug is
deleted, and the particular feature is working fine on the application.
Here, the test engineers are not involved in fixing the defects, for the
following reasons:
o Fixing the bug might interrupt the other features. Therefore, the test
engineer should always find the bugs, and developers should still be
doing the bug fixes.
o If the test engineers spend most of the time fixing the defects, then
they may be unable to find the other bugs in the application.
The white box testing contains various tests, which are as follows:
o Path testing
o Loop testing
o Condition testing
o Testing based on the memory perspective
o Test performance of the program
Path testing
In path testing, we write the flow graphs and test all independent
paths. Writing the flow graph implies that flow graphs represent the flow
of the program and also show how each part of the program is connected
with the others, as we can see in the below image:
Testing all the independent paths implies that if, for example, there is a
path from main() to function G, we first set the parameters and test
whether the program is correct in that particular path, and in the same
way we test all the other paths and fix the bugs.
Loop testing
In loop testing, we test loops such as while, for, and do-while, and we
also check whether the ending condition works correctly and whether the
size of the condition is sufficient.
For example: we have one program where the developers have given
about 50,000 loops.
{
   while(50,000)
   ……
   ……
}
We cannot test this program manually for all 50,000 loop cycles. So we
write a small program that tests all 50,000 cycles. As we can see in the
below program, test P is written in the same language as the source code
program, and this is known as a unit test. It is written by the developers
only.
Test P
{
   ……
   ……
}
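A runnable sketch of this idea: instead of stepping through 50,000 iterations by hand, a small unit-test program exercises the loop's boundary and full-size iteration counts. The looped function here is a hypothetical stand-in for the source program.

```python
# Sketch of loop testing: a unit-test program ("Test P") written in the
# same language as the code under test, checking the loop's behavior
# at zero, one, and the full 50,000 iterations.

def run_loop(n):
    """Loop under test: counts iterations up to n."""
    count = 0
    while count < n:
        count += 1
    return count

# Test P: check boundary and full-size iteration counts.
assert run_loop(0) == 0          # loop body never entered
assert run_loop(1) == 1          # exactly one pass
assert run_loop(50000) == 50000  # the full 50,000 cycles
```

The zero and one-iteration cases check the ending condition; the full-size case checks that the loop can run to its intended size, which is the point of loop testing.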
As we can see in the below image, we have various requirements such
as 1, 2, 3, 4. The developer then writes the programs, such as programs
1, 2, 3, 4, for the parallel conditions. Here the application contains
hundreds of lines of code.
The developer will do the white box testing and test all the programs line
by line of code to find the bugs. If they find any bug in any of the
programs, they correct it, and then they have to test the system again.
This process takes a lot of time and effort and slows down the product
release time.
Now, suppose we have another case, where the client wants to modify the
requirements; then the developer will make the required changes and test
all four programs again, which takes a lot of time and effort.
To address this, we write a test for each such program, where the
developer writes the test code in the same language as the source code.
They then execute these test codes, which are also known as unit test
programs. These test programs are linked to the main program and
implemented as programs.
Therefore, if there is any requirement of modification or bug in the code,
then the developer makes the adjustment both in the main program and the
test program and then executes the test program.
Condition testing
In this, we test all logical conditions for both true and false values; that
is, we verify both the if and the else condition.
For example:
if(condition) - true
{
   …..
   ……
   ……
}
else - false
{
   …..
   ……
   ……
}
The above program will work fine for both conditions: when the if
condition is true, the else part should not execute, and conversely.
Condition testing verifies both of these outcomes.
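A minimal sketch of condition testing: one test input drives the condition true and another drives it false, so both branches execute. The function under test is a hypothetical example.

```python
# Sketch of condition testing: exercise both outcomes of the condition.

def classify(age):
    if age >= 18:       # condition under test
        return "adult"  # true branch
    else:
        return "minor"  # false branch

assert classify(30) == "adult"  # drives the condition true
assert classify(10) == "minor"  # drives the condition false
```

With only one of the two inputs, one branch would never run and a defect hidden there would go undetected, which is exactly what condition testing guards against.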
Black-box testing is a type of software testing in which the tester is not
concerned with the internal knowledge or implementation details of the
software but rather focuses on validating the functionality based on the
provided specifications or requirements.
Black box testing can be done in the following ways:
1. Syntax-Driven Testing – This type of testing is applied to systems
that can be syntactically represented by some language. For example,
language can be represented by context-free grammar. In this, the test
cases are generated so that each grammar rule is used at least once.
2. Equivalence partitioning – It is often seen that many types of inputs
work similarly so instead of giving all of them separately we can group
them and test only one input of each group. The idea is to partition the
input domain of the system into several equivalence classes such that
each member of the class works similarly, i.e., if a test case in one class
results in some error, other members of the class would also result in the
same error.
The technique involves two steps:
1. Identification of equivalence class – Partition any input domain into
a minimum of two sets: valid values and invalid values. For
example, if the valid range is 0 to 100 then select one valid input like
49 and one invalid like 104.
2. Generating test cases – (i) Assign a unique identification number to
each valid and invalid class of input. (ii) Write test cases covering
all valid and invalid classes, ensuring that no two invalid inputs
mask each other. For example, for a program that calculates the square
root of a number, the equivalence classes would be:
(a) Valid inputs:
A whole number that is a perfect square; the output will be an
integer.
A whole number that is not a perfect square; the output will be a
decimal number.
Positive decimals.
(b) Invalid inputs:
Negative numbers (integer or decimal).
Characters other than numbers, like “a”, “!”, “;”, etc.
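The square-root equivalence classes can be sketched in code, with one representative input per class; the function name and error convention below are illustrative assumptions:

```python
import math

def sqrt_or_error(x):
    # Valid inputs are non-negative numbers; everything else is invalid.
    if not isinstance(x, (int, float)):
        return "error"          # characters / non-numeric input
    if x < 0:
        return "error"          # negative numbers
    return math.sqrt(x)

# One representative test input per equivalence class:
assert sqrt_or_error(49) == 7.0        # perfect square -> integer-valued result
assert sqrt_or_error(2) != int(sqrt_or_error(2))  # non-perfect square -> decimal
assert sqrt_or_error(2.25) == 1.5      # positive decimal
assert sqrt_or_error(-4) == "error"    # negative number
assert sqrt_or_error("a") == "error"   # non-numeric character
```

Because each class is assumed to behave uniformly, one input per class stands in for the whole class.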
3. Boundary value analysis – Boundaries are very good places for errors
to occur, so if test cases are designed for the boundary values of the
input domain, the efficiency of testing improves and the probability of
finding errors increases. For example, if the valid range is 10 to 100,
then test 10 and 100 themselves, plus values just outside the boundaries
such as 9 and 101, in addition to typical valid and invalid inputs.
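For a valid range of 10 to 100, boundary value analysis picks the boundaries themselves plus the values just inside and just outside them. A helper like this (an illustrative sketch, not a standard API) generates those cases:

```python
def boundary_values(low, high):
    # For a valid range [low, high], test just below, at, and just
    # above each boundary of the input domain.
    return [low - 1, low, low + 1, high - 1, high, high + 1]

# Valid range 10..100 -> test 9, 10, 11 and 99, 100, 101.
assert boundary_values(10, 100) == [9, 10, 11, 99, 100, 101]
```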
4. Cause effect graphing – This technique establishes a relationship
between logical input called causes with corresponding actions called the
effect. The causes and effects are represented using Boolean graphs.
The following steps are followed:
1. Identify inputs (causes) and outputs (effect).
2. Develop a cause-effect graph.
3. Transform the graph into a decision table.
4. Convert decision table rules to test cases.
For example, once a cause-effect graph has been transformed into a
decision table, each column of the table corresponds to a rule, and each
rule becomes a test case. A decision table with four rules therefore
yields 4 test cases.
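The decision-table step can be sketched as code: each rule maps a combination of causes to an expected effect and becomes one test case. The causes (valid user id, valid password) and effect (login succeeds) below are hypothetical examples, not taken from the text:

```python
# Hypothetical causes: c1 = "valid user id", c2 = "valid password".
# Hypothetical effect: login succeeds. Each rule becomes a test case.
decision_table = [
    # (c1,   c2,    expected effect)
    (True,  True,  True),    # rule 1: both valid     -> login succeeds
    (True,  False, False),   # rule 2: bad password   -> login fails
    (False, True,  False),   # rule 3: bad user id    -> login fails
    (False, False, False),   # rule 4: both invalid   -> login fails
]

def login(valid_id, valid_password):
    # The effect is true only when both causes are true (logical AND).
    return valid_id and valid_password

# Four rules -> four test cases.
for valid_id, valid_password, expected in decision_table:
    assert login(valid_id, valid_password) == expected
```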
5. Requirement-based testing – It includes validating the requirements
given in the SRS of a software system.
6. Compatibility testing – The test case results depend not only on the
product but also on the infrastructure used to deliver its functionality.
When infrastructure parameters are changed, the software is still
expected to work properly. Some parameters that generally affect the
compatibility of software are:
1. Processor (Pentium 3, Pentium 4) and the number of processors.
2. Architecture and characteristics of machine (32-bit or 64-bit).
3. Back-end components such as database servers.
4. Operating System (Windows, Linux, etc).
Types of Black Box Testing
The following are the several categories of black box testing:
1. Functional Testing
2. Regression Testing
3. Nonfunctional Testing (NFT)
Functional Testing: It verifies the system against its functional
requirements.
Regression Testing: It ensures that newly added code is compatible
with the existing code; in other words, that a software update has no
impact on existing functionality. It is carried out after system
maintenance operations and upgrades.
Nonfunctional Testing: Nonfunctional testing, also known as NFT, does
not test the functionality of the software; it focuses on the software’s
performance, usability, and scalability.
Tools Used for Black Box Testing:
1. Appium
2. Selenium
3. Microsoft Coded UI
4. Applitools
5. HP QTP.
What can be identified by Black Box Testing
1. Discovers missing functions, incorrect functions, and interface errors.
2. Discovers errors faced in accessing the database.
3. Discovers errors that occur while initiating and terminating
functions.
4. Discovers errors in the performance or behaviour of the software.
Features of black box testing:
1. Independent testing: Black box testing is performed by testers who
are not involved in the development of the application, which helps to
ensure that testing is unbiased and impartial.
2. Testing from a user’s perspective: Black box testing is conducted
from the perspective of an end user, which helps to ensure that the
application meets user requirements and is easy to use.
3. No knowledge of internal code: Testers performing black box testing
do not have access to the application’s internal code, which allows
them to focus on testing the application’s external behaviour and
functionality.
4. Requirements-based testing: Black box testing is typically based on
the application’s requirements, which helps to ensure that the
application meets the required specifications.
5. Different testing techniques: Black box testing can be performed
using various testing techniques, such as functional testing, usability
testing, acceptance testing, and regression testing.
6. Easy to automate: Black box testing is easy to automate using
various automation tools, which helps to reduce the overall testing
time and effort.
7. Scalability: Black box testing can be scaled up or down depending on
the size and complexity of the application being tested.
8. Limited knowledge of application: Testers performing black box
testing have limited knowledge of the application being tested, which
helps to ensure that testing is more representative of how the end
users will interact with the application.
Advantages of Black Box Testing:
The tester does not need programming skills or knowledge of the
implementation to carry out black box testing.
It is efficient for testing large systems.
Tests are executed from the user’s or client’s point of view.
Test cases are easily reproducible.
It is used in finding the ambiguity and contradictions in the functional
specifications.
Disadvantages of Black Box Testing:
There is a possibility of repeating the same tests while implementing
the testing process.
Without clear functional specifications, test cases are difficult to
implement.
It is difficult to execute the test cases because of complex inputs at
different stages of testing.
Sometimes, the reason for a test failure cannot be determined.
Some parts of the application may remain untested.
It does not reveal errors in the control structure.
Working with a large sample space of inputs can be exhausting and
time-consuming.
UNIT V
Software project management has three primary concerns:
1. Time
2. Cost
3. Quality
PROJECT SCHEDULING
A project schedule is a mechanism used to communicate which tasks need
to be performed, which organizational resources will be allocated to
those tasks, and in what time frame the work must be completed.
Effective project scheduling leads to project success, reduced cost, and
increased customer satisfaction. Scheduling in project management means
listing the activities, deliverables, and milestones to be delivered
within a project. The most common and important form of project schedule
is the Gantt chart.
Each project size estimation technique has its strengths and weaknesses,
and the choice of technique depends on factors such as the project’s
complexity, the available data, and the expertise of the team.
Importance of Project Size Estimation Techniques
Resource Allocation: Appropriate distribution of financial and human
resources is ensured by accurate estimation.
Risk management: Early risk assessment helps with mitigation
techniques by taking into account the complexity of the project.
Time management: Facilitates the creation of realistic schedules and
milestones for efficient time management.
Cost control and budgeting: Accurate size estimates support realistic
budgets, which lowers the possibility of cost overruns.
Resource Allocation: Enables efficient task delegation and work
allocation optimization.
Scope Definition: Defines the scope of a project, keeps project
boundaries intact and guards against scope creep.
Estimating the size of the Software
Estimation of the size of the software is an essential part of Software
Project Management. It helps the project manager to further predict the
effort and time that will be needed to build the project. Various measures
are used in project size estimation. Some of these are:
Lines of Code
Number of entities in the ER diagram
Total number of processes in detailed data flow diagram
Function points
KLOC- Thousand lines of code
NLOC- Non-comment lines of code
KDSI- Thousands of delivered source instruction
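The LOC-style measures listed above differ mainly in what they count. A small counter can illustrate LOC versus NLOC; Python's `#` comment syntax is assumed here for simplicity:

```python
def count_loc(source):
    # LOC: all non-blank lines; NLOC: non-blank lines that are not comments.
    lines = [ln.strip() for ln in source.splitlines() if ln.strip()]
    loc = len(lines)
    nloc = len([ln for ln in lines if not ln.startswith("#")])
    return loc, nloc

sample = """# compute a square
x = 4
y = x * x
# print it
print(y)
"""
assert count_loc(sample) == (5, 3)  # 5 non-blank lines, 3 of them code
```

KLOC and KDSI are then just these counts divided by 1000.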
1. Lines of Code (LOC): The size is estimated by comparing it with
existing systems of the same kind. Experts use this method to predict
the required size of the various components of the software and then add
them to get the total size.
It’s tough to estimate LOC by analyzing the problem definition. Only after
the whole code has been developed can accurate LOC be estimated.
This statistic is of little utility to project managers because project
planning must be completed before development activity can begin.
Two separate source files having a similar number of lines may not
require the same effort. A file with complicated logic would take longer to
create than one with simple logic. Proper estimation may not be
attainable based on LOC.
The number of lines needed to solve a problem varies greatly from one
programmer to the next: a seasoned programmer can implement the same
logic in fewer lines than a novice coder.
Advantages
Universally accepted and used in many models like COCOMO.
Estimation is closer to the developer’s perspective.
At project completion, LOC is easily quantified.
It has a direct connection to the final product.
Simple to use.
Disadvantages:
Different programming languages contain a different number of lines.
No proper industry standard exists for this technique.
It is difficult to estimate the size using this technique in the early
stages of the project.
When platforms and languages are different, LOC cannot be used to
normalize.
2. Number of entities in ER diagram: ER model provides a static view of
the project. It describes the entities and their relationships. The number of
entities in ER model can be used to measure the estimation of the size of
the project. The number of entities depends on the size of the project:
more entities require more classes/structures, leading to more code.
Advantages:
Size estimation can be done during the initial stages of planning.
The number of entities is independent of the programming
technologies used.
Disadvantages:
No fixed standards exist. Some entities contribute more to project size
than others.
Just like FPA, it is rarely used directly in cost estimation models;
hence, it must first be converted to LOC.
3. Total number of processes in detailed data flow diagram: Data Flow
Diagram(DFD) represents the functional view of software. The model
depicts the main processes/functions involved in software and the flow of
data between them. The number of processes in the DFD can be used to
predict software size. Existing processes of a similar type are studied
and used to estimate the size of each process; the sum of the estimated
sizes of all processes gives the final estimated size.
Advantages:
It is independent of the programming language.
Each major process can be decomposed into smaller processes. This
will increase the accuracy of the estimation.
Disadvantages:
Studying similar kinds of processes to estimate size takes additional
time and effort.
Construction of a DFD is not required for every software project.
4. Function Point Analysis: In this method, the number and type of
functions supported by the software are utilized to find FPC(function point
count). The steps in function point analysis are:
Count the number of functions of each proposed type.
Compute the Unadjusted Function Points(UFP).
Find the Total Degree of Influence(TDI).
Compute Value Adjustment Factor(VAF).
Find the Function Point Count(FPC).
The explanation of the above points is given below:
Count the number of functions of each proposed type: Find the
number of functions belonging to the following types:
Function Type              Simple   Average   Complex
External Inputs               3         4         6
External Outputs              4         5         7
External Inquiries            3         4         6
Internal Logical Files        7        10        15
External Interface Files      5         7        10
Find the Function Point Count: Use the following formula to calculate
FPC
FPC = UFP * VAF
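The computation can be sketched in code. For simplicity this sketch uses only the average-complexity weights from the table above, and it assumes the standard formula VAF = 0.65 + 0.01 × TDI; the example counts and TDI value are illustrative:

```python
# Average-complexity function point weights (from the table above).
WEIGHTS = {
    "external_inputs": 4,
    "external_outputs": 5,
    "external_inquiries": 4,
    "internal_logical_files": 10,
    "external_interface_files": 7,
}

def function_point_count(counts, tdi):
    # UFP: sum of (count x weight) over all function types.
    ufp = sum(counts[k] * WEIGHTS[k] for k in WEIGHTS)
    # VAF is derived from the Total Degree of Influence (TDI).
    vaf = 0.65 + 0.01 * tdi
    return ufp * vaf    # FPC = UFP * VAF

counts = {
    "external_inputs": 10,
    "external_outputs": 5,
    "external_inquiries": 4,
    "internal_logical_files": 3,
    "external_interface_files": 2,
}
# UFP = 10*4 + 5*5 + 4*4 + 3*10 + 2*7 = 125
# With TDI = 35, VAF = 0.65 + 0.35 = 1.00, so FPC = 125
assert abs(function_point_count(counts, 35) - 125.0) < 1e-6
```

In a full analysis, each counted function would first be classified as simple, average, or complex and weighted accordingly.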
Advantages:
It can be easily used in the early stages of project planning.
It is independent of the programming language.
It can be used to compare different projects even if they use different
technologies (database, language, etc.).
Disadvantages:
It is not good for real-time systems and embedded systems.
Many cost estimation models like COCOMO use LOC and hence FPC
must be converted to LOC.
DEVOPS TUTORIAL
Why DevOps?
DevOps History
o In 2009, the first conference named DevOpsdays was held in Ghent
Belgium. Belgian consultant and Patrick Debois founded the
conference.
o In 2012, the state of DevOps report was launched and conceived by
Alanna Brown at Puppet.
o In 2014, the annual State of DevOps report was published by Nicole
Forsgren, Jez Humble, Gene Kim, and others. They found DevOps
adoption was accelerating in 2014 also.
o In 2015, Nicole Forsgren, Gene Kim, and Jez Humble founded DORA
(DevOps Research and Assessment).
o In 2017, Nicole Forsgren, Gene Kim, and Jez Humble published
"Accelerate: Building and Scaling High Performing Technology
Organizations".
Automation reduces time consumption, especially during the testing and
deployment phases. It increases productivity and makes releases quicker,
which leads to catching bugs earlier so they can be fixed easily. For
continuous delivery, each code change is validated through automated
tests, cloud-based services, and automated builds, which enables
production releases via automated deployments.
2) Collaboration
3) Integration
4) Configuration management
Configuration management ensures that the application interacts only
with the resources relevant to the environment in which it runs.
Configuration that is external to the application is kept separate from
the source code rather than hard-coded into it. Configuration files can
be written at deployment time or loaded at run time, depending on the
environment in which the application is running.
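A minimal sketch of externalized configuration, assuming environment variables as the external source; the variable names and defaults are illustrative, not from any particular tool:

```python
import os

def load_config():
    # External configuration is read from the environment at run time,
    # so the same source code runs unchanged in dev, test, and production.
    return {
        "db_url": os.environ.get("APP_DB_URL", "sqlite:///dev.db"),
        "debug": os.environ.get("APP_DEBUG", "false").lower() == "true",
    }

# Defaults apply when the environment provides nothing.
config = load_config()
```

Each deployment environment then supplies its own values without any change to the source code.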
Advantages
o DevOps is an excellent approach for quick development and
deployment of applications.
o It responds faster to the market changes to improve business growth.
o DevOps increases business profit by decreasing software delivery time
and costs.
o DevOps streamlines the development process, giving clarity on product
development and delivery.
o It improves customer experience and satisfaction.
o DevOps simplifies collaboration and places all tools in the cloud for
customers to access.
o DevOps means collective responsibility, which leads to better team
engagement and productivity.
Disadvantages
o DevOps professionals and expert developers are in short supply.
o Developing with DevOps is expensive.
o Adopting new DevOps technologies across an organization is hard to
manage in a short time.
DevOps Tools
Here are some of the most popular DevOps tools, with brief
explanations:
1) Puppet
Puppet is the most widely used DevOps tool. It allows technology changes
to be delivered and released quickly and frequently. It has features for
versioning, automated testing, and continuous delivery. It enables
managing the entire infrastructure as code without expanding the size of
the team.
Features
2) Ansible
Features
3) Docker
Docker is a high-end DevOps tool that allows building, shipping, and
running distributed applications on multiple systems. It also helps
assemble applications quickly from components, and it is well suited to
container management.
Features
4) Nagios
Nagios is one of the more useful tools for DevOps. It can determine the
errors and rectify them with the help of network, infrastructure, server, and
log monitoring systems.
Features
5) CHEF
Chef is a useful tool for achieving scale, speed, and consistency. It is
a cloud-based, open-source technology that uses Ruby code to define
essential building blocks such as recipes and cookbooks. Chef is used in
infrastructure automation and helps reduce manual and repetitive
infrastructure-management tasks.
Chef has its own conventions for the different building blocks required
to manage and automate infrastructure.
Features
6) Jenkins
Features
Features
CLOUD PLATFORM
A cloud platform can be described in many ways, but in the simplest
terms it is the operating system and hardware of a server in an
Internet-based data centre. It enables software and hardware products to
coexist remotely and at scale.
Cloud systems come in a range of shapes and sizes, and no single one
suits everybody. To meet the varying needs of consumers, a range of
models, types, and services is available. The main benefits are as
follows:
Cost
Global scale
The ability to scale elastically is one of the key advantages of cloud
computing services: the right amount of processing power, storage, and
bandwidth can be provisioned when and where it is needed, including the
choice of the data centre location where data is stored.
Performance
The most popular cloud computing services are hosted on a global network
of protected datacenters that are updated on a regular basis with the latest
generation of fast and powerful computing hardware.
Security
Speed
Large computations and large data transfers, both upload and download,
can happen almost in the blink of an eye, depending of course on the
configuration.
Reliability
Understanding what these services are, and how to use them for the right
purpose, will make processes smoother and easier and help an
organisation grow.