Manual Testing Notes Nov 2022
Debugging:
1. Once the Development team receives the testing team's report, they will
start debugging. This phase aims to locate the bug and remove it from the
software. It is a one-off process and is done manually.
2. In this process, a special tool called a debugger is used to locate the bugs;
most programming environments include a debugger.
3. Some popular debugger tools: WinDbg, OllyDbg, IDA Pro...
Psychology of testing:
• In software testing, psychology plays an extremely important role.
• It is one of those factors that stay behind the scenes but has a great impact on the end
result.
• It is mainly dependent on the mindset of the developers and testers, as well
as the quality of communication between them. Moreover, the psychology
of testing helps them work towards a common goal.
• The three sections of the psychology of testing are:
o The mindset of Developers and Testers.
o Communication in a Constructive Manner.
o Test Independence.
QA Vs. QC:
Quality Assurance:
• QA is process-oriented.
• QA is a proactive process.
• QA focuses on preventing defects.
• QA team works with the development team to produce quality software.
• QA ensures that approaches and techniques are implemented correctly
(during software development).
• QA is responsible for SDLC.
• E.g., Verification
Quality Control:
• QC is product-oriented.
• QC is a reactive process.
• QC focuses on identifying/detecting the defects.
• QC comes into the picture after Quality Assurance.
• QC verifies that the developed project meets the defined quality standards.
• QC is responsible for STLC.
• E.g., Validation
QE (Quality Engineering):
• A Quality Engineer also writes code, but for software testing purposes.
• Quality Engineers are essentially Automation Testers.
What is a QMS (Quality Management System)?
• A quality management system is a collection of business processes focused
on consistently meeting customer requirements and enhancing their
satisfaction. It is aligned with an organization's purpose and strategic
direction.
• A quality management system (QMS) is a system that documents the policies,
business processes, and procedures necessary for an organization to create
and deliver its products or services to its customers, and therefore increase
customer satisfaction through high product quality.
Maturity Levels (CMM):
1. Initial
2. Repeatable
3. Defined
4. Managed
5. Optimizing
==========================================================================
===
4. Absence-of-errors is a fallacy
• Some organizations expect that testers can run all possible tests and find
all possible defects, but this is impossible. It is a fallacy (i.e., a wrong belief)
to expect that just finding and fixing a large number of defects will ensure
the success of a system.
• For example, testing all specified requirements and fixing all defects found
could still produce a system that is difficult to use and does not fulfill the
users' needs and expectations.
5. Testing is context-dependent
• Testing is done differently in different contexts.
• For example, testing in an Agile project is done differently than
testing in a sequential software development lifecycle project.
6. Defects cluster together
• This is the idea that certain components or modules of software
usually contain the most issues or are responsible for most
operational failures.
• Testing, therefore, should be focused on these areas.
==========================================================================
===
• SDLC is a process used by the software industry to design, develop and test
software.
• SDLC process aims to produce high-quality software that meets customer
expectations.
• The software development should be completed within the predefined time frame
and cost.
• SDLC consists of a detailed process that explains how to plan, build,
and maintain specific software.
SDLC is important because it provides a disciplined, repeatable process for developing a software system.
Phases in SDLC:
1. Requirement Analysis: Requirements are gathered from the customer and
analyzed; the scope and feasibility of the system are defined.
2. Design: Based on the requirements, the system architecture and detailed
design of the software are prepared.
3. Coding:
• Once the system design phase is over, the next phase is coding. In this
phase, developers start to build the entire system by writing code using
the chosen programming language.
• In the coding phase, tasks are divided into units or modules and
assigned to the various developers.
• It is the longest phase of the Software Development Life Cycle process.
• In this phase, the developer needs to follow certain predefined coding guidelines.
• They also need to use programming tools like compilers, interpreters,
and debuggers to generate and implement the code.
4. Testing:
• Once the software is complete, it is deployed in the testing environment.
The testing team starts testing the functionality of the entire system. This
is done to verify that the entire application works according to the
customer’s requirements.
• During this phase, the QA/testing team may find bugs/defects, which
they communicate to the developers. The development team then fixes the
bugs and sends the build back to QA for a retest. This process continues
until the software is bug-free, stable, and working according to the
business needs of that system.
5. Deployment / Installation:
• Once the software testing phase is over and no bugs or errors are left in
the system then the final deployment process starts. Based on the
feedback given by the project manager, the final software is released and
checked for deployment issues if any.
6. Maintenance:
• Once the system is deployed, and customers start using the developed
system, the following three activities may occur:
✓ Bug fixing - Bugs are reported for scenarios that were
not tested at all.
✓ Upgrade - Upgrading the application to the newer versions of the
Software.
✓ Enhancement - Adding some new features to the existing software.
==========================================================================
===
==========================================================================
===
Types (Models) of SDLC:
1. Waterfall Model
2. Spiral Model
3. Rapid Application Development (RAD) Model
4. Iterative or Incremental Model
5. Prototype Model
6. V Model
7. Agile Model
1. Waterfall Model:
• Waterfall is one of the earliest and most commonly used software
development models (processes), in which the development process
looks like the flow, moving step by step through the phases like
analysis, design, coding, testing, deployment/installation, and
support. So, it is also known as the “Linear Sequential Model”.
• This SDLC model includes gradual execution of every stage completely.
This process is strictly documented and predefined with features
expected for every phase of this software development life cycle
model.
Advantages of Waterfall Model:
1. Simple, easy to understand, and use.
2. Easy to implement because of its linear process.
3. Since requirement changes are not allowed, the chances of finding bugs will be
less.
4. Initial investment is less since the testers are hired at the later stages.
5. Preferred for small projects where requirements are frozen (well
understood at an early stage).
==========================================================================
===
==========================================================================
===
3. RAD Model:
RAD stands for "Rapid Application Development". As the name itself suggests,
the RAD model aims to develop fast, high-quality software products by
gathering requirements through workshops.
==========================================================================
===
4. Iterative Model:
1. In the iterative model, the application is divided into small parts,
and development is done by specifying and implementing only
small parts of the software, which can be reviewed to identify
further requirements.
2. This process is repeated, creating a new version of the software in
each cycle of the model. The iterative model is very simple to
understand and use. In this model, we do not start by developing the
complete software with a full specification of requirements.
==========================================================================
===
5. Prototype Model:
• It is a trial version of the software: a sample product that is
designed before building the actual software.
• This model is used when user requirements are not very clear, and the
software is tested based on raw requirements obtained from the user.
The available types of prototyping are Rapid, Incremental,
Evolutionary, and Extreme.
• Prototype Model will work like --
1. We will take basic requirements
2. Based on the discussion, we will create an initial prototype (A
prototype – is a working model)
3. Once the working prototype is built, we will ask the client to check and use it
4. Next step will be to test and enhance
5. Again, we will call the user to check and use it, and again we will
make changes as per the user's feedback until we get all the
requirements from the user.
6. Once, all requirements are fulfilled and the client will agree, then
the last step will be signed off (Sign off - Deliver the product and
finish the contract)
Advantages of Prototype Model:
1. Users are actively involved in the development
2. Missing functionalities can be identified easily
3. Based on user feedback, the SRS document is finalized
==========================================================================
===
6. V Model:
• In parallel to the software development phase, a corresponding series of
test phases also runs in this model. Each stage ensures a specific type of
testing is done, and once that testing is passed, only then the next phase
starts.
• When the requirement is well-defined and unambiguous (certain), we use the V-Model.
• It is also known as Verification and Validation model.
Coding Phase:
• After designing, the coding phase is started. Based on the requirements,
a suitable programming language is decided. There are some guidelines
and standards for coding. Before checking in the repository, the final build
is optimized for better performance, and the code goes through many
code reviews to check the performance.
==========================================================================
===
1. Review:
A review is conducted on documents to ensure correctness and completeness.
• Requirement reviews
• Design reviews
• Code reviews
• Test Plan reviews
• Test cases reviews etc.
2. Walkthrough:
• It is an informal review.
• It is not pre-planned and can be done whenever required.
• The author reads the document or code and discusses it with peers.
• Also, a walkthrough does not have minutes of the meeting.
3. Inspection:
• It is the most formal review type.
• An inspection will have a proper schedule, which is intimated
via email to the concerned developers/testers.
• At least 3-8 people sit in the meeting: a reader, a writer,
and a moderator, plus the concerned people.
1. Unit Testing
2. Integration Testing
3. System Testing
4. User Acceptance Testing (UAT)
==========================================================================
===
STLC (Software Testing Life Cycle) Phases:
1. Requirement Analysis
2. Test Planning
3. Test Design
4. Test Environment Setup
5. Test Execution
6. Test Closure
1. Requirement Analysis: In this phase, the requirements documents are
analyzed and validated, and the scope of testing is defined.
2. Test Planning: In this phase, test plan strategy is defined, estimation of
test effort is defined along with automation strategy and tool selection is
done.
3. Test Design: In this phase test cases are designed; test data is
prepared, and automation scripts are implemented.
4. Test Environment Setup: A test environment closely simulating the
real-world environment is prepared.
5. Test Execution: In this phase, actual testing is performed as per the test steps.
6. Test Closure: Test Closure is the final stage of the STLC, where we prepare
all the detailed documentation that must be submitted to the client at the
time of software delivery, such as the test report, defect report, test case
summary, RTM details, and release note.
==========================================================================
===
What is Test Closure?
Test Closure is the final stage of the STLC, where we prepare all the detailed
documentation that must be submitted to the client at the time of software
delivery, such as the test report, defect report, test case summary, RTM
details, and release note.
What is the list of test closure documents?
It includes,
1. test case documents, (i.e., Test Case Excel sheet we prepare during actual testing)
2. test plan, test strategy,
3. Release note
4. test scripts,
5. test data,
6. traceability matrix, and
7. test results and reports like bug report, execution report etc.
What is a Build?
• It is a number/identity given to Installable software that is given to the
testing team by the development team
What is release?
• It is a number/ identity given to Installable software that is handed over to
the customer/client by the testing team (or sometimes directly by the
development team)
What is deployment?
• Deployment is the mechanism through which applications, modules,
updates, and patches are delivered from developers to end-
user/client/customer.
How would you define that testing is sufficient and it’s time to enter the
Test Closure phase? Or when we should stop testing?
• Testing can be stopped when one or more of the following conditions are met,
1. After test case execution – The testing phase can be stopped when
one complete cycle of test cases is executed after the last known
bug fix with the agreed-upon value of pass percentage.
2. Once the testing deadline is met - Testing can be stopped after
deadlines get met with no high priority issues left in the system.
3. Based on Mean Time Between Failures (MTBF) - MTBF is the time
interval between two inherent failures. Based on stakeholders'
decisions, if the MTBF is quite large, one can stop the testing
phase.
4. Based on code coverage value – The testing phase can be stopped
when the automated code coverage reaches a specific threshold
value with sufficient pass percentage and no critical bug.
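The MTBF criterion above can be illustrated with a minimal sketch (the failure intervals and the stop threshold are hypothetical values, not from these notes):

```python
# MTBF = total operational time / number of failures.
# Hypothetical uptime log: hours of operation between observed failures.
uptime_between_failures = [120, 95, 150, 135]  # hours (assumed sample data)

mtbf = sum(uptime_between_failures) / len(uptime_between_failures)
print(f"MTBF: {mtbf} hours")

# Stakeholders might agree on a threshold, e.g. stop testing once MTBF >= 100 h.
THRESHOLD_HOURS = 100
print("Stop testing:", mtbf >= THRESHOLD_HOURS)
```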
==========================================================================
===
Methods of Testing (Testing Methods) (White box, Black box, Grey box)
1. Black Box testing
2. White Box Testing
3. Grey Box Testing
✓ Grey box testing uses a combination of black-box and white-box testing.
Grey box test cases are designed with knowledge of the internal
logic (algorithms) of an application's code, but the actual testing is
performed as black-box testing. Alternatively, a limited amount of white-
box testing is performed, followed by conventional black-box testing.
Levels of Software Testing:
1. Unit Testing
2. Integration Testing
3. System Testing
4. User Acceptance Testing (UAT)
1. Unit Testing:
• A unit is a single component or module of software.
• Unit testing is conducted on a single program or a single module.
• Unit testing is a white box testing technique.
• The developers conduct Unit testing.
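A minimal sketch of a developer-level unit test, using Python's built-in unittest (the `add` function is a hypothetical unit, chosen only for illustration):

```python
import unittest

def add(a, b):
    """Unit under test: a single, small function (hypothetical example)."""
    return a + b

class TestAdd(unittest.TestCase):
    def test_positive_numbers(self):
        self.assertEqual(add(2, 3), 5)

    def test_negative_numbers(self):
        self.assertEqual(add(-1, -1), -2)

# Run the tests programmatically (exit=False keeps the interpreter alive).
unittest.main(argv=["unit-test-demo"], exit=False)
```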
2. Integration Testing:
• Integration testing is performed between two or more modules.
• Integration testing focuses on checking data communication
between multiple modules.
• Integration testing is a white box testing technique.
• Integration testing is conducted by the tester at the application level (at the UI
level).
• Integration testing is conducted by the developer at the coding level.
3. System Testing: (This is the actual area where testers are mostly involved.)
• Testing the overall functionality of the application with
respective client requirements.
• It is a black box technique.
• The testing team conducts System testing.
• After completion of component (unit) and integration level
testing, we start System testing.
✓ Usability Testing:
• This testing validates whether the application provides context-sensitive
help to the user or not.
• Checking how easily the end-users can understand and operate the
application is called usability testing.
• This is like a user manual, so that the user can read the manual and proceed
further.
✓ Functional Testing:
• In functional testing, we check the functionality of the software.
• Functionality describes what software does. Functionality is
nothing but the behavior of the application.
• Functional testing talks about how your feature should work.
I. Objective Properties Testing
II. Database Testing: DML operations like insert, delete, update, select
III. Error Handling
IV. Calculation/Manipulations Testing
V. Links Existence and Links Execution
VI. Cookies and Sessions
✓ Non-functional Testing
a. Performance Testing
▪ Load Testing
▪ Stress Testing
▪ Volume Testing
b. Security Testing
c. Recovery Testing
d. Compatibility Testing
e. Configuration Testing
f. Installation Testing
g. Sanitation / Garbage Testing
h. Endurance testing
i. Scalability testing.
a. Performance Testing: Speed of the application.
✓ Load: Gradually increase the load on the application, then
check the speed of the application. Here, load means
data.
✓ Stress: Suddenly increase/decrease the load on the
application and check the speed of the application.
✓ Volume: Check how much data the application can handle.
Here we apply huge amounts of data to the system until it
hangs. Generally, this test is performed to check how the
system responds to bulk data at a time.
c. Recovery Testing:
✓ Check the system change from abnormal to normal.
d. Compatibility Testing:
✓ Forward Compatibility
✓ Backward Compatibility
✓ Hardware Compatibility (Configuration testing)
e. Configuration Testing:
✓ It is a combination of hardware and software, in which we
need to test whether they are communicating properly or
not. In simple words, we check how the data flows from one
module to another.
f. Installation Testing:
✓ Check screens are clear to understand.
✓ Screens navigation
✓ Simple or not.
✓ Un-installation.
g. Sanitation / Garbage Testing
✓ If an application provides extra features/functionality,
then we consider them bugs.
==========================================================================
===
Testing:
Non-Functional testing:
✓ Data
✓ Coverage (cover every area/functionality of the feature)
• Test Design Techniques (During Designing Test Cases) (for Black Box Testing):
1. Equivalence Class Partitioning (ECP)
2. Boundary Value Analysis (BVA)
3. Decision Table
4. State Transition
5. Error Guessing
*** Input Domain testing: The value will be verified in the textbox/input fields.
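Techniques 1 and 2 above can be sketched for a hypothetical age field that accepts 18-60 (the field and its limits are assumptions for illustration):

```python
# Hypothetical input field: age must be between 18 and 60 (inclusive).
MIN_AGE, MAX_AGE = 18, 60

def is_valid_age(age):
    return MIN_AGE <= age <= MAX_AGE

# ECP: one representative value per equivalence partition is enough.
invalid_low, valid_mid, invalid_high = 10, 35, 70
assert not is_valid_age(invalid_low)
assert is_valid_age(valid_mid)
assert not is_valid_age(invalid_high)

# BVA: test at and just around each boundary.
for age, expected in [(17, False), (18, True), (19, True),
                      (59, True), (60, True), (61, False)]:
    assert is_valid_age(age) == expected
print("All ECP/BVA checks passed")
```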
3. Decision Table:
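A decision table maps combinations of conditions to expected outcomes; a minimal sketch for a hypothetical login form (the conditions and outcome messages are assumptions):

```python
# Decision table for a hypothetical login form:
# each rule is (valid_username, valid_password) -> expected outcome.
decision_table = {
    (True,  True):  "login succeeds",
    (True,  False): "error: wrong password",
    (False, True):  "error: unknown user",
    (False, False): "error: unknown user",
}

def expected_outcome(valid_username, valid_password):
    return decision_table[(valid_username, valid_password)]

# Every combination of conditions yields one test case.
for rule, outcome in decision_table.items():
    print(rule, "->", outcome)
```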
4. State Transition:
==========================================================================
===
Test Plan Vs. Test Strategy:
==========================================================================
===
==========================================================================
===
==========================================================================
===
• Test Environment:
1. Test Environment is a platform specially built for test case execution on the software
product.
2. It is created by integrating the required software and hardware along
with proper network configuration.
3. Test environment simulates production/real-time environment.
4. Another name for the test environment is Test Bed.
5. This is nothing, but an environment created to execute the Test Cases.
==========================================================================
===
• Test Execution:
1. To perform actual testing as per the test steps. i.e., During this phase
test team will carry out the testing, based on the test plans and the test
case prepared.
2. Entry Criteria (Inputs): Test Cases, Test Data, and Test Plan.
3. Activities:
✓ Test cases are executed based on test planning.
✓ Status of test cases is marked, like passed, failed, blocked, run, etc.
✓ Documentation of the test results and log defects for failed cases are done.
✓ All the blocked and failed test cases are assigned bug IDs.
✓ Retesting once they are fixed.
✓ Defects are tracked till closure.
4. Deliverables (Outputs): Provides defect report and test case execution
report with completed results.
==========================================================================
===
4. Adding the artifacts: You can start adding the artifacts you have to the
columns. You can now copy and paste requirements, test cases, test
results & bugs in the respective columns. You need to ensure that the
requirements, test cases, and bugs have unique ids. You can add separate
columns to denote the requirement id such as Requirement_id,
TestCaseID, BugID, etc.
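The requirement-to-test-case linkage described above can be sketched as a simple structure (all IDs are hypothetical, chosen only to illustrate the column layout):

```python
# Minimal traceability matrix sketch: each requirement maps to its
# test cases and any bugs raised against them (IDs are hypothetical).
rtm = [
    {"Requirement_id": "REQ-01", "TestCaseID": ["TC-01", "TC-02"], "BugID": []},
    {"Requirement_id": "REQ-02", "TestCaseID": ["TC-03"], "BugID": ["BUG-07"]},
]

# Coverage check: every requirement must have at least one test case.
uncovered = [row["Requirement_id"] for row in rtm if not row["TestCaseID"]]
print("Uncovered requirements:", uncovered)
```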
==========================================================================
===
==========================================================================
===
Defects/Bugs/Issues:
1. Any mismatched functionality found in an application is called a Defect/Bug/Issue.
2. During Test Execution Test engineers are reporting mismatches as
defects to developers through templates or using tools.
3. Defect Reporting Tools:
o Clear Quest
o DevTrack
o Jira
o Quality Center
o Bugzilla etc.
Test Management tools and Bug tracking tools are completely different.
Test (Case) Management Tool Vs. Project Management Tool Vs. Bug Tracking Tool.
==========================================================================
===
==========================================================================
===
• Priority
1. P1 (High)
2. P2 (Medium)
3. P3 (Low)
Defect Severity:
• Severity is assigned/given by the QA Testers.
• It affects the functionality.
• Severity describes the seriousness of the defect and how much impact
on Business workflow (functionality). It is categorized into Blocker,
Critical, Major, Minor.
==========================================================================
===
Defect Resolution:
After receiving the defect report from the testing team, the development
team conducts a review meeting to fix defects. Then they send a Resolution
Type to the testing team for further communication.
Resolution Types:
1. Accept
2. Reject
3. Duplicate
4. Enhancement
5. Need more information
6. Not Reproducible
7. Fixed
8. As Designed.
==========================================================================
===
a. Activities:
b. Deliverables:
c. Test Metrics:
==========================================================================
===
QA/Testing Activities:
• Understanding the requirements and functional specifications of the application.
• Identifying required Test Scenarios.
• Designing Test Cases to validate the application.
• Setting up Test Environment (Test Bed).
• Execute Test Cases to validate the application.
• Log test results (how many test cases pass/fail).
• Defect reporting and tracking.
• Retest fixed defects of the previous build.
• Perform various types of testing on the application.
• Report to the Test Lead about the status of assigned tasks.
• Participate in regular team meetings.
• Creating automation scripts.
• Provides recommendations on whether or not the application/system is ready for
production.
==========================================================================
===
Software Testing Terminologies: (Other Types of Testing):
a. Regression Testing
b. Re-testing
c. Exploratory testing
d. Adhoc Testing
e. Monkey Testing
f. Positive Testing
g. Negative Testing
h. End to End Testing
i. Globalization and Localization Testing
a. Regression testing:
Testing conducted on a modified build (updated build) to make sure there is
no impact on existing functionality because of changes like
adding/deleting/modifying features. Also, we can say Smoke Testing is a
small part of regression testing.
b. Re-Testing:
1. Whenever the developer fixes a bug, the tester tests the bug fix; this is called re-
testing.
2. The tester closes the bug if the fix worked; otherwise, it is re-opened and sent
back to the developer.
3. It ensures that the defects which were found and posted in the
earlier build are fixed in the current build.
4. Example:
i. Build 1.0 was released, test team found some defects
(Defect ID 1.0.1, 1.0.2) and posted them.
ii. Build 1.1 was released, now testing the defects 1.0.1 and
1.0.2 in this build is retesting.
e. Monkey/Gorilla Testing:
• Testing applications randomly without any test cases or any business
requirement.
• Like ad hoc testing, it is an informal testing type with the aim of breaking the system.
• Tester does not have knowledge of the application.
• Suitable for gaming applications.
f. Positive Testing:
• Testing the application with valid inputs is called positive testing.
• It checks whether an application behaves as expected with positive inputs.
g. Negative Testing
• Testing the application with invalid inputs is called negative testing.
• It checks whether an application behaves as expected with negative
inputs.
Positive Vs. Negative Test Cases:
For example, if a text box is listed as a feature and in FRS it is
mentioned as a Text box that accepts 6-20 characters and only
alphabets.
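The textbox rule above can be sketched as a validation function with one positive and a few negative test cases (a minimal sketch; the function name is an assumption):

```python
import re

def is_valid_name(text):
    """FRS rule from the example: 6-20 characters, alphabets only."""
    return bool(re.fullmatch(r"[A-Za-z]{6,20}", text))

# Positive test case: valid input, expected to be accepted.
assert is_valid_name("Sharma")

# Negative test cases: invalid inputs, expected to be rejected.
assert not is_valid_name("abc")        # too short
assert not is_valid_name("a" * 21)     # too long
assert not is_valid_name("abc123xyz")  # digits not allowed
print("Positive and negative checks passed")
```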
============================================================
=================
Agile – Scrum
Agile Principles:
• Requirement changes are accepted from the customer.
• So, the customer does not need to wait, which provides customer satisfaction.
• We develop, test, and release pieces of software with some features to the customer.
• The whole team works together toward achieving one goal.
• Here we focus on face-to-face (F2F) conversation.
• We follow an iterative and incremental approach.
Development Task:
• Review the story
• Estimate the story
• Design
• Code
• Unit Testing
• Integration Testing etc.
QA Task:
• Review the story
• Test Cases
• Test Scenarios
• Test Data
• Test Review
• Test Environments
• Execute Test Cases
• Report Bugs etc.
==========================================================================
===
Scrum: Scrum is a framework through which we build the software product
by following Agile Principles.
Scrum workflow -
Scrum Team:
1. Product Owner
2. Scrum Master
3. Development Team
4. QA Team
1. Product Owner:
• Product Owner is the First Point of Contact.
• He will get input from the customers.
• Defines the feature of the product.
• Prioritize the features according to the market value.
• Adjust features or priority every iteration, as needed.
• Product Owner can Accept or Reject the work result.
• He will define features of the product in the form of User Stories or Epics.
2. Scrum Master:
• Scrum Master has the main role of facilitating and driving the Agile Process.
• He acts like a manager/team lead for Scrum Team.
• He leads over all the Scrum ceremonies.
==========================================================================
===
Definition of Ready (DOR): A User Story meets the DOR when its requirements and acceptance criteria are ready and clear to the team.
Definition of Done (DOD): It is achieved when,
• The story is developed completely
• Testing (QA) complete
• Regression around the story is complete
• The story meets and satisfies the acceptance criteria
• The feature is eligible to be deployed in production.
==========================================================================
===
Scrum Terminologies:
Product Backlog: Contains a list of all requirements (like user stories and
epics). Prepared by Product Owner.
Epic: Collection of related user stories. Epic is nothing but a large (high level) requirement.
User Story: A feature/module in a software. Define the customer needs. It is
nothing but the phrasing of the requirement in the form of a story.
Task: To achieve the business requirements, the development team creates tasks.
Sprint/Iteration: The period of time to complete (i.e., develop and test)
the user stories, decided/selected by the Product Owner and the team. It is
usually 2-4 weeks long.
Sprint Backlog: List of committed stories by the Developers and QAs for a specific Sprint.
Sprint Planning Meeting: This is the meeting with the team, to define what
can be delivered in the Sprint and its duration.
Sprint Review Meeting: Here we walkthrough and demonstrate the feature or
story implemented by the team to the stakeholder.
Sprint Retrospective Meeting: Conducted only after completion of the Sprint. The
entire team, including the Product Owner and Scrum Master, should participate.
They discuss mainly 3 things,
✓ What went well?
✓ What went wrong?
✓ Improvements are needed in the upcoming sprint.
Backlog Grooming Meeting:
• In this meeting, the scrum team meets along with the scrum master and product owner.
• The product owner presents the business requirements; as per
the priority, the team discusses them and identifies the complexity,
dependencies, and efforts.
• The team may also do the story pointing at this stage.
Story Point: Rough estimation given by Developers and QA in the form of the Fibonacci
series.
Time Boxing in Scrum: Timeboxing is nothing but the Sprint which is the
specific amount of time to complete the specified amount of work.
Scrum of Scrums:
• Suppose 7 teams are working on a project and each team has 7 members.
Each team leads its particular scrum meeting. Now, to coordinate among
the teams, a separate meeting has to be organized; that meeting is called
the Scrum of Scrums.
• An ambassador (team leads) (a designated person who represents the
team) represents the team in the scrum of scrums.
• Few points discussed in the meeting are:
✓ The progress of each team, after the last meeting.
✓ The task is to be done before the next meeting.
✓ Issues that the team had faced while completing the last task.
Spike in Agile Scrum: It is a story that cannot be estimated.
Sprint Zero:
• Sprint zero usually takes place before the formal start of the project.
• This Sprint should be kept lightweight and relatively high level. It is all
about the origination of project exploration and gaining an understanding
of where you want to head while keeping velocity low.
Velocity:
• Velocity (is a metric) used to measure the units of work done
(completed) in the given time frame.
• We can say it is the sum of story points that the Scrum team completed over a
sprint.
Burndown Chart:
• Shows how much work is pending/remaining in the Sprint. Maintained
daily by the Scrum Master. The progress is tracked by a Burndown chart.
• The Burndown chart shows the estimated vs. actual efforts of the
Scrum tasks.
• It is used to check whether the stories are progressing towards
completion of the committed story points or not.
Burnup Chart: Amount of work completed within a project.
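Velocity and a burndown series can be sketched with hypothetical sprint numbers (all values below are assumed, not from the notes):

```python
# Hypothetical sprint: story points of the stories completed in the sprint.
completed_story_points = [5, 3, 8, 2]

# Velocity = sum of story points completed over the sprint.
velocity = sum(completed_story_points)
print("Velocity:", velocity)

# Burndown: points remaining at the end of each day of a 5-day sprint
# (the ideal line drops evenly from the committed total to zero).
committed = 18
ideal = [committed - committed * day / 5 for day in range(6)]
actual = [18, 16, 13, 9, 4, 0]  # hypothetical daily remaining work
for day, (i, a) in enumerate(zip(ideal, actual)):
    print(f"day {day}: ideal={i:.1f} actual={a}")
```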
==========================================================================
===
Difference between Scrum and Waterfall:
• Feedback from the customers is received at an earlier stage in Scrum than in Waterfall.
• New changes can be accommodated more easily in Scrum than in Waterfall.
• Rolling back or accommodating new changes is easy in Scrum.
• Testing is considered a separate phase in Waterfall, unlike in Scrum.
There are three types of persons involved in Scrum which are, Product
Owner, Scrum Master, and Scrum Team which includes Developer, Tester,
and BA.
• Product Backlog: Contains a list of all user stories. Prepared by Product Owner.
• Sprint Backlog: Contains the User Stories committed by the Developer
and QAs for a specific Sprint.
============================================================
=================
Extra Questions:
So, in the scrum, which entity is responsible for the deliverables? Scrum Master or
Product Owner?
• Neither the scrum master nor the product owner. It’s the responsibility of
the team that owns the deliverable.
How do you create the Burn-Down chart?
• It is a tracking mechanism by which for a particular sprint; day-to-day tasks
are tracked to check whether the stories are progressing towards the
completion of the committed story points or not. Here, we should remember
that the efforts are measured in terms of user stories and not hours.
How does agile testing (development) methodology differ from other testing
(development) methodologies?
• In the agile methodology, the entire process is broken into small
pieces of code, and at each step these pieces are tested. There are several
processes or plans involved in this methodology, like communication with the
team, short strategic changes to get the optimal result, etc.
Describe the places where ‘Scrum’ and ‘Kanban’ are used.
• Use ‘Scrum’ when you need to shift towards a more appropriate or more
prominent process; use ‘Kanban’ when you want to improve a running
process without changing the whole setup.
Do you think Scrum can be implemented in all software development processes?
• Scrum is used mainly for:
✓ Complex projects.
✓ Projects with early and strict deadlines.
✓ Developing software from scratch.
In case you receive a story on the last day of the sprint and, on testing it, you
find defects, what will you do? Will you mark the story as done?
• No, I will not mark the story as done, as it has open defects and the
complete testing of all the functionality of that story is pending. Since we
are on the last day of the sprint, we will mark those defects as Deferred and
spill the story over to the next sprint.
How do you measure the complexity or effort in a sprint? Is there a way to
determine and represent it?
• Complexity and effort are measured through “Story Points”. In Scrum, it is
recommended to use the Fibonacci series to represent them, considering the
development effort, testing effort, dependency resolution, and other factors
required to complete a story.
When we estimate with story points, we assign a point value to each item.
• To set the story points, find the simplest story and assign it a value of 1;
then, based on relative complexity, assign values to the remaining user
stories.
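As a sketch of this relative-sizing idea (the story names and raw sizes below are hypothetical), each story's gut-feel size relative to the 1-point anchor story can be snapped to the nearest value on the Fibonacci scale:

```python
# Fibonacci scale commonly used for story points.
FIB_SCALE = [1, 2, 3, 5, 8, 13]

def nearest_point(raw_size):
    """Snap a raw relative-size estimate to the closest Fibonacci point."""
    return min(FIB_SCALE, key=lambda p: abs(p - raw_size))

# Hypothetical raw sizes relative to the simplest (1-point) anchor story.
raw_sizes = {"login form": 1, "search filter": 4, "payment flow": 12}
story_points = {name: nearest_point(size) for name, size in raw_sizes.items()}
print(story_points)  # each story mapped onto the Fibonacci scale
```

Note that a tie between two scale values (e.g. a raw size of 4, equidistant from 3 and 5) resolves to the smaller point here; a real team would resolve such ties by discussion.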
During the Review, suppose the Product Owner or a stakeholder does not agree
with the feature you implemented. What would you do?
• First, we will not mark the story as done.
• We will then confirm the actual requirement with the stakeholder, update
the user story, and put it back into the backlog. Based on its priority, we
would pull the story into the next sprint.
Apart from planning, review, and retrospective, do you know any other ceremony in
Scrum?
• Those three meetings occur on a regular basis. Apart from them, there is
one more meeting, the Product Backlog Grooming meeting, where the
team, Scrum Master, and Product Owner meet to understand the business
requirements, split them into user stories, and estimate them.
You are in the middle of a sprint and suddenly the Product Owner comes with
a new requirement. What will you do?
• In the ideal case, the requirement becomes a story and moves to the backlog.
Then, based on its priority, the team can take it up in the next sprint.
• But if the priority of the requirement is really high, the team will have to
accept it into the sprint. It must then be clearly communicated to the
stakeholders that incorporating a story in the middle of the sprint may
result in a few stories spilling over to the next sprint.
Which are the top agile metrics?
• Sprint burndown metric: Shows how much work is pending/remaining in
the sprint. Maintained daily by the Scrum Master; the progress is tracked
with a burndown chart, which plots the estimated vs. actual effort of the
sprint tasks.
• Velocity: A metric used to measure the units of work completed in a given
time frame.
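For instance, with hypothetical results from the last few sprints, velocity is simply the average of the story points completed per sprint:

```python
# Hypothetical story points completed in the last four sprints.
completed_points = [21, 18, 24, 20]

# Velocity: average units of work completed per sprint.
velocity = sum(completed_points) / len(completed_points)
print(velocity)  # 20.75 -> a rough capacity figure for planning the next sprint
```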
• Work category allocation: Gives a clear idea of where we are investing
our time and where to set priorities.
• Defect removal awareness: Quality products can be delivered only when
team members actively detect and remove defects and stay aware of them.
• Cumulative flow diagram: Helps check whether the workflow is uniform;
the X-axis shows time and the Y-axis shows the amount of effort.
• Business value delivered: Shows the team’s working efficiency. Roughly
100 points are associated with each project, and business objectives are
given values of 1, 2, 3, 5, and so on according to complexity, urgency, and
ROI.
• Defect resolution time: The process in which a team member detects a
bug, prioritizes it, and removes the error. A series of steps is involved in
fixing a bug:
✓ Clarify the picture of the bug
✓ Schedule the fix
✓ Fix the defect
✓ Hand over the resolution report
• Time coverage: The amount of testing time given to the code in question. It
is measured as the ratio of the number of lines of code called by the test
suite to the total number of relevant lines of code, expressed as a percentage.
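The ratio described above reduces to a one-line calculation; the line counts here are hypothetical:

```python
# Hypothetical counts: lines executed by the test suite vs. total relevant lines.
lines_called_by_suite = 850
total_relevant_lines = 1000

coverage_pct = lines_called_by_suite / total_relevant_lines * 100
print(f"coverage: {coverage_pct:.1f}%")  # coverage: 85.0%
```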
• Master test plan: A master test plan is a high-level document for a project
or product’s overall testing goals and objectives. It lists tasks, milestones,
and outlines the size and scope of a testing project. It encapsulates all
other test plans within the project.
• Testing type-specific plan: Test plans can also be used to outline details
related to a specific type of test. For example, you may have a test plan
for unit testing, acceptance testing, and integration testing. These test
plans drill deeper into the specific type of test being conducted.
• Design docs:
✓ Process guideline docs
✓ Corporate standards docs, etc.
Test deliverables are nothing but the documents prepared during and after
testing. Test deliverables are delivered to the client not only for the
completed activities but also for the activities we are implementing for
better productivity.
Test deliverables include:
• Test plan document,
• Test case document,
• Test result documents (prepared during each phase/type of testing),
• Test report or project closure report (prepared once the project is rolled out to the
client),
• Coverage matrix, defect matrix, and Traceability Matrix,
• Test design specifications,
• Release notes,
• Tools and their outputs,
• Error logs and execution logs,
• Problem reports and corrective action
==========================================================================