2023-Winter-solved-answers-ste
Uploaded by bhaktinimaj94

22518

23124
3 Hours / 70 Marks Seat No.

Instructions : (1) All Questions are compulsory.


(2) Illustrate your answers with neat sketches wherever necessary.
(3) Figures to the right indicate full marks.
(4) Assume suitable data, if necessary.

Marks
1. Attempt any FIVE of the following : 10
(a) Compare verification and validation (any 2 points).
- Verification checks whether you are building the software right; validation checks whether you are building the right software.
- Verification ensures the software meets its specifications; validation confirms the software meets the user's needs and expectations.
- Verification focuses on the process of building the software correctly; validation focuses on whether the delivered product meets the user's actual needs.
- Verification is typically done by developers (through reviews and inspections); validation is typically done by testers (through actual test execution).

(b) Define failure, error, fault, bug.

Software testing is a process in which a software application is evaluated to ensure it meets specified requirements and works correctly. The following terms describe the problems uncovered by testing:

- Error: a human mistake made during development that causes the software to behave unexpectedly or incorrectly.
- Fault: the underlying defect in the software's code, caused by an error, that can lead to failures when the software is executed.
- Failure: the observable event in which the software does not perform as intended during execution.
- Bug: the informal term for a fault or defect, i.e., a flaw in the software that causes it to not work correctly or to produce wrong results.

(c) List the objectives of software testing. (any four)

1. Ensure software meets specified requirements.


2. Improve software quality, reliability, and performance.
3. Identify and fix defects to ensure software works correctly before release.
4. Enhance user satisfaction by delivering high-quality, reliable software

(d) (e) Define Driver and stub.


- **Driver**: A driver is a dummy program that calls the component under test, standing in for a higher-level module that has not yet been developed. It supplies test inputs and collects the results, and is mainly used in bottom-up integration testing.
- **Stub**: A stub is a basic implementation of a component that is called by the component under test, standing in for a lower-level module that has not yet been developed. It returns canned responses to simulate the missing component's behavior, and is mainly used in top-down integration testing.
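The distinction can be sketched in a few lines of Python (hypothetical names: `get_price_stub` stands in for an undeveloped lower-level pricing module, and `driver` stands in for the missing higher-level caller):

```python
# Hypothetical sketch: a stub replaces a lower-level module that is not
# yet developed, and a driver stands in for the higher-level caller.

def get_price_stub(item_id):
    """Stub: returns a canned response in place of the real pricing module."""
    return 100.0

def apply_discount(item_id, percent, price_lookup):
    """Module under test: depends on a lower-level pricing module."""
    price = price_lookup(item_id)
    return price * (100 - percent) / 100

def driver():
    """Driver: calls the module under test with test inputs and checks results."""
    assert apply_discount("SKU1", 10, get_price_stub) == 90.0
    assert apply_discount("SKU1", 0, get_price_stub) == 100.0

driver()
```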

(f) What is GUI testing ? Give one example.


GUI testing, or "graphical user interface" testing, focuses on testing the visual
elements of a software application to ensure they work correctly and provide a
good user experience. It involves checking buttons, menus, icons, and other
graphical components to make sure they are functional and responsive.
For example, in a web application, GUI testing would involve verifying that
buttons can be clicked, menus open correctly, forms accept input, and overall,
the interface looks and behaves as expected.
(g) Write any two root causes of defect.

1. Miscommunication: When there is a lack of clear communication between


team members, such as unclear requirements or misunderstandings about the
project scope, it can lead to defects in the software.

2. Time Pressure: Tight deadlines and rushing through the development process
can result in errors and oversights, leading to defects in the software. It's
essential to balance speed with thorough testing to ensure quality.
(h) Enlist any four software testing tools.

 Selenium

 JUnit

 TestComplete

 Postman

 JIRA

 LoadRunner

2. Attempt any THREE of the following : 12


(a) State the Entry and Exit criteria for software testing.
Testing should begin as early as possible in the software development process. This allows defects to be identified at an early stage, which reduces the cost and effort required to fix them, and it ensures that development stays aligned with the specified requirements, minimizing the risk of critical issues in later stages.
Entry criteria — conditions that must be satisfied before testing starts:
• Requirements are clearly defined and approved.
• Test design and the documentation plan are ready.
• Testers are trained, and necessary resources are available.
• Proper and adequate test data (such as test cases) is available.
Exit criteria — knowing when to stop testing is equally important. Testing should stop when the testing objectives have been met and the software has been validated against the defined criteria: critical functionalities work as expected, major defects have been fixed, and the software meets the quality standards set for release.
• The testing deadline is reached.
• Test case execution is complete.
• Functional and code coverage reach a defined level.
• Bug rates fall below a certain level and no high-priority bugs remain open.
• Management decision.

(b) Illustrate process of bi-directional integration testing. State its two advantages &
disadvantages.
Bi-directional integration testing (also called sandwich testing) combines both top-down and bottom-up integration testing approaches. It starts testing from the middle layer of the system and progresses upwards and downwards simultaneously.
Process:
1. Middle Layer Testing: Begin testing from the middle layer of the system,
usually the most critical or central module.
2. Top-Down Integration: Test higher-level modules incrementally, moving from
the middle layer upwards towards the top-level modules.
3. Bottom-Up Integration: Simultaneously, test lower-level modules
incrementally, moving from the middle layer downwards towards the base
modules.
4. Stubs and Drivers: Use stubs (for top-down) and drivers (for bottom-up) to
simulate modules that are not yet integrated.
5. Combine: After testing both directions, integrate all modules and perform overall
system testing.
Advantages:
1. Comprehensive Coverage: Ensures that both high-level and low-level modules
are tested thoroughly.
2. Early Detection: Errors in both upper and lower modules can be identified early
in the testing process.
Disadvantages:
1. Complex and Time-Consuming: Managing both top-down and bottom-up
testing simultaneously can be complex and requires significant time and
resources.
2. Dependency on Stubs and Drivers: The need for stubs and drivers can
introduce additional complexity and potential issues if they are not well-
designed.

(c) Enlist any four attributes of defect. Describe them with suitable example.

1. Severity: Severity indicates the impact of a defect on the software's functionality


or performance. It ranges from minor issues to critical problems that can halt the
system. For example, a defect where the system crashes when a specific button is
clicked is considered a high severity issue.

2. Priority: Priority defines the order in which defects should be addressed based
on their importance. Defects with higher priority need to be fixed before those with
lower priority. For instance, a defect that prevents users from logging into the
system might have a high priority.

3. Reproducibility: Reproducibility refers to whether a defect can be consistently


reproduced under the same conditions. Defects that are easily reproducible are
typically easier to diagnose and fix. An example could be a defect that occurs
every time a user follows a specific sequence of steps.

4. Status: Status tracks the current state of a defect in the testing process. It can
include statuses like New, Open, In Progress, Fixed, Reopened, Closed, etc. For
example, a defect reported by a tester is initially in the "New" status until it is
assigned to a developer for fixing.
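The attributes above can be illustrated as a simple defect record (a hypothetical sketch; real defect-tracking tools store many more fields):

```python
from dataclasses import dataclass

@dataclass
class Defect:
    """Hypothetical defect record illustrating the four attributes above."""
    defect_id: str
    severity: str        # impact on functionality: Low / Medium / High / Critical
    priority: str        # order of fixing: Low / Medium / High
    reproducible: bool   # can the defect be reproduced consistently?
    status: str = "New"  # New, Open, In Progress, Fixed, Reopened, Closed

d = Defect("DEF001", severity="High", priority="High", reproducible=True)
d.status = "Open"   # updated once the defect is assigned to a developer
```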
(d) Describe any four factors for selecting a testing tool.

1. Project Requirements and Compatibility

 Description: The testing tool must meet the specific needs of the project. Factors
such as the type of application (web, mobile, desktop) and the technologies used
in the project (programming languages, frameworks, etc.) must be taken into
account. Selecting a tool that matches the technical stack ensures smooth testing
and avoids compatibility issues.

 Example: For a web application built with Angular, a testing tool like Selenium
would be an ideal choice. Selenium supports automated testing across different
browsers, making it a highly compatible option for such projects.

2. Ease of Use and Learning Curve

 Description: A testing tool should be easy to use and have a minimal learning
curve, especially if the testing team is unfamiliar with it. A tool that is
complicated to learn or operate can slow down the testing process, resulting in
delays and inefficiencies.

 Example: Postman is a commonly chosen tool for API testing because of its
intuitive interface and ease of use. Even those new to API testing can quickly
learn and utilize Postman effectively, which speeds up the testing process and
increases team productivity.

3. Budget

 Description: The cost of the testing tool plays a significant role in the selection
process. Projects often operate within budget constraints, and selecting a tool
that is either free or fits within the budget is essential. Balancing the cost with
the tool’s capabilities ensures a cost-effective solution.

 Example: Open-source tools such as Selenium or JUnit are excellent choices for
teams working with tight budgets. These tools provide robust testing capabilities
without incurring licensing fees, making them an economical yet effective
solution.

4. Integration Capabilities

 Description: The ability of the testing tool to integrate with other tools and
platforms used in the project is another important factor. For example, the tool
should work well with bug tracking systems, continuous integration (CI) tools,
or test management systems. Integration ensures a seamless and efficient testing
workflow.

 Example: Selenium integrates effortlessly with Jenkins, a widely used CI tool. This combination makes it easy to set up automated testing as part of the CI/CD process, improving both testing and deployment efficiency.

Q.3) Attempt any THREE of the following : 12


(a) Differentiate between alpha testing and beta testing. (any four points)

1. **Performed By**:
- Alpha Testing: Performed by internal testers and developers at the software development site.
- Beta Testing: Performed by real customers or end users at their own site.

2. **Independent Testing Team**:
- Alpha Testing: May also be performed by an independent testing team within the organization.
- Beta Testing: Not performed by an independent testing team.

3. **Market Exposure**:
- Alpha Testing: Not open to the market and public.
- Beta Testing: Always open to the market and public.

4. **Conducted For**:
- Alpha Testing: Conducted for software application and project.
- Beta Testing: Usually conducted for software product.

5. **Environment**:
- Alpha Testing: Performed in a controlled lab (simulated) environment.
- Beta Testing: Performed in a real-world environment.

6. **Location**:
- Alpha Testing: Always performed within the organization.
- Beta Testing: Always performed outside the organization.

7. **Testing Type**:
- Alpha Testing: Includes both White Box Testing and Black Box
Testing.
- Beta Testing: Only a kind of Black Box Testing.

8. **Performed At**:
- Alpha Testing: Always performed at the developer's premises in the
absence of the users.
- Beta Testing: Always performed at the user’s premises in the absence
of the development team.
(b) Prepare test plan for Notepad application. (Windows based)
**Test Plan for Notepad Application**
Here is a detailed test plan for the Notepad application following the provided
structure:

1. Test Plan Identifier


TestPlan-001-NotepadApp

2. Introduction
This test plan outlines the strategy, scope, resources, and schedule for testing
the Notepad application. The primary goal is to ensure that the Notepad
application functions as expected, is user-friendly, and meets all functional and
non-functional requirements.

3. Test Items
 Notepad Application (latest version).
 Primary features like file creation, saving, editing, and formatting text.
4. Features to be Tested
 Basic text editing (create, open, save, close files).
 Formatting options (font size, style, word wrap).
 File operations (save as, print).
 Search and replace functionality.
 Compatibility across Windows OS versions.

5. Features Not to be Tested


 Advanced features like syntax highlighting (not supported in basic
Notepad).
 Integration with other applications.

6. Approach
 Manual Testing will be used to verify functionality and usability.
 Test cases will be created for each feature, and results will be documented.

7. Item Pass/Fail Criteria


 Pass: The feature works as per the requirements without errors.
 Fail: Any deviation from the expected behavior or unhandled errors.
8. Test Deliverables
 Test Plan Document.
 Test Cases Document.
 Bug Reports.
 Test Summary Report.
9. Schedule
 Test Planning: 1 week.
 Test Case Development: 2 weeks.
 Test Execution: 3 weeks.
 Reporting and Review: 1 week.

(c) Explain defect management process with suitable diagram.


The defect management process in software testing is a
structured approach to identify, track, manage, and resolve
defects (bugs) in a software application. The primary goal is to
ensure that the software meets the required quality standards and
functions as intended.
1. Defect identification – Defects are identified through various testing
activities, such as unit testing, integration testing, and user acceptance
testing.
2. Defect logging – Defects are logged in a defect tracking system, along
with details such as severity, status, reproducibility and priority.
3. Defect triage – The triage process involves evaluating the defects to
determine their priority and the resources required to resolve them.
4. Defect assignment – Defects are assigned to developers or testers for
resolution, based on their expertise and availability.
5. Defect Resolution and Verification: The defect is fixed and then
verified by the tester to ensure it’s correctly resolved without
introducing new issues.
6. Defect Closure and Reporting: After verification, the defect is closed,
and its status is updated in the tracking system. Regular reports are
generated to provide visibility into the overall defect status and
resolution progress.

7. Deliverable Baseline: Deliverables are considered ready for the next stage when they meet the required standards for quality and functionality, ensuring that defects are minimized in future releases.
8. Process Improvement: To prevent similar defects in future, the team analyzes root causes and implements corrective actions. This helps improve the development and testing process for future projects.

 Defect Prevention
Defect prevention involves implementing standard techniques, methodologies, and
standard process throughout the software development lifecycle to minimize the
risk of defects.
 Deliverable Baseline
A deliverable baseline is the process of marking milestones where the software
deliverable is considered complete and ready for further work. Once a deliverable
is baselined, no further changes are allowed unless controlled and approved. Errors
are only counted as defects after the deliverable has been baselined.
 Defect Discovery
Defects are identified during various testing activities like unit testing, integration
testing, and user acceptance testing. A defect is considered "discovered" when it is
documented and acknowledged by the development team as a valid issue.
 Defect Resolution
Defect resolution involves assigning the defect to the appropriate team members,
who prioritize and fix the issue. After the fix, the tester verifies that the defect is
resolved and no new issues are introduced.

 Process Improvement
Root causes of confirmed defects are analyzed and corrective actions are implemented, so that similar defects are prevented and the development and testing process improves for future projects.

 Management Reporting
After verification, the defect is closed, and its status is updated in the tracking
system. Regular reports are generated to provide visibility into the overall defect
status and resolution progress.
(d) State & explain any four benefits of automation in testing.

 Faster Test Execution: Automated testing is faster compared to manual testing, especially
for repetitive tasks and regression testing. Automation tools can run tests concurrently across
multiple platforms or environments, providing quicker feedback on software quality.
 Reusable Test Scripts: A key advantage of automated testing is the reusability of test
scripts. Once created, test scripts can be reused in various testing phases (unit, integration,
system, acceptance, etc.). This eliminates the need to rewrite test cases, saving time and
effort, and improving the efficiency of the testing process.
 Less Prone to Errors: Automated testing is less prone to human errors since it relies on
tools and pre-written scripts. Properly trained employees ensure accurate implementation,
reducing the risk of mistakes in test execution.
 Highly Scalable: Automated testing can scale easily. Even with increased complexity or
larger software projects, automated tests can execute efficiently without significant delays,
maintaining performance.
 Accuracy and Consistency: Automated testing offers greater accuracy and consistency,
as it produces reliable and repeatable results every time, ensuring the software is tested the
same way each time.
 Suitable for Large Projects: Automated testing is ideal for large and complex projects
due to its ability to handle high volumes of tests quickly, ensuring reliability, performance,
and consistent results.

4. Attempt any THREE of the following : 12


(a) What is boundary value analysis ? Explain with suitable example.
Boundary value analysis (BVA) is a black box test design technique in which test cases are designed to include values at the boundaries of the input domain, since errors tend to occur at the boundaries of input ranges rather than at the center.
- If the input data lies within the boundary value limits, it is called positive testing.
- If the input data is picked outside the boundary value limits, it is called negative testing.
There are three guidelines for boundary value analysis:
1) One test case for the exact boundary values of the input domain.
2) One test case for the values just below the boundary values.
3) One test case for the values just above the boundary values.
Example 1: Consider a textbox which should accept numbers from 1 to 100.
- Exact boundary values: 1 and 100.
- Just below the boundary values: 0 and 99.
- Just above the boundary values: 2 and 101.
Example 2: A system accepts numeric values from 1 to 10; all other numbers are invalid. Under this technique, the boundary values 0, 1, 2, 9, 10 and 11 are tested. Boundary values are validated against both the valid and invalid boundaries; the invalid boundary cases here are 0 (below the lower limit) and 11 (above the upper limit).
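The textbox example can be turned directly into executable boundary test cases (a minimal Python sketch; `accepts` is a hypothetical validator for the 1-to-100 textbox):

```python
def accepts(n):
    """Validator under test (hypothetical): accepts integers from 1 to 100."""
    return 1 <= n <= 100

# Test cases derived from the three BVA guidelines:
# exact boundaries (1, 100), just below (0, 99), just above (2, 101).
boundary_cases = {0: False, 1: True, 2: True, 99: True, 100: True, 101: False}

for value, expected in boundary_cases.items():
    assert accepts(value) == expected, f"boundary case {value} failed"
```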

(b) Explain the Regression testing. State when the Regression testing shall be done.
Explanation: Regression testing is a type of software testing that ensures that recent
code updates have not negatively affected the existing functionality of the software.
The purpose is to confirm that the new code works as intended and that previously
developed and tested software still performs correctly after a change.
Regression testing is a crucial aspect of software engineering that ensures the stability and reliability of a software product. This process is essential for software development teams to deliver consistent and high-quality products to their users.
When Should Regression Testing Be Done?
1. After Code Changes: Whenever new code is added, modified, or fixed, regression
testing should be performed to ensure that these changes haven't introduced new
bugs into the existing software.
2. After Bug Fixes: When a defect is fixed, regression testing ensures that the fix has
not caused issues in other parts of the software.
3. During Software Upgrades: When the software is upgraded (e.g., moving to a new
version), regression testing checks that all features continue to work as expected.
4. After Configuration Changes: When the environment or configuration settings are
changed, regression testing ensures that the software still behaves as intended.
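A minimal regression suite might look like this (hypothetical example: `word_count` is a function that was recently changed, and the old tests are re-run alongside a new test covering the fix):

```python
import unittest

def word_count(text):
    """Function that was recently changed (hypothetical example);
    the fix made runs of spaces count as a single separator."""
    return len(text.split())

class RegressionSuite(unittest.TestCase):
    # Existing tests, re-run after every change to catch regressions.
    def test_basic(self):
        self.assertEqual(word_count("hello world"), 2)

    def test_empty(self):
        self.assertEqual(word_count(""), 0)

    # New test covering the bug fix itself.
    def test_multiple_spaces(self):
        self.assertEqual(word_count("a  b   c"), 3)

suite = unittest.defaultTestLoader.loadTestsFromTestCase(RegressionSuite)
result = unittest.TextTestRunner(verbosity=0).run(suite)
assert result.wasSuccessful()   # old and new tests all still pass
```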

(c) What is test plan ? What is its need ? List test planning activities.
A test plan is a document that describes the scope, approach, resources, and schedule required for conducting testing activities.
The test plan acts as the anchor for the execution, tracking and reporting of the entire
testing project.
A test plan is needed to guide the testing process, clarify objectives, manage resources,
address risks, ensure quality, and facilitate communication among stakeholders.
1. Scope Management: Deciding what features to be tested and not to be tested.
2. Deciding Test approach : Which type of testing shall be done like configuration,
integration, localization etc.
3. Test strategy: The test strategies for the various features and combinations determined
how these features and combinations would be tested.
4. Setting up criteria for testing: There must be clear entry and exit criteria for different
phases of testing.
5. Identifying responsibilities, staffing and training needs.
(d) Prepare defect report for login field of email application.
Defect ID: DEF003
Defect Name: Login Field Not Accepting Valid Email Addresses
Project Name: Authentication Module Testing
Module/Sub-module Name: Login Module
Phase Introduced: Development
Phase Found: Testing
Defect Type: Functional Defect
Severity: High
Priority: High
Summary: The email field on the login page fails to accept valid email addresses,
preventing users from logging in.
Description: When a user attempts to input a valid email address in the email
field of the login page, the field either clears the input or displays an error
message incorrectly stating the email format is invalid, even for properly
formatted email addresses.
Status: Open
Reported By: Bhakti Nimaj
Reported On: [Insert Date]
Assigned To: [Team Member Name]

(e) State any four limitations of manual testing.


 Time-Consuming:
 Explanation: Manual testing requires testers to execute test cases by hand, which
can be very time-consuming, especially for large projects. This slows down the
overall development process and delays software releases.
 Prone to Human Error:
 Explanation: Since manual testing relies on humans, there's a higher risk of
mistakes. Testers might overlook bugs, miss test steps, or provide inconsistent
results, leading to less reliable outcomes.
 Limited Test Coverage:
 Explanation: Manual testing can only cover a certain amount of the application due
to time and resource constraints. This limits the number of test cases that can be
executed, possibly leaving some parts of the software untested.
 Not Suitable for Repetitive Tasks:
 Explanation: Repeating the same test cases manually can be tedious and exhausting,
leading to a lack of focus and potential errors. This makes manual testing inefficient
for tasks that need to be performed multiple times, like regression testing.

5. Attempt any TWO of the following : 12


(a) Describe V-model with labelled diagram.

The V-Model, also known as the Verification and Validation model, is an extension of
the waterfall model. It emphasizes the importance of testing in every phase of the
software development lifecycle.
The V-model is named after its shape, which resembles the letter "V". The left side of
the "V" represents the verification phases, where the focus is on ensuring the product
is built according to the requirements. The right side represents the validation phases,
where the goal is to ensure the product meets the intended purpose.
The verification phases on the left side of the "V" include:
 Business Requirement Analysis:
 This phase is about understanding what the customer wants from the product. It
involves gathering all the requirements from the customer's perspective and making
sure they are clear.
 System Design:
 In this phase, the entire system is planned out. This includes deciding on the
hardware, software, and how everything will work together. It's like creating a
blueprint for the whole system.
 Architectural Design (High-Level Design, HLD):
 This phase focuses on the overall structure of the system. It involves breaking down
the system into smaller parts or modules and understanding how these parts will
interact with each other.
 Module Design (Low-Level Design, LLD):
 Here, the focus is on the detailed design of each part or module of the system. It’s
about specifying how each module will work internally, including the coding details.
The validation phases on the right side of the "V" include:
o Unit Testing: This level involves testing the individual units or components of a
software/system to ensure they function correctly. The purpose of unit testing is to
validate that each unit performs as designed.

o Integration Testing: At this level, individual units are combined and tested as a group
to check for faults in the interaction between integrated units. The goal of integration
testing is to expose any issues that arise when the units are integrated.

o System Testing: This level entails testing the complete integrated system or software
to evaluate its compliance with the specified requirements. The purpose of system
testing is to ensure that the system functions as expected and meets the defined
requirements.

o Acceptance Testing: In acceptance testing, the final system is tested for acceptability
to assess whether it meets the business requirements and is ready for delivery. The
aim of acceptance testing is to determine if the system is acceptable for deployment to
end-users.

(b) Describe with one example each :


(i) Load testing
(ii) Stress testing
(i) Load Testing:

Description: Load testing is a type of performance testing that evaluates how a system behaves
under a specific, expected load. The goal is to determine whether the system can handle the
anticipated number of users or transactions without performance degradation. It helps ensure that
the system can operate efficiently under normal and peak conditions.
Example: Imagine a website for an online store that expects around 10,000 users during a big
sale event. A load test would simulate 10,000 users accessing the site at the same time, placing
items in their carts, and checking out. The test checks whether the website can handle this load
without slowing down, crashing, or causing errors.

(ii) Stress Testing:

Description: Stress testing goes beyond load testing by evaluating how a system behaves under
extreme or unpredictable conditions, often pushing it beyond its normal operational capacity.
The purpose is to identify the breaking point and to see how the system recovers after failure. It
helps determine the system's robustness and stability under stress.

Example: Consider the same online store website. A stress test would simulate 50,000 users
accessing the site simultaneously, far exceeding the expected load. The test would monitor how
the website handles this excessive traffic, whether it crashes, and how it recovers once the traffic
decreases. This helps in understanding the system's limits and its ability to handle unexpected
surges in traffic.
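The load-versus-stress distinction can be sketched with a toy capacity model (assumed capacity of 100 concurrent users; a real test would drive an actual server with a tool such as JMeter or LoadRunner):

```python
def run_load(concurrent_users, capacity=100):
    """Toy model of a server with a fixed capacity (assumed value):
    returns (served, failed) for a given number of concurrent users."""
    served = min(concurrent_users, capacity)
    return served, concurrent_users - served

# Load test: traffic at the expected level stays within capacity.
assert run_load(80) == (80, 0)

# Stress test: traffic far beyond the expected load reveals the
# breaking point, where requests start to fail.
served, failed = run_load(500)
assert (served, failed) == (100, 400)
```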

(c) Prepare six test cases for marketing site www.flipkart.com.


6. Attempt any TWO of the following : 12
(a) Explain client-server testing with suitable diagram.

Client-Server Testing is a type of software testing model used to test the interaction
between two components: the client (which requests services) and the server (which
provides services). This type of testing ensures that the communication, functionality,
and performance of both the client and server are working as expected.
Key Components:
1. Client: The front-end component that sends requests to the server for services or
data. This could be a web browser, a desktop application, or a mobile app.
2. Server: The back-end component that processes the requests from the client and
sends the appropriate responses.
3. Database: Often, servers interact with a database to fetch, update, or delete
information based on the client's request.
Client-Server Testing Process:
1. Functional Testing: Ensuring that the client can send correct requests and the server
returns valid responses.
Example: Testing login functionality, file uploads, and API responses.
2. Load Testing: Testing the system under heavy loads to ensure it handles multiple
client requests simultaneously without failure.
Example: Simulating hundreds or thousands of clients to see how the system
performs under peak load.
3. Security Testing: Ensuring secure communication between the client and server, protecting data integrity and confidentiality.
Example: Testing login authentication mechanisms and ensuring sensitive data like passwords are encrypted.
4. Performance Testing: Measuring the performance of the client-server system, such as response time and throughput.
Example: Checking how fast the server responds to requests when multiple clients are interacting with it.

(b) Write important six test cases for the ‘Login Form’ of the Facebook website.

(c) Describe defect life cycle with neat diagram.

The defect life cycle, also known as the bug life cycle, refers to the various stages a
defect goes through from its identification to its closure.

Defect Life Cycle Stages:

1. New: When a defect is first discovered, it is reported and marked as "New." The
defect is logged in the defect tracking system with details like severity, priority,
etc.
2. Assigned: Once the defect is reviewed, it is assigned to a developer for
resolution. The developer is responsible for fixing the defect.

3. Open: The developer starts working on the defect to analyze and resolve it.
During this phase, they either fix the bug or mark it as invalid or deferred if it's
not a defect or should be fixed later.

4. Fixed: After the defect is resolved by the developer, it is marked as "Fixed" and
sent to the testing team for retesting.

5. Retest: The tester retests the defect to ensure that it has been correctly fixed.

6. Reopen: If the defect persists after the retesting, it is marked as "Reopened"


and sent back to the developer for further analysis.

7. Verified: If the defect is fixed and works as expected, the tester marks it as
"Verified."

8. Closed: If the defect is no longer reproducible and the fix works as expected, it
is marked as "Closed." This indicates that the defect has been successfully
resolved.

9. Deferred: If the defect is valid but not critical to fix immediately (e.g., planned
for a future release), it is deferred for later.

10. Rejected: If the defect is not valid, misunderstood, or does not need any fix, it is
rejected by the development team.
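The stages above can be modeled as a state machine (a simplified sketch; the exact allowed transitions vary between defect-tracking tools):

```python
# Allowed status transitions in the (simplified) defect life cycle.
TRANSITIONS = {
    "New":      {"Assigned", "Rejected", "Deferred"},
    "Assigned": {"Open"},
    "Open":     {"Fixed", "Rejected", "Deferred"},
    "Fixed":    {"Retest"},
    "Retest":   {"Verified", "Reopened"},
    "Reopened": {"Assigned"},
    "Verified": {"Closed"},
    "Deferred": {"Assigned"},
    "Rejected": set(),
    "Closed":   set(),
}

def move(status, new_status):
    """Apply a transition, rejecting moves the life cycle does not allow."""
    if new_status not in TRANSITIONS[status]:
        raise ValueError(f"invalid transition: {status} -> {new_status}")
    return new_status

# Happy path through the life cycle.
s = "New"
for step in ["Assigned", "Open", "Fixed", "Retest", "Verified", "Closed"]:
    s = move(s, step)
assert s == "Closed"
```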

You might also like