2023-Winter-solved-answers-ste
23124
3 Hours / 70 Marks Seat No.
Marks
1. Attempt any FIVE of the following : 10
(a) Compare verification and validation (any 2 points).
- Verification is like checking if you're building the software right.
- Validation is about making sure you're building the right software.
- Verification ensures the software meets its specifications.
- Validation confirms the software meets the user's needs and expectations.
- Verification focuses on the process of building the software correctly.
- Validation focuses on ensuring the software meets the user's actual needs.
- Verification is done by developers.
- Validation is done by testers.
2. Time Pressure: Tight deadlines and rushing through the development process
can result in errors and oversights, leading to defects in the software. It's
essential to balance speed with thorough testing to ensure quality.
(h) Enlist any four software testing tools.
Selenium
JUnit
TestComplete
Postman
JIRA
LoadRunner
(b) Illustrate process of bi-directional integration testing. State its two advantages &
disadvantages.
Bi-Directional Integration Testing combines both top-down and bottom-up
integration testing approaches. It starts testing from the middle layer of the system
and progresses upwards and downwards simultaneously.
Process:
1. Middle Layer Testing: Begin testing from the middle layer of the system,
usually the most critical or central module.
2. Top-Down Integration: Test higher-level modules incrementally, moving from
the middle layer upwards towards the top-level modules.
3. Bottom-Up Integration: Simultaneously, test lower-level modules
incrementally, moving from the middle layer downwards towards the base
modules.
4. Stubs and Drivers: Use stubs (for top-down) and drivers (for bottom-up) to
simulate modules that are not yet integrated.
5. Combine: After testing both directions, integrate all modules and perform overall
system testing.
Advantages:
1. Comprehensive Coverage: Ensures that both high-level and low-level modules
are tested thoroughly.
2. Early Detection: Errors in both upper and lower modules can be identified early
in the testing process.
Disadvantages:
1. Complex and Time-Consuming: Managing both top-down and bottom-up
testing simultaneously can be complex and requires significant time and
resources.
2. Dependency on Stubs and Drivers: The need for stubs and drivers can
introduce additional complexity and potential issues if they are not well-
designed.
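The role of stubs and drivers in the process above can be sketched in Python. This is an illustrative example only; the names PaymentService, DatabaseStub, and payment_driver are hypothetical and not part of any specific system.

```python
# Sketch of stubs and drivers in bi-directional integration testing.
# Module names here are hypothetical examples.

class DatabaseStub:
    """Stub: stands in for a lower-level module not yet integrated (top-down)."""
    def fetch_balance(self, account_id):
        return 100.0  # canned response instead of a real database query

class PaymentService:
    """Middle-layer module under test, where bi-directional testing starts."""
    def __init__(self, db):
        self.db = db
    def can_pay(self, account_id, amount):
        return self.db.fetch_balance(account_id) >= amount

def payment_driver(service):
    """Driver: stands in for a higher-level module (bottom-up) and
    exercises the middle layer with test calls."""
    return [service.can_pay("A1", 50.0), service.can_pay("A1", 500.0)]

results = payment_driver(PaymentService(DatabaseStub()))
print(results)  # [True, False]
```

Here the stub replaces the not-yet-integrated lower layer, while the driver exercises the middle layer from above, so both directions can be tested before full integration.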
(c) Enlist any four attributes of a defect. Describe them with suitable examples.
1. Defect ID: A unique identifier assigned to each defect so that it can be tracked in the defect tracking system. For example, a login defect may be logged with the ID DEF003.
2. Priority: Priority defines the order in which defects should be addressed based on their importance. Defects with higher priority need to be fixed before those with lower priority. For instance, a defect that prevents users from logging into the system might have a high priority.
3. Severity: Severity describes the impact of the defect on the functionality of the system, typically classified as Critical, High, Medium, or Low. For example, a defect that crashes the application on startup would have Critical severity.
4. Status: Status tracks the current state of a defect in the testing process. It can include statuses like New, Open, In Progress, Fixed, Reopened, Closed, etc. For example, a defect reported by a tester is initially in the "New" status until it is assigned to a developer for fixing.
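For illustration, these attributes could be recorded as a simple Python structure. The field values below are hypothetical examples.

```python
from dataclasses import dataclass

# Minimal sketch of a defect record; fields follow the attributes
# described above, and the sample values are hypothetical.

@dataclass
class Defect:
    defect_id: str  # unique identifier for tracking
    severity: str   # impact on the system: Critical/High/Medium/Low
    priority: str   # order of fixing: High/Medium/Low
    status: str     # New, Open, In Progress, Fixed, Reopened, Closed

login_bug = Defect(defect_id="DEF003",
                   severity="Critical",
                   priority="High",
                   status="New")
print(login_bug.status)  # New
```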
(d) Describe any four factors for selecting a testing tool.
1. Project Requirements and Compatibility
Description: The testing tool must meet the specific needs of the project. Factors
such as the type of application (web, mobile, desktop) and the technologies used
in the project (programming languages, frameworks, etc.) must be taken into
account. Selecting a tool that matches the technical stack ensures smooth testing
and avoids compatibility issues.
Example: For a web application built with Angular, a testing tool like Selenium
would be an ideal choice. Selenium supports automated testing across different
browsers, making it a highly compatible option for such projects.
2. Ease of Use
Description: A testing tool should be easy to use and have a minimal learning
curve, especially if the testing team is unfamiliar with it. A tool that is
complicated to learn or operate can slow down the testing process, resulting in
delays and inefficiencies.
Example: Postman is a commonly chosen tool for API testing because of its
intuitive interface and ease of use. Even those new to API testing can quickly
learn and utilize Postman effectively, which speeds up the testing process and
increases team productivity.
3. Budget
Description: The cost of the testing tool plays a significant role in the selection
process. Projects often operate within budget constraints, and selecting a tool
that is either free or fits within the budget is essential. Balancing the cost with
the tool’s capabilities ensures a cost-effective solution.
Example: Open-source tools such as Selenium or JUnit are excellent choices for
teams working with tight budgets. These tools provide robust testing capabilities
without incurring licensing fees, making them an economical yet effective
solution.
4. Integration Capabilities
Description: The ability of the testing tool to integrate with other tools and platforms used in the project is another important factor. For example, the tool should work well with bug tracking systems, continuous integration (CI) tools, or test management systems. Integration ensures a seamless and efficient testing workflow.
Example: Selenium tests can be run from a CI pipeline, with failures logged in a bug tracking system such as JIRA, keeping the entire testing workflow connected.
1. **Performed By**:
- Alpha Testing: Performed by internal testers and developers at the software
development site.
- Beta Testing: Performed by customers or end users at their
own site.
3. **Market Exposure**:
- Alpha Testing: Not open to the market and public.
- Beta Testing: Always open to the market and public.
4. **Conducted For**:
- Alpha Testing: Conducted for software applications and projects.
- Beta Testing: Usually conducted for software products.
5. **Environment**:
- Alpha Testing: Always performed in a virtual environment.
- Beta Testing: Performed in a real-time environment.
6. **Location**:
- Alpha Testing: Always performed within the organization.
- Beta Testing: Always performed outside the organization.
7. **Testing Type**:
- Alpha Testing: Includes both White Box Testing and Black Box
Testing.
- Beta Testing: Only a kind of Black Box Testing.
8. **Performed At**:
- Alpha Testing: Always performed at the developer's premises in the
absence of the users.
- Beta Testing: Always performed at the user’s premises in the absence
of the development team.
(b) Prepare test plan for Notepad application. (Windows based)
**Test Plan for Notepad Application**
Here is a test plan for the Notepad application:
2. Introduction
This test plan outlines the strategy, scope, resources, and schedule for testing
the Notepad application. The primary goal is to ensure that the Notepad
application functions as expected, is user-friendly, and meets all functional and
non-functional requirements.
3. Test Items
Notepad Application (latest version).
Primary features like file creation, saving, editing, and formatting text.
4. Features to be Tested
Basic text editing (create, open, save, close files).
Formatting options (font size, style, word wrap).
File operations (save as, print).
Search and replace functionality.
Compatibility across Windows OS versions.
6. Approach
Manual Testing will be used to verify functionality and usability.
Test cases will be created for each feature, and results will be documented.
Defect Prevention
Defect prevention involves implementing standard techniques, methodologies, and
processes throughout the software development lifecycle to minimize the risk of
defects.
Deliverable Baseline
A deliverable baseline is the process of marking milestones where the software
deliverable is considered complete and ready for further work. Once a deliverable
is baselined, no further changes are allowed unless controlled and approved. Errors
are only counted as defects after the deliverable has been baselined.
Defect Discovery
Defects are identified during various testing activities like unit testing, integration
testing, and user acceptance testing. A defect is considered "discovered" when it is
documented and acknowledged by the development team as a valid issue.
Defect Resolution
Defect resolution involves assigning the defect to the appropriate team members,
who prioritize and fix the issue. After the fix, the tester verifies that the defect is
resolved and no new issues are introduced.
Process Improvement
To prevent similar defects in future, the team analyzes root causes and implements
corrective actions. This helps improve the development and testing process for future
projects.
Management Reporting
After verification, the defect is closed, and its status is updated in the tracking
system. Regular reports are generated to provide visibility into the overall defect
status and resolution progress.
(d) State & explain any four benefits of automation in testing.
Faster Test Execution: Automated testing is faster compared to manual testing, especially
for repetitive tasks and regression testing. Automation tools can run tests concurrently across
multiple platforms or environments, providing quicker feedback on software quality.
Reusable Test Scripts: A key advantage of automated testing is the reusability of test
scripts. Once created, test scripts can be reused in various testing phases (unit, integration,
system, acceptance, etc.). This eliminates the need to rewrite test cases, saving time and
effort, and improving the efficiency of the testing process.
Less Prone to Errors: Automated testing is less prone to human errors since it relies on
tools and pre-written scripts. Properly trained employees ensure accurate implementation,
reducing the risk of mistakes in test execution.
Highly Scalable: Automated testing can scale easily. Even with increased complexity or
larger software projects, automated tests can execute efficiently without significant delays,
maintaining performance.
Accuracy and Consistency: Automated testing offers greater accuracy and consistency,
as it produces reliable and repeatable results every time, ensuring the software is tested the
same way each time.
Suitable for Large Projects: Automated testing is ideal for large and complex projects
due to its ability to handle high volumes of tests quickly, ensuring reliability, performance,
and consistent results.
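The reusability benefit can be sketched with a data-driven script: one automated test covers many cases and can be rerun unchanged after every code change. The function apply_discount below is a hypothetical example, not from the question paper.

```python
import unittest

# Hypothetical function under test (an illustrative example)
def apply_discount(price, percent):
    return round(price * (1 - percent / 100), 2)

class DiscountTests(unittest.TestCase):
    # One reusable, data-driven script covers many cases; rerunning it
    # after every code change gives fast and consistent feedback.
    cases = [(100.0, 10, 90.0), (50.0, 0, 50.0), (80.0, 25, 60.0)]

    def test_discounts(self):
        for price, pct, expected in self.cases:
            with self.subTest(price=price, pct=pct):
                self.assertEqual(apply_discount(price, pct), expected)

suite = unittest.defaultTestLoader.loadTestsFromTestCase(DiscountTests)
result = unittest.TextTestRunner(verbosity=0).run(suite)
print(result.wasSuccessful())  # True
```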
(b) Explain Regression testing. State when Regression testing shall be done.
Explanation: Regression testing is a type of software testing that ensures that recent
code updates have not negatively affected the existing functionality of the software.
The purpose is to confirm that the new code works as intended and that previously
developed and tested software still performs correctly after a change.
Regression testing is a crucial aspect of software engineering that ensures the stability
and reliability of a software product. This process is essential for software development
teams to deliver consistent and high-quality products to their users.
When Should Regression Testing Be Done?
1. After Code Changes: Whenever new code is added, modified, or fixed, regression
testing should be performed to ensure that these changes haven't introduced new
bugs into the existing software.
2. After Bug Fixes: When a defect is fixed, regression testing ensures that the fix has
not caused issues in other parts of the software.
3. During Software Upgrades: When the software is upgraded (e.g., moving to a new
version), regression testing checks that all features continue to work as expected.
4. After Configuration Changes: When the environment or configuration settings are
changed, regression testing ensures that the software still behaves as intended.
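A minimal Python sketch of a regression check, assuming a hypothetical is_leap_year function whose century rule was recently fixed; the suite re-runs already-verified cases to confirm the change broke nothing.

```python
# A regression suite re-runs checks of existing behaviour after every change.
# is_leap_year is a hypothetical function whose century rule was recently fixed.

def is_leap_year(year):
    return year % 4 == 0 and (year % 100 != 0 or year % 400 == 0)

# Already-verified behaviour that must keep passing after the fix
REGRESSION_CASES = {2024: True, 2023: False, 1900: False, 2000: True}

def run_regression():
    """Return the list of years whose behaviour has regressed."""
    return [y for y, exp in REGRESSION_CASES.items() if is_leap_year(y) != exp]

print(run_regression())  # [] -> no existing behaviour was broken
```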
(c) What is a test plan? What is its need? List test planning activities.
A test plan is a document that describes the scope, approach, resources and schedule
required for conducting testing activities.
The test plan acts as the anchor for the execution, tracking and reporting of the entire
testing project.
A test plan is needed to guide the testing process, clarify objectives, manage resources,
address risks, ensure quality, and facilitate communication among stakeholders.
1. Scope management: deciding which features are to be tested and which are not.
2. Deciding the test approach: which types of testing shall be done, such as configuration,
integration, localization, etc.
3. Test strategy: defining how the various features and their combinations will be tested.
4. Setting up criteria for testing: there must be clear entry and exit criteria for the
different phases of testing.
5. Identifying responsibilities, staffing and training needs.
(d) Prepare defect report for login field of email application.
Defect ID: DEF003
Defect Name: Login Field Not Accepting Valid Email Addresses
Project Name: Authentication Module Testing
Module/Sub-module Name: Login Module
Phase Introduced: Development
Phase Found: Testing
Defect Type: Functional Defect
Severity: High
Priority: High
Summary: The email field on the login page fails to accept valid email addresses,
preventing users from logging in.
Description: When a user attempts to input a valid email address in the email
field of the login page, the field either clears the input or displays an error
message incorrectly stating the email format is invalid, even for properly
formatted email addresses.
Status: Open
Reported By: Bhakti Nimaj
Reported On: [Insert Date]
Assigned To: [Team Member Name]
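The reported behaviour could stem from an overly strict validation pattern. The Python sketch below is purely illustrative (both regular expressions are hypothetical, not the application's actual code): it reproduces the defect against a valid address and then shows a broader pattern accepting it.

```python
import re

# Hypothetical reproduction of the reported defect: the login page's
# validator rejects valid email addresses. Both patterns are examples.

BUGGY_PATTERN = re.compile(r"^[a-z]+@[a-z]+\.com$")          # rejects digits, dots, non-.com TLDs
FIXED_PATTERN = re.compile(r"^[\w.+-]+@[\w-]+(\.[\w-]+)+$")  # simplified but broader check

valid_email = "jane.doe99@example.org"
buggy_result = bool(BUGGY_PATTERN.match(valid_email))  # False: defect reproduced
fixed_result = bool(FIXED_PATTERN.match(valid_email))  # True: fix verified
print(buggy_result, fixed_result)
```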
The V-Model, also known as the Verification and Validation model, is an extension of
the waterfall model. It emphasizes the importance of testing in every phase of the
software development lifecycle.
The V-model is named after its shape, which resembles the letter "V". The left side of
the "V" represents the verification phases, where the focus is on ensuring the product
is built according to the requirements. The right side represents the validation phases,
where the goal is to ensure the product meets the intended purpose.
The verification phases on the left side of the "V" include:
Business Requirement Analysis:
This phase is about understanding what the customer wants from the product. It
involves gathering all the requirements from the customer's perspective and making
sure they are clear.
System Design:
In this phase, the entire system is planned out. This includes deciding on the
hardware, software, and how everything will work together. It's like creating a
blueprint for the whole system.
Architectural Design (High-Level Design, HLD):
This phase focuses on the overall structure of the system. It involves breaking down
the system into smaller parts or modules and understanding how these parts will
interact with each other.
Module Design (Low-Level Design, LLD):
Here, the focus is on the detailed design of each part or module of the system. It’s
about specifying how each module will work internally, including the coding details.
The validation phases on the right side of the "V" include:
o Unit Testing: This level involves testing the individual units or components of a
software/system to ensure they function correctly. The purpose of unit testing is to
validate that each unit performs as designed.
o Integration Testing: At this level, individual units are combined and tested as a group
to check for faults in the interaction between integrated units. The goal of integration
testing is to expose any issues that arise when the units are integrated.
o System Testing: This level entails testing the complete integrated system or software
to evaluate its compliance with the specified requirements. The purpose of system
testing is to ensure that the system functions as expected and meets the defined
requirements.
o Acceptance Testing: In acceptance testing, the final system is tested for acceptability
to assess whether it meets the business requirements and is ready for delivery. The
aim of acceptance testing is to determine if the system is acceptable for deployment to
end-users.
Load Testing
Description: Load testing is a type of performance testing that evaluates how a system behaves
under a specific, expected load. The goal is to determine whether the system can handle the
anticipated number of users or transactions without performance degradation. It helps ensure that
the system can operate efficiently under normal and peak conditions.
Example: Imagine a website for an online store that expects around 10,000 users during a big
sale event. A load test would simulate 10,000 users accessing the site at the same time, placing
items in their carts, and checking out. The test checks whether the website can handle this load
without slowing down, crashing, or causing errors.
Stress Testing
Description: Stress testing goes beyond load testing by evaluating how a system behaves under
extreme or unpredictable conditions, often pushing it beyond its normal operational capacity.
The purpose is to identify the breaking point and to see how the system recovers after failure. It
helps determine the system's robustness and stability under stress.
Example: Consider the same online store website. A stress test would simulate 50,000 users
accessing the site simultaneously, far exceeding the expected load. The test would monitor how
the website handles this excessive traffic, whether it crashes, and how it recovers once the traffic
decreases. This helps in understanding the system's limits and its ability to handle unexpected
surges in traffic.
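A toy, deterministic Python model of the difference. Real load and stress tests drive the running system with tools such as LoadRunner; here a purely illustrative latency formula stands in, reusing the 10,000-user and 50,000-user figures from the examples above.

```python
CAPACITY = 10_000  # expected peak users the site is provisioned for (from the example)

def response_time_ms(concurrent_users, base_ms=50):
    """Hypothetical model: latency stays flat up to capacity (load test range),
    then degrades quadratically once the system is pushed past its limits
    (stress test range)."""
    if concurrent_users <= CAPACITY:
        return base_ms
    return base_ms * (concurrent_users / CAPACITY) ** 2

load_latency = response_time_ms(10_000)    # load test: at expected peak
stress_latency = response_time_ms(50_000)  # stress test: 5x beyond capacity
print(load_latency, stress_latency)  # 50 1250.0
```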
Client-Server Testing is a type of software testing model used to test the interaction
between two components: the client (which requests services) and the server (which
provides services). This type of testing ensures that the communication, functionality,
and performance of both the client and server are working as expected.
Key Components:
1. Client: The front-end component that sends requests to the server for services or
data. This could be a web browser, a desktop application, or a mobile app.
2. Server: The back-end component that processes the requests from the client and
sends the appropriate responses.
3. Database: Often, servers interact with a database to fetch, update, or delete
information based on the client's request.
Client-Server Testing Process:
1. Functional Testing: Ensuring that the client can send correct requests and the server
returns valid responses.
Example: Testing login functionality, file uploads, and API responses.
2. Load Testing: Testing the system under heavy loads to ensure it handles multiple
client requests simultaneously without failure.
Example: Simulating hundreds or thousands of clients to see how the system
performs under peak load.
3. Security Testing: Ensuring secure communication between the client and server,
protecting data integrity and confidentiality.
Example: Testing login authentication mechanisms and ensuring sensitive data like
passwords are encrypted.
4. Performance Testing: Measuring the performance of the client-server system, such
as response time and throughput.
Example: Checking how fast the server responds to requests when multiple clients are
interacting with it.
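A minimal functional client-server test can be sketched with only the Python standard library. The echo server below is a stand-in for a real back end; the client sends a request and verifies the response.

```python
import json
import threading
from http.server import BaseHTTPRequestHandler, HTTPServer
from urllib.request import urlopen

# Functional client-server test sketch: a stand-in server answers GET
# requests; the client verifies the response status and body.

class EchoHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        body = json.dumps({"status": "ok", "path": self.path}).encode()
        self.send_response(200)
        self.send_header("Content-Type", "application/json")
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)
    def log_message(self, *args):  # keep test output quiet
        pass

server = HTTPServer(("127.0.0.1", 0), EchoHandler)  # port 0: pick a free port
threading.Thread(target=server.serve_forever, daemon=True).start()

url = f"http://127.0.0.1:{server.server_port}/login"
with urlopen(url) as resp:
    payload = json.loads(resp.read())  # client-side check of the server's reply

server.shutdown()
print(payload)  # {'status': 'ok', 'path': '/login'}
```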
(b) Write important six test cases for the ‘Login Form’ of the Facebook website.
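Six representative login-form test cases can be expressed as a data-driven Python sketch. The validate_login function is a hypothetical stand-in for the real login back end, and the credentials are invented for illustration.

```python
# Sketch: six representative login-form test cases run against a
# hypothetical validate_login(email, password) stand-in.

VALID_EMAIL, VALID_PASSWORD = "user@example.com", "Secret@123"

def validate_login(email, password):
    """Toy stand-in for the real login back end."""
    return email == VALID_EMAIL and password == VALID_PASSWORD

test_cases = [
    ("TC1 valid email and valid password",  VALID_EMAIL,         VALID_PASSWORD, True),
    ("TC2 valid email and wrong password",  VALID_EMAIL,         "wrong",        False),
    ("TC3 wrong email and valid password",  "other@example.com", VALID_PASSWORD, False),
    ("TC4 empty email field",               "",                  VALID_PASSWORD, False),
    ("TC5 empty password field",            VALID_EMAIL,         "",             False),
    ("TC6 both fields empty",               "",                  "",             False),
]

results = {name: validate_login(e, p) == expected
           for name, e, p, expected in test_cases}
print(all(results.values()))  # True -> every case behaves as expected
```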
The defect life cycle, also known as the bug life cycle, refers to the various stages a
defect goes through from its identification to its closure.
1. New: When a defect is first discovered, it is reported and marked as "New." The
defect is logged in the defect tracking system with details like severity, priority,
etc.
2. Assigned: Once the defect is reviewed, it is assigned to a developer for
resolution. The developer is responsible for fixing the defect.
3. Open: The developer starts working on the defect to analyze and resolve it.
During this phase, they either fix the bug or mark it as invalid or deferred if it's
not a defect or should be fixed later.
4. Fixed: After the defect is resolved by the developer, it is marked as "Fixed" and
sent to the testing team for retesting.
5. Retest: The tester retests the defect to ensure that it has been correctly fixed.
6. Reopened: If the defect still exists after the fix, the tester reopens it and it is
sent back to the developer for rework.
7. Verified: If the defect is fixed and works as expected, the tester marks it as
"Verified."
8. Closed: If the defect is no longer reproducible and the fix works as expected, it
is marked as "Closed." This indicates that the defect has been successfully
resolved.
9. Deferred: If the defect is valid but not critical to fix immediately (e.g., planned
for a future release), it is deferred for later.
10. Rejected: If the defect is not valid, misunderstood, or does not need any fix, it is
rejected by the development team.
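The life cycle above can be sketched as a small state machine in Python. The allowed-transition sets are illustrative; real defect trackers configure their own workflows.

```python
# Defect life cycle as allowed state transitions (illustrative sets).
TRANSITIONS = {
    "New":      {"Assigned", "Rejected", "Deferred"},
    "Assigned": {"Open"},
    "Open":     {"Fixed", "Rejected", "Deferred"},
    "Fixed":    {"Retest"},
    "Retest":   {"Verified", "Reopened"},
    "Reopened": {"Assigned"},
    "Verified": {"Closed"},
    "Closed":   set(),
    "Rejected": set(),
    "Deferred": {"Assigned"},
}

def advance(state, new_state):
    """Move the defect to new_state, rejecting illegal transitions."""
    if new_state not in TRANSITIONS[state]:
        raise ValueError(f"illegal transition {state} -> {new_state}")
    return new_state

# Happy path from discovery to closure:
state = "New"
for step in ["Assigned", "Open", "Fixed", "Retest", "Verified", "Closed"]:
    state = advance(state, step)
print(state)  # Closed
```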