New Notes
• Execute Tests:
• Execute your test cases either individually or collectively.
• Monitor test execution results and debug failures if any.
• Continuous Integration:
• Integrate your automation framework with Continuous Integration (CI) tools like Jenkins,
Bamboo, or Azure DevOps for automated test execution.
CUCUMBER:
Setup Your Project:
• Follow the steps mentioned earlier to set up your Java project with Selenium WebDriver.
• Add Cucumber dependencies to your project. You'll need cucumber-java, cucumber-core,
cucumber-junit, and cucumber-jvm dependencies.
• Write feature files using Gherkin syntax. Feature files describe the behavior of your
application in a human-readable format.
• Define scenarios and steps that represent different test cases.
• Create step definition classes to map each step in your feature files to Java code.
• Implement the logic to interact with your application using Selenium WebDriver methods.
• Utilize regular expressions to match step definitions with corresponding steps in feature files.
• Configure Cucumber options in a test runner class. You can specify the location of your
feature files, glue code (step definitions), and other options; a runner and step-definition sketch follows this list.
• Integrate reporting plugins like Cucumber Extent Reporter or Cucumber HTML Reporter to
generate detailed test reports.
• Implement logging within step definitions to capture runtime information.
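A minimal sketch of how the runner and step definitions fit together, assuming JUnit 4, the io.cucumber dependencies mentioned above, and hypothetical package names, feature-file location, and URL:

import io.cucumber.junit.Cucumber;
import io.cucumber.junit.CucumberOptions;
import org.junit.runner.RunWith;

// Test runner: points Cucumber at the feature files and the step-definition ("glue") package.
@RunWith(Cucumber.class)
@CucumberOptions(
        features = "src/test/resources/features",   // hypothetical feature-file location
        glue = "steps",                              // hypothetical step-definition package
        plugin = {"pretty", "html:target/cucumber-report.html"}
)
public class RunCucumberTest {
}

// Step definitions (in the "steps" package) map Gherkin steps to Selenium code.
import io.cucumber.java.en.Given;
import io.cucumber.java.en.Then;
import org.openqa.selenium.WebDriver;
import org.openqa.selenium.chrome.ChromeDriver;
import static org.junit.Assert.assertTrue;

public class LoginSteps {
    private WebDriver driver;

    @Given("the user is on the login page")
    public void theUserIsOnTheLoginPage() {
        driver = new ChromeDriver();
        driver.get("https://example.com/login");   // hypothetical URL
    }

    @Then("the page title contains {string}")
    public void thePageTitleContains(String expected) {
        assertTrue(driver.getTitle().contains(expected));
        driver.quit();
    }
}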
Continuous Integration:
• Integrate your Cucumber tests with a CI/CD pipeline for automated execution.
• Configure your CI server to run Cucumber tests as part of the build process.
• Use Cucumber feature files as a means of communication between stakeholders, testers, and
developers.
• Ensure that feature files are written in a language that all stakeholders understand.
• Follow best practices for writing feature files and step definitions.
• Regularly refactor and optimize your step definitions for better maintainability and
readability.
WebDriverManager: Utility for managing WebDriver binaries for different browsers. It automatically
downloads and configures the required WebDriver executables, eliminating the need for manual
management.
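A minimal sketch, assuming the io.github.bonigarcia WebDriverManager dependency is on the classpath and a hypothetical URL:

import io.github.bonigarcia.wdm.WebDriverManager;
import org.openqa.selenium.WebDriver;
import org.openqa.selenium.chrome.ChromeDriver;

public class WebDriverManagerExample {
    public static void main(String[] args) {
        // Downloads (if needed) and configures the matching chromedriver binary automatically.
        WebDriverManager.chromedriver().setup();
        WebDriver driver = new ChromeDriver();
        driver.get("https://example.com");   // hypothetical URL
        driver.quit();
    }
}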
Test Data Management Utilities: Libraries or methods for reading test data from external sources
such as Excel files, CSV files, or databases. These utilities help in parameterizing tests and separating
test data from test logic.
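One hedged sketch of this idea: reading rows from a CSV file with plain Java I/O (hypothetical file name and columns), so the data stays outside the test logic:

import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Paths;
import java.util.List;

public class CsvTestDataReader {
    // Reads "username,password" rows from a CSV file into a two-dimensional array
    // that can feed a parameterized test.
    public static Object[][] readLoginData(String csvPath) throws IOException {
        List<String> lines = Files.readAllLines(Paths.get(csvPath));
        Object[][] data = new Object[lines.size()][];
        for (int i = 0; i < lines.size(); i++) {
            data[i] = lines.get(i).split(",");
        }
        return data;
    }
}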
Wait Utilities: Classes and methods for implementing explicit and implicit waits in tests. They allow
waiting for certain conditions to be met before proceeding with test execution, improving test
stability and reliability.
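A minimal sketch of an explicit wait using Selenium's WebDriverWait (Selenium 4 style, hypothetical timeout and locator):

import java.time.Duration;
import org.openqa.selenium.By;
import org.openqa.selenium.WebDriver;
import org.openqa.selenium.WebElement;
import org.openqa.selenium.support.ui.ExpectedConditions;
import org.openqa.selenium.support.ui.WebDriverWait;

public class WaitUtils {
    // Waits up to 10 seconds for the element to become visible before returning it.
    public static WebElement waitForVisible(WebDriver driver, By locator) {
        WebDriverWait wait = new WebDriverWait(driver, Duration.ofSeconds(10));
        return wait.until(ExpectedConditions.visibilityOfElementLocated(locator));
    }
}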
Logging Utilities: Frameworks for logging runtime information and debugging test failures. These
utilities capture log messages, exceptions, and other relevant information during test execution for
analysis and troubleshooting.
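A small sketch using SLF4J (one common logging facade; Log4j or Logback can sit behind it), with a hypothetical test class:

import org.slf4j.Logger;
import org.slf4j.LoggerFactory;

public class LoginTest {
    private static final Logger log = LoggerFactory.getLogger(LoginTest.class);

    public void login(String user) {
        log.info("Starting login test for user {}", user);   // runtime information captured for later analysis
        try {
            // ... Selenium test steps would go here ...
        } catch (RuntimeException e) {
            log.error("Login test failed", e);                // exception and stack trace go to the log
            throw e;
        }
    }
}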
Reporting Utilities: Libraries or plugins for generating test reports with detailed information about
test executions, including test case status, screenshots, logs, and performance metrics. These reports
help in test analysis and result interpretation.
Page Object Model (POM) Implementations: Frameworks or utilities for implementing the Page
Object Model (POM) pattern, which helps in creating reusable and maintainable page objects for
organizing and managing web elements in test scripts.
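A minimal Page Object sketch using Selenium's PageFactory (hypothetical page and locators):

import org.openqa.selenium.WebDriver;
import org.openqa.selenium.WebElement;
import org.openqa.selenium.support.FindBy;
import org.openqa.selenium.support.PageFactory;

public class LoginPage {
    @FindBy(id = "username")          // hypothetical locators
    private WebElement usernameField;

    @FindBy(id = "password")
    private WebElement passwordField;

    @FindBy(id = "loginButton")
    private WebElement loginButton;

    public LoginPage(WebDriver driver) {
        PageFactory.initElements(driver, this);   // wires the @FindBy fields
    }

    // One reusable action instead of repeating element lookups in every test.
    public void loginAs(String user, String password) {
        usernameField.sendKeys(user);
        passwordField.sendKeys(password);
        loginButton.click();
    }
}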
TestNG or JUnit Integration Utilities: Integrations or extensions for TestNG or JUnit frameworks that
provide additional features or utilities specific to Selenium testing, such as parallel test execution,
test parameterization, and test suite management.
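A short sketch of TestNG parameterization with a @DataProvider (hypothetical data values):

import org.testng.Assert;
import org.testng.annotations.DataProvider;
import org.testng.annotations.Test;

public class DataDrivenTest {
    // Supplies multiple data sets; the test method runs once per row.
    @DataProvider(name = "credentials")
    public Object[][] credentials() {
        return new Object[][] {
                {"alice", "secret1"},
                {"bob", "secret2"}
        };
    }

    @Test(dataProvider = "credentials")
    public void loginWorks(String user, String password) {
        Assert.assertNotNull(user);
        Assert.assertNotNull(password);
        // ... Selenium login steps would go here ...
    }
}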
Browser Manipulation Utilities: Classes and methods for handling browser-specific operations, such
as managing cookies, handling alerts, executing JavaScript, and navigating back and forward in
browser history.
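A hedged sketch of a few of these browser-level operations in Selenium (hypothetical cookie values):

import org.openqa.selenium.Cookie;
import org.openqa.selenium.JavascriptExecutor;
import org.openqa.selenium.WebDriver;

public class BrowserOps {
    public static void demonstrate(WebDriver driver) {
        // Cookies
        driver.manage().addCookie(new Cookie("session", "abc123"));   // hypothetical cookie

        // JavaScript execution
        ((JavascriptExecutor) driver).executeScript("window.scrollTo(0, document.body.scrollHeight);");

        // Browser history navigation
        driver.navigate().back();
        driver.navigate().forward();

        // Alerts (only valid while an alert is actually open)
        // driver.switchTo().alert().accept();
    }
}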
Screenshot and Video Capture Utilities: Utilities for capturing screenshots or recording videos of test
executions. These utilities are useful for documenting test results, reproducing failures, and providing
visual evidence of test coverage.
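A minimal screenshot-capture sketch using Selenium's TakesScreenshot interface (hypothetical output path):

import java.io.File;
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.Paths;
import java.nio.file.StandardCopyOption;
import org.openqa.selenium.OutputType;
import org.openqa.selenium.TakesScreenshot;
import org.openqa.selenium.WebDriver;

public class ScreenshotUtil {
    // Saves a screenshot of the current browser window to the given path,
    // e.g. "target/screenshots/failure.png" (hypothetical).
    public static void capture(WebDriver driver, String targetPath) throws IOException {
        File src = ((TakesScreenshot) driver).getScreenshotAs(OutputType.FILE);
        Path target = Paths.get(targetPath);
        if (target.getParent() != null) {
            Files.createDirectories(target.getParent());
        }
        Files.copy(src.toPath(), target, StandardCopyOption.REPLACE_EXISTING);
    }
}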
BrowserStack or Sauce Labs Integration Utilities: Integrations with cloud-based testing platforms like
BrowserStack or Sauce Labs for running Selenium tests on a wide range of browsers, devices, and
operating systems in parallel.
Configuration Management Utilities: Libraries or methods for managing test configurations and
environment settings. They help in configuring test parameters, setting up test environments, and
managing test data sources.
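A small sketch of reading environment settings from a .properties file (hypothetical file name and keys):

import java.io.IOException;
import java.io.InputStream;
import java.util.Properties;

public class TestConfig {
    private final Properties props = new Properties();

    // Loads config.properties from the classpath, e.g. baseUrl=https://staging.example.com (hypothetical).
    public TestConfig() throws IOException {
        try (InputStream in = TestConfig.class.getResourceAsStream("/config.properties")) {
            if (in == null) {
                throw new IOException("config.properties not found on the classpath");
            }
            props.load(in);
        }
    }

    public String baseUrl() {
        return props.getProperty("baseUrl");
    }

    public String browser() {
        return props.getProperty("browser", "chrome");   // default if the key is absent
    }
}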
HTML, CSS, JavaScript:
Question: What is the purpose of the <div> element in HTML, and how is it commonly used?
Answer:
The <div> element in HTML is a generic container used to group and style other elements or sections
of a web page. It does not have any specific semantic meaning and is primarily used for layout
purposes and organizing content.
Question: In the code block below, there is a bug that prevents the paragraph text from appearing.
Identify the bug and provide a fix.
<div>
<p>Some text here</p>
<div>
Answer:
The bug is that the closing </div> tag is missing a forward slash (“/”). The correct code should be:
<div>
<p>Some text here</p>
</div>
Question: In the code block below, the CSS selector is incorrect. Identify the error and provide a
fix.
.styed-button {
background-color: blue;
color: white;
padding: 10px;
}
Answer:
The error is in the CSS selector. It should be .styled-button instead of .styed-button. The correct code
should be:
.styled-button {
background-color: blue;
color: white;
padding: 10px;
}
Question: Explain the purpose of the querySelector method in JavaScript, and provide an example
of how to use it.
Answer:
The querySelector method in JavaScript allows you to select elements from the DOM using CSS-style
selectors. It returns the first element that matches the selector. Here’s an example:
const element = document.querySelector(".my-class");
This example selects the first element with the class “my-class” and assigns it to the element variable.
Question: In the code block below, there is a syntax error preventing the JavaScript function from
executing. Identify the error and provide a fix.
function sayHello() {
console.log("Hello, world!"
}
Answer:
The error is the missing closing parenthesis “)” at the end of the console.log statement. The correct code
should be:
function sayHello() {
console.log("Hello, world!");
}
Question: Explain the purpose of the href attribute in an <a> element in HTML, and how it is used.
Answer:
The href attribute in an <a> element is used to specify the URL or destination that the link should
navigate to when clicked. It is an essential attribute for creating hyperlinks in HTML. The value of
the href attribute can be an absolute URL, a relative URL, or an anchor reference within the same
page.
Question: In the code block below, the CSS property is missing. Identify the missing property and
provide a fix.
.container {
width: 500px;
height: 300px;
background-color: red;
/* Missing CSS property */
}
Answer:
The missing CSS property is border. To fix the code, add the border property with the desired value,
such as:
.container {
width: 500px;
height: 300px;
background-color: red;
border: 1px solid black;
}
Question: Explain the concept of event handling in JavaScript and provide an example of attaching
an event listener to a button.
Answer:
In JavaScript, event handling involves responding to user actions or events triggered by the browser.
Event listeners are functions that listen for specific events and execute code in response. Here’s an
example of attaching an event listener to a button:
const button = document.getElementById("myButton");
button.addEventListener("click", () => {
console.log("Button clicked!");
});
In this example, the event listener is attached to the button element with the id “myButton.” When
the button is clicked, the anonymous arrow function is executed, and it logs “Button clicked!” to the
console.
Question: In the code block below, the JavaScript function contains an error preventing it from
executing. Identify the error and provide a fix.
function calculateSum(a, b) {
const sum = a + b;
console.log("The sum is: " + sum);
}
calculateSum(5, 10;
Answer:
The error is the missing closing parenthesis “)” at the end of the function call. The correct code
should be:
calculateSum(5, 10);
1. What is HTML?
HTML stands for HyperText Markup Language. It is used to design web pages using a markup
language. HTML is a combination of Hypertext and Markup language. Hypertext defines the link
between the web pages. The markup language is used to define the text document within the tag
which defines the structure of web pages. HTML is used to structure the website and is therefore
used for Web Development.
Difference between HTML and XHTML:
• HTML stands for Hypertext Markup Language; XHTML stands for Extensible Hypertext Markup Language.
• In HTML, tags and attributes do not have to be in lower or upper case; in XHTML, every tag and attribute should be in lower case.
• In HTML, the doctype is not necessary at the top; in XHTML, the doctype must be written at the top of the file.
• In HTML, it is not necessary to close the tags in the order they are opened; in XHTML, tags must be closed in the order they are opened.
• HTML uses the filename extensions .html and .htm; XHTML uses .xhtml, .xht, and .xml.
Difference between HTML and HTML5:
• HTML didn’t support audio and video without Flash player support; HTML5 supports audio and video controls with the <audio> and <video> tags.
• In HTML it is not possible to draw shapes like circles, rectangles, and triangles; HTML5 allows drawing shapes like circles, rectangles, and triangles.
• Older versions of HTML are less mobile-friendly; HTML5 is more mobile-friendly.
• The HTML doctype declaration is long and complicated; the HTML5 doctype declaration is quite simple and easy.
• Character encoding in HTML is long and complicated; in HTML5 it is simple and easy.
• It is almost impossible to get the true geolocation of a user with the help of the browser in HTML; in HTML5, one can track the geolocation of a user easily using the JS Geolocation API.
• Attributes like charset, async, and ping are absent in HTML; they are a part of HTML5.
7. What are elements and tags, and what are the differences between them?
HTML Tags: Tags are the starting and ending parts of an HTML element. They begin with the < symbol
and end with the > symbol. Whatever is written between < and > is called a tag.
Syntax:
<b> </b>
HTML elements: Elements enclose the contents in between the tags. They consist of some kind of
structure or expression. It generally consists of a start tag, content, and an end tag.
Syntax:
<b>This is the content.</b>
Difference between HTML Tag & HTML Element:
• HTML Tag: either the opening or the closing marker used to mark the start or end of an element.
• HTML Element: the collection of a start tag, end tag, its attributes, and the content in between.
<section id="home_section">
Information About Page
</section>
Example: When the user clicks on the “Contact Us” link, he will be redirected to the “Contact Us
section” on the same page.
<!DOCTYPE html>
<html>
<head>
<style>
div {
width: 100%;
height: 400px;
border: 1px solid black;
}
</style>
</head>
<body>
<h2>Welcome to GeeksforGeeks</h2>
<p>
This is the example of
<i>
Redirect to a particular section
using HTML on same page
</i>
</p>
<!-- Anchor link and target section illustrating the redirect described above -->
<a href="#contact_section">Contact Us</a>
<div id="contact_section">
Contact Us section
</div>
</body>
</html>
11. Are <b> and <strong> tags same? If not, then why?
HTML strong tag: The strong tag is one of the elements of HTML used in formatting HTML texts. It
is used to show the importance of the text by making it bold or highlighting it semantically.
Syntax:
<strong> Contents... </strong>
HTML bold tag: The bold tag or <b> is also one of the formatting elements of HTML. The text written
under the <b> tag makes the text bold presentationally to draw attention.
Syntax:
<b> Contents... </b>
The main difference between the <b> tag & the <strong> tag is that the strong tag semantically
emphasizes the important word or section of words, while the bold tag is just offset text
conventionally styled in bold.
12. What is the difference between <em> and <i> tags?
<i> tag: It is one of the elements of HTML which is used in formatting HTML texts. It is used to define
a text in technical terms, alternative mood or voice, a thought, etc.
Syntax:
<i> Content... </i>
<em> tag: It is also one of the elements of HTML used in formatting texts. It is used to define
emphasized text or statements.
Syntax:
<em> Content... </em>
By default, the visual result is the same but the main difference between these two tags is that the
<em> tag semantically emphasizes the important word or section of words while the <i> tag is just
offset text conventionally styled in italic to show alternative mood or voice.
Syntax:
<a href="url"> Text link </a>
Explanation:
• href: The href attribute is used to specify the destination address of the link used.
• Text link: The text link is the visible part of the link.
16. What is the use of the target attribute in the <link> tag?
The HTML <link> target Attribute is used to specify the window or a frame where the linked
document is loaded. It is not supported by HTML 5.
Syntax:
<link target="_blank|_self|_parent|_top|framename">
Attribute Values:
• _blank: It opens the link in a new window.
• _self: It opens the linked document in the same frame.
• _parent: It opens the linked document in the parent frameset.
• _top: It opens the linked document in the full body of the window.
• framename: It opens the linked document in the named frame.
Advantages of Appium:
• Supports testing of mobile web applications
• Provides cross-platform support for native and hybrid mobile automation
• Supports the JSON wire protocol
• Does not require recompilation of the app
• Supports automated tests on physical devices as well as simulators and emulators
• Has no dependency on the mobile device
Prerequisites to use Appium:
• Android SDK
• JDK
• TestNG
• Eclipse
• Selenium Server JAR
• WebDriver Language Binding Library
• Appium for Windows
• APK App Info on Google Play
• Node.js
4) List out the limitations of using Appium?
• Appium does not support testing of Android versions lower than 4.2
• Limited support for hybrid app testing, e.g., it is not possible to test the switching action of an
application from the web app to native and vice versa
• No support to run Appium Inspector on Microsoft Windows
• Appium is an “HTTP server” written using the Node.js platform that drives iOS and Android
sessions using the WebDriver JSON wire protocol. Hence, before initializing the Appium server,
Node.js must be pre-installed on the system.
• When Appium is downloaded and installed, a server is set up on our machine that exposes a
REST API.
• It receives connection and command requests from the client and executes those commands on
mobile devices (Android / iOS).
• It responds back with HTTP responses. To execute these requests, it uses mobile test
automation frameworks to drive the user interface of the apps, such as:
• Apple Instruments for iOS (Instruments are available only in Xcode 3.0 or later with
OS X v10.5 and later)
• Google UIAutomator for Android, API level 16 or higher
• Selendroid for Android, API level 15 or less
Pros:
• For the programmer, irrespective of the platform being automated (Android or iOS), all the
complexities remain under a single Appium server
• It opens the door to cross-platform mobile testing, which means the same test would work
on multiple platforms
• Appium does not require extra components in your app to make it automation friendly
• It can automate hybrid, web, and native mobile applications
Cons:
• Running scripts on multiple iOS simulators at the same time is not possible with Appium
• It uses UIAutomator for Android automation, which supports only Android SDK platform API 16
or higher; to support older APIs, another open-source library called Selendroid is used
10) Mention what are the basic requirements for writing Appium tests?
For writing Appium tests you require:
• Driver Client: Appium drives mobile applications as though it were a user. Using a client
library, you write your Appium tests, which wrap your test steps and send them to the Appium
server over HTTP.
• Appium Session: You first have to initialize a session, as every Appium test takes place within a
session. Once the automation is done for one session, it can be ended and the server waits for
another session.
• Desired Capabilities: To initialize an Appium session you need to define certain parameters
known as “desired capabilities”, such as platformName, platformVersion, and deviceName.
They specify the kind of automation required from the Appium server (see the sketch after this list).
• Driver Commands: You can write your test steps using a large and expressive vocabulary of
commands.
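A minimal sketch of starting a session with desired capabilities, assuming the Appium Java client (v8-style constructor), an Appium 2 server at its default local address, and hypothetical device and .apk values:

import java.net.URL;
import org.openqa.selenium.remote.DesiredCapabilities;
import io.appium.java_client.android.AndroidDriver;

public class AppiumSessionExample {
    public static void main(String[] args) throws Exception {
        // Desired capabilities tell the Appium server what kind of session to start.
        DesiredCapabilities caps = new DesiredCapabilities();
        caps.setCapability("platformName", "Android");
        caps.setCapability("appium:platformVersion", "13");          // hypothetical device values
        caps.setCapability("appium:deviceName", "Pixel_6_Emulator");
        caps.setCapability("appium:automationName", "UiAutomator2");
        caps.setCapability("appium:app", "/path/to/app.apk");        // hypothetical .apk path

        // The driver client opens a session against a locally running Appium server.
        AndroidDriver driver = new AndroidDriver(new URL("http://127.0.0.1:4723/"), caps);
        try {
            System.out.println("Session id: " + driver.getSessionId());
        } finally {
            driver.quit();   // end the session
        }
    }
}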
11) Mention what are the possible errors one might encounter using Appium?
The possible errors one might face in Appium include
• Error 1: The following desired capabilities are needed but not provided: Device Name,
platformName
• Error 2: Could not find adb. Please set the ANDROID_HOME environment variable with the
Android SDK root directory path
• Error 3: openqa.selenium.SessionNotCreatedException: A new session could not be created
• Error 4: How to find DOM element or XPath in a mobile application?
13) Is it possible to interact with my apps using Javascript while I am testing with Appium?
Yes, it is possible to interact with App while using Javascript. When the commands run on Appium,
the server will send the script to your app wrapped into an anonymous function to be executed.
14) Mention what are the most difficult scenarios to test with Appium?
The most difficult scenario to test with Appium is data exchange.
16) In Android, do you need an app’s .apk to automate using Appium, or do you also need the app in
your workspace?
In Android, you only need the .apk file to automate using Appium.
1. What is API Testing?
API testing is a type of software testing that involves checking if the Application Programming
Interfaces (APIs) meet their functionality, reliability, performance, and security requirements. It
focuses on verifying the communication and data exchange between different software
components or systems.
2. What is Postman?
Postman is a popular tool used for API development and testing. It provides an intuitive user
interface for sending requests to APIs, testing API endpoints, automating API testing workflows,
and generating API documentation.
In Postman, API testing involves creating requests to API endpoints using HTTP methods like GET,
POST, PUT, DELETE, etc. We can then send these requests to the API server and analyze the
responses to verify if the API behaves as expected. Postman provides features like environment
variables, test scripts, assertions, and collections to streamline and automate the API testing
process.
Automation in Postman can be achieved using collections and test scripts. Collections allow you
to organize and group related requests together, while test scripts enable you to define custom
validation logic and assertions. By combining collections with test scripts and leveraging features
like Newman (Postman's command-line tool) or integrating with continuous integration (CI)
systems, we can automate the execution of API tests as part of your development pipeline.
Some common challenges in API testing include handling authentication mechanisms, ensuring
proper data validation and error handling, managing dependencies on external services, testing
for edge cases and boundary conditions, and maintaining test suites as APIs evolve over time.
Effective strategies and tools like Postman can help address these challenges and streamline the
API testing process.
Status codes in API responses indicate the success or failure of a request and provide additional
information about the result. For example, HTTP status code 200 indicates a successful response,
while codes like 404 signify that the requested resource was not found. In API testing, analyzing
status codes helps verify if the API behaves as expected under different conditions and enables
testers to identify and troubleshoot issues effectively.
• Identifying test scenarios: Determine the functionality, performance, security, and edge
cases to be tested.
• Creating reusable test cases: Develop modular and maintainable test cases that can be
reused across different API endpoints.
• Using automation: Automate repetitive tasks and regression tests to increase efficiency and
consistency.
• Validating responses: Verify response data, status codes, headers, and error messages to
ensure correctness and reliability.
• Handling dependencies: Mock external dependencies or use stubs to isolate the API under
test and ensure reliable test execution.
• Implementing security testing: Test for vulnerabilities such as injection attacks,
authentication flaws, and data exposure.
• Monitoring performance: Measure response times, throughput, and resource utilization to
assess performance under different loads.
• Documenting tests: Document test cases, scenarios, and results for traceability and
knowledge sharing within the team.
API mocking means creating pretend responses from API endpoints without talking to the real
backend. It's like practicing with a dummy instead of a real opponent. It helps testers to try out
different situations and problems before everything is fully set up, making testing faster and
more flexible.
Pagination is like breaking a big list into smaller pieces. In Postman, you handle it by asking for
one piece at a time and then automatically getting the next piece until you have everything. It's
like asking for a few pages of a book at a time instead of the whole book at once.
14. What are pre-request scripts and post-request scripts in Postman?
Pre-request scripts are like getting ready before you go out — you might check the weather or
grab your keys. In Postman, it's code you run just before sending a request. Post-request scripts
are like what you do after you come back home — maybe you unpack your bags or write down
what happened. In Postman, it's code you run just after getting a response.
Set up Logging Framework: Before starting debugging, ensure that you have a logging framework set
up in your Java project. One popular choice is Log4j or Logback. These frameworks allow you to
configure logging levels and destinations (e.g., console, file) for your application.
Configure Logging: Configure your logging framework to capture Selenium logs along with your
application logs. This can include browser console logs, WebDriver logs, and any custom logging
you've implemented in your test scripts.
Enable WebDriver Logging: Selenium WebDriver has built-in logging capabilities that can be useful
for debugging. You can enable WebDriver logs to capture detailed information about the interactions
between Selenium and the browser.
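A hedged sketch, assuming ChromeDriver, of routing the driver's own log output to a file via ChromeDriver's system properties:

import org.openqa.selenium.WebDriver;
import org.openqa.selenium.chrome.ChromeDriver;

public class ChromeDriverLogging {
    public static void main(String[] args) {
        // ChromeDriver-specific system properties: log destination and verbosity.
        System.setProperty("webdriver.chrome.logfile", "path/to/chromedriver.log");
        System.setProperty("webdriver.chrome.verboseLogging", "true");   // assumption: verbose output is wanted

        WebDriver driver = new ChromeDriver();
        driver.get("https://example.com");   // hypothetical URL
        driver.quit();
    }
}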
Replace "path/to/chromedriver.log" with the path where you want to save the WebDriver logs.
Capture Browser Console Logs: You can use WebDriver's capabilities to capture browser console logs
during test execution. This can be helpful for debugging JavaScript errors or unexpected behaviour.
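One way this is commonly done with ChromeDriver (a sketch; console-log support varies by browser and driver version, and the URL is hypothetical):

import java.util.logging.Level;
import org.openqa.selenium.WebDriver;
import org.openqa.selenium.chrome.ChromeDriver;
import org.openqa.selenium.chrome.ChromeOptions;
import org.openqa.selenium.logging.LogEntries;
import org.openqa.selenium.logging.LogEntry;
import org.openqa.selenium.logging.LogType;
import org.openqa.selenium.logging.LoggingPreferences;

public class BrowserConsoleLogs {
    public static void main(String[] args) {
        // Ask the driver to record browser console messages.
        LoggingPreferences prefs = new LoggingPreferences();
        prefs.enable(LogType.BROWSER, Level.ALL);

        ChromeOptions options = new ChromeOptions();
        options.setCapability("goog:loggingPrefs", prefs);

        WebDriver driver = new ChromeDriver(options);
        driver.get("https://example.com");   // hypothetical URL

        // Dump whatever the page wrote to the console (JS errors, warnings, etc.).
        LogEntries entries = driver.manage().logs().get(LogType.BROWSER);
        for (LogEntry entry : entries) {
            System.out.println(entry.getLevel() + " " + entry.getMessage());
        }
        driver.quit();
    }
}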
Use Loggers in Test Code: Incorporate loggers into your test code to log important events, actions,
and assertions. This will provide additional context when analyzing logs.
Analyze Logs: Run your test suite and analyze the logs generated by your application, WebDriver, and
browser. Look for errors, warnings, or unexpected behavior that can help identify and resolve issues
in your test scripts.
Iterate and Refine: Debugging through logs is often an iterative process. Use the information
gathered from the logs to refine your test scripts, improve error handling, and enhance overall test
stability.
Root Cause Analysis:
Root cause analysis (RCA) is a problem-solving technique used to identify the underlying reason or
reasons why an issue occurred. It aims to find the core or root cause of a problem rather than just
addressing its symptoms.
Problem: Users are reporting that a particular feature in a software application is not working as
expected.
1. Identify the Problem: The problem is that the feature is not functioning correctly.
2. Gather Information: Collect user reports, error messages, system logs, and any other relevant
data. Identify when and how the issue occurs.
4. Analyze Causes: The root cause of the issue is that the object was not initialized properly, leading
to a null pointer exception.
6. Implement Solutions: Developers modify the code to fix the initialization issue and deploy the
updated version of the software.
7. Monitor and Verify: Test the updated feature to ensure that it is functioning correctly without
encountering any further issues.
8. Prevent Recurrence: Review development processes to identify areas for improvement. Provide
additional training or guidance to prevent similar issues from occurring in future development
efforts.
SOAP, XML, JSON:
SOAP (Simple Object Access Protocol):
• SOAP is a protocol used for exchanging structured information in the form of XML
documents over a network.
• It's commonly used in web services to facilitate communication between different systems or
applications.
• In automation testing with Java, SOAP is commonly used to test web services.
• Think of SOAP as a standardized way for different programs to talk to each other and
exchange data.
XML (Extensible Markup Language):
• XML is a markup language used to structure and store data in a format that is both human-readable
and machine-readable.
• It's often used to represent data in a hierarchical format (a tree-like structure, where each
element can have child elements nested within it).
• XML is widely used in various domains, including web services, configuration files, and data
exchange between different systems.
• In automation testing with Java, XML is often used to represent request and response
payloads when testing web services or APIs.
JSON (JavaScript Object Notation):
• JSON is a data interchange format that is easy for humans to read and write, and easy for
machines to parse and generate.
• It's commonly used for transmitting data between a web server and a web client as an
alternative to XML.
• JSON is based on key-value pairs and supports arrays and nested objects.
• JSON is often preferred over XML for its simplicity, compactness, and ease of use, especially
in web APIs and modern web applications.
Steps for running test cases in Jenkins:
Create a New Job:
Configure Job:
• In the job configuration page, specify the source code repository URL (e.g., Git or
Subversion).
• Define the build steps: For example, specify commands to compile your code or run your
tests.
• Set up any additional configurations, such as environment variables or build triggers.
Save Changes:
View Build Results:
• Once the build is complete, you can view the build results in the Jenkins dashboard.
• Check the console output for any errors or failures during the build process.
Publish Test Reports:
• If your tests generate test reports (e.g., JUnit XML or TestNG XML), configure Jenkins to
publish these reports.
• Use Jenkins plugins like JUnit or TestNG to parse and display test results in the Jenkins
dashboard.
Set Up Build Triggers:
• Set up Jenkins to automatically trigger builds based on certain events (e.g., code commits,
scheduled times).
• Configure build triggers in the job configuration page to automate the execution of test
cases.
Troubleshoot Failures:
• If a build fails, investigate the cause by reviewing the console output and any error messages.
• Make necessary adjustments to your job configuration or code to fix the issue.
1. What is Performance Testing?
Performance testing is a type of testing that checks how well a software application performs under
various conditions. It evaluates factors like speed, responsiveness, and stability of the application
under different loads.
2. What is JMeter?
Apache JMeter is an open-source tool used for performance testing and load testing of web
applications. It simulates a heavy load on a server, network, or object to test its strength and analyze
overall performance under different scenarios.
JMeter sends requests to a web server, mimicking the behavior of multiple users accessing the
application simultaneously. It measures the response time of these requests and generates
performance metrics like throughput, latency, and error rate.
Assertions in JMeter are used to validate the response received from the server during a test. They
allow testers to define criteria for success or failure of a request, such as checking for specific text in
the response, verifying response codes, or validating the presence of certain elements.
10. Explain the difference between Aggregate Report and Summary Report listeners in JMeter.
• Aggregate Report: It provides aggregate performance metrics such as average response
time, throughput, and error percentage for each sampler in the test plan.
• Summary Report: It provides summary statistics of the entire test, including the total
number of requests, average response time, and error count.
Test Plan:
Test Plan Title
Prepared by:
1. Introduction
2.Test Objectives
3. Testing Resources
4. Scope of Testing
5. Test Environment
6. Testing approaches
7. Test Schedule
8. Test Deliverables:
• List the various documents, reports, and artifacts that will be produced as part of the testing effort.
• Specify the process for obtaining approval and sign-off on the test plan from relevant stakeholders,
such as project managers, clients, and other key decision-makers.
Test Strategy:
1. Introduction:
• This test strategy outlines the approach for testing the [Name of Web Application].
• The web application is designed to [Brief Description of Application].
2. Objective:
• To ensure that the web application meets the specified requirements and provides a positive
user experience.
• To identify and report defects in the application to improve its quality and reliability.
3. Scope:
In-scope:
Out-of-scope:
4. Testing Types:
Functional Testing:
Compatibility Testing:
• Test the application on different web browsers (e.g., Chrome, Firefox, Safari, Edge).
• Ensure compatibility with various devices and screen resolutions.
Performance Testing:
• Test the application's response time and throughput under different load conditions.
• Monitor server resources (CPU, memory, disk I/O) to identify performance bottlenecks.
Usability Testing:
• Evaluate the application's user interface, navigation, and overall user experience.
• Gather feedback from target users to identify areas for improvement.
5. Test Environment:
6. Test Data:
7. Test Execution:
• Test cases will be executed manually using real browsers and devices.
• Automated testing tools (e.g., Selenium WebDriver) may be used for regression testing and
repetitive tasks.
• Testing will be conducted in iterative cycles, with regular feedback and updates.
8. Defect Management:
• Defects will be logged in a defect tracking system with detailed information, including steps
to reproduce, screenshots, and severity.
• Defects will be prioritized based on severity and impact on the application's functionality and
user experience.
9. Reporting:
• Test results will be documented in test reports, including test coverage, test execution status,
and defects found.
• Reports will be shared with relevant stakeholders, including development team, project
managers, and product owners.
10. Risks and Mitigation:
This test strategy requires review and approval from the project manager and relevant stakeholders
before testing begins.
Test Estimation:
CI/CD:
1. What is Continuous Integration (CI)?
Continuous Integration is the practice of frequently integrating code changes into a shared
repository. This means that whenever a developer makes changes to the code, those changes are
automatically built and tested to ensure they don't break the existing codebase.
Continuous Integration focuses on integrating code changes frequently and automatically running
tests to detect issues early. Continuous Deployment goes further by automatically deploying those
changes to production after passing all tests, making them available to users immediately.
CI/CD helps in improving software quality, reducing risks, and accelerating the delivery process. It
allows for faster feedback loops, detects issues early in the development cycle, and automates
repetitive tasks, leading to faster time-to-market and happier customers.
Some popular CI/CD tools include Jenkins, Travis CI, CircleCI, GitLab CI/CD, and GitHub Actions. These
tools provide automation capabilities for building, testing, and deploying software applications.
A CI/CD pipeline is a series of automated steps that code changes go through, starting from version
control to deployment. It typically includes stages like code compilation, unit testing, integration
testing, code analysis, and deployment. Each stage is automated, ensuring that code changes are
thoroughly tested and validated before deployment.
A build is the process of converting source code into a runnable application or artifact. It involves
compiling code, resolving dependencies, running tests, and packaging the application for
deployment. In CI/CD, builds are typically triggered automatically whenever there are code changes.
Version control systems like Git are used to track changes to the codebase, collaborate with other
developers, and manage different versions of the software. In CI/CD, version control ensures that all
code changes are centralized, traceable, and can be automatically integrated into the CI/CD pipeline.
When a build or test fails in a CI/CD pipeline, it's important to notify the development team
immediately. The failed build should be investigated to identify the cause of failure, and appropriate
actions should be taken to fix the issue, such as rolling back the changes or addressing the failed test
cases.
10. Explain the concept of "Infrastructure as Code" (IaC) in the context of CI/CD.
Infrastructure as Code is the practice of managing and provisioning infrastructure resources (e.g.,
servers, networks, databases) using code and automation tools. In CI/CD, IaC allows for the
automated provisioning and configuration of infrastructure as part of the deployment process,
ensuring consistency and reproducibility across environments.
Automation testing plays a crucial role in the CI/CD process by automating the execution of test
cases, allowing for rapid feedback on code changes. It ensures that new code integrations do not
break existing functionality and accelerates the delivery of high-quality software.
CI/CD enhances the automation testing process by automating the entire software delivery pipeline,
including building, testing, and deployment. It ensures that test suites are executed automatically on
every code change, facilitating early detection of bugs and ensuring that only quality-tested code is
deployed to production.
13. Explain the integration between automation testing tools and CI/CD platforms like Jenkins.
Automation testing tools like Selenium or JUnit can be integrated with CI/CD platforms like Jenkins to
automate the execution of test suites as part of the continuous integration and deployment process.
Jenkins provides plugins and features to trigger automated tests, capture test results, and generate
reports, making it a seamless part of the CI/CD pipeline.
To set up automated tests in Jenkins, you first create a Jenkins job for your project. Then, you
configure the job to pull the source code from your version control system (e.g., Git). Next, you add
build steps to compile the code and run automated tests using appropriate testing frameworks or
tools. Finally, you configure Jenkins to publish test results and generate reports for easy analysis.
15. What are some advantages of using Jenkins for automation testing in CI/CD?
16. What is a Jenkins pipeline?
A Jenkins pipeline is a set of instructions defined in code (usually written in Groovy) that defines the
entire CI/CD process, including building, testing, and deployment. In the context of automation
testing, a Jenkins pipeline typically includes stages for compiling code, running automated tests,
publishing test results, and deploying the application. This pipeline ensures consistency and
repeatability in the testing and deployment process.
17. How does Jenkins facilitate continuous testing in the CI/CD pipeline?
Jenkins facilitates continuous testing in the CI/CD pipeline by automatically triggering test executions
whenever there are code changes. It integrates with testing frameworks to run unit tests, integration
tests, and other types of automated tests. Jenkins captures test results, generates reports, and
provides feedback to developers, ensuring that code changes are thoroughly tested before
deployment.
18. Can you explain how Jenkins handles test result reporting and analysis?
Jenkins captures test results from automated test executions and provides built-in support for
various testing frameworks and report formats. It generates test result reports, including pass/fail
status, code coverage metrics, and other relevant information. Jenkins also allows for trend analysis,
comparing test results across multiple builds to identify patterns and track improvements in software
quality over time.
Preconditions: Define any necessary preconditions required for the test case to execute successfully.
This includes setting up the initial state of the application or environment.
Test Steps: Clearly outline each step of the test case, including user actions such as clicking buttons,
entering text, navigating between pages, etc. Keep the steps simple and concise.
Expected Results: Document the expected outcome or behaviour for each step. This serves as a
reference point for verifying the correctness of the application behaviour.
Use of Page Objects: Implement the Page Object Model (POM) to represent web pages as Java
classes. This helps in encapsulating the page elements and their interactions, promoting code
reusability and maintainability.
Reusable Test Code: Identify common functionalities or actions across multiple test cases and
encapsulate them into reusable methods. This reduces duplication of code and makes maintenance
easier.
Data-Driven Testing: Parameterize your test cases to run with different input data sets. This allows
for testing a variety of scenarios with minimal code duplication.
Assertions: Include assertions at appropriate points in your test cases to verify the expected
behaviour of the application. Assertions help in validating whether the actual outcomes match the
expected results.
Error Handling: Implement appropriate error handling mechanisms to gracefully handle unexpected
exceptions or failures during test execution. This ensures that test execution continues smoothly and
provides meaningful error messages for debugging.
Logging and Reporting: Incorporate logging
mechanisms to capture relevant information during test execution, such as test steps, errors, and
warnings. Additionally, use reporting tools or frameworks (e.g., TestNG, ExtentReports) to generate
detailed test reports for analysis.
Test Case Independence: Ensure that each test case is independent and does not rely on the state or
outcome of other test cases. This prevents cascading failures and allows for parallel execution of test
cases.
Cleanup Actions: Implement cleanup actions or teardown methods to restore the application or
environment to its initial state after test execution. This helps in maintaining the consistency of test
runs and avoids side effects between test cases.
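A minimal TestNG setup/teardown sketch showing independent tests with cleanup (hypothetical class name and URL):

import org.openqa.selenium.WebDriver;
import org.openqa.selenium.chrome.ChromeDriver;
import org.testng.annotations.AfterMethod;
import org.testng.annotations.BeforeMethod;
import org.testng.annotations.Test;

public class CheckoutTest {
    private WebDriver driver;

    @BeforeMethod
    public void setUp() {
        // A fresh browser per test keeps test cases independent of each other.
        driver = new ChromeDriver();
        driver.get("https://example.com/checkout");   // hypothetical URL
    }

    @Test
    public void pageLoads() {
        // ... assertions on the page under test would go here ...
    }

    @AfterMethod(alwaysRun = true)
    public void tearDown() {
        // Cleanup runs even if the test fails, restoring the initial state.
        if (driver != null) {
            driver.quit();
        }
    }
}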
Cross-Browser and Cross-Platform Testing: If applicable, consider testing your web application
across different browsers and platforms to ensure compatibility and consistent behaviour.
Regular Maintenance: Periodically review and update your test cases to accommodate changes in
the application under test or to improve the efficiency and effectiveness of your test suite.
Critical Functionality:
Issue: During automated testing of a banking application's funds transfer feature, it was observed
that the transfer process failed intermittently when multiple users attempted transfers concurrently.
This led to potential inconsistencies in account balances and disrupted the user experience.
Impact:
• Inaccurate fund transfers could lead to financial losses for customers and damage the
reputation of the banking application.
• Intermittent failures could frustrate users and result in a loss of trust in the application's
reliability.
• Increased customer support inquiries and complaints due to failed transactions.
Resolution:
• The resolution of the concurrency issue resulted in a more robust and reliable funds transfer
feature within the banking application.
• Customers experienced fewer transaction failures, leading to improved user satisfaction and
trust in the application.
• The proactive approach to addressing the issue demonstrated the commitment of the
development team to delivering a high-quality and resilient web application.