unit 04 devops updated

UNIT - IV

INTEGRATING THE SYSTEM


Balike Mahesh
7207030340

B.MAHESH (YOUTUBE CHANNEL :: SV TECH KNOWLEDGE )


• You need a system to build your code, and you need somewhere to build it.
• Jenkins is a flexible open source build server that grows with your needs.
• Some alternatives to Jenkins will be explored as well.

• We will also explore the different build systems and how they affect our DevOps work.

• Why do we build code?
• Most developers are familiar with the process of building code. When we work in the field of DevOps, however, we might face issues that developers who specialize in programming a particular component type won't necessarily experience.
• For the purposes of this book, we define software building as the process of molding code from one form to another. During this process, several things might happen:
• The compilation of source code to native code or virtual machine bytecode, depending on our production platform.
• Linting of the code: checking the code for errors and generating code quality measures by means of static code analysis. The term "Linting" originated with a program called Lint, which started shipping with early versions of the Unix operating system. The purpose of the program was to find bugs in programs that were syntactically correct, but contained suspicious code patterns that could be identified with a different process than compiling.
• Unit testing, by running the code in a controlled manner.
• The generation of artifacts suitable for deployment. It's a tall order!
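The four activities above can be sketched as a minimal shell build script. Everything here is a coreutils stand-in (a toy shell "source file"; a real project would call javac, a linter, and a test runner instead), so the commands are illustrative rather than a real toolchain:

```shell
#!/bin/sh
# Minimal build sketch: compile -> lint -> test -> package.
set -e
mkdir -p src build dist
printf 'echo hello\n' > src/app.sh            # toy "source file"

# 1. "Compile": transform the source into the build tree
cp src/app.sh build/app.sh

# 2. Lint: fail the build on suspicious patterns (here: TODO markers)
if grep -rn 'TODO' src/; then
  echo "lint: unresolved TODOs" >&2
  exit 1
fi

# 3. Unit test: run the code in a controlled manner
[ "$(sh build/app.sh)" = "hello" ] || { echo "test failed" >&2; exit 1; }

# 4. Package: produce a deployable artifact
tar -czf dist/app.tar.gz -C build app.sh
echo "artifact: dist/app.tar.gz"
```

Each stage stops the build on failure, which is the property that makes the later Jenkins material (failing fast, reporting early) work.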



Build systems
A build system is a key component in DevOps, and it plays an important role in the software
development and delivery process. It automates the process of compiling and packaging
source code into a deployable artifact, allowing for efficient and consistent builds.
Here are some of the key functions performed by a build system:
Compilation: The build system compiles the source code into a machine-executable format,
such as a binary or an executable jar file.

Dependency Management: The build system ensures that all required dependencies are
available and properly integrated into the build artifact. This can include external libraries,
components, and other resources needed to run the application.
Testing: The build system runs automated tests to ensure that the code is functioning as
intended, and to catch any issues early in the development process.
•Packaging: The build system packages the compiled code and its dependencies into a single,
deployable artifact, such as a Docker image or a tar archive.
• Version Control: The build system integrates with version control systems, such as Git, to
track changes to the code and manage releases.
•Continuous Integration: The build system can be configured to run builds automatically
whenever changes are made to the code, allowing for fast feedback and continuous integration
of new code into the main branch.
•Deployment: The build system can be integrated with deployment tools and processes to
automate the deployment of the build artifact to production environments.
• In DevOps, it's important to have a build system that is fast, reliable, and scalable, and that
can integrate with other tools and processes in the software development and delivery
pipeline. There are many build systems available, each with its own set of features and
capabilities, and choosing the right one will depend on the specific needs of the project and
team.
4.1 The many faces of build systems
• There are many build systems that have evolved over the history of
software development. Sometimes, it might feel as if there are more
build systems than there are programming languages.
• Here is a brief list, just to get a feeling for how many there are:
• For Java, there is Maven, Gradle, and Ant
• For C and C++, there is Make in many different flavors
• For Clojure, a language on the JVM, there is Leiningen and Boot
apart from Maven
• For JavaScript, there is Grunt
• For Scala, there is sbt
• For Ruby, we have Rake
• Finally, of course, we have shell scripts of all kinds
• Depending on the size of your organization and the type of product you are building, you
might encounter any number of these tools. To make life even more interesting, it's not
uncommon for organizations to invent their own build tools.
• As a reaction to the complexity of the many build tools, there is also often the idea of
standardizing a particular tool. If you are building complex heterogeneous systems, this is
rarely efficient. For example, building JavaScript software is just easier with Grunt than it
is with Maven or Make, building C code is not very efficient with Maven, and so on. Often,
the tool exists for a reason.
• Normally, organizations standardize on a single ecosystem, such as Java and Maven or
Ruby and Rake. Other build systems besides those that are used for the primary code
base are encountered mainly for native components and third-party components.
• At any rate, we cannot assume that we will encounter only one build system within our
organization's code base, nor can we assume only one programming language.
• I have found this rule useful in practice: it should be possible for a developer to check out the
code and build it with minimal surprises on his or her local developer machine.
• This implies that we should standardize the revision control system and have a single
interface to start builds locally.
• If you have more than one build system to support, this basically means that you
need to wrap one build system in another. The complexities of the build are thus
hidden, and more than one build system can be used at the same time. Developers
not familiar with a particular build can still expect to check it out and build it with
reasonable ease.
• Maven, for example, is good for declarative Java builds. Maven is also capable of
starting other builds from within Maven builds.
• This way, the developer in a Java-centric organization can expect the following
command line to always build one of the organization's components:
• mvn clean install
• One concrete example is creating a Java desktop application installer with the
Nullsoft NSIS Windows installation system. The Java components are built with
Maven. When the Java artifacts are ready, Maven calls the NSIS installer script to
produce a self-contained executable that will install the application on Windows.
• While Java desktop applications are not fashionable these days, they continue to be
popular in some domains.
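The Maven-wraps-NSIS idea can be sketched with the exec-maven-plugin (a real Codehaus plugin that runs external programs from a build). The installer.nsi script name and the binding to the package phase are assumptions for illustration:

```xml
<!-- Sketch: call the NSIS compiler after the Java artifacts are built. -->
<plugin>
  <groupId>org.codehaus.mojo</groupId>
  <artifactId>exec-maven-plugin</artifactId>
  <executions>
    <execution>
      <id>build-windows-installer</id>
      <phase>package</phase>
      <goals><goal>exec</goal></goals>
      <configuration>
        <executable>makensis</executable>
        <arguments>
          <argument>installer.nsi</argument>
        </arguments>
      </configuration>
    </execution>
  </executions>
</plugin>
```

With this in place, the same `mvn clean install` command builds both the Java components and the Windows installer, which is exactly the "single interface" rule described above.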
4.2 The Jenkins build server
• Jenkins is an open-source automation server widely used for continuous integration,
continuous delivery, and continuous deployment (CI/CD) processes. It helps automate
the building, testing, and deployment of software applications. Jenkins is written in Java
and provides a web-based interface for configuring and managing various automation
tasks.
• Here are some key features and components of Jenkins:
• Jobs: Jenkins uses jobs as the building blocks for automation. A job represents a task or
a set of tasks to be executed. Jobs can be configured to perform actions like compiling
source code, running tests, deploying applications, or triggering other jobs.
• Plugins: Jenkins has a vast ecosystem of plugins that extend its functionality. Plugins
can be installed to add support for different programming languages, version control
systems, build tools, testing frameworks, deployment targets, and more.
• Build Steps: Jobs in Jenkins are composed of build steps. Build steps define the actions
that Jenkins should perform during the build process, such as executing shell
commands, running scripts, invoking build tools, or executing tests.
• Pipelines: Jenkins supports the concept of pipelines, which allows the definition of entire
build processes as code. Jenkins pipelines provide a way to express the build, test, and
deployment stages in a declarative or scripted manner, enabling better visibility,
reusability, and versioning of the build process.
• Distributed Builds: Jenkins can distribute build workloads across multiple nodes to
improve performance and scalability. It supports the setup of distributed build
environments where different nodes handle different parts of the build process.
• Integration and Extensibility: Jenkins integrates with various tools, version control
systems, and services. It can be easily integrated with popular platforms like Git, SVN,
Docker, JIRA, Slack, and many others. Additionally, Jenkins provides APIs and hooks for
extending its functionality and integrating with custom tools and services.
• Monitoring and Reporting: Jenkins offers comprehensive monitoring and reporting
capabilities. It provides real-time build logs, test results, and trend analysis, allowing
developers and teams to track the progress and health of their builds.
• Security and Authentication: Jenkins provides features for authentication, authorization,
and access control. It supports various security mechanisms, including user
authentication, role-based access control (RBAC), and integration with external
authentication providers like LDAP or Active Directory.
• By leveraging Jenkins, development teams can automate repetitive tasks, ensure code
quality through continuous integration and testing, and streamline the deployment
process, resulting in faster and more reliable software development cycles.
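The pipelines-as-code idea above can be illustrated with a minimal declarative Jenkinsfile; the stage contents are assumptions for a Maven-based project, and deploy.sh is a hypothetical script:

```groovy
// Sketch of a declarative Jenkinsfile with three stages.
pipeline {
  agent any
  stages {
    stage('Build') {
      steps { sh 'mvn clean install' }
    }
    stage('Test') {
      steps { sh 'mvn test' }
    }
    stage('Deploy') {
      steps { sh './deploy.sh staging' }  // hypothetical deploy script
    }
  }
}
```

Because this file lives in the repository next to the code, the build process itself is versioned and reviewed like any other change.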
• For example: if an organization is developing a project, Jenkins will continuously test
the project builds and show the errors at an early stage of development.

• Possible steps executed by Jenkins are, for example:

o Perform a software build using a build system like Gradle or Apache Maven
o Execute a shell script
o Archive a build result
o Run software tests
• What are the Jenkins Features?

• Jenkins offers many attractive features for developers:


Easy Installation
Jenkins is a platform-agnostic, self-contained Java-based program, ready to run with packages for
Windows, Mac OS, and Unix-like operating systems.
Easy Configuration
Jenkins is easily set up and configured using its web interface, featuring error checks and a built-in
help function.
Available Plugins
There are hundreds of plugins available in the Update Center, integrating with every tool in the CI
and CD toolchain.
Extensible
Jenkins can be extended by means of its plugin architecture, providing nearly endless possibilities
for what it can do.



4.3 Managing build dependencies
• Before running your build command, the buildbot will look for instructions about required
languages and software needed to run your command. These are called dependencies,
and how you declare them depends on the languages and tools used in your build.
• When it comes to managing build dependencies, different build systems have various
approaches. Let's take a look at a few examples:
• Maven (Java):
• Maven uses a Project Object Model (POM) file that specifies the project's
dependencies.
• When you build a Maven project, it automatically fetches the required
dependencies from a central repository if they are not already present on the build
server.
• Grunt (JavaScript):
• Grunt utilizes a build description file (e.g., Gruntfile.js) where you can define the
project's dependencies.
• Similar to Maven, Grunt fetches the specified dependencies automatically if they
are missing from the build server.
• Go:
• Go projects often include links to required GitHub repositories in their
build files.
• When building a Go project, the build system can fetch the necessary
dependencies from these repositories.
• C and C++ (using GNU Autotools):
• GNU Autotools, including Autoconf, adapt to the available dependencies
on the build system rather than explicitly listing them.
• To build projects like Emacs, you typically run a configuration script that
determines which dependencies are present on the build system.
• These examples highlight different approaches to managing build
dependencies. Maven and Grunt explicitly define dependencies, allowing
the build system to handle fetching them. Go projects link to external
repositories, while C and C++ projects using GNU Autotools adapt to
available dependencies on the build system.
• Managing build dependencies can become more complex in real-world
scenarios, but understanding the principles and tools used by different build
systems helps streamline the process and ensure that the necessary
dependencies are available for successful builds.
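As a concrete instance of the Maven approach described above, a POM fragment declares the dependencies that Maven fetches automatically from a central repository when they are not already present locally; the JUnit coordinates here are just an example:

```xml
<!-- Sketch: one declared dependency in a pom.xml.
     Maven resolves it from Maven Central if it is missing locally. -->
<dependencies>
  <dependency>
    <groupId>junit</groupId>
    <artifactId>junit</artifactId>
    <version>4.13.2</version>
    <scope>test</scope>
  </dependency>
</dependencies>
```

Pinning an exact version, as shown, is what makes the build reproducible on a fresh build server.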
• In enterprise settings, it is crucial to have full control over the build
dependencies and ensure that the software behaves consistently
across different environments. Having surprises or missing
functionality on production servers is undesirable. To address this,
we need to adopt a more deterministic approach to managing build
dependencies.
1. Dependency Locking
2. Artifact Repositories
3. Build Pipelines
4. Testing and Validation
5. Release Management


The RPM (Red Hat Package Manager) system (this also falls under managing dependencies)
• The RPM (Red Hat Package Manager) system provides a solution for managing build dependencies and building
software on systems derived from Red Hat. It revolves around a build descriptor file called a spec file (short for
specification file). Here's a simplified explanation:
1. Spec File:
1. The spec file is a build descriptor written in a macro-based shell script format.
2. It contains various sections, including metadata, dependencies, build commands, and configuration options.
3. The spec file defines the requirements for successfully building the software package.
2. Build Dependencies:
1. The spec file lists the build dependencies required to build the software.
2. These dependencies specify the packages or libraries that need to be installed on the build system for a successful
build.
3. Pristine Build Sources:
1. The RPM system encourages keeping build sources pristine, i.e., unmodified.
2. If modifications are needed, the spec file can include patches that modify the source code before the build process
begins.
3. Patches allow you to adapt the source code to specific requirements or fix issues.

4. Building Software:
1. The RPM system utilizes the spec file to build software packages.
2. The spec file specifies the build commands, such as compiling, linking, and packaging the software.
3. The RPM system follows the instructions in the spec file to generate the final binary package.
• By using the RPM system, you can create RPM packages for your software with well-defined build dependencies
and build instructions. The spec file allows for adaptability by patching the source code, ensuring that the build
process is customizable while maintaining the ability to reproduce pristine build sources.
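A minimal spec file illustrating the sections described above might look as follows; the package name, version, patch, and file list are all illustrative, not a real package:

```
# Sketch of a minimal RPM spec file (illustrative values throughout).
Name:           myapp
Version:        1.0
Release:        1%{?dist}
Summary:        Example application
License:        MIT
Source0:        myapp-1.0.tar.gz
Patch0:         fix-paths.patch
BuildRequires:  gcc, make

%description
Example application packaged with RPM.

%prep
%setup -q          # unpack the pristine source
%patch0 -p1        # apply the patch on top of it

%build
make %{?_smp_mflags}

%install
make install DESTDIR=%{buildroot}

%files
/usr/bin/myapp
```

Note how the pristine tarball (Source0) and the patch (Patch0) are kept separate, which is the reproducibility property the text emphasizes.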
4.4 Jenkins plugins
• Plugins are the primary means of enhancing the functionality of a Jenkins environment to suit
organization- or user-specific needs. There are over a thousand different plugins that can be
installed on a Jenkins controller to integrate various build tools, cloud providers, analysis
tools, and much more.
• Jenkins, as an extensible automation server, offers a wide range of plugins that extend its
functionality and enable integration with various tools and technologies. Here's an overview of
Jenkins plugins:
• Source Code Management (SCM) Plugins:
• SCM plugins provide integration with different version control systems like Git, Subversion (SVN),
Mercurial, etc.
• These plugins enable Jenkins to fetch source code from repositories and trigger builds based on
changes.
• Build Tool Plugins:
• Build tool plugins integrate Jenkins with popular build tools like Apache Maven, Gradle, Ant, and
MSBuild.
• They provide build steps and configurations specific to these tools, allowing seamless integration
within Jenkins pipelines or job configurations.
• Testing Framework Plugins:
• Testing framework plugins enable integration with various testing frameworks like JUnit, NUnit,
TestNG, etc.
• Deployment Plugins:
• Deployment plugins help automate the deployment of applications to different
environments, such as application servers, cloud platforms, or containers.
• Plugins like Kubernetes, Docker, AWS Elastic Beanstalk, or Azure App Service
enable seamless deployment workflows.
• Notification Plugins:
• Notification plugins facilitate sending notifications or alerts based on build
status, test results, or other events.
• Plugins such as Email Notification, Slack Notification, or Microsoft Teams
Integration enable communication and collaboration among team members.
• Monitoring and Reporting Plugins:
• Monitoring and reporting plugins integrate Jenkins with monitoring and
reporting tools like SonarQube, JIRA, Grafana, etc.
• They provide insights into code quality, performance metrics, and project
management, enhancing the visibility and management of software projects.
Top Jenkins Plugins
• Git Plugin
• Kubernetes Plugin
• Jira Plugin
• Docker Plugin
• Maven Integration Plugin
• Blue Ocean Plugin
• Amazon EC2 Plugin
• Pipeline Plugin

• Git Plugin: This plugin integrates Jenkins with Git version control system, allowing you
to pull code changes, build and test them, and deploy the code to production.
• Maven Plugin: This plugin integrates Jenkins with Apache Maven, a build automation
tool commonly used in Java projects.
•Amazon Web Services (AWS) Plugin: This plugin allows you to integrate Jenkins with
Amazon Web Services (AWS), making it easier to run builds, tests, and deployments on
AWS infrastructure.
•Slack Plugin: This plugin integrates Jenkins with Slack, allowing you to receive
notifications about build status, failures, and other important events in your Slack channels.
•Blue Ocean Plugin: This plugin provides a new and modern user interface for Jenkins,
making it easier to use and navigate.
•Pipeline Plugin: This plugin provides a simple way to define and manage complex CI/CD
pipelines in Jenkins.
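Plugins like the ones above can also be installed non-interactively. As an outline only (not runnable here), the jenkins-plugin-cli tool shipped with the official Jenkins Docker images takes a list of plugin IDs; the IDs below are believed correct but should be verified against the Update Center:

```shell
# Outline: batch-install plugins by ID (run inside a Jenkins image).
jenkins-plugin-cli --plugins git maven-plugin blueocean workflow-aggregator
```

Scripting plugin installation this way keeps build servers reproducible instead of hand-configured.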
Jenkins file system layout
• Jenkins has a specific file system layout that organizes its configuration and data. Here
are the key directories typically found in a Jenkins installation:
• Jenkins Home Directory:
•The Jenkins home directory stores the Jenkins configuration, plugins, and job-specific
data.
• It contains files and directories like config.xml (global configuration),
plugins/ (plugin files), and jobs/ (job configurations and data).
• Workspace Directory:
•Each Jenkins job has its workspace directory where the source code is checked out
and build artifacts are created.
•The workspace directory is unique to each job and can be accessed during the build
process.
• Logs Directory:
•The logs directory contains log files generated by Jenkins, providing information about
build execution, errors, and other events.
• Logs are essential for troubleshooting and monitoring Jenkins.
• Temporary Directory:
•Jenkins utilizes a temporary directory for storing temporary files
during the build process.
•This directory is typically cleared periodically or after each build to
reclaim disk space.
• Plugins Directory:
•The plugins directory stores the plugin files used by Jenkins.
•Each plugin is contained within a separate subdirectory, and
Jenkins manages the installation, updating, and removal of plugins
within this directory.
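A typical layout of the directories described above looks roughly like this; details vary by Jenkins version and configuration, so treat it as a sketch:

```
$JENKINS_HOME/
├── config.xml        # global configuration
├── plugins/          # installed plugin files
├── jobs/
│   └── my-job/
│       ├── config.xml    # job configuration
│       └── builds/       # per-build records and logs
├── workspace/        # per-job checkout and build areas
└── logs/             # server and task logs
```

Knowing this layout is what makes backup, migration, and troubleshooting of a Jenkins installation straightforward.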
• Understanding the Jenkins plugin ecosystem and the file system
layout is essential for effective plugin management, configuring job
workflows, and accessing the necessary data and logs for
troubleshooting and monitoring purposes.



4.5 The host server
• The build server is usually a pretty important machine for the
organization. Building software is processor as well as
memory and disk intensive. Builds shouldn't take too long, so
you will need a server with good specifications for the build
server—with lots of disk space, processor cores, and RAM.
The build server also has a kind of social aspect: it is here that
the code of many different people and roles integrates
properly for the first time. This aspect grows in importance if
the servers are fast enough. Machines are cheaper than
people, so don't let this particular machine be the area you
save money on.



• The build server is a crucial machine for any organization. Its
main purpose is to build software, which requires a lot of
processing power, memory, and disk space. It's important for
builds to complete quickly, so the build server should have
high-performance specifications, including a fast processor,
ample RAM, and a large storage capacity.
• The build server also plays a social role in the organization.
It's where the code from different people and roles comes
together for the first time and integrates properly. This aspect
becomes more significant when the server is fast enough to
handle the workload efficiently. It's worth noting that investing
in machines is generally more cost-effective than hiring
additional people, so it's not an area where you should try to
save money.
4.5.1 Build slaves
• To reduce build queues and improve efficiency, you can add build slaves to your setup.
The build slaves work alongside the master server and handle specific builds assigned to
them.
• One reason for using build slaves is that certain builds have specific requirements for the
operating system they run on. By assigning builds to appropriate build slaves, you ensure
that each build is executed on the appropriate host operating system.
• Build slaves also help in parallel builds, where multiple builds can be processed
simultaneously, increasing efficiency. They can also be used for building software on
different operating systems. For example, you can have a Jenkins master server running
on Linux and assign Windows slaves for components that require Windows build tools.
Similarly, if you need to build software for Apple Mac, having a Mac build slave is useful
due to Apple's rules regarding virtual servers.
• To add build slaves to a Jenkins master, there are various methods available. One
common approach is using the SSH method, where the Jenkins master issues commands
to the build slave through a secure shell (SSH) connection. Jenkins has a built-in SSH
facility to support this. Another method is starting a Jenkins slave by downloading a Java
Network Launch Protocol (JNLP) client from the master to the slave. This method is useful
when the build slave doesn't have an SSH server installed.
• By adding build slaves to your Jenkins setup, you can distribute the workload, reduce
build queues, and improve the overall efficiency of your software building process.
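In a declarative Jenkinsfile, builds can be pinned to labeled slaves (agents) as described above. A sketch, where the labels 'linux' and 'windows' and the build commands are assumptions about how the nodes were configured:

```groovy
// Sketch: per-stage agent selection by label.
pipeline {
  agent none
  stages {
    stage('Linux build') {
      agent { label 'linux' }
      steps { sh 'make all' }
    }
    stage('Windows build') {
      agent { label 'windows' }
      steps { bat 'msbuild app.sln' }   // Windows-only build tools
    }
  }
}
```

This is the mechanism that lets one master coordinate builds across different host operating systems.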
• Jenkins Master
• Your main Jenkins server is the Master. The Master’s job is to handle:
 Scheduling build jobs.
 Dispatching builds to the slaves for the actual execution.
 Monitoring the slaves (possibly taking them online and offline as required).
 Recording and presenting the build results.
 A Master instance of Jenkins can also execute build jobs directly.
• Jenkins Slave
• A Slave is a Java executable that runs on a remote machine. Following are the characteristics of Jenkins Slaves:
 It listens for requests from the Jenkins Master instance.
 Slaves can run on a variety of operating systems.
 The job of a Slave is to do as it is told, which involves executing build jobs dispatched by the Master.
 You can configure a project to always run on a particular Slave machine or a particular type of Slave machine, or simply let Jenkins pick the
next available Slave.
• The diagram below is self-explanatory. It consists of a Jenkins Master which is managing three Jenkins Slaves.



• To host a server in Jenkins, you'll need to follow these steps:
•Install Jenkins: You can install Jenkins on a server by downloading the Jenkins WAR file,
deploying it to a servlet container such as Apache Tomcat, and starting the server.
•Configure Jenkins: Once Jenkins is up and running, you can access its web interface to configure
and manage the build environment. You can install plugins, set up security, and configure build jobs.
•Create a Build Job: To build your project, you'll need to create a build job in Jenkins. This will
define the steps involved in building your project, such as checking out the code from version
control, compiling the code, running tests, and packaging the application.
•Schedule Builds: You can configure your build job to run automatically at a specific time or when
certain conditions are met. You can also trigger builds manually from the web interface.
•Monitor Builds: Jenkins provides a variety of tools for monitoring builds, such as build history,
build console output, and build artifacts. You can use these tools to keep track of the status of your
builds and to diagnose problems when they occur.
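The first of the steps above can be outlined as shell commands. This is not runnable as-is (it assumes Java and a downloaded jenkins.war on the host), so treat it as a sketch:

```shell
# Outline: run Jenkins standalone (or deploy the WAR into Tomcat instead).
java -jar jenkins.war --httpPort=8080

# First-run unlock: the initial admin password is stored here
# (JENKINS_HOME defaults to ~/.jenkins for a standalone run).
cat ~/.jenkins/secrets/initialAdminPassword

# Remaining configuration happens in the web UI at http://localhost:8080
```

The remaining steps (plugins, security, build jobs, schedules) are then performed through the web interface as described.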



4.6 Software on the host
• Hosted software is managed by the software manufacturer or a third-party vendor, and
users can access it globally through the Internet.
• Depending on the complexity of your builds, you might need to
install many different types of build tool on your build server.
Remember that Jenkins is mostly used to trigger builds, not
perform the builds themselves. That job is delegated to the build
system used, such as Maven or Make.
• In my experience, it's most convenient to have a Linux-based host
operating system. Most of the build systems are available in the
distribution repositories, so it's very convenient to install them
from there.
• To keep your build server up to date, you can use the same
deployment servers that you use to keep your application servers
up to date.
Software on the host
• Depending on how complex your builds are, you may need to install various
build tools on your build server. It's important to note that Jenkins primarily
triggers builds and delegates the actual building process to specific build
systems like Maven or Make.
• In my experience, using a Linux-based host operating system for the build
server is the most convenient option. This is because most build systems
are readily available in the distribution repositories, making it easy to install
them from there.
• To ensure that your build server stays updated, you can utilize the same
deployment servers that you use to keep your application servers up to
date. This way, you can maintain consistency across your infrastructure and
ensure that both your build server and application servers are regularly
updated.
• By having the necessary build tools installed on your build server and
keeping it up to date, you can ensure a smooth and efficient building
process for your software projects.
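The distribution-repository approach above can be outlined for a Debian/Ubuntu-based build server; package names vary by distribution, so this is a sketch rather than a recipe:

```shell
# Outline: install common build tools from the distribution repositories
# (requires root and network access).
sudo apt-get update
sudo apt-get install -y maven gradle make build-essential
```

Because the same package manager keeps these tools updated, the build server can be maintained with the same deployment tooling as the application servers.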
4.7 Triggers
• You can either use a timer to trigger builds, or you can poll the code
repository for changes and build if there were changes.
• It can be useful to use both methods at the same time:
• Git repository polling can be used most of the time so that every
check-in triggers a build.
• Nightly builds can be triggered, which are more stringent than
continuous builds, and thus take a longer time. Since these builds
happen at night when nobody is supposed to work, it doesn't matter if
they are slow.
• An upstream build can trigger a downstream build.
• You can also let the successful build of one job trigger another job.
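Both trigger styles above can be combined in one declarative Jenkinsfile; the cron expressions are examples (Jenkins's 'H' token spreads start times to avoid load spikes):

```groovy
// Sketch: SCM polling plus a nightly timer in one pipeline.
pipeline {
  agent any
  triggers {
    pollSCM('H/5 * * * *')   // poll the repository roughly every 5 minutes
    cron('H 2 * * *')        // nightly build around 02:00
  }
  stages {
    stage('Build') { steps { sh 'mvn clean install' } }
  }
}
```

The polling trigger gives fast feedback on every check-in, while the timer handles the slower nightly run.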
• Triggers in build systems offer different methods for initiating builds based on various
events or conditions. Two common trigger methods are timer-based triggering and code
repository polling:
1.Timer-Based Trigger: This method involves setting a specific time or interval to trigger
builds automatically. For example, you can schedule builds to run every hour, every day,
or at a specific time. Timer-based triggers are useful for regular or scheduled builds.
2.Code Repository Polling: With this method, the build system periodically checks the code
repository for changes. If there are new commits or updates, a build is triggered. Code
repository polling is commonly used for continuous integration (CI) to ensure that builds
are initiated whenever changes are pushed to the repository.
• Using a combination of these trigger methods can provide more flexibility and control over
the build process. Here are a few scenarios where different triggers can be beneficial:
• Git Repository Polling: This method is ideal for most situations, as it triggers a build for
every code check-in. It ensures that each change undergoes an automated build,
facilitating early detection of integration issues.
• Nightly Builds: Nightly builds are more comprehensive and time-consuming, typically
running during off-peak hours when development activity is low. These builds can be
triggered by a timer-based trigger, as their slower execution time won't affect daily
development work.
• Upstream and Downstream Builds: In complex build processes involving
multiple components or modules, an upstream build's success can trigger
downstream builds. This ensures that dependent components are built and
tested whenever changes are made in the upstream component.
• Job Chaining: Successful completion of one job can be configured to trigger
another job. This can be useful when multiple sequential steps or tasks are
involved in the build process, allowing for a streamlined workflow.
• By utilizing a combination of trigger methods, build systems can be tailored
to specific needs, ensuring efficient and timely execution of builds based on
different events or conditions. This flexibility helps automate the build
process and maintain a reliable and consistent software development
pipeline.



• Triggers in the context of software development and build systems refer to events or
conditions that initiate the execution of a build process. Triggers can be configured in build
systems like Jenkins to automatically start a build when certain events occur or criteria are
met. Here are a few common types of triggers:
1. Manual Trigger: This is the simplest form of trigger, where a build is initiated manually by a
user or developer. It requires manual intervention to start the build process.
2. Scheduled Trigger: Builds can be scheduled to run at specific times or intervals, such as
daily, weekly, or at a specific time of day. Scheduled triggers are useful for automating
regular builds or performing tasks at specific times.
3. Source Code Change Trigger: This trigger starts a build when changes are detected in the
source code repository. It can be configured to monitor specific branches, directories, or
files for changes and automatically initiate a build when updates are made.
4. Continuous Integration (CI) Trigger: In a CI workflow, builds are triggered whenever
changes are pushed to the source code repository. This ensures that each code change is
validated through an automated build process, helping to identify and resolve integration
issues early.
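• As a sketch of the polling idea (this is illustrative Python, not Jenkins' actual implementation; the commit ids are invented), a source-code-change trigger only fires when the repository head has moved since the last poll:

```python
# Sketch of a source-code-change (polling) trigger: a build fires
# only when the latest commit id differs from the one observed on
# the previous poll. Commit ids here are invented examples.
class PollingTrigger:
    def __init__(self):
        self.last_seen = None  # commit id observed on the last poll

    def poll(self, current_commit):
        """Return True (and remember the commit) if a build should start."""
        if current_commit != self.last_seen:
            self.last_seen = current_commit
            return True
        return False

trigger = PollingTrigger()
print(trigger.poll("abc123"))  # True  - first poll sees a new commit
print(trigger.poll("abc123"))  # False - nothing changed since last poll
print(trigger.poll("def456"))  # True  - a new commit was pushed
```

A scheduled (timer-based) trigger would instead fire on the clock, regardless of whether the repository changed.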
• By utilizing triggers effectively, build systems can automate the build process, ensuring
timely and consistent execution of builds in response to various events or conditions.
4.8 Job chaining and build pipelines
• Job chaining and build pipelines are useful concepts in build systems like Jenkins. Job chaining
allows for the sequential execution of multiple jobs, where the successful completion of one job
triggers the next job in the chain. This creates a logical flow of tasks, ensuring that dependencies
between jobs are met.
• Build pipelines, on the other hand, provide a visual representation of the entire software delivery
process. They allow for the creation of complex workflows with multiple stages, each represented
by a distinct job. Build pipelines offer a graphical view of the flow, making it easy to track the
progress of builds through different stages.
• In Jenkins, the first job in a chain is called the upstream build, while the second one is referred to
as the downstream build. This basic job chaining approach is often sufficient for many purposes.
• However, there are cases where a more advanced and controlled build chain is required. This is
where pipelines or workflows come into play. Pipelines provide a more sophisticated visualization
of the build steps and offer greater control over the details of the chain.
• Jenkins provides various plugins to create improved pipelines, with the multijob plugin and the
workflow plugin being two examples. The workflow plugin, promoted by CloudBees, is more
advanced and allows the pipeline to be described using a Groovy Domain Specific Language
(DSL), providing more flexibility than working solely with the web user interface.
• Overall, job chaining and build pipelines enhance the organization, visualization, and control of
build processes. They improve the management of complex workflows and contribute to efficient
software delivery.
Job chaining and build pipelines
• Job chaining
• Job chaining in Jenkins refers to the process of linking multiple build jobs together in a sequence. When one job
completes, the next job in the sequence is automatically triggered. This allows you to create a pipeline of builds that are
dependent on each other, so you can automate the entire build process.
• There are several ways to chain jobs in Jenkins:
• Build Trigger: You can use the build trigger in Jenkins to start one job after another. This is done by configuring the
upstream job to trigger the downstream job when it completes.
• Jenkinsfile: If you are using Jenkins Pipeline, you can write a Jenkinsfile to define the steps in your build pipeline. The
Jenkinsfile can contain multiple stages, each of which represents a separate build job in the pipeline.
• JobDSL plugin: The JobDSL plugin allows you to programmatically create and manage Jenkins jobs. You can use this
plugin to create a series of jobs that are linked together and run in sequence.
• Multi-Job plugin: The Multi-Job plugin allows you to create a single job that runs multiple build steps, each of which can
be a separate build job. This plugin is useful if you have a build pipeline that requires multiple build jobs to be run in
parallel.
• By chaining jobs in Jenkins, you can automate the entire build process and ensure that each step is completed before the
next step is started. This can help to improve the efficiency and reliability of your build process, and allow you to quickly
and easily make changes to your build pipeline.
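• The upstream/downstream idea above can be sketched in Python (the job names and run functions are invented stand-ins for real Jenkins jobs, not actual Jenkins APIs):

```python
# Minimal sketch of job chaining: each downstream job runs only if
# the previous (upstream) job in the chain succeeded.
def run_chain(jobs):
    """Run jobs in order; stop at the first failure.

    `jobs` is a list of (name, callable) pairs where the callable
    returns True on success. Returns the names of the jobs that ran.
    """
    executed = []
    for name, job in jobs:
        executed.append(name)
        if not job():
            break  # downstream jobs are not triggered after a failure
    return executed

chain = [
    ("compile", lambda: True),
    ("unit-test", lambda: False),  # simulate a failing test job
    ("deploy", lambda: True),
]
print(run_chain(chain))  # ['compile', 'unit-test'] - deploy never runs
```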
• Build Pipelines:
• A build pipeline in DevOps is a set of automated processes that compile, build, and test software, and prepare it for
deployment. A build pipeline represents the end-to-end flow of code changes from development to production.
• Build pipelines extend the concept of job chaining by providing a visual representation of the entire
software delivery process.
• A build pipeline allows for the creation of complex workflows with multiple stages, each
represented by a distinct job.
• The pipeline provides a graphical representation of the flow, including the relationships and
dependencies between different stages or jobs.
• In a build pipeline, each stage represents a specific task or set of tasks, such as building, testing,
deploying, and releasing software.
• Jobs within each stage are executed in parallel or sequentially, depending on the defined
dependencies and requirements. The pipeline enables tracking the progress of builds through
different stages, providing visibility into the software delivery process.
• Build pipelines offer benefits like visual clarity, easy monitoring, and the ability to track the status of
each stage or job. They also facilitate the management of complex workflows involving multiple
teams, environments, and deployment targets.
• Overall, job chaining and build pipelines help automate and streamline the software delivery
process, ensuring that tasks are executed in the desired sequence with proper dependencies and
visibility. They enhance collaboration, improve efficiency, and provide traceability in the build and
release management workflows.
• The steps involved in a typical build pipeline include:
• Code Commit: Developers commit code changes to a version control system such as Git.
• Build and Compile: The code is built and compiled, and any necessary dependencies are resolved.
• Unit Testing: Automated unit tests are run to validate the code changes.
• Integration Testing: Automated integration tests are run to validate that the code integrates correctly with other parts of the
system.
• Staging: The code is deployed to a staging environment for further testing and validation.
• Release: If the code passes all tests, it is deployed to the production environment.
• Monitoring: The deployed code is monitored for performance and stability.
• A build pipeline can be managed using a continuous integration tool such as Jenkins, TravisCI, or CircleCI. These tools
automate the build process, allowing you to quickly and easily make changes to the pipeline, and ensuring that the pipeline
is consistent and reliable.
• In DevOps, the build pipeline is a critical component of the continuous delivery process, and is used to ensure that code
changes are tested, validated, and deployed to production as quickly and efficiently as possible. By automating the build
pipeline, you can reduce the time and effort required to deploy code changes, and improve the speed and quality of your
software delivery process.
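• The pipeline steps listed above can be sketched as a minimal fail-fast runner (a toy Python model, not a real CI tool; the stage names follow the list above):

```python
# Toy build pipeline: execute the stages in order and stop as soon
# as one stage fails, so later stages (e.g. release) never run.
STAGES = ["commit", "build", "unit-test", "integration-test",
          "staging", "release", "monitor"]

def run_pipeline(stage_results):
    """stage_results maps stage name -> bool (did the stage pass?).
    Missing stages are assumed to pass. Returns (completed, succeeded)."""
    completed = []
    for stage in STAGES:
        completed.append(stage)
        if not stage_results.get(stage, True):
            return completed, False  # fail fast: skip remaining stages
    return completed, True

# A run where integration testing fails: release never happens.
done, ok = run_pipeline({"integration-test": False})
print(done, ok)
```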
• 4.9 Build servers and infrastructure as code
When you're developing and deploying software, one of the first things to figure out is how to take your code
and deploy your working application to a production environment where people can interact with your
software.
• Most development teams understand the importance of version control to coordinate code commits, and
build servers to compile and package their software, but Continuous Integration (CI) is a big topic.
• Why build servers are important
• Build servers have 3 main purposes:
● Compiling committed code from your repository many times a day
● Running automatic tests to validate code
● Creating deployable packages and handing off to a deployment tool, like Octopus Deploy
• Without a build server you're slowed down by complicated, manual processes and the needless time
constraints they introduce. For example, without a build server:
● Your team will likely need to commit code before a daily deadline or during change windows
● After that deadline passes, no one can commit again until someone manually creates and tests a build
● If there are problems with the code, the deadlines and manual processes further delay the fixes
• Building servers in DevOps involves several steps:
• Requirements gathering: Determine the requirements for the server, such as hardware specifications, operating system,
and software components needed.
• Server provisioning: Choose a method for provisioning the server, such as physical installation, virtualization, or cloud
computing.
• Operating System installation: Install the chosen operating system on the server.
• Software configuration: Install and configure the necessary software components, such as web servers, databases, and
middleware.
• Network configuration: Set up network connectivity, such as IP addresses, hostnames, and firewall rules.
• Security configuration: Configure security measures, such as user authentication, access control, and encryption.
• Monitoring and maintenance: Implement monitoring and maintenance processes, such as logging, backup, and disaster
recovery.
• Deployment: Deploy the application to the server and test it to ensure it is functioning as expected.
• Throughout the process, it is important to automate as much as possible using tools such as Ansible, Chef, or Puppet to
ensure consistency and efficiency in building servers.
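• The automation the last point describes can be sketched as a re-run-safe checklist in Python (the step names are invented; real tools like Ansible, Chef, or Puppet work declaratively, but the safe-to-repeat idea is the same):

```python
# Sketch of applying the provisioning steps above as a checklist:
# each step is applied only if not already done, so re-running the
# script against the same server performs no duplicate work.
def provision(server, steps):
    """Apply each named step once; `server` is a set of completed steps.
    Returns the list of steps actually applied on this run."""
    applied = []
    for step in steps:
        if step not in server:      # skip work that is already done
            server.add(step)
            applied.append(step)
    return applied

server = {"os-install"}             # the OS is already installed
steps = ["os-install", "software-config", "network-config",
         "security-config", "monitoring", "deploy-app"]
print(provision(server, steps))     # everything except 'os-install'
print(provision(server, steps))     # [] - a second run is a no-op
```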
• Infrastructure as code
• Infrastructure as code (IaC) uses DevOps methodology and versioning with a descriptive model to define and deploy
infrastructure, such as networks, virtual machines, load balancers, and connection topologies. Just as the same source code
always generates the same binary, an IaC model generates the same environment every time it deploys.
• IaC is a key DevOps practice and a component of continuous delivery. With IaC, DevOps teams can work together with a
unified set of practices and tools to deliver applications and their supporting infrastructure rapidly and reliably at scale.
• IaC evolved to solve the problem of environment drift in release pipelines. Without IaC, teams must maintain deployment
environment settings individually. Over time, each environment becomes a "snowflake," a unique configuration that can't be
reproduced automatically. Inconsistency among environments can cause deployment issues. Infrastructure administration and
maintenance involve manual processes that are error prone and hard to track.
• IaC avoids manual configuration and enforces consistency by representing desired environment states via well-documented
code in formats such as JSON. Infrastructure deployments with IaC are repeatable and prevent runtime issues caused by
configuration drift or missing dependencies. Release pipelines execute the environment descriptions and version
configuration models to configure target environments. To make changes, the team edits the source, not the target.
• Idempotence, the ability of a given operation to always produce the same result, is an important IaC principle. A
deployment command always sets the target environment into the same configuration, regardless of the environment's
starting state. Idempotency is achieved by either automatically configuring the existing target, or by discarding the existing
target and recreating a fresh environment.
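• A minimal Python sketch of idempotence (the desired state and its keys are invented for illustration): converging two servers that start in different states leaves both in the identical end state, matching the "discard and recreate" strategy described above:

```python
# Idempotence sketch: the result of applying the desired state
# depends only on the desired state, never on the starting point.
DESIRED = {"nginx": "1.24", "firewall": "enabled", "tls": "on"}

def converge(current, desired=DESIRED):
    """Discard drifted settings and apply every desired key, so the
    same operation always produces the same configuration."""
    return dict(desired)

fresh_server = {}
drifted_server = {"nginx": "1.18", "debug": "on"}  # a "snowflake"
print(converge(fresh_server) == converge(drifted_server))  # True
```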
• IaC can be achieved by using tools such as Terraform, CloudFormation, or Ansible to define infrastructure components in
a file that can be versioned, tested, and deployed in a consistent and automated manner.
• Benefits of IaC include:
• Speed: IaC enables quick and efficient provisioning and deployment of infrastructure.
• Consistency: By using code to define and manage infrastructure, it is easier to ensure
consistency across multiple environments.
• Repeatability: IaC allows for easy replication of infrastructure components in different
environments, such as development, testing, and production.
• Scalability: IaC makes it easier to scale infrastructure as needed by simply modifying the
code.
• Version control: Infrastructure components can be versioned, allowing for rollback to
previous versions if necessary.
• Overall, IaC is a key component of modern DevOps practices, enabling organizations to
manage their infrastructure in a more efficient, reliable, and scalable way.
Building by dependency order
• Many build tools have the concept of a build tree where dependencies are
built in the order required for the build to complete, since parts of the build
might depend on other parts.
• In Make-like tools, this is described explicitly; for instance, like this:

a.out : b.o c.o
b.o : b.c
c.o : c.c

• So, in order to build a.out, b.o and c.o must be built first.
• In tools such as Maven, the build graph is derived from the dependencies we
set for an artifact. Gradle, another Java build tool, also creates a build graph
before building.
• Jenkins has support for visualizing the build order for Maven builds, which
is called the reactor in Maven parlance, in the web user interface.
• This view is not available for Make-style builds, however.
• Many build tools, like Make, Maven, and Gradle, employ a build tree concept to
handle dependencies between different parts of a build. The build tree ensures
that the necessary components are built in the required order for the entire build to
successfully complete.
• In Make-like tools, such as Make itself, dependencies are explicitly described in
the build script. For example, in the code snippet provided:
a.out : b.o c.o
b.o : b.c
c.o : c.c
• To build a.out, b.o and c.o must be built first, as indicated by the
dependencies specified.
• In tools like Maven and Gradle, the build graph is derived from the declared
dependencies of artifacts. These tools analyze the dependencies specified in the
build configuration and automatically construct a build graph that represents the
correct order for building the components.
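• The Make rules above form a dependency graph, and a build tool can derive a valid build order from it with a topological sort. A minimal Python sketch (assuming the graph is acyclic; no cycle detection is attempted):

```python
# Derive a build order from a dependency graph: every prerequisite
# is placed before the target that needs it, mirroring how Make or
# Maven's reactor orders the build.
def build_order(deps):
    """deps maps target -> list of prerequisites.
    Returns all targets ordered so prerequisites come first."""
    order, seen = [], set()

    def visit(target):
        if target in seen:
            return
        seen.add(target)
        for prerequisite in deps.get(target, []):
            visit(prerequisite)   # build what the target needs first
        order.append(target)

    for target in deps:
        visit(target)
    return order

deps = {"a.out": ["b.o", "c.o"], "b.o": ["b.c"], "c.o": ["c.c"]}
print(build_order(deps))  # e.g. ['b.c', 'b.o', 'c.c', 'c.o', 'a.out']
```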
• Building by dependency order
• Building by dependency order in DevOps is the process of ensuring that the components of a system are built and deployed
in the correct sequence, based on their dependencies. This is necessary to ensure that the system functions as intended, and
that components are deployed in the right order so that they can interact correctly with each other.
• The steps involved in building by dependency order in DevOps include:
• Define dependencies: Identify all the components of the system and the dependencies between them. This can be
represented in a diagram or as a list.
• Determine the build order: Based on the dependencies, determine the correct order in which components should be built
and deployed.
• Automate the build process: Use tools such as Jenkins, TravisCI, or CircleCI to automate the build and deployment
process. This allows for consistency and repeatability in the build process.
• Monitor progress: Monitor the progress of the build and deployment process to ensure that components are deployed in
the correct order and that the system is functioning as expected.
• Test and validate: Test the system after deployment to ensure that all components are functioning as intended and that
dependencies are resolved correctly.
• Rollback: If necessary, have a rollback plan in place to revert to a previous version of the system if the build or
deployment process fails.
• In conclusion, building by dependency order in DevOps is a critical step in ensuring the success of a system deployment, as
it ensures that components are deployed in the correct order and that dependencies are resolved correctly. This results in a
more stable, reliable, and consistent system.
• Jenkins, being a versatile build system, offers support for
visualizing the build order of Maven builds in its web user
interface. This visualization, referred to as the "reactor" in
Maven terminology, helps developers understand the order
in which Maven modules or artifacts will be built within the
build tree.
• However, it's important to note that Jenkins does not provide
the same built-in visualization for Make-style builds. The
focus on visualizing the build order is primarily tailored
towards Maven builds, where the reactor structure plays a
significant role.
Build phases
• One of the principal benefits of the Maven build tool is that it standardizes
builds.
• This is very useful for a large organization, since it won't need to invent its
own build standards. Other build tools are usually much more lax regarding
how to implement various build phases. The rigidity of Maven has its pros
and cons. Sometimes, people who get started with Maven reminisce about
the freedom that could be had with tools such as Ant.
• You can implement these build phases with any tool, but it's harder to keep the habit
going when the tool itself doesn't enforce the standard order: building,
testing, and deploying.
• We will examine testing in more detail in a later chapter, but we should note here that the
testing phase is very important. The Continuous Integration server needs to be very good at
catching errors, and automated testing is very important for achieving that goal.
• One of the main advantages of the Maven build tool is its ability to
standardize builds. This standardization is particularly valuable for large
organizations as they don't have to create their own build conventions. In
contrast, other build tools often provide more flexibility in implementing
different build phases. The strictness of Maven has both its benefits and
drawbacks. Some developers, who are accustomed to more flexible tools
like Ant, may miss the freedom they had.
• Although it's possible to implement build phases with any build tool, it can
be challenging to maintain consistent practices without a tool that enforces a
standardized order, such as building, testing, and deploying.
• It's important to highlight the significance of the testing phase in the build
process. Effective error detection is crucial for a Continuous Integration
server, and automated testing plays a vital role in achieving that objective.
• In summary, Maven's ability to standardize builds offers advantages for
organizations by eliminating the need to define their own build conventions.
However, it may feel restrictive compared to more flexible tools. Regardless
of the build tool used, the testing phase holds great importance in ensuring
error-free software delivery, especially in the context of Continuous Integration.
• Build phases
• In DevOps, there are several phases in the build process, including:
• Planning: Define the project requirements, identify the dependencies, and create a build plan.
• Code development: Write the code and implement features, fixing bugs along the way.
• Continuous Integration (CI): Automatically build and test the code as it is committed to a version control system.
• Continuous Delivery (CD): Automatically deploy code changes to a testing environment, where they can be tested and validated.
• Deployment: Deploy the code changes to a production environment, after they have passed testing in a pre-production environment.
• Monitoring: Continuously monitor the system to ensure that it is functioning as expected, and to detect and resolve any issues that may
arise.
• Maintenance: Continuously maintain and update the system, fixing bugs, adding new features, and ensuring its stability.
• These phases help to ensure that the build process is efficient, reliable, and consistent, and that code changes are validated and deployed in
a controlled manner. Automation is a key aspect of DevOps, and it helps to make these phases more efficient and less prone to human error.
• In continuous integration (CI), this is where we build the application for the first time. The build stage is the first stretch of a CI/CD
pipeline, and it automates steps like downloading dependencies, installing tools, and compiling.
• Besides building code, build automation includes using tools to check that the code is safe and follows best practices. The build stage
usually ends in the artifact generation step, where we create a production-ready package. Once this is done, the testing stage can begin.
The build stage starts from code commit and runs from the beginning up to the test stage.
Build automation verifies that the application, at a given code commit, can qualify for further
testing. We can divide it into four parts:

1. Compilation: the first step builds the application.
2. Linting: checks the code for programmatic and stylistic errors.
3. Code analysis: using automated source-checking tools, we control the code's quality.
4. Artifact generation: the last step packages the application for release or deployment.
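• These four parts can be sketched as a fail-fast sequence in Python (the check functions are toy placeholders for a real compiler, linter, and static-analysis tool):

```python
# Sketch of the build stage: run compilation, linting, and code
# analysis in order; only if all of them pass does the final
# artifact-generation step produce a deployable package.
def build_stage(source, checks):
    """Run each named check in order; return (artifact, log).
    A None artifact means some check failed and packaging was skipped."""
    log = []
    for name, check in checks:
        ok = check(source)
        log.append((name, ok))
        if not ok:
            return None, log        # stop before artifact generation
    return f"{source}.pkg", log     # artifact generation

checks = [
    ("compile", lambda src: True),
    ("lint", lambda src: "TODO" not in src),   # toy lint rule
    ("analysis", lambda src: len(src) > 0),
]
artifact, log = build_stage("app-source", checks)
print(artifact)  # app-source.pkg
```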
Alternative build servers
• There are several alternative build servers in DevOps, including:
• Jenkins - an open-source, Java-based automation server that supports various plugins and integrations.
• Travis CI - a cloud-based, open-source CI/CD platform that integrates with Github.
• CircleCI - a cloud-based, continuous integration and delivery platform that supports multiple languages and integrates
with several platforms.
• GitLab CI/CD - an integrated CI/CD solution within GitLab that allows for complete project and pipeline
management.
• Bitbucket Pipelines - a CI/CD solution within Bitbucket that allows for pipeline creation and management within the
code repository.
• AWS CodeBuild - a fully managed build service that compiles source code, runs tests, and produces software packages
that are ready to deploy.
• Azure Pipelines - a CI/CD solution within Microsoft Azure that supports multiple platforms and programming
languages.
Alternative build servers
• While Jenkins appears to be pretty dominant in the build server scene in
my experience, it is by no means alone. Travis CI is a hosted solution that is
popular among open source projects. Buildbot is a build server that is
written in, and configurable with, Python. The Go server is another one,
from ThoughtWorks. Bamboo is an offering from Atlassian. GitLab also
supports build server functionality now.
• Do shop around before deciding on which build server works best for you.
• When evaluating different solutions, be aware of attempts at vendor lock-in.
Also keep in mind that the build server does not in any way replace the
need for builds that are well behaved locally on a developer's machine.
• Also, as a common rule of thumb, see if the tool is configurable via
configuration files. While management tends to be impressed by graphical
configuration, developers and operations personnel rarely like being
forced to use a tool that can only be configured via a graphical user
interface.
• While Jenkins is widely used as a build server, it is not the only option available.
Other popular build server solutions include Travis CI, which is commonly used
for open-source projects, Buildbot, written in Python and highly configurable, the
Go server from ThoughtWorks, Bamboo by Atlassian, and GitLab, which now
supports build server functionality.
• When choosing a build server, it's important to explore different options and
compare them. Avoid vendor lock-in and consider solutions that offer flexibility
and configurability. Remember that the build server should complement, not
replace, the need for well-behaved builds on developers' machines.
• As a general rule, it's beneficial to choose a build server that allows configuration
via configuration files rather than relying solely on a graphical user interface.
While management might appreciate the convenience of a GUI, developers and
operations personnel prefer the flexibility of configuring tools through text-based
configuration files.
• In summary, while Jenkins is a dominant player in the build server scene, there
are other popular options available. It's crucial to evaluate different solutions, be
wary of vendor lock-in, and consider configurability and flexibility when making a
decision. Additionally, prioritizing configuration via files rather than solely relying
on a graphical interface can be advantageous for developers and operations
personnel.
Collating quality measures
• A useful thing that a build server can do is the collation of software
quality metrics. Jenkins has some support for this out of the box. Java unit
tests are executed and can be visualized directly on the job page.
• Another more advanced option is using the Sonar code quality visualizer,
which is shown in the following screenshot. Sonar tests are run during
the build phase and propagated to the Sonar server, where they are
stored and visualized.
• A Sonar server can be a great way for a development team to see the
fruits of their efforts at improving the code base.
• The drawback of implementing a Sonar server is that it sometimes slows
down the builds. The recommendation is to perform the Sonar builds in
your nightly builds, once a day.
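• The recommendation above can be sketched as a simple policy function in Python (the check names are illustrative): fast checks run on every commit build, while the slower Sonar analysis is deferred to the nightly build so daily feedback stays quick:

```python
# Policy sketch: which checks run for which kind of build.
FAST_CHECKS = ["compile", "unit-tests"]
SLOW_CHECKS = ["sonar-analysis"]   # heavyweight, slows the build down

def checks_for(build_type):
    """Nightly builds run everything; commit builds skip slow steps."""
    if build_type == "nightly":
        return FAST_CHECKS + SLOW_CHECKS
    return FAST_CHECKS

print(checks_for("commit"))   # ['compile', 'unit-tests']
print(checks_for("nightly"))  # ['compile', 'unit-tests', 'sonar-analysis']
```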
• A build server can provide valuable features for collating software quality metrics.
Jenkins offers some built-in support for this functionality. It can execute Java unit
tests and display the results directly on the job page, allowing developers to
visualize the test outcomes.
• For more advanced quality measurement, Jenkins integrates with Sonar, a code
quality visualizer. During the build phase, Sonar tests are executed and the results
are sent to a Sonar server, where they are stored and presented in a visual
format. Having a Sonar server can be beneficial for development teams to assess
the progress they make in enhancing the codebase.
• It's important to note that implementing a Sonar server may introduce some build
performance impact. To mitigate this, it is recommended to run Sonar builds
during nightly builds, once a day. This way, the builds are not affected by the
potential slowdown caused by Sonar analysis.
• In summary, a build server like Jenkins can collate software quality metrics. It
provides options for executing unit tests and visualizing results on the job page.
Additionally, integrating with a Sonar server enables more advanced code quality
analysis. However, it's crucial to consider the impact on build performance and
allocate Sonar analysis to nightly builds for better efficiency.
• Collating quality measures
• In DevOps, collating quality measures is an important part of the continuous improvement process. The following are
some common quality measures used in DevOps to evaluate the quality of software systems:
• Continuous Integration (CI) metrics - metrics that track the success rate of automated builds and tests, such as build
duration and test pass rate.
• Continuous Deployment (CD) metrics - metrics that track the success rate of deployments, such as deployment frequency
and time to deployment.
• Code review metrics - metrics that track the effectiveness of code reviews, such as review completion time and code
review feedback.
• Performance metrics - measures of system performance in production, such as response time and resource utilization.
• User experience metrics - measures of how users interact with the system, such as click-through rate and error rate.
• Security metrics - measures of the security of the system, such as the number of security vulnerabilities and the frequency
of security updates.
• Incident response metrics - metrics that track the effectiveness of incident response, such as mean time to resolution
(MTTR) and incident frequency.
• By regularly collating these quality measures, DevOps teams can identify areas for improvement, track progress over
time, and make informed decisions about the quality of their systems.
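• Two of the measures above, test pass rate and mean time to resolution (MTTR), can be sketched in Python (the record format is invented for illustration):

```python
# Collating quality measures from raw records: the fraction of
# builds whose tests passed, and the average hours from incident
# open to resolution.
def pass_rate(builds):
    """Fraction of builds whose tests passed."""
    passed = sum(1 for b in builds if b["tests_passed"])
    return passed / len(builds)

def mttr(incidents):
    """Mean time to resolution, in hours."""
    total = sum(i["resolved_h"] - i["opened_h"] for i in incidents)
    return total / len(incidents)

builds = [{"tests_passed": True}, {"tests_passed": True},
          {"tests_passed": False}, {"tests_passed": True}]
incidents = [{"opened_h": 0, "resolved_h": 4},
             {"opened_h": 10, "resolved_h": 12}]
print(pass_rate(builds))   # 0.75
print(mttr(incidents))     # 3.0
```

Tracking such numbers over time is what lets a team see whether its quality efforts are working.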