
DevOps Project 1

Install Puppet Server on Master Node


1. Download the latest Puppet version on the master node:

wget https://apt.puppetlabs.com/puppet6-release-focal.deb

2. Once the download is complete, install the package by using:

sudo dpkg -i puppet6-release-focal.deb

3. Update the package repository:

sudo apt-get update -y

4. Install the Puppet server with the following command:


sudo apt-get install puppetserver -y

5. Open the puppetserver file by using:

sudo nano /etc/default/puppetserver

6. In the puppetserver file, modify the following line to change the memory size to 1GB:

JAVA_ARGS="-Xms1g -Xmx1g
-Djruby.logger.class=com.puppetlabs.jruby_utils.jruby.Slf4jLogger
"

7. Press Ctrl + X to close the puppetserver file. Type Y and press Enter to save the changes you made.

8. Start the Puppet service and set it to launch on system boot by using:

sudo systemctl start puppetserver
sudo systemctl enable puppetserver

9. Check if the Puppet service is running with:

sudo systemctl status puppetserver

Step 4: Install Puppet Agent on Client Node


1. Download the latest version of Puppet on a client node:

wget https://apt.puppetlabs.com/puppet6-release-focal.deb

2. Once the download is complete, install the package by using:

sudo dpkg -i puppet6-release-focal.deb

3. Update the package repository one more time:

sudo apt-get update -y

4. Install the Puppet agent by using:

sudo apt-get install puppet-agent -y


5. Open the Puppet configuration file:

sudo nano /etc/puppetlabs/puppet/puppet.conf

6. Add the following lines to the end of the Puppet configuration file to define the Puppet
master information:

[main]
certname = puppetclient
server = puppetmaster

7. Press Ctrl + X to close the Puppet configuration file, then type Y and press Enter to
save the changes.

8. Start the Puppet service and set it to launch on system boot by using:

sudo systemctl start puppet
sudo systemctl enable puppet

9. Check if the Puppet service is running with:

sudo systemctl status puppet
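
Before the agent can apply a catalog, its certificate has to be signed on the master. The steps above stop short of that, so here is a minimal, hedged sketch of the usual handshake, assuming the hostnames puppetmaster and puppetclient from the configuration above resolve on both machines:

# On the client: request a certificate and test the connection
sudo /opt/puppetlabs/bin/puppet agent --test

# On the master: list pending requests and sign the client certificate
sudo /opt/puppetlabs/bin/puppetserver ca list
sudo /opt/puppetlabs/bin/puppetserver ca sign --certname puppetclient

# On the client: run the agent again; it should now receive and apply a catalog
sudo /opt/puppetlabs/bin/puppet agent --test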


The puppet.conf file is Puppet’s main config file. It configures all of the Puppet
commands and services, including Puppet agent, Puppet master, Puppet apply, and
Puppet cert. Nearly all of the settings listed in the configuration reference can be set in
puppet.conf.

It resembles a standard INI file, with a few syntax extensions. Settings can go into
application-specific sections, or into a [main] section that affects all applications.

The puppet.conf file is always located at $confdir/puppet.conf.

Although its location is configurable with the config setting, it can only be set on the
command line (e.g. puppet agent -t --config ./temporary_config.conf).
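
For example, Puppet can report where it is currently reading its configuration from; the paths shown in the comments are the typical defaults when running as root and may differ on your system:

puppet config print config
# /etc/puppetlabs/puppet/puppet.conf

puppet config print confdir
# /etc/puppetlabs/puppet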

Agent Config:

[main]
certname = agent01.example.com
server = puppet
environment = production
runinterval = 1h

Master Config:

[main]
certname = puppetmaster01.example.com
server = puppet
environment = production
runinterval = 1h
strict_variables = true

[master]
dns_alt_names = puppetmaster01,puppetmaster01.example.com,puppet,puppet.example.com
reports = puppetdb
storeconfigs_backend = puppetdb
storeconfigs = true
environment_timeout = unlimited

Format:

The puppet.conf file consists of one or more config sections, each of which can contain any number of settings.

The file can also include comment lines at any point.

Config sections
 

[main]

certname = puppetmaster01.example.com

A config section is a group of settings. It consists of:

 Its name, enclosed in square brackets. The [name] of the config section must be on its own line, with no leading space.
 Any number of setting lines, which can be indented for readability.
 Any number of empty lines or comment lines.

As soon as a new config section [name] appears in the file, the former config section is closed and the new one begins. A given config section should only occur once in the file.

Puppet uses four config sections:


 main is the global section used by all commands and services. It can be overridden by the other sections.
 master is used by the Puppet master service and the Puppet cert command.
 agent is used by the Puppet agent service.
 user is used by the Puppet apply command, as well as many of the less common Puppet subcommands.

Puppet prefers to use settings from one of the three application-specific sections (master, agent, or user). If it doesn’t find a setting in the application section, it will use the value from main. (If main doesn’t set one, it will fall back to the default value.)
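
As a small illustration of that precedence (the values here are hypothetical), with the following puppet.conf the Puppet agent service would use a runinterval of 30m, while puppet apply, which reads the user and main sections, would fall back to 1h:

[main]
runinterval = 1h

[agent]
runinterval = 30m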


The following is a guest post from Eric Goebelbecker. Eric has worked in the financial markets in New York City for 25 years, developing infrastructure for market data and financial information exchange (FIX) protocol networks. He loves to talk about what makes teams effective (or not so effective!).

Ansible is a powerful configuration management tool for deploying software and administering remote systems that you can integrate into any existing architecture. It relies on industry-standard security mechanisms and takes full advantage of existing operating system utilities.

Ansible uses no agents and works with your existing security infrastructure. It employs a simple language to describe your systems, how they relate, and the jobs you need to manage them. Also, it connects to nodes and pushes Ansible modules to them. These modules contain your configuration management tasks.


Ansible’s maintainers describe it as a radically simple IT automation engine. They’re
not wrong. In this post, we’ll take a look at why that’s an apt definition, and we’ll
discuss how you can use it for configuration management.

Ansible overview
Before we get into anything else, let’s take a moment to learn some basics about Ansible.

Installing Ansible
Since Ansible uses no agents, most of your preparation is done on the host
where you’ll run it to push configuration changes to the rest of your network. This
system is called the control node.

Since Ansible is written in Python, you may have two choices for how to install it.
If your control node is running Red Hat Enterprise Linux, CentOS, Fedora,
Debian, or Ubuntu, you can install the latest release version using the system’s
OS package manager.

Let’s take a look at installing Ansible on Ubuntu. First, you would add the
Ansible PPA, update apt, and then install the package:

$ sudo apt update
$ sudo apt install software-properties-common
$ sudo apt-add-repository --yes --update ppa:ansible/ansible
$ sudo apt install ansible

Red Hat, Fedora, and CentOS are a little easier since Ansible is already
available as a mainstream package.

$ sudo yum install ansible

If you want to use alpha or beta releases, or if you prefer working with Python,
you can install Ansible with pip, Python’s package manager. Ansible works with
Python 2.7 or 3.x, so it works with the Python version installed on Linux and
macOS. But support for Python 2.7 is deprecated, so if you’re setting up Ansible
for a production system, it’s a good idea to update to Python 3.
$ sudo pip install virtualenv
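
The command above only installs the virtualenv tool itself. A minimal sketch of actually installing Ansible with pip inside a virtual environment (the directory name ansible-env is just an example) could look like this:

$ virtualenv --python=python3 ansible-env
$ source ansible-env/bin/activate
(ansible-env) $ pip install ansible
(ansible-env) $ ansible --version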

With Ansible installed on your control node, you’re ready to add a system to your
configuration management inventory.

SSH and keys


Ansible’s default mechanism for distributing modules is SSH. So you can control
access to your managed nodes with SSH keys, Kerberos, or any other identity
management system that works with SSH. You can even use passwords, but
they’re less secure and can be unwieldy.

Keys are the easiest way to add support for a host. By adding the public key for
the Ansible user to authorized_keys on the target system, you’re ready to
manage it. If you want to read more, Digital Ocean has an excellent tutorial for
adding keys.
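
A minimal sketch of that key setup, assuming a managed node called node1.example.com and a remote user named ansible_user (both names are illustrative):

$ ssh-keygen -t ed25519 -C "ansible control node"
$ ssh-copy-id ansible_user@node1.example.com
$ ssh ansible_user@node1.example.com    # should now log in without a password prompt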

Ansible inventory and configuration


After you’ve set up a host to allow access from the control node, you can add it to
your Ansible inventory. To do that, you need to decide where your Ansible
configuration will go.

Ansible stores its configuration in a set of text files. The default location for these
files is /etc/ansible. To add configuration files there, you need to either create the
file as root or add write permission for an unprivileged user. Another option is that
you could override the location with a configuration file and place it in an area
that doesn’t require privileged access.

So, let’s imagine you want to store your configuration in the ansible_config directory in ansible_user’s home directory.

First, create a configuration file in your home directory:

$ touch ~/.ansible.cfg

Then, add a line for your inventory:


[defaults]
inventory = /home/ansible_user/ansible_config/hosts

Now you need to create the configuration directory:

$ mkdir ~/ansible_config

Finally, create a hosts file in the new directory to hold your inventory:

$ touch ~/ansible_config/hosts

Now you can add your managed nodes to the file:

127.0.0.1
mail.example.com

[webservers]
foo.example.com
bar.example.com

[dbservers]
one.example.com
two.example.com
three.example.com

This file declares seven managed nodes with their fully-qualified DNS names and localhost with the loopback IP address. You can use IP addresses or partial hostnames (assuming the control node can resolve them) too. The names in square brackets are groups, which you can use to manage sets of systems instead of listing each name. If you need more information, Ansible’s documentation covers inventories in more detail.

Modules
With a configured system, you can execute modules.

Let’s start with a ping:

$ ansible 127.0.0.1 -m ping

127.0.0.1 | SUCCESS => {
    "ansible_facts": {
        "discovered_interpreter_python": "/usr/bin/python"
    },
    "changed": false,
    "ping": "pong"
}

The first argument to ansible is the target host or groups of hosts. In addition to
the groups defined in your inventory, ansible also creates the all group. So you
could ping all hosts:

$ ansible all -m ping

Or your web servers:

$ ansible webservers -m ping

Next, -m indicates the module to run. And ping verifies that you can connect to a host and execute a remote command. It also gives you some basic information. In this case, localhost has a Python interpreter installed in /usr/bin.

Run arbitrary commands with Ansible


You’re not limited to modules with Ansible. You can also run arbitrary or ad-hoc
commands with -a. Here’s a listing of the remote user’s home directory.

$ ansible 127.0.0.1 -a "ls -a"


35.174.111.167 | CHANGED | rc=0 >>
total 16
drwx------. 4 ericg ericg 111 Jul 12 17:51 .
drwxr-xr-x. 4 root root 39 Jul 12 17:35 ..
drwx------. 3 ericg ericg 17 Jul 12 17:51 .ansible
-rw-------. 1 ericg ericg 472 Jul 13 16:21 .bash_history
-rw-r--r--. 1 ericg ericg 18 Jan 14 06:10 .bash_logout
-rw-r--r--. 1 ericg ericg 141 Jan 14 06:10 .bash_profile
-rw-r--r--. 1 ericg ericg 312 Jan 14 06:10 .bashrc
drwx------. 2 ericg ericg 48 Jul 13 16:19 .ssh
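
Ad-hoc calls are not limited to shell commands either: any module can be invoked with -m and given arguments via -a. For example (the package name is illustrative, and --become requests privilege escalation on the managed nodes):

$ ansible webservers -m apt -a "name=nginx state=present" --become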

Docker Installation on Ubuntu
Step 1: To install Docker on an Ubuntu box, first let us update the packages.

sudo apt-get update

This will ask for your password.

Step 2: Before installing Docker, we need to install the recommended packages. For that, just type in the below command:

sudo apt-get install linux-image-extra-$(uname -r) linux-image-extra-virtual

Press “y” to continue.

After this, we are done with the pre-requisites! Now, let’s move ahead and install
Docker.

Step 3: Type in the below command to install the Docker engine:

sudo apt-get install docker-engine

Sometimes it will ask for the password again. Hit Enter and the installation will begin.

Once this is done, your task to install Docker is completed!


Step 4: So let’s simply start the Docker service. For that, just type in the below command:

sudo service docker start

If it says the job is already running, congratulations! Docker has been successfully installed.
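
If you want to double-check the installation before moving on, the Docker CLI can report its version and some information about the running daemon (the exact output depends on your release):

sudo docker --version
sudo docker info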

Step 5: Now, to verify that Docker is running successfully, let me show you how to pull a CentOS image from Docker Hub and run a CentOS container. For that, just type in the below command:

sudo docker pull centos

First, it will check the local registry for the CentOS image. If it doesn’t find it there, it will go to Docker Hub and pull the image.

So we have successfully pulled a CentOS image from Docker Hub. Next, let us run the CentOS container. For that, just type in the below command:

sudo docker run -it centos


After running this command, you are dropped into a shell inside the CentOS container!
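
A quick, optional way to confirm that you really are inside the CentOS container is to inspect the OS release file and then leave the container again (the exact release string depends on the image that was pulled):

cat /etc/os-release
exit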

To summarize, we first installed Docker on Ubuntu, then pulled a CentOS image from Docker Hub and, using that image, successfully started a CentOS container. To know more about Docker containers and how they work, you can refer to this blog on Docker Containers.

This is how you install docker and build a container on Ubuntu. 

Structuring the repository

While playing around with docker I've tried different ways to "structure" files
and folders and ended up with the following concepts:

 everything related to docker is placed in a .docker directory on the same level as the main application
 in this directory
o each service gets its own subdirectory for configuration
o there is a .shared folder containing scripts and configuration required by multiple services
o there is an .env.example file containing variables for the docker-compose.yml
o there is a docker-test.sh file containing high level tests to validate the docker containers
 a Makefile with common instructions to control Docker is placed in the repository root
The result looks roughly like this:

<project>/
├── .docker/
| ├── .shared/
| | ├── config/
| | └── scripts/
| ├── php-fpm/
| | └── Dockerfile
| ├── ... <additional services>/
| ├── .env.example
| ├── docker-compose.yml
| └── docker-test.sh
├── Makefile
├── index.php
└── ... <additional app files>/

The .docker folder

As I mentioned, for me it makes a lot of sense to keep the infrastructure definition close to the codebase, because it is immediately available to every developer. For bigger projects with multiple components there will be a code-infrastructure-coupling anyways (e.g. in my experience it is usually not possible to simply switch MySQL for PostgreSQL without any other changes) and for a library it is a very convenient (although opinionated) way to get started.

I personally find it rather frustrating when I want to contribute to an open source project but find myself spending a significant amount of time setting the environment up correctly instead of being able to just work on the code.

Ymmv, though (e.g. because you don't want everybody with write access to
your app repo also to be able to change your infrastructure code). We actually
went a different route previously and had a second repository ("-inf") that
would contain the contents of the .docker folder:

<project-inf>/
├── .shared/
| ├── config/
| └── scripts/
├── php-fpm/
| └── Dockerfile
├── ... <additional services>/
├── .env.example
└── docker-compose.yml

<project>/
├── index.php
└── ... <additional app files>/

Worked as well, but we often ran into situations where the contents of the repo would be stale for some devs, plus it was simply additional overhead with no other benefits to us at that point. Maybe git submodules will enable us to get the best of both worlds - I'll blog about it once we try ;)

The .shared folder

When dealing with multiple services, chances are high that some of those
services will be configured similarly, e.g. for

 installing common software
 setting up unix users (with the same ids)
 configuration (think php-cli for workers and php-fpm for web requests)

To avoid duplication, I place scripts (simple bash files) and config files in
the .shared folder and make it available in the build context for each service.
I'll explain the process in more detail under providing the correct build
context.

docker-test.sh

This is really just a simple bash script that includes some high level tests to make sure that the containers are built correctly. See section Testing if everything works.

.env.example and docker-compose.yml

docker-compose uses a .env file for a convenient way to define and substitute environment variables. Since this .env file is environment specific, it is NOT part of the repository (i.e. ignored via .gitignore). Instead, we provide a .env.example file that contains the required environment variables including reasonable default values. A new dev would usually run cp .env.example .env after checking out the repository for the first time. See section .env.example.
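
As an illustration only (the variable names below are assumptions, apart from APP_USER, APP_GROUP and APP_CODE_PATH, which appear in the Dockerfile snippets later in this section), such a .env.example file could look like this:

# .env.example - copy to .env and adjust the values per environment
APP_USER=www-data
APP_GROUP=www-data
APP_CODE_PATH=/var/www/current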

The Makefile

make and Makefiles are among those things that I've heard about occasionally but never really cared to understand (mostly because I associated them with C). Boy, did I miss out. I was comparing different strategies to provide code quality tooling (style checkers, static analyzers, tests, ...) and went from custom bash scripts over composer scripts to finally end up at Makefiles.

The Makefile serves as a central entry point and simplifies the management of the docker containers, e.g. for (re-)building, starting, stopping, logging in, etc. See section Makefile and .bashrc.
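
A stripped-down sketch of such a Makefile (the target names and the compose file location are assumptions, not the exact file from this setup; note that make requires recipe lines to be indented with tabs):

DOCKER_COMPOSE := docker-compose -f .docker/docker-compose.yml

.PHONY: docker-build docker-up docker-down docker-test

docker-build:   ## (re-)build all containers
	$(DOCKER_COMPOSE) build

docker-up:      ## start all containers in the background
	$(DOCKER_COMPOSE) up -d

docker-down:    ## stop and remove all containers
	$(DOCKER_COMPOSE) down

docker-test:    ## run the high level container tests
	bash .docker/docker-test.sh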
Defining services: php-fpm, nginx and workspace

Let's have a look at a real example and "refactor" the php-cli, php-fpm and nginx containers from the first part of this tutorial series.

This is the folder structure:

<project>/
├── .docker/
| ├── .shared/
| | ├── config/
| | | └── php/
| | | └── conf.d/
| | | └── zz-app.ini
| | └── scripts/
| | └── docker-entrypoint/
| | └── resolve-docker-host-ip.sh
| ├── nginx/
| | ├── sites-available/
| | | └── default.conf
| | ├── Dockerfile
| | └── nginx.conf
| ├── php-fpm/
| | ├── php-fpm.d/
| | | └── pool.conf
| | └── Dockerfile
| ├── workspace/ (formerly php-cli)
| | ├── .ssh/
| | | └── insecure_id_rsa
| | | └── insecure_id_rsa.pub
| | └── Dockerfile
| ├── .env.example
| ├── docker-compose.yml
| └── docker-test.sh
├── Makefile
└── index.php

php-fpm

Click here to see the full php-fpm Dockerfile.

Since we will be having two PHP containers, we need to place the common
.ini settings in the .shared directory.

| ├── .shared/
| | ├── config/
| | | └── php/
| | | └── conf.d/
| | | └── zz-app.ini

For now, zz-app.ini will only contain our opcache setup:

; enable opcache
opcache.enable_cli = 1
opcache.enable = 1
opcache.fast_shutdown = 1
; never revalidate timestamps (cached files are not checked for changes)
opcache.validate_timestamps = 0

The pool configuration is only relevant for php-fpm, so it goes in the directory of the service. Btw. I highly recommend this video on PHP-FPM Configuration if your php-fpm foo isn't already over 9000.
| ├── php-fpm/
| | ├── php-fpm.d/
| | | └── pool.conf

Modifying the pool configuration

We're using the modify_config.sh script to set the user and group that owns
the php-fpm processes.

# php-fpm pool config
COPY ${SERVICE_DIR}/php-fpm.d/* /usr/local/etc/php-fpm.d
RUN /tmp/scripts/modify_config.sh /usr/local/etc/php-fpm.d/zz-default.conf \
      "__APP_USER" \
      "${APP_USER}" \
 && /tmp/scripts/modify_config.sh /usr/local/etc/php-fpm.d/zz-default.conf \
      "__APP_GROUP" \
      "${APP_GROUP}" \
;

Custom ENTRYPOINT

Since php-fpm needs to be debuggable, we need to ensure that the host.docker.internal DNS entry exists, so we'll use the corresponding ENTRYPOINT to do that.

# entrypoint
RUN mkdir -p /bin/docker-entrypoint/ \
 && cp /tmp/scripts/docker-entrypoint/* /bin/docker-entrypoint/ \
 && chmod +x -R /bin/docker-entrypoint/ \
;

ENTRYPOINT ["/bin/docker-entrypoint/resolve-docker-host-ip.sh","php-fpm"]

nginx

Click here to see the full nginx Dockerfile.

The nginx setup is even simpler. There is no shared config, so everything we need resides in

| ├── nginx/
| | ├── sites-available/
| | | └── default.conf
| | ├── Dockerfile
| | └── nginx.conf

Please note that nginx only has the nginx.conf file for configuration (i.e. there is no conf.d directory or similar), so we need to define the full config in there.

user __APP_USER __APP_GROUP;
worker_processes 4;
pid /run/nginx.pid;
daemon off;

http {
    # ...

    include /etc/nginx/sites-available/*.conf;

    # ...
}

There are two things to note:

 user and group are modified dynamically
 we specify /etc/nginx/sites-available/ as the directory that holds the config files for the individual sites via include /etc/nginx/sites-available/*.conf;

We need to keep the last point in mind, because we must use the same directory in the Dockerfile:

# nginx app config
COPY ${SERVICE_DIR}/sites-available/* /etc/nginx/sites-available/

The site’s config file default.conf has a variable (__NGINX_ROOT) for the root directive and we "connect" it with the fpm-container via fastcgi_pass php-fpm:9000;

server {
    # ...
    root __NGINX_ROOT;
    # ...

    location ~ \.php$ {
        # ...
        fastcgi_pass php-fpm:9000;
    }
}

php-fpm will resolve to the php-fpm container, because we use php-fpm as the service name in the docker-compose file, so it will be automatically used as the hostname:

Other containers on the same network can use either the service name or [an] alias to connect to one of the service’s containers.
In the Dockerfile, we use

ARG APP_CODE_PATH
RUN /tmp/scripts/modify_config.sh /etc/nginx/sites-available/default.conf \
      "__NGINX_ROOT" \
      "${APP_CODE_PATH}" \
;

APP_CODE_PATH will be passed via docker-compose when we build the container and mounted as a shared directory from the host system.
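
A hedged sketch of how that wiring could look in the docker-compose.yml (everything apart from the php-fpm/nginx service names and the APP_CODE_PATH variable is an assumption; the build context here is the .docker directory, so the application one level above it is mounted into the containers):

version: "3.4"
services:
  php-fpm:
    build:
      context: .
      dockerfile: ./php-fpm/Dockerfile
      args:
        APP_CODE_PATH: ${APP_CODE_PATH}
    volumes:
      - ../:${APP_CODE_PATH}
  nginx:
    build:
      context: .
      dockerfile: ./nginx/Dockerfile
      args:
        APP_CODE_PATH: ${APP_CODE_PATH}
    ports:
      - "8080:80"
    volumes:
      - ../:${APP_CODE_PATH}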

Generating Access Token on GitHub

To get a personal access token for your GitHub account, navigate to Settings > Personal access tokens and click the Generate new token button.

On the opened page, specify the Token description and select the repo and admin:repo_hook scopes. Click Generate token at the bottom of the page.
Once redirected, copy and save the shown access token somewhere safe (as it can’t be viewed again after you leave this page).

Generating Access Token on GitLab


To generate a personal access token on GitLab, enter your account Settings and switch to
the Access Tokens tab.
Here, specify optional token Name, its Expiry date (can be left blank) and tick
the api permission scope.

Click the Create Personal Access Token button.


On the opened page, copy and temporarily store your access token value somewhere safe (as you won’t be able to see it again after leaving this page).

Now, you are ready for package installation.

Install Git-Push-Deploy Package

The Git-Push-Deploy package is an add-on, so it can be installed only on top of an existing environment. We have prepared two separate environments with Tomcat and Apache-PHP application servers to show the workflow for different programming languages.
If you are going to use a previously created environment, note that the package will overwrite the application deployed to the ROOT context. So, to keep your already deployed application, move it to a custom context. We recommend creating a new environment and then proceeding to the installation:
1. Click the Import button at the top pane of the dashboard and insert the manifest.jps link for the Git-Push-Deploy project within the opened URL tab:
https://github.com/jelastic-jps/git-push-deploy/blob/master/manifest.jps

Click Import to continue.
2. In the opened frame, specify the following details about your repository and target
environment:
 Git Repo Url -  HTTPS link to your application repo (either .git or of a common view).
You can fork our sample Hello World application to test the flow
 Branch - a project branch to be used
 User - enter your Git account login
 Token - specify the access token you’ve previously created for webhook generation
 Environment name - choose an environment your application will be deployed to
 Nodes - application server name (is fetched automatically upon selecting the environment)
Click Install to continue.
3. Wait a minute for Jelastic to fetch your application sources from GitHub and configure
webhook for continuous deployment.

Close the notification frame when installation is finished.


4. Depending on the project type, the result will be the following:

 for Java-based infrastructure, you’ll see a new environment appear at your dashboard with a Maven build node inside; it will build and deploy your application to the ROOT context on a web server each time the source code is updated

Pay attention that it might take some time for Maven to compile the project (though the package installation itself has already finished), so you need to wait a few minutes before launching it. The current progress of this operation can be tracked in real time via the vcs_update log file on the Maven node:

 for PHP-based infrastructure (and the rest of the supported languages), your application will be deployed directly to the chosen server’s ROOT

Note that the similar Projects section for Ruby application servers provides information on the deployment mode in use (development by default) instead of a context, whilst the actual app location refers to the server root as well.
To start your application, click Open in browser next to your web server.
That’s it! Now a new version of your application is automatically delivered to the application server upon each commit to the repository.

Test Automated Deploy from Git


And now let’s check how this process actually works. Make some minor adjustment to the code in a repo and ensure everything is automated:
1. Click Edit this file for some item within your project repository and Commit changes to it - for example, we’ll modify the text on our HelloWorld start page.

2. As a result, the appropriate webhook will be triggered to deploy the changes you made into your hosting environment - refer to the repository Settings > Webhooks section for the details. Upon clicking on the webhook entry you’ll see the list of Recent Deliveries initiated by the webhook, and the result of their execution.
3. As the last checkpoint, return to your application page and refresh it (whilst remembering that it may take an extra minute for Maven to build and deploy your Java-based project).

That’s it! As you can see, the modifications were successfully applied, so the solution works as
intended.
Simply update your code, make commits as you usually do, and all the changes will be pushed to your Jelastic environment automatically. Eliminating the need to switch between tools or make manual updates reduces human error and accelerates time to market for your application.

If Job 4 fails, delete the running container on the Test Server.

You can use the when keyword to control the condition. If the previous test step fails, capture its status, use that status in the when condition, and remove the container when it indicates a failure. You can run the below command from your playbook with the command module.

docker rm -f $(docker ps -a -q)
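
A hedged sketch of how this could look as playbook tasks (the task names and the test command are assumptions; note that the $(...) substitution requires the shell module rather than command):

- name: run the test job
  command: ./run_tests.sh
  register: test_result
  ignore_errors: true

- name: remove running containers on the test server if the tests failed
  shell: docker rm -f $(docker ps -a -q)
  when: test_result is failed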

Thank You
