DevOps Project 1: Install Puppet Server on Master Node
wget https://apt.puppetlabs.com/puppet6-release-focal.deb
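Then install the release package and the Puppet server itself (a sketch of the
standard follow-up commands, which are not shown in the original):
sudo dpkg -i puppet6-release-focal.deb
sudo apt-get update
sudo apt-get install puppetserver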
6. In the puppetserver defaults file (/etc/default/puppetserver on Ubuntu), modify
the following line to change the memory size to 1GB:
JAVA_ARGS="-Xms1g -Xmx1g
-Djruby.logger.class=com.puppetlabs.jruby_utils.jruby.Slf4jLogger
"
8. Start the Puppet service and set it to launch on system boot by using:
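On a systemd-based system such as Ubuntu, that is:
sudo systemctl start puppetserver
sudo systemctl enable puppetserver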
Install Puppet Agent on Client Node
wget https://apt.puppetlabs.com/puppet6-release-focal.deb
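As on the master node, install the release package and then the agent (a sketch
of the standard commands, which are not shown in the original):
sudo dpkg -i puppet6-release-focal.deb
sudo apt-get update
sudo apt-get install puppet-agent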
6. Add the following lines to the end of the Puppet configuration file to define the Puppet
master information:
[main]
certname = puppetclient
server = puppetmaster
7. Press Ctrl + X to close the Puppet configuration file, then type Y and press Enter to
save the changes.
8. Start the Puppet service and set it to launch on system boot by using:
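The agent service is named puppet, so on a systemd-based system:
sudo systemctl start puppet
sudo systemctl enable puppet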
The puppet.conf file resembles a standard INI file, with a few syntax extensions.
Settings can go into application-specific sections, or into a [main] section that
affects all applications. Although its location is configurable with the config
setting, that setting can only be set on the command line (e.g. puppet agent -t
--config ./temporary_config.conf).
Agent Config:
[main]
certname = agent01.example.com
server = puppet
environment = production
runinterval = 1h
Master Config:
[main]
certname = puppetmaster01.example.com
server = puppet
environment = production
runinterval = 1h
strict_variables = true
[master]
dns_alt_names = puppetmaster01,puppetmaster01.example.com,puppet,puppet.example.com
reports = puppetdb
storeconfigs_backend = puppetdb
storeconfigs = true
environment_timeout = unlimited
Format: the file is organized into config sections, each introduced by a bracketed
header, e.g.:
[main]
certname = puppetmaster01.example.com
Ansible overview
Before we get into anything else, let's take a moment to learn some basics about
Ansible.
Installing Ansible
Since Ansible uses no agents, most of your preparation is done on the host
where you’ll run it to push configuration changes to the rest of your network. This
system is called the control node.
Since Ansible is written in Python, you may have two choices for how to install it.
If your control node is running Red Hat Enterprise Linux, CentOS, Fedora,
Debian, or Ubuntu, you can install the latest release version using the system’s
OS package manager.
Let’s take a look at installing Ansible on Ubuntu. First, you would add the
Ansible PPA, update apt, and then install the package:
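Those three steps look like this:
$ sudo apt-add-repository ppa:ansible/ansible
$ sudo apt update
$ sudo apt install ansible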
Red Hat, Fedora, and CentOS are a little easier since Ansible is already
available as a mainstream package.
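For example, on Fedora:
$ sudo dnf install ansible
or on RHEL/CentOS:
$ sudo yum install ansible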
If you want to use alpha or beta releases, or if you prefer working with Python,
you can install Ansible with pip, Python’s package manager. Ansible works with
Python 2.7 or 3.x, so it works with the Python version installed on Linux and
macOS. But support for Python 2.7 is deprecated, so if you’re setting up Ansible
for a production system, it’s a good idea to update to Python 3.
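To install with pip, optionally adding virtualenv to keep the setup isolated:
$ sudo pip install virtualenv
$ pip install ansible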
With Ansible installed on your control node, you’re ready to add a system to your
configuration management inventory.
SSH and keys
Ansible’s default mechanism for distributing modules is SSH. So you can control
access to your managed nodes with SSH keys, Kerberos, or any other identity
management system that works with SSH. You can even use passwords, but
they’re less secure and can be unwieldy.
Keys are the easiest way to add support for a host. By adding the public key for
the Ansible user to authorized_keys on the target system, you’re ready to
manage it. If you want to read more, Digital Ocean has an excellent tutorial for
adding keys.
Ansible stores its configuration in a set of text files; the default location is
/etc/ansible. To add configuration files there, you need to either create them as
root or add write permission for an unprivileged user. Alternatively, you can
override the location with a configuration file placed in an area that doesn't
require privileged access.
Create the override file in your home directory and point it at a user-owned
inventory location:
$ touch ~/.ansible.cfg
[defaults]
inventory = /home/ansible_user/ansible_config/hosts
Now you need to create the configuration directory:
$ mkdir ~/ansible_config
Finally, create a hosts file in the new directory to hold your inventory:
$ touch ~/ansible_config/hosts
Then populate it with your managed nodes:
127.0.0.1
mail.example.com
[webservers]
foo.example.com
bar.example.com
[dbservers]
one.example.com
two.example.com
three.example.com
This file declares seven managed nodes: six with their fully qualified DNS names,
plus localhost with the loopback IP address. You can also use IP addresses or
partial hostnames (assuming the control node can resolve them). The names in
square brackets are groups, which you can use to manage sets of systems instead
of listing each name. If you need more information, the Ansible documentation
covers inventories in detail.
Modules
With a configured system, you can execute modules.
The first argument to ansible is the target host or groups of hosts. In addition to
the groups defined in your inventory, ansible also creates the all group. So you
could ping all hosts:
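$ ansible all -m ping
Each reachable host should answer with pong.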
Docker Installation on Ubuntu
Step 1: To install Docker on an Ubuntu box, first let us update the packages. This
will ask for your password:
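$ sudo apt-get update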
Step 2: Now before installing docker, I need to install the recommended packages.
For that, just type in the below command:
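A typical set of prerequisites (an assumption, since the original command is not
shown) is:
$ sudo apt-get install apt-transport-https ca-certificates curl software-properties-common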
After this, we are done with the pre-requisites! Now, let’s move ahead and install
Docker.
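A common choice is the docker.io package from Ubuntu's own repositories (an
assumption, since the original command is not shown):
$ sudo apt-get install docker.io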
Sometimes it will again ask for the password. Hit Enter and the installation will
begin.
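The "job is already running" message mentioned below comes from starting the
Docker service, e.g.:
$ sudo service docker start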
It says your job is already running. Congratulations! Docker has been successfully
installed.
Step 5: Now just to verify that docker is successfully running, let me show you how
to pull a CentOS image from docker hub and run the CentOS container. For that,
just type in the below command:
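$ sudo docker pull centos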
So we have successfully pulled a CentOS image from Docker Hub. Next, let us run
the CentOS container. For that, just type in the below command:
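$ sudo docker run -it centos
The -it flags give you an interactive terminal inside the container.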
While playing around with docker I've tried different ways to "structure" files
and folders and ended up with the following concepts:
<project>/
├── .docker/
| ├── .shared/
| | ├── config/
| | └── scripts/
| ├── php-fpm/
| | └── Dockerfile
| ├── ... <additional services>/
| ├── .env.example
| ├── docker-compose.yml
| └── docker-test.sh
├── Makefile
├── index.php
└── ... <additional app files>/
Ymmv, though (e.g. because you don't want everybody with write access to
your app repo also to be able to change your infrastructure code). We actually
went a different route previously and had a second repository ("-inf") that
would contain the contents of the .docker folder:
<project-inf>/
├── .shared/
| ├── config/
| └── scripts/
├── php-fpm/
| └── Dockerfile
├── ... <additional services>/
├── .env.example
└── docker-compose.yml
<project>/
├── index.php
└── ... <additional app files>/
Worked as well, but we often ran into situations where the contents of the
repo would be stale for some devs, plus it was simply additional overhead
with no other benefits to us at that point. Maybe git submodules will enable
us to get the best of both worlds - I'll blog about it once we try ;)
The .shared folder
When dealing with multiple services, chances are high that some of those
services will be configured similarly, e.g. for common PHP settings that both
PHP containers need.
To avoid duplication, I place scripts (simple bash files) and config files in
the .shared folder and make it available in the build context for each service.
I'll explain the process in more detail under providing the correct build
context.
docker-test.sh
It is really just a simple bash script that includes some high-level tests to make
sure that the containers are built correctly. See section Testing if everything
works.
.env.example and docker-compose.yml
The Makefile
<project>/
├── .docker/
| ├── .shared/
| | ├── config/
| | | └── php/
| | | └── conf.d/
| | | └── zz-app.ini
| | └── scripts/
| | └── docker-entrypoint/
| | └── resolve-docker-host-ip.sh
| ├── nginx/
| | ├── sites-available/
| | | └── default.conf
| | ├── Dockerfile
| | └── nginx.conf
| ├── php-fpm/
| | ├── php-fpm.d/
| | | └── pool.conf
| | └── Dockerfile
| ├── workspace/ (formerly php-cli)
| | ├── .ssh/
| | | └── insecure_id_rsa
| | | └── insecure_id_rsa.pub
| | └── Dockerfile
| ├── .env.example
| ├── docker-compose.yml
| └── docker-test.sh
├── Makefile
└── index.php
php-fpm
Since we will have two PHP containers, we need to place the common
.ini settings in the .shared directory.
| ├── .shared/
| | ├── config/
| | | └── php/
| | | └── conf.d/
| | | └── zz-app.ini
; enable opcache
opcache.enable_cli = 1
opcache.enable = 1
opcache.fast_shutdown = 1
; never revalidate timestamps (changed files are NOT picked up; set this to 1
; during development so the cache is effectively disabled)
opcache.validate_timestamps = 0
We're using the modify_config.sh script to set the user and group that own
the php-fpm processes.
Custom ENTRYPOINT
# entrypoint: copy the shared entrypoint scripts into the image and make them executable
RUN mkdir -p /bin/docker-entrypoint/ \
&& cp /tmp/scripts/docker-entrypoint/* /bin/docker-entrypoint/ \
&& chmod +x -R /bin/docker-entrypoint/ \
;
# resolve-docker-host-ip.sh runs first and then hands control to php-fpm (its argument)
ENTRYPOINT ["/bin/docker-entrypoint/resolve-docker-host-ip.sh","php-fpm"]
nginx
The nginx setup is even simpler. There is no shared config, so everything we
need resides in
| ├── nginx/
| | ├── sites-available/
| | | └── default.conf
| | ├── Dockerfile
| | └── nginx.conf
Please note that nginx only has the nginx.conf file for configuration (i.e.
there is no conf.d directory), so we need to define the full config there.
http {
# ...
include /etc/nginx/sites-available/*.conf;
# ...
}
We need to keep the last point in mind, because we must use the same
directory in the Dockerfile:
server {
# ...
root __NGINX_ROOT;
# ...
location ~ \.php$ {
# ...
fastcgi_pass php-fpm:9000;
}
}
Other containers on the same network can use either the service name or [an] alias to
connect to one of the service’s containers.
In the Dockerfile, we use:
ARG APP_CODE_PATH
RUN /tmp/scripts/modify_config.sh /etc/nginx/sites-available/default.conf \
"__NGINX_ROOT" \
"${APP_CODE_PATH}" \
;
To get a personal access token for your GitHub account, navigate to Settings >
Personal access tokens and click the Generate new token button.
Click Import to continue.
2. In the opened frame, specify the following details about your repository and target
environment:
Git Repo Url - HTTPS link to your application repo (either the .git link or the regular repository URL).
You can fork our sample Hello World application to test the flow
Branch - a project branch to be used
User - enter your Git account login
Token - specify the access token you’ve previously created for webhook generation
Environment name - choose an environment your application will be deployed to
Nodes - application server name (fetched automatically upon selecting the environment)
Click Install to continue.
3. Wait a minute for Jelastic to fetch your application sources from GitHub and
configure a webhook for continuous deployment.
Note that it might take some time for Maven to compile the project (even though
the package installation itself has already finished), so you need to wait a few
minutes before launching it. The current progress of this operation can be
tracked in real time via the vcs_update log file on the Maven node:
2. As a result, the appropriate webhook will be triggered to deploy the committed
changes into your hosting environment - refer to the repository Settings >
Webhooks section for the details. Upon clicking on the webhook entry you'll see
the list of Recent Deliveries initiated by the webhook and the result of their
execution.
3. As the last checkpoint, return to your application page and refresh it (while
remembering that it may take an extra minute for Maven to build and deploy your
Java-based project).
That’s it! As you can see, the modifications were successfully applied, so the solution works as
intended.
Simply update your code and make commits as you usually do, and all the changes
will be pushed to your Jelastic environment automatically. Eliminating manual
updates and switching between processes reduces human error and accelerates time
to market for your application.
You can use the when keyword to run a task conditionally. If your previous test
step fails, capture its status with register, then use that status in the when
condition to remove the container. You can run the command below in your playbook
with the command module.
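A minimal sketch (the task names, the test_result variable, and the container
name my_container are illustrative, not from the original):

- name: run the test step
  command: ./run_tests.sh
  register: test_result
  ignore_errors: true

- name: remove the container if the test failed
  command: docker rm -f my_container
  when: test_result is failed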
Thank You