
# Experiment 7: Building Images using a Dockerfile

## Introduction

In this experiment, we delve into the realm of containerization using Docker. Docker has revolutionized
the way we package, distribute, and run applications by encapsulating them within lightweight, portable
containers. This experiment aims to familiarize you with Docker, its history, architecture, terminology,
and the process of building images using Dockerfiles.

## What is Docker?

Docker is an open-source platform that enables developers to automate the deployment of applications
within containers. These containers bundle the application and all its dependencies, allowing it to run
reliably across different environments.

## Docker History

Docker was initially released in 2013 by Docker, Inc. It built upon the containerization features of the
Linux kernel and introduced a user-friendly interface for managing containers. Since then, Docker has
gained widespread adoption and has become a standard tool in software development and deployment.

## Docker Architecture

Docker follows a client-server architecture. The Docker client communicates with the Docker daemon,
which manages containers, images, networks, and volumes. Containers run on top of the Docker Engine,
which uses host operating system kernel features (such as namespaces and cgroups) to provide lightweight,
isolated environments.

## Docker Terminology

- **Image**: A read-only template that contains the application and its dependencies.

- **Container**: A runnable instance of an image, executed as an isolated process on the host.

- **Dockerfile**: A text file that contains instructions for building Docker images.

- **Docker Daemon**: The background service responsible for managing Docker objects.

- **Docker Client**: The command-line tool used to interact with the Docker daemon.

- **Registry**: A repository for Docker images, such as Docker Hub or a private registry.

## Windows Subsystem for Linux (WSL)


Windows Subsystem for Linux allows running Linux binaries natively on Windows. This enables
developers to use Docker on Windows without needing a full virtual machine.
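
On recent Windows 10 and 11 releases, WSL can typically be enabled with a single command in an
administrator terminal; this is a minimal sketch, and the exact steps may vary with the Windows build:

```
wsl --install
wsl --list --verbose
```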

## Docker Installation on Windows & Linux

Docker can be installed on both Windows and Linux environments. For Windows, Docker Desktop
provides a straightforward installation process. On Linux, Docker can be installed using package
managers like apt or yum.
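
As one hedged example, on a Debian/Ubuntu-based system Docker can often be installed from the
distribution repositories (the package name and exact steps vary by distribution and version):

```
# Install Docker from the distribution repositories (Debian/Ubuntu example)
sudo apt-get update
sudo apt-get install -y docker.io

# Verify the installation with a test container
docker --version
sudo docker run hello-world
```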

## Docker Commands

- `docker pull`: Fetches an image from a registry

- `docker build`: Builds an image from a Dockerfile

- `docker run`: Creates and runs a container from an image

- `docker ps`: Lists running containers

- `docker images`: Lists available images

- `docker stop`: Stops a running container

- `docker rm`: Removes a container

- `docker rmi`: Removes an image
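
A minimal end-to-end sequence using these commands might look like the following sketch; the image
and container names are placeholders:

```
# Fetch an image and run it as a background container
docker pull nginx
docker run -d --name web -p 8080:80 nginx

# Inspect running containers and images
docker ps
docker images

# Stop and clean up
docker stop web
docker rm web
docker rmi nginx
```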

## Dockerfile & its contents

A Dockerfile is a text file that contains instructions for building a Docker image. It typically includes
commands to install dependencies, copy files into the image, and specify the entry point for the
container. Some common instructions include `FROM`, `RUN`, `COPY`, `CMD`, and `ENTRYPOINT`.
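
A minimal sketch of a Dockerfile, assuming a simple Python web application with an `app.py` and a
`requirements.txt` (the base image, file names, and port are illustrative assumptions):

```
# Start from an official Python base image (assumed for this example)
FROM python:3.11-slim

# Set the working directory inside the image
WORKDIR /app

# Install dependencies first so this layer is cached between builds
COPY requirements.txt .
RUN pip install --no-cache-dir -r requirements.txt

# Copy the application source into the image
COPY . .

# Document the port the application listens on
EXPOSE 8000

# Default command executed when a container starts from this image
CMD ["python", "app.py"]
```

The image can then be built and run with `docker build -t my-app .` followed by
`docker run -p 8000:8000 my-app`, where `my-app` is an arbitrary tag.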

## Conclusion

Docker provides a powerful platform for containerization, simplifying the process of building, deploying,
and scaling applications. By understanding its architecture, terminology, and commands, developers can
leverage Docker to create efficient and portable environments for their applications. Experimenting with
Dockerfiles allows for fine-tuning the container environment to suit specific application requirements.

# Experiment 8: Creating Multi-Containers using Docker Compose

## Introduction

In this experiment, we explore the concept of container orchestration using Docker Compose. We'll
compare virtual machines with containers, understand Docker Hub and Docker Registry, and dive into
the details of Docker Compose for managing multi-container applications efficiently.

## Virtual Machine vs Container

Virtual machines (VMs) and containers both provide isolated environments for running applications, but
they differ in their approach. VMs virtualize the entire hardware stack, including the operating system,
while containers virtualize the operating system only, resulting in lighter and more portable
environments.

## Docker Hub

Docker Hub is a cloud-based repository for Docker images. It allows developers to store, distribute, and
collaborate on container images. Docker Hub hosts a vast collection of pre-built images that can be used
as base images for application containers.

## Docker Registry

Docker Registry is a storage and distribution system for Docker images. While Docker Hub is a public
registry hosted by Docker, organizations can also set up private registries to store and manage their own
Docker images securely.

## What is Docker Compose?

Docker Compose is a tool for defining and running multi-container Docker applications. It uses a YAML
file (docker-compose.yml) to define the services, networks, and volumes required for the application.
Docker Compose simplifies the process of orchestrating multiple containers, allowing developers to
define complex application stacks with ease.

## The Benefits of Docker Compose

- **Simplified Configuration**: Docker Compose provides a declarative syntax for defining application
services and their dependencies.

- **Single Command Deployment**: With a single command (`docker-compose up`), Docker Compose
can create and start all containers defined in the configuration file.

- **Environment Consistency**: Docker Compose ensures that all containers in a multi-container
application are built and run consistently across different environments.

- **Scalability**: Docker Compose supports scaling individual services by specifying the desired number
of replicas in the configuration file.

## Steps to Deploy Multi-Container Application with Docker Compose

1. Define the services and their configurations in a `docker-compose.yml` file.

2. Build the Docker images for each service using `docker-compose build`.

3. Start the multi-container application using `docker-compose up`.

4. Access the application through the specified ports or URLs.
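
A minimal sketch of a `docker-compose.yml` for a two-service stack; the service names, image, build
context, and ports are illustrative assumptions:

```
version: "3.8"

services:
  web:
    build: .           # build the web service from the local Dockerfile
    ports:
      - "8000:8000"    # map host port 8000 to container port 8000
    depends_on:
      - redis          # start the cache before the web service

  redis:
    image: redis:7     # use a pre-built image from Docker Hub
```

With this file in place, `docker-compose build` builds the `web` image and `docker-compose up` starts
both containers on a shared network, where the `web` service can reach the cache at the hostname `redis`.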

## Conclusion

Docker Compose simplifies the deployment and management of multi-container applications by
providing a user-friendly interface for defining and orchestrating Docker services. By leveraging Docker
Compose, developers can streamline the development workflow, ensure consistency across
environments, and deploy complex application stacks with minimal effort. Experimenting with Docker
Compose opens up opportunities for building scalable and robust containerized applications.

# Experiment 9: Installation of Kubernetes

## Introduction

In this experiment, we will explore the installation process of Kubernetes, a powerful open-source
container orchestration platform. We'll delve into the fundamental concepts of Kubernetes, its
components, and the benefits of container orchestration.

## What is Kubernetes?

Kubernetes, also known as K8s, is an open-source platform for automating deployment, scaling, and
management of containerized applications. It provides a robust infrastructure for deploying and
managing containers at scale, offering features for automatic scaling, self-healing, and efficient resource
utilization.

## Orchestration & its Benefits

Container orchestration refers to the automated management of containerized applications. It
streamlines the deployment, scaling, and monitoring of containers, ensuring efficient resource utilization
and high availability. Some benefits of orchestration include improved scalability, resilience, and agility in
deploying applications.

## Kubernetes Components

Kubernetes comprises several key components that work together to orchestrate containerized
applications:

- **Nodes**: The individual machines (virtual or physical) that run containers.

- **Pods**: The smallest deployable units in Kubernetes, consisting of one or more containers.

- **Control Plane**: The centralized component responsible for managing the cluster and its resources.

- **kube-apiserver**: Exposes the Kubernetes API used by other components.

- **etcd**: A distributed key-value store for storing cluster state.

- **kube-scheduler**: Assigns pods to nodes based on resource availability and constraints.

## Diagram

```
[Control Plane] --manages--> [Nodes] --run--> [Pods]
```

## Nodes

Nodes are the worker machines in a Kubernetes cluster. They run the applications and provide the
runtime environment for containers.

## Pods

Pods are the smallest units in Kubernetes, representing one or more containers that share the same
network and storage context.

## Control Plane

The Control Plane is the central management component of Kubernetes, responsible for maintaining the
desired state of the cluster.

## Installation of Kubernetes

The installation of Kubernetes involves setting up the Control Plane components and configuring the
nodes to join the cluster. Kubernetes can be installed using various methods, including kubeadm,
Minikube, and managed Kubernetes services like GKE, AKS, and EKS.
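
For local experimentation, a single-node cluster is often the easiest starting point. A minimal sketch
using Minikube, assuming `kubectl` and a supported container runtime or hypervisor are already installed:

```
# Start a local single-node cluster (driver and Kubernetes version depend on the host)
minikube start

# Confirm that the node and control plane are reachable
kubectl get nodes
kubectl cluster-info
```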

## What is kubectl?

kubectl is the command-line tool used to interact with Kubernetes clusters. It allows users to deploy
applications, manage resources, and monitor the cluster's health.

## kubectl Commands

- `kubectl create`: Create a resource

- `kubectl apply`: Apply configuration changes to resources

- `kubectl get`: Retrieve information about resources

- `kubectl describe`: Show detailed information about a resource

- `kubectl delete`: Delete resources

- `kubectl scale`: Scale the number of replicas of a resource

- `kubectl exec`: Execute a command in a container

- `kubectl logs`: View logs of a container
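
A short sequence tying these commands together; the deployment name and image are placeholders:

```
# Create a deployment and expose it inside the cluster
kubectl create deployment web --image=nginx
kubectl expose deployment web --port=80 --type=NodePort

# Inspect and scale the workload
kubectl get pods
kubectl describe deployment web
kubectl scale deployment web --replicas=3

# Clean up
kubectl delete deployment web
```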

## Conclusion

Kubernetes simplifies the deployment and management of containerized applications by providing a
robust orchestration platform. By understanding its components and installation process, users can
leverage Kubernetes to automate the deployment, scaling, and monitoring of containerized workloads,
leading to increased agility and efficiency in managing modern applications. Experimenting with
Kubernetes and kubectl commands opens up possibilities for building resilient and scalable containerized
infrastructures.

# Experiment 10: Study and Use of Kubernetes Services

## Introduction

In this experiment, we will explore Kubernetes services, an essential aspect of Kubernetes networking.
We'll learn about the Kubernetes Dashboard, YAML configuration files, and delve into the concept of
Kubernetes services, including their components, types, and how to define them.

## Kubernetes Dashboard

The Kubernetes Dashboard is a web-based user interface for Kubernetes clusters, providing visibility and
management capabilities for resources within the cluster. It allows users to deploy applications,
troubleshoot issues, and monitor the cluster's health.

## YAML

YAML (YAML Ain't Markup Language) is a human-readable data serialization format commonly used for
configuration files in Kubernetes. It is used to define Kubernetes resources such as deployments,
services, and pods in a structured and readable manner.

## What are Kubernetes Services?

Kubernetes Services are an abstraction that defines a set of pods and a policy for accessing them. They
enable communication between different parts of an application or between applications running within
a Kubernetes cluster.

## Components of a Kubernetes Service

- **Service IP**: A virtual IP address assigned to the service.

- **Port Mapping**: Defines how requests to the service port are routed to the pods.

- **Selector**: Determines which pods belong to the service.

- **Endpoints**: The set of IP addresses and ports where the service can be accessed.

## Types of Kubernetes Services

- **ClusterIP**: Exposes the service on a cluster-internal IP, accessible only within the cluster.

- **NodePort**: Exposes the service on each node's IP address at a static port, allowing external access
to the service.

- **LoadBalancer**: Exposes the service externally using a cloud provider's load balancer.

- **ExternalName**: Maps the service to a DNS name external to the cluster.

## Defining Kubernetes Services

Kubernetes Services are defined using YAML configuration files. The configuration includes metadata,
specifications for the service type, port mappings, selectors, and any additional parameters required.
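
A minimal sketch of a Service definition; the names, labels, and ports are illustrative assumptions:

```
apiVersion: v1
kind: Service
metadata:
  name: web-service
spec:
  type: ClusterIP        # cluster-internal virtual IP (the default type)
  selector:
    app: web             # route traffic to pods carrying this label
  ports:
    - port: 80           # port exposed by the service
      targetPort: 8080   # port the matching pods listen on
```

Saved as `service.yaml`, this can be applied with `kubectl apply -f service.yaml`; the selector ties the
service to pods labeled `app: web`, which connects directly to the labels and selectors described next.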

## Selectors & Labels in Kubernetes

Labels are key-value pairs attached to Kubernetes objects, such as pods or services, to identify and group
them logically. Selectors are used to specify which objects a resource should apply to based on their
labels.

## Conclusion

Kubernetes Services play a crucial role in enabling communication between different components of an
application or between applications within a Kubernetes cluster. By understanding their components,
types, and how to define them using YAML configuration files, users can effectively manage networking
and connectivity within their Kubernetes deployments. Experimenting with Kubernetes Services and
utilizing the Kubernetes Dashboard provides valuable insights into managing and scaling containerized
applications efficiently.

# Experiment 11: Installation and Configuration of Ansible

## Introduction

In this experiment, we will delve into the world of configuration management using Ansible. We'll
explore the fundamental concepts of configuration management, Ansible's design principles, history, and
how it works. Furthermore, we'll cover the installation process of Ansible and basic commands to get
started.

## Configuration Management

Configuration management is the process of systematically handling changes to a system's configuration
in a way that maintains integrity over time. It involves defining, deploying, and maintaining the
configuration of software and infrastructure components.

## Configuration Management Elements

Key elements of configuration management include:

- **Infrastructure as Code (IaC)**: Defining infrastructure using code, enabling automation and
consistency.

- **Version Control**: Managing configuration code and changes using version control systems like Git.

- **Automation**: Automating the deployment and management of infrastructure and software
configurations.

## Configuration Management User Roles

- **System Administrators**: Responsible for configuring and maintaining systems.

- **Developers**: May define infrastructure requirements as code within their development workflows.

- **Operations Teams**: Ensure smooth operation and performance of systems and applications.

## Ansible

Ansible is an open-source configuration management tool that automates software provisioning,
configuration management, and application deployment. It uses simple YAML-based configuration files
and does not require agents to be installed on managed nodes.

## Ansible Design Principles

- **Agentless**: Ansible operates over SSH or WinRM, eliminating the need for agents on managed
nodes.

- **Simple**: Ansible uses YAML syntax for configuration files, making them easy to read and write.

- **Idempotent**: Ansible ensures that the desired state of the system is achieved regardless of the
system's current state, allowing for safe and predictable automation.

## Ansible History

Ansible was created by Michael DeHaan and first released in 2012. It gained popularity due to its
simplicity, scalability, and agentless architecture. In 2015, Ansible, Inc. was acquired by Red Hat, further
solidifying its position in the configuration management and automation space.

## Ansible Working

Ansible works by connecting to managed nodes via SSH or WinRM and executing tasks defined in
playbook files. Playbooks are written in YAML and describe the desired state of the system, including
tasks for configuration, deployment, and orchestration.
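
A minimal playbook sketch that installs and starts a web server; the inventory group, package, and
module choices are illustrative assumptions:

```
# playbook.yml -- ensure nginx is installed and running on the "webservers" group
- name: Configure web servers
  hosts: webservers
  become: true
  tasks:
    - name: Install nginx
      ansible.builtin.apt:
        name: nginx
        state: present

    - name: Start and enable the nginx service
      ansible.builtin.service:
        name: nginx
        state: started
        enabled: true
```

Because the modules are idempotent, re-running the playbook leaves already-configured hosts unchanged.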

## Installation of Ansible

Ansible can be installed on various operating systems, including Linux distributions, macOS, and
Windows. Installation methods include package managers, pip (Python package manager), and source
installation.
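
Two common hedged examples (exact package names and versions vary by platform):

```
# Via pip, often inside a Python virtual environment
pip install ansible

# Or via the distribution package manager on Debian/Ubuntu
sudo apt-get install -y ansible

# Verify the installation
ansible --version
```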

## Ansible Commands

- `ansible-playbook`: Execute Ansible playbooks.

- `ansible`: Run ad-hoc commands on managed nodes.

- `ansible-galaxy`: Manage Ansible roles from the Ansible Galaxy community repository.

- `ansible-vault`: Encrypt sensitive data within Ansible playbooks.
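
For example, a few ad-hoc invocations against a hypothetical `inventory` file and `webservers` group:

```
# Check connectivity to all hosts in the "webservers" group
ansible webservers -i inventory -m ping

# Run a one-off command on every managed node
ansible all -i inventory -a "uptime"

# Run a playbook, prompting for the vault password if it references encrypted data
ansible-playbook -i inventory playbook.yml --ask-vault-pass
```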

## Conclusion

Ansible simplifies the process of configuration management and automation with its agentless
architecture, simple syntax, and idempotent execution model. By understanding its design principles,
history, and installation process, users can leverage Ansible to automate and streamline infrastructure
provisioning, configuration, and deployment tasks. Experimenting with Ansible playbooks and
commands provides valuable hands-on experience in managing and orchestrating complex IT
environments efficiently.
