Docker is an open source containerization platform that packages applications and their dependencies into standardized executable units called containers. Docker relies on Linux kernel features such as namespaces and cgroups to provide operating-system-level virtualization, allowing containers to run in isolation on a shared kernel. This makes Docker highly portable and allows applications to run consistently regardless of the underlying infrastructure. Docker uses a client-server architecture in which the Docker Engine runs in the cloud or on-premises and clients interact with it via the Docker API or the command line. Common commands include build to create images from Dockerfiles, run to launch containers, and push/pull to distribute images to and from registries. Docker is often used for microservices and multi-container applications.
Docker for Developers talk from the San Antonio Web Dev Meetup in Aug 2023
Never used Docker? This is perfect for you!
New to Docker? You'll learn something for sure!
Links included for all slides, code, and examples
Go from no Docker experience to a fully running web app in one slide deck!
2. History of Docker
• 2004: Solaris Containers / Zones technology introduced
• 2008: Linux containers (LXC 1.0) introduced
• 2013: Solomon Hykes starts Docker as an internal project within dotCloud
• Mar 2013: Docker released to open source
• Feb 2016: Docker introduces its first commercial product, now called Docker Enterprise Edition
• Today: the open source community includes 3,300+ contributors, 43,000+ stars, and 12,000+ forks
4. Historical limitations of application deployment
• Slow deployment times
• Huge costs
• Wasted resources
• Difficult to scale
• Difficult to migrate
• Vendor lock-in
5. A History Lesson
Hypervisor-based Virtualization
• One physical server can contain multiple applications
• Each application runs in a virtual machine (VM)
6. Benefits of VMs
• Better resource pooling
– One physical machine divided into multiple virtual machines
• Easier to scale
• VMs in the cloud
– Rapid elasticity
– Pay as you go model
7. Limitations of VMs
• Each VM still requires
– CPU allocation
– Storage
– RAM
– An entire guest operating system
• The more VMs you run, the more resources you need
• Guest OS means wasted resources
• Application portability not guaranteed
8. What is a container?
• Standardized packaging for software and dependencies
• Isolate apps from each other
• Share the same OS kernel
• Works with all major Linux distributions and Windows Server
9. Comparing Containers and VMs
Containers are an app-level construct.
VMs are an infrastructure-level construct that turns one machine into many servers.
10. Containers and VMs together
Containers and VMs together provide a tremendous amount of flexibility for IT to optimally deploy and manage apps.
11. Key Benefits of Docker Containers
Speed
• No OS to boot = applications online in seconds
Portability
• Fewer dependencies between process layers = ability to move between infrastructures
Efficiency
• Less OS overhead
• Improved VM density
12. Docker Basics
Image: the basis of a Docker container; the content at rest.
Container: the image when it is running; the standard unit for an app or service.
Engine: the software that executes commands for containers; networking and volumes are part of Engine. Engines can be clustered together.
Registry: stores, distributes and manages Docker images.
Control Plane: management plane for container and cluster orchestration.
13. Building a Software Supply Chain
[Diagram: developers build traditional and microservices apps into images, push them to an image registry, and IT operations deploy them through the control plane]
14. Docker registry
A Docker registry is a storage and distribution system for named Docker images. The same image might have multiple versions, identified by their tags.
A Docker registry is organized into Docker repositories, where a repository holds all the versions of a specific image.
The registry allows Docker users to pull images locally, as well as push new images to the registry (given adequate access permissions where applicable).
By default, the Docker Engine interacts with Docker Hub, Docker's public registry instance. However, it is possible to run the open-source Docker registry (distribution) on-premises, as well as a commercially supported version called Docker Trusted Registry.
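As an illustration (the registry host and repository names here are hypothetical), pulling an image, re-tagging it for a private registry, and pushing it looks like:
docker pull nginx:1.25
docker tag nginx:1.25 registry.example.com/team/nginx:1.25
docker push registry.example.com/team/nginx:1.25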
16. Docker run
One of the first and most important commands Docker users learn is the docker run command. This comes as no surprise, since its primary function is to create and run containers.
There are many different ways to run a container. By adding attributes to the basic syntax, you can configure a container to run in detached mode, set a container name, mount a volume, and perform many more tasks.
docker run [OPTIONS] IMAGE [COMMAND] [ARG...]
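For example, the options mentioned above can be combined in a single command (the container name, volume name and ports here are illustrative):
docker run -d --name webserver -v webdata:/usr/share/nginx/html -p 8080:80 nginx
This runs nginx detached (-d), names the container (--name), mounts a named volume (-v) and publishes port 80 on host port 8080 (-p).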
17. Docker run
> docker run hello-world
Unable to find image 'hello-world:latest' locally
latest: Pulling from library/hello-world
[...]
Status: Downloaded newer image for hello-world:latest
Hello from Docker!
This message shows that your installation appears to be working correctly.
[...]
19. Dockerfile basics
A Dockerfile is a simple text file that contains a list of commands that the Docker client calls while creating an image.
It's a simple way to automate the image creation process.
The commands you write in a Dockerfile are almost identical to their equivalent Linux commands: this means you don't really have to learn new syntax to create your own Dockerfiles.
20. Dockerfile directives
FROM
The FROM directive is used to set the base image for the subsequent instructions. A Dockerfile must have the FROM directive, with a valid image name, as the first instruction.
FROM ubuntu:20.04
RUN
Using the RUN directive, you can run any command against the image at build time. For example, you can install required packages during the image build.
RUN apt-get update
RUN apt-get install -y apache2 automake build-essential curl
21. Dockerfile directives
COPY
The COPY directive is used for copying files and directories from the host system into the image during the build.
For example, the first command below copies all files from the host's html/ directory to the image's /var/www/html/ directory.
The second command copies all files with the extension .conf to the /etc/apache2/sites-available/ directory.
COPY html/* /var/www/html/
COPY *.conf /etc/apache2/sites-available/
WORKDIR
The WORKDIR directive sets the working directory for any RUN, CMD, ENTRYPOINT, COPY and ADD commands during the build.
WORKDIR /opt
22. Dockerfile directives
CMD
The CMD directive is used to run the service or software contained in your image, along with any arguments, when launching the container. CMD uses the following basic syntax:
CMD ["executable","param1","param2"]
For example, to start the Apache service when the container launches, use the following command.
CMD ["apachectl", "-D", "FOREGROUND"]
EXPOSE
The EXPOSE directive indicates the ports on which a container will listen for connections. After that, you can bind a host system port to the container port and use it.
EXPOSE 80
EXPOSE 443
23. Dockerfile directives
ENV
The ENV directive is used to set environment variables inside the container.
ENV PATH=$PATH:/usr/local/pgsql/bin/
ENV PG_MAJOR=9.6.0
VOLUME
The VOLUME directive creates a mount point with the specified name and marks it as holding externally mounted volumes from the native host or other containers.
VOLUME ["/data"]
24. Sample Dockerfile
Given this Dockerfile:
FROM alpine
CMD ["echo", "Hello Tor Vergata!"]
Build and run it:
docker build -t hello .
docker run --rm hello
This will output:
Hello Tor Vergata!
25. Sample Dockerfile
FROM nginx:latest
RUN touch /testfile
COPY ./index.html /usr/share/nginx/html/index.html
26. Docker build / push
Use Docker build to build your image locally
docker build -t <registry>/<image name>:<tag> .
And Docker push to publish your image on registry
docker push <registry>/<image name>:<tag>
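For example, with a hypothetical registry host and image name:
docker build -t registry.example.com/myapp:1.0 .
docker push registry.example.com/myapp:1.0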
28. Data persistence
Docker containers provide you with a writable layer on top to make changes to your running container. But these changes are bound to the container's lifecycle: if the container is deleted (not stopped), you lose your changes.
Let's take a hypothetical scenario where you are running a database in a container without any data persistence configured.
You create some tables and add some rows to them: but if, for some reason, you need to delete this container, as soon as the container is deleted all your tables and their corresponding data are lost.
Docker provides a couple of solutions to persist your data even if the container is deleted.
The two ways to persist your data are:
• Bind Mounts
• Volumes
29. Bind mounts
Bind mounts have been around since the early days of Docker.
When you use a bind mount, a file or directory on the host machine is mounted into a container. The file or directory is referenced by its absolute path on the host machine.
By contrast, when you use a volume, a new directory is created within Docker's storage directory on the host machine, and Docker manages that directory's contents.
The file or directory does not need to exist on the Docker host already. It is created on demand if it does not yet exist.
Bind mounts are very performant, but they rely on the host machine's filesystem having a specific directory structure available.
If you are developing new Docker applications, consider using named volumes instead.
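A bind-mount sketch (the host path and container name are illustrative), serving a host directory through nginx:
docker run -d --name site --mount type=bind,source=/home/user/site,target=/usr/share/nginx/html nginx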
30. Docker volumes
Volumes are the preferred mechanism for persisting data generated by and used by Docker containers.
While bind mounts are dependent on the directory structure and OS of the host machine, volumes are completely managed by Docker.
Volumes have several advantages over bind mounts:
● Volumes are easier to back up or migrate than bind mounts.
● You can manage volumes using Docker CLI commands or the Docker API.
● Volumes work on both Linux and Windows containers.
● Volumes can be more safely shared among multiple containers.
● Volume drivers let you store volumes on remote hosts or cloud providers, encrypt the contents of volumes, or add other functionality.
● New volumes can have their content pre-populated by a container.
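For illustration (volume and container names are hypothetical), creating a named volume and attaching it to a MySQL container:
docker volume create mysql-data
docker run -d --name db -v mysql-data:/var/lib/mysql -e MYSQL_ROOT_PASSWORD=secret mysql:5.7
docker volume inspect mysql-data
Deleting and recreating the db container now leaves the data in mysql-data intact.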
34. Docker network basics
Docker networking is used to connect Docker containers with each other and with the outside world.
Docker uses the CNM (Container Network Model) for networking. This model standardizes the steps required to provide networking for containers using multiple network drivers.
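The default networks Docker creates (bridge, host and none) can be listed and inspected from the CLI:
docker network ls
docker network inspect bridge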
35. Bridge networking
The bridge network is the default network a container is attached to when you deploy it.
A bridge network uses a software bridge that allows containers connected to the same bridge network to communicate.
Bridge networks are used for containers running on the same Docker daemon host.
The bridge network creates a private internal network on the host, so containers on this network can communicate with each other.
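As a sketch (network and container names are illustrative), containers on the same user-defined bridge network can reach each other by name:
docker network create my-bridge
docker run -d --name api --network my-bridge nginx
docker run -it --rm --network my-bridge alpine ping api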
36. Host networking
Host networking removes any network isolation between the Docker host and the Docker containers.
Host mode networking can be useful to optimize performance. It does not require network address translation (NAT).
The host networking driver only works on Linux hosts, and is not supported on Docker Desktop for Mac, Docker Desktop for Windows, or Docker EE for Windows Server.
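For example, running nginx directly on the host's network stack, so it listens on host port 80 with no -p mapping needed:
docker run -d --network host nginx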
37. Overlay networking
Overlay networking is used when a container on node A needs to talk to a container on node B: an overlay network spans multiple Docker hosts and enables communication between them.
Overlay networking uses VXLAN to create the overlay network.
This has the advantage of providing maximum portability across various cloud and on-premises networks.
Overlay network traffic can be encrypted with the AES algorithm by creating the network with the --opt encrypted flag.
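A sketch, assuming the hosts have already been joined into a swarm (overlay networks require swarm mode); the network and service names are illustrative:
docker network create -d overlay --opt encrypted my-overlay
docker service create --name web --network my-overlay nginx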
38. Exposing ports
By default, when we create a container, it doesn't publish or expose the application ports running in the container.
We can access these applications only from within the Docker host, not from other systems on the network.
You can explicitly bind a port or group of ports from the container to the host using the -p flag.
docker run [...] -p 8000:5000 docker.io/httpd
42. Docker compose
Docker Compose is a tool that was developed to help define and share multi-container applications.
With Compose, we can create a YAML file to define the services and, with a single command, spin everything up or tear it all down.
Each of the containers runs in isolation but can interact with the others when required.
Docker Compose files are written in YAML, a human-readable data serialization language whose name stands for "YAML Ain't Markup Language".
43. version: "3.7"
services:
app:
image: node:12-alpine
command: sh -c "yarn install && yarn run dev"
ports:
- 3000:3000
working_dir: /app
volumes:
- ./:/app
environment:
MYSQL_HOST: mysql
MYSQL_USER: root
MYSQL_PASSWORD: secret
MYSQL_DB: todos
mysql:
image: mysql:5.7
volumes:
- mysql-data:/var/lib/mysql
environment:
MYSQL_ROOT_PASSWORD: secret
MYSQL_DATABASE: todos
volumes:
mysql-data:
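Saved as docker-compose.yml, the whole stack above can be brought up or torn down with a single command (docker-compose with the older standalone binary):
docker compose up -d
docker compose down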
45. Docker swarm
Docker Swarm is a container orchestration tool, meaning that it allows the user to manage multiple containers deployed across multiple host machines.
A Docker Swarm is a group of either physical or virtual machines that are running the Docker application and that have been configured to join together in a cluster.
Once a group of machines has been clustered together, you can still run the Docker commands that you're used to, but they will now be carried out by the machines in your cluster.
The activities of the cluster are controlled by a swarm manager, and machines that have joined the cluster are referred to as nodes.
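For illustration (the service name is hypothetical), initializing a swarm on a manager node and running a replicated service:
docker swarm init
docker service create --name web --replicas 3 -p 80:80 nginx
docker service ls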
46. Kubernetes
Kubernetes is an open source system to deploy, scale, and manage containerized applications.
It automates operational tasks of container management and includes built-in commands for deploying applications, rolling out changes to your applications, scaling your applications up and down to fit changing needs, monitoring your applications, and more.
Application developers, IT system administrators and DevOps engineers use Kubernetes to automatically deploy, scale, maintain, schedule and operate multiple application containers across clusters of nodes.
Containers run on top of a common shared operating system (OS) on host machines but are isolated from each other unless a user chooses to connect them.
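As a minimal sketch of the kubectl workflow, assuming access to a running cluster (the deployment name is illustrative):
kubectl create deployment web --image=nginx
kubectl scale deployment web --replicas=3
kubectl expose deployment web --port=80
kubectl get pods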