Hands-on lab discovering containers (through Docker), the need for container orchestration (using Kubernetes), and the place for a container PaaS (via OpenShift)
OpenShift In a Nutshell - Episode 01 - Introduction (Behnam Loghmani)
Episode 01 of "OpenShift in a nutshell" presentations in Iran OpenStack community group
This episode is about different versions of OpenShift, supported platforms, terminology and architecture of OpenShift.
I hope you will find it useful.
OpenShift, Docker, Kubernetes: The next generation of PaaS (Graham Dumpleton)
The document discusses how platforms like OpenShift, Docker, and Kubernetes have evolved from earlier PaaS technologies to provide next-generation platforms that enable automated builds, deployments, orchestration, and security across containers. It notes how these platforms allow applications to be deployed using custom strategies rather than being constrained to a single way of working, and how they integrate with existing CI/CD tools. The document encourages gradually adopting new tooling as it makes sense and provides various resources for trying OpenShift.
This document summarizes an agenda for a Basefarm Tech MeetUp on OpenShift. The agenda includes welcome remarks, presentations on DevOps, microservices, containers and OpenShift architecture from Red Hat speakers, and a live demo of a "Safely Agile" application on OpenShift. Basefarm also provides OpenShift installation and operations services to help customers implement and manage OpenShift platforms.
An application's path to production does not end with a deployment, even if you are using Kubernetes (K8s) as your application deployment platform. A reliable BCDR (backup and disaster recovery) plan and framework is a must for any production-ready system.
This presentation accompanies meetups and webinars in which Oleg Chunikhin, CTO at Kublr, shows how Velero BCDR framework works and demonstrates how it can be used to backup and recover realistic applications running on Kubernetes in different clouds and environments.
What is covered:
- general notions of Kubernetes applications BCDR
- Velero BCDR framework
- demo Velero BCDR for stateful applications running on AWS and Azure clouds
- demo Velero BCDR using Strimzi / Kafka cluster and ArgoCD CI/CD manager as example application
Kubernetes or OpenShift - choosing your container platform for Dev and Ops (Tomasz Cholewa)
Kubernetes has become the most popular choice among container orchestrators, with a strong community and growing numbers of production deployments. There is no shortage of K8s distributions (20+ and counting at the moment), ranging from those that simply add toolsets to full products that embed Kubernetes and add many more features. In this presentation, you'll learn about OpenShift and how it compares to vanilla Kubernetes - their major differences, best features and how they can help to build a consistent platform for Dev and Ops cooperation.
DevOps @ OpenShift Online
Presenter: Adam Miller
As the Release Engineer and a member of the Operations team for OpenShift Online, a downstream consumer of OpenShift Origin and the largest public implementation of OpenShift to date, Adam Miller will discuss what it's like behind the scenes at OpenShift.com, share lessons learned, and offer his thoughts and feedback on the future direction of Origin.
Presentation given at Open Source Summit Japan 2016 about the state of the cloud native technology (Cloud Native Computing Foundation) and the standardization of container technology (Open Container Initiative)
Deploying OpenStack Services with Linux Containers - Brisbane OpenStack Meetu... (Ken Thompson)
The Kolla Project aims to deploy OpenStack services using Docker containers to reduce complexity. Packaging services in containers bundles them with their dependencies, making deployment and management easier. Kubernetes can orchestrate containers at scale across hosts, while Atomic provides a lightweight container-hosting environment with security, isolation, and portability across systems.
The document discusses serverless computing and functions as a service (FaaS) platforms. It provides an overview of Amazon Lambda including details on storage limits, duration limits, memory sizes, and how functions are executed in containers. It also summarizes the key points of Amazon's serverless manifesto and lists several FaaS providers and event sources. The document raises questions about the challenges of orchestrating containers at scale and implementing serverless architectures for low latency and state management.
OpenShift Overview Presentation by Marek Jelen for Zurich Geeks Event (OpenShift Origin)
The document discusses OpenShift, Red Hat's free Platform as a Service (PaaS) for deploying applications in the cloud. It provides an overview of what cloud and PaaS are, and explains that OpenShift allows developers to easily deploy and automatically scale their applications. The document notes that OpenShift has a free tier for development use and more resources can be accessed by signing up. It also shares ways developers can install OpenShift locally for experimentation purposes using Vagrant.
Presentation given at the Melbourne Docker Meetup on container-related projects within OpenStack. Specifically looking at Project Magnum and Project Kolla and how they are leveraging technologies like Docker, Kubernetes and Atomic.
DockerCon EU 2015: Speed Up Deployment: Building a Distributed Docker Registr... (Docker, Inc.)
This document discusses how to build a distributed Docker registry at scale. It describes the basic Docker registry setup and how to scale globally by using multiple registry instances across different locations, load balancing requests between them, and replicating registry data between instances using SnapMirror for high availability and disaster recovery. The full Nginx configuration for load balancing registry requests is also provided.
DevOps, PaaS and the Modern Enterprise CloudExpo Europe presentation by Diane... (OpenShift Origin)
The rise in application complexity is answered by the emergence of DevOps and simplified by adding a PaaS, bringing agility, speed, and compliance to the modern enterprise.
This document discusses Red Hat's cloud platforms, including Infrastructure as a Service (OpenStack), Platform as a Service (OpenShift), and container technologies. It notes that business demands are driving IT transformation toward cloud-based architectures using open source technologies. Red Hat is a top contributor to OpenStack and OpenShift and offers integrated products like Red Hat Atomic Enterprise and OpenShift Enterprise to help customers deploy and manage container-based applications at scale across hybrid cloud environments.
DevOps Best Practices with Openshift - DevOpsFusion 2020 (Andreas Landerer)
This document discusses DevOps best practices using OpenShift. It describes setting up a CI/CD pipeline with Jenkins on OpenShift to build and deploy a sample application. The pipeline builds a Docker image using OpenShift build configs and deploys the application. It also discusses logging, metrics, distributed tracing and avoiding emulating others' practices without considering your own needs.
Rancher provides a complete container management platform that simplifies deploying and managing containers in production. It offers robust container orchestration with Kubernetes, allowing deployment of container workloads within five minutes. Rancher provides a centralized interface for managing the entire container stack, including orchestration, security, networking, storage and monitoring tools. It has gained popularity with over 30 million downloads and support for over 100 enterprise customers.
Kangaroot open shift best practices - straight from the battlefield (Kangaroot)
This document discusses best practices for Day 2 operations on OpenShift infrastructure from experts with 20 years of experience in Linux and open source. It provides recommendations around designing highly available etcd clusters, implementing federated Prometheus monitoring across multiple clusters using Prometheus or Thanos, centralized logging with ElasticStack, persistent storage options, container registry considerations, backup solutions using Minio and Velero, application deployments with GitOps, and secrets storage with Vault. The company also provides 24/7 support for customers.
Transforming Application Delivery with PaaS and Linux Containers (Giovanni Galloro)
This document discusses Red Hat OpenShift Enterprise and how it helps with application delivery using Platform as a Service (PaaS) and Linux containers. It covers OpenShift's architecture using Linux containers, Docker, Kubernetes, and RHEL Atomic Host. It also discusses OpenShift's application deployment flow, adoption trends, and challenges with container adoption as well as Red Hat's strategy to address these challenges through container certification and simplifying adoption for partners.
OpenFaaS (Functions as a Service) is a framework for building serverless functions with Docker which has first-class support for metrics. Any process can be packaged as a function, enabling you to consume a range of web events without repetitive boilerplate coding.
Introduction to the Container Network Interface (CNI) (Weaveworks)
CNI, the Container Network Interface, is a standard API between container runtimes and container network implementations. These slides are from the Cloud Native Computing Foundation's Webinar, and explain what CNI is, how you use it, and what lies ahead on the roadmap.
Red Hat OpenShift on Bare Metal and Containerized Storage (Greg Hoelzer)
OpenShift Hyper-Converged Infrastructure allows building a container application platform from bare metal using containerized Gluster storage without virtualization. The document discusses building a "Kontainer Garden" test environment using OpenShift on RHEL Atomic hosts with containerized GlusterFS storage. It describes configuring and testing the environment, including deploying PHP/MySQL and .NET applications using persistent storage. The observations are that RHEL Atomic is mature enough to evaluate for containers, and Docker/Kubernetes with containerized storage provide an alternative to virtualization for density and scale.
OpenShift is Red Hat's container application platform that provides a full-stack platform for deploying and managing containerized applications. It is based on Docker and Kubernetes and provides additional capabilities for self-service, automation, multi-language support, and enterprise features like authentication, centralized logging, and integration with Red Hat's JBoss middleware. OpenShift handles building, deploying, and scaling applications in a clustered environment with capabilities for continuous integration/delivery, persistent storage, routing, and monitoring.
DevConf 2017 - Realistic Container Platform Simulations (Jeremy Eder)
The document discusses realistic container platform simulations presented at DevConf 2017. It describes the aos-scalability team and adjunct professors presenting on workload classification, test utilities, and a demo. The team's focus is on classifying workloads and developing test harnesses like the System Verification Test Suite and cluster-loader tool to simulate thousands of deployment configurations. Gold mining successes are also summarized, including addressing performance issues with iptables, HAProxy, logging, and metrics.
This document discusses testing Kubernetes and OpenShift at scale. It describes installing large clusters of 1000+ nodes, using scalability test tools like the Kubernetes performance test repo and OpenShift SVT repo to load clusters and generate traffic. Sample results show loading clusters with thousands of pods and projects, and peaks in master node resource usage when loading and deleting hundreds of pods simultaneously.
Red Hat OpenShift V3 Overview and Deep Dive (Greg Hoelzer)
OpenShift is a platform as a service product from Red Hat that allows developers to easily deploy and manage applications using containers. It provides developers with a common platform to build, deploy and update applications quickly using containers. For IT operations, OpenShift improves efficiency and infrastructure utilization through automated provisioning and management of application services. Some key customers highlighted include a large enterprise software company, a major online travel agency, and a leading financial analytics software provider.
KubeCon NA, Seattle, 2016: Performance and Scalability Tuning Kubernetes for... (Jeremy Eder)
Learn tips and tricks on how to best configure and tune your container infrastructure for maximum performance and scale. The Performance Engineering Group at Red Hat is responsible for performance of the complete container portfolio, including Docker, RHEL Atomic, Kubernetes and OpenShift. We will share:
- Latest performance features in OpenShift, Docker and RHEL Atomic, plus tips and tricks on how to best configure and tune your system for maximum performance and scale
- Latest performance and scale test results, using RHEL Atomic, OpenvSwitch, and Cockpit multi-server container management
- A DevOps, Agile approach to performance analysis of OpenShift, Kubernetes, Docker and RHEL Atomic
- Test harness code and example scripts
Audience
The audience is anyone interested in deploying containers to run performance sensitive workloads, as well as architecting highly scalable distributed systems for hosting those workloads. This includes workloads that require NUMA awareness, direct hardware access and kernel-bypass I/O.
Microservices, DevOps, and Containers with OpenShift and Fabric8 (Christian Posta)
The document discusses microservices, DevOps, and containers. It introduces the speaker, Christian Posta, and his background working with microservices at a large company. It then asks questions about the organization's motivations for considering microservices and discusses challenges with keeping up with change. The document promotes OpenShift and Fabric8 as open-source platforms that can help automate build, deployment, and integration processes in a cloud-native way. It highlights features like CI/CD, management tools, and libraries to simplify developing microservices applications.
Container Storage Best Practices in 2017 (Keith Resar)
Docker Storage Drivers are a rapidly moving target. Considering the addition of new graphdrivers and continued maturing of the existing set, we evaluate how each works, performance implications from their implementation architecture, and ideal use cases for each.
OpenShift is a DevOps platform that provides a container application platform for deploying and managing containerized applications and microservices. It uses Kubernetes for orchestration and Docker containers. OpenShift provides features for the complete application lifecycle including continuous integration/delivery (CI/CD), automated image builds, deployments, networking, authentication, and integration with external services and registries. Developers can create and deploy applications from source code, templates, or Docker images to OpenShift without needing deep knowledge of Docker or Kubernetes.
Importing Code and Existing Containers to OpenShift - Minneapolis Docker Meet... (Keith Resar)
This document is a bio for Keith Resar, who wears many hats as a coder, open source contributor, and infrastructure architect at Red Hat. It discusses his work with containers in development and production using Docker and Kubernetes as well as networking, registries, build automation, CI/CD pipelines, and how OpenShift fits into this landscape.
Achieving Cost and Resource Efficiency through Docker, OpenShift and Kubernetes (Dean Delamont)
The document discusses how adopting containerization and microservices technologies like Docker, Kubernetes, and OpenShift can help organizations achieve cost savings, resource efficiency, reduced complexity, accelerated time to market, and greater portability when deploying solutions on OpenStack. Currently, deploying applications on OpenStack using virtual machines is costly due to high resource usage from large VM sizes, installed operating systems, overprovisioned resources, and maintaining active standby instances. The presentation explores how a container-based approach addresses these issues and improves business outcomes.
[D2 COMMUNITY] Open Container Seoul Meetup - Building Services with Kubernetes and OpenShift (NAVER D2)
Junho Lee is a Solutions Architect who has worked at Rockplace Inc. since 2014. The document compares Kubernetes (k8s), OpenShift, and Google Kubernetes Engine (GKE). k8s is an open-source container cluster manager originally designed by Google. OpenShift is Red Hat's container application platform based on k8s. GKE provides k8s clusters on Google Cloud Platform. Both OpenShift and GKE add services on top of k8s like app stores, logging, monitoring and technical support. The document outlines the key components, architectures and capabilities of each platform.
Extending DevOps to Big Data Applications with Kubernetes (Nicola Ferraro)
DevOps, continuous delivery and modern architectural trends can incredibly speed up the software development process. Big Data applications cannot be an exception and need to keep the same pace.
OpenShift v3 uses an overlay VXLAN network to connect pods within a project. Traffic between pods on a node uses Linux bridges, while inter-node communication uses the VXLAN overlay network. Services are exposed using a service IP and iptables rules to redirect traffic to backend pods. For external access, services are associated with router pods using a DNS name, and traffic is load balanced to backend pods by HAProxy in the router pod.
Scalable Python with Docker, Kubernetes, OpenShift (Aarno Aukia)
This document summarizes a presentation about scaling Python applications using Docker, Kubernetes, and OpenShift. It discusses how the speaker previously ran Python applications on virtual servers, the shortcomings of that approach, and how containerization tools address those issues. It provides an overview of Docker for building application images, Kubernetes for orchestrating containers, and OpenShift for deploying applications to production. The speaker advocates these tools to gain benefits like continuous deployment, easy scaling, and portability across infrastructures.
More tips and tricks for running containers like a pro - Rancher Online Meetu... (Shannon Williams)
This document outlines the agenda for a Rancher meetup on tips and tricks for running containers like a pro. The agenda includes presentations on integrated secrets management, autoscaling with Rancher webhooks, using Traefik for load balancing, and the Kubernetes dashboard and Helm. It also provides information on the latest Rancher releases.
Whether a startup or a large corporation, employing containerization technology can provide significant advantages in terms of agility, portability, flexibility, and speed. Here are some examples from the real world of how containers are used in various business use cases.
Bahrain ch9 introduction to docker 5th birthday Walid Shaari
A hands-on workshop covering the foundations of the container platform, including an overview of the platform system components: images, containers, repositories, clustering, and orchestration. The strategy is to demonstrate through live demos and hands-on exercises. It also covers the use of containers to build a portable distributed application cluster running a variety of workloads, including HPC workloads.
A Hitchhiker's Guide to the Cloud Native Stack (QAware GmbH)
Devoxx 2017, Poland: Talk by Mario-Leander Reimer (@LeanderReimer, Principal Software Architect at QAware).
Abstract: Cloud native applications are popular these days. They promise superior reliability and almost arbitrary scalability. They follow three key principles: they are built and composed as microservices. They are packaged and distributed in containers. The containers are executed dynamically in the cloud. But which technology is best to build this kind of application? This talk will be your guidebook.
In this hands-on session, we will briefly introduce the core concepts and some key technologies of the cloud native stack and then show how to build, package, compose and orchestrate a cloud native microservice application on top of a cluster operating system such as Kubernetes. To make this session even more entertaining we will be using off-the-shelf MIDI controllers to visualize the concepts and to remote control a Kubernetes cluster.
The document is a presentation on cloud native applications. It discusses key principles like building microservices, packaging in containers, and dynamic execution in the cloud. It also covers containerization, composition using tools like Docker Compose, and orchestration with Kubernetes. The presentation provides demonstrations of these concepts and recommends designing applications for principles like distribution, performance, automation, and delivery for cloud environments.
- Docker celebrated its 5th birthday with events worldwide including one in Cluj, Romania. Over 100 user and customer events were held.
- The Docker platform now has over 450 commercial customers, 37 billion container downloads, and 15,000 Docker-related jobs on LinkedIn.
- The event in Cluj included presentations on Docker and hands-on labs to learn Docker, as well as social activities like taking selfies with a birthday banner.
An introduction to the open source project that empowers modern workflows to build, deploy and manage the lifecycle of containers. You will learn what OpenShift is, what its use cases are, and more about all the fuss around cloud computing, microservices, DevOps and whatnot.
This document provides a summary of a presentation about Microsoft's focus on Linux, open source, cloud and DevOps technologies. The presentation introduces the speaker and their background, then discusses how cloud computing represents a new way to think about datacenters. It outlines key DevOps practices like infrastructure as code and continuous integration/deployment. It demonstrates tools for containerization including Kubernetes and Helm. Finally, it discusses how tools like Draft and the Open Service Broker for Azure can simplify developing and deploying applications on Kubernetes clusters.
Docker Bday #5, SF Edition: Introduction to Docker (Docker, Inc.)
In celebration of Docker's 5th birthday in March, user groups all around the world hosted birthday events with an introduction to Docker presentation and hands-on-labs. We invited Docker users to recognize where they were on their Docker journey and the goal was to help them take the next step of their journey with the help of mentors. This presentation was done at the beginning of the events (this one is from the San Francisco event in HQ) and gives a run down of the birthday event series, Docker's momentum, a basic explanation of containers, the benefits of using the Docker platform, Docker + Kubernetes and more.
Studio 5000® Application Code Manager: Introduction and Demonstration (Rockwell Automation)
This session combines presentation with instructor-led demonstration of Application Code Manager (ACM) capability. This session will cover how to quickly build your automation projects using reusable code stored in libraries. See how configuration, not programming, is used by selecting library objects (control modules, equipment modules, etc.) and providing configuration data, such as object name and descriptions, equipment set points, control interlocks, I/O mapping, etc., required for your project. Once all the configuration, not programming, is provided the project will be built (ACD file) which can be downloaded to a controller.
Microservices Training | Microservices Docker Example | Microservices Tutoria... (Edureka!)
(Microservices Architecture Training - https://www.edureka.co/microservices-architecture-training)
This Edureka Microservices Training (Microservices Blog Series: https://goo.gl/WA5k9u) will help you to implement Microservices with the help of Docker and Node Js.
This video helps you to learn the following topics:
• Use Case
• Before and After Microservices
• Microservices Architecture
• What Is Docker
• How Docker Is Useful for Microservices
• Implementation of The Use Case
• Edureka’s Microservices Course Content
Check out our Microservices Tutorial for Beginners video: https://www.youtube.com/watch?v=L4aDJtPYI8M
This document summarizes a presentation about deploying microservices applications with Docker. The key points are:
- Microservices break applications into small, independent services that communicate over a network. Containers automate deploying applications as lightweight, portable packages.
- Docker solves issues like dependency problems and allows building once and running anywhere through containerization. A container pipeline shows the flow from base images to developer apps to operations.
- Docker Compose allows running a multi-container microservices app with one command by defining services and their dependencies in a compose file.
- The presenter demonstrates a sample Node.js microservices app deployed with Docker Compose, linking services like Nginx, Node apps
CWIN17 london becoming cloud native part 2 - guy martin docker (Capgemini)
This document discusses how organizations can become cloud native by embracing the full opportunity from cloud. It identifies six key steps: 1) delivering business visible and impactful benefits, 2) technical solutions that deliver the business case, 3) empowering a dedicated cloud services team, 4) creating a cloud service vending machine, 5) establishing a blueprint for integrating cloud into existing IT, and 6) implementing automated application and infrastructure pipelines. It then discusses how Docker can help organizations modernize traditional applications and build a secure software supply chain through containerization.
Docker Enterprise Edition Overview by Steven Thwaites, Technical Solutions En... (Ashnikbiz)
This was presented by Steven Thwaites, Technical Solutions Engineer at Docker at Cloud Expo Asia. Docker is the only Containers-as-a-Service platform for IT that manages and secures diverse applications across disparate infrastructure, both on-premises and in the cloud. It covers topics like:
- VMs vs Containers
- The Docker Ecosystem
- How to Build and Ship your Docker Image
- Unique Advantages with Docker EE and more
Docker Birthday #3 - Intro to Docker Slides (Docker, Inc.)
High level overview of Docker + Birthday #3 overview (app and challenge portion)!
Learn more about Docker Birthday #3 celebrations here: https://www.docker.com/community/docker-birthday-3
The document provides an agenda and information for Docker Birthday #3 event. The agenda includes an introduction to the Docker ecosystem, learning Docker with a birthday app training, a birthday app challenge, and socializing. The training involves building and deploying a simple voting app locally using Docker Toolbox to demonstrate Docker basics. Participants can then submit hacks or improvements to the app for prizes by the deadline. Mentors will be available to help beginners complete the training.
DockerCon EU 2017 - General Session Day 1 (Docker, Inc.)
This document discusses Docker and its container platform. It highlights Docker's momentum in the industry with over 21 million Docker hosts and 24 billion container downloads. The document then summarizes Docker's container platform and how it enables applications across diverse infrastructures and throughout the lifecycle. It also discusses how Docker can help modernize traditional applications and provide portability, agility and security. The remainder of the document focuses on how MetLife leveraged Docker to containerize applications, seeing benefits like a 70% reduction in VMs and 66% reduction in costs. It outlines Docker Enterprise Edition and its value in areas like security, multi-tenancy, policy automation and management capabilities for Swarm and Kubernetes.
This document discusses innovation with open source tools and application modernization. It begins by outlining the challenges of cloud migration versus modernization. It then covers how applications have shifted from monolithic to microservices architectures using containers and Kubernetes. Various scenarios for containerization and app modernization are presented, including lift-and-shift, microservices, machine learning, and serverless architectures. Microsoft Azure tools that can help with containerization, Kubernetes management, DevOps, and app modernization are also described. The document emphasizes that open source tools and containers allow developers to innovate faster while Azure services provide security, management and governance.
Docker provides a platform for building, shipping, and running distributed applications across environments using containers. It allows developers to quickly develop, deploy and scale applications. Docker DataCenter delivers Docker capabilities as a service and provides a unified control plane for both developers and IT operations to standardize, secure and manage containerized applications. It enables organizations to adopt modern practices like microservices, continuous integration/deployment and hybrid cloud through portable containers.
DevFestMN 2017 - Learning Docker and Kubernetes with OpenShift
1. LEARNING DOCKER AND KUBERNETES WITH OPENSHIFT
A Hands-on Lab Exclusively for DevFestMN
Keith Resar
Container PaaS Solution Architect
February 4th, 2017
@KeithResar [email protected]
3. @KeithResar
1: GETTING TO CONTAINERS
THE BASICS, WHERE WE EXPLORE "WHY CONTAINERS?" AND "WHY ORCHESTRATION?"
2: ARCHITECTURE AND DISCOVERY LAB
DIVE INTO KUBERNETES, OPENSHIFT
3: SOURCE TO IMAGE AND APP LAB
FROM SOURCE CODE TO RUNNING APP
35. Source 2 Image Walk Through
Code
Developers can leverage existing development tools and then access the OpenShift Web, CLI or IDE interfaces to create new application services and push source code via GIT. OpenShift can also accept binary deployments or be fully integrated with a customer's existing CI/CD environment.
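A rough sketch of that developer flow with the oc client (the master URL, project, application name, and Git repository below are illustrative, not taken from the lab):

# Log in and create a project to hold the application (URL and names illustrative)
oc login https://master.example.com:8443 -u developer
oc new-project s2i-demo

# Point OpenShift at a Git repository; S2I pairs the source with a builder
# image (nodejs here) and creates the build config, deployment config,
# image stream and service in one step.
oc new-app nodejs~https://github.com/sclorg/nodejs-ex.git --name=myapp

# Follow the build that assembles the application image from the pushed source
oc logs -f bc/myapp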
36. Source 2 Image Walk Through
Container Image Registry
Build
OpenShift automates the Docker image build process with Source-to-Image (S2I). S2I combines source code with a corresponding Builder image from the integrated Docker registry. Builds can also be triggered manually or automatically by setting a Git webhook. Build pipelines can also be added.
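A sketch of how builds are started by hand, from a local directory (binary build), or automatically via a webhook, continuing the illustrative myapp example:

# Trigger a build manually and follow its output
oc start-build myapp --follow

# Binary builds: upload a local source directory instead of pulling from Git
oc start-build myapp --from-dir=. --follow

# Automatic builds: the build config carries webhook trigger URLs that can
# be registered in the Git host so that pushes start builds
oc describe bc/myapp | grep -A 1 -i webhook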
37. Source 2 Image Walk Through
Container Image Registry
Deploy
OpenShift automates the deployment of application containers across multiple Node hosts via the Kubernetes scheduler. Users can automatically trigger deployments on application changes and do rollbacks, configure A/B deployments & other custom deployment types.
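A sketch of the corresponding deployment operations, again on the illustrative myapp deployment config:

# Roll out the latest image and watch the deployment progress
oc rollout latest dc/myapp
oc rollout status dc/myapp

# Roll back to the previous deployment if the new one misbehaves
oc rollout undo dc/myapp

# Scale the application; the Kubernetes scheduler spreads the pods across nodes
oc scale dc/myapp --replicas=3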
#3: New release of OCP / OpenShift origin 1.4/3.4 last week.
#8: IT organizations must evolve to meet organizational demands
This is driving many of the conversations we are having with our customers about DevOps and more
Many of these customers are looking to shift their development and deployment processes, from traditional Waterfall development and Agile methods to DevOps and they want to learn how to enable that
We are also seeing customers asking about Microservices, as they switch their application architectures, away from existing monolithic and n-tier applications to highly distributed, componentized services-based apps
On the infrastructure side, we are seeing a lot of interest in Linux Containers, driven by the popularity of Docker, as an alternative to existing virtualization technologies. Containers can be a key enabler of DevOps and Microservices.
And finally we are seeing customers increasingly wanting to deploy their applications across a hybrid cloud environment
#9: IT organizations must evolve to meet organizational demands
This is driving many of the conversations we are having with our customers about DevOps and more
Many of these customers are looking to shift their development and deployment processes, from traditional Waterfall development and Agile methods to DevOps and they want to learn how to enable that
We are also seeing customers asking about Microservices, as they switch their application architectures, away from existing monolithic and n-tier applications to highly distributed, componentized services-based apps
On the infrastructure side, we are seeing a lot of interest in Linux Containers, driven by the popularity of Docker, as an alternative to existing virtualization technologies. Containers can be a key enabler of DevOps and Microservices.
And finally we are seeing customers increasingly wanting to deploy their applications across a hybrid cloud environment
#10: Applying it to commodity hardware makes it cost-effective.
#14: New release of OCP / OpenShift origin 1.4/3.4 last week.
#16: OpenShift Commons is an interactive community for OpenShift Users, Customers, Contributors, Partners, Service Providers and Developers
Commons participants collaborate with Red Hat and other participants to share ideas, code, best practices, and experiences
Get more information at http://origin.openshift.com/commons
#18: Speaker:
* This is a high level architecture diagram of the OpenShift 3 platform. On the subsequent slides we will dive down and investigate how these components interact within an OpenShift infrastructure.
Discussion:
* Set the stage for describing the OpenShift architecture.
Transcript:
OpenShift has a complex multi-component architecture. This presentation can be used to help prospects understand how the components work together.
#19: Speaker:
* From bare metal physical machines to virtualized infrastructure, or in private or certified public clouds, OpenShift is supported anywhere that Red Hat Enterprise Linux is.
* This includes all the supported virtualization platforms - RHEV, vSphere or Hyper-V.
* Red Hat’s OpenStack platform and certified public cloud providers like Amazon, Google and more are supported, too.
* You can even take a hybrid approach and deploy instances of OpenShift Enterprise across all of these infrastructures.
Discussion:
* Show the flexibility of deploying OpenShift
* Technically only x86 platforms are supported.
Transcript:
OpenShift is fully supported anywhere Red Hat Enterprise Linux is. Hybrid deployments across multiple infrastructures can be achieved, but many customers are still adopting OpenShift inside their existing, traditional virtualized environments.
#20: Speaker:
* OpenShift has two types of systems. The first are called nodes.
* Nodes are instances of RHEL 7 or RHEL Atomic with the OpenShift software installed.
* Ultimately, Nodes are where end-user applications are run.
Discussion:
* The Nodes are what the Masters will orchestrate - you will learn about Masters shortly.
* Nodes are just instances of RHEL or Atomic.
* OpenShift’s node daemon and other software runs on a node.
Transcript:
OpenShift can run on either RHEL or RHEL Atomic. Nodes are just instances of RHEL or Atomic that will ultimately host application instances in containers.
#21: Speaker:
* Application instances and application components run in Docker containers
* Each OpenShift Node can run many containers
Discussion:
* A node’s capacity is related to the memory and CPU capabilities of the underlying “hardware”
Transcript:
All of the end-user application instances and application components will run inside Docker containers on the nodes. RHEL instances with bigger CPU and memory footprints can run more applications.
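A quick way to see that capacity from the command line (the node name is illustrative):

# List nodes and inspect the resources the scheduler can use on one of them
oc get nodes
oc describe node node1.example.com | grep -A 4 -i capacity
# The Capacity/Allocatable sections report CPU, memory, and the maximum
# number of pods the node can host.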
#22: Speaker:
* While app components run in Docker containers, the “unit” that OpenShift is orchestrating and managing is called a Pod
* In this case, a Docker image is the executable or runnable components, and the container is the actual running instance with runtime and environment parameters
* One or more containers make up a Pod, and OpenShift will schedule and run all containers in a Pod together
* Complex applications are made up of many Pods each with their own containers, all interacting with one another inside an OpenShift environment
Discussion:
* OpenShift consumes Docker Images and runs them in containers wrapped by the meta object called a Pod.
* The Pod is what OpenShift schedules, manages and runs.
* Pods can have multiple containers but there are not many well-defined use cases
* Services, described soon, are how different application components are “wired” together. Different application components (eg: app server, db) are not placed in a single Pod
Transcript:
OpenShift can use any native Docker image, and schedules and runs containers in a unit called a Pod. While Pods can have multiple containers, generally a Pod should provide a single function, like an app server, as opposed to multiple functions, like a database and an app server. This allows for individual application components to be easily scaled horizontally.
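For illustration, a minimal single-container Pod can be created directly (in practice OpenShift creates Pods for you from deployment configs); the name and image below are just a common sample:

# A minimal pod: one container, one exposed port
oc create -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: hello-pod
  labels:
    app: hello
spec:
  containers:
  - name: web
    image: openshift/hello-openshift
    ports:
    - containerPort: 8080
EOF

# Shows which node the pod was scheduled onto
oc get pods -o wide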
#23: Speaker:
* The other type of OpenShift system is the Master
* Masters keep and understand the state of the environment and orchestrate all activities on the Nodes
* Masters are also instances of RHEL or Atomic, and multiple masters can be used in an environment for high availability
* Masters have four primary jobs or functions
Discussion:
* The OpenShift Master is the orchestrator of the entire OpenShift environment.
* The OpenShift Master knows about and maintains state within the OpenShift environment.
* Just like Nodes, Masters run on RHEL or Atomic
* You will discuss the four primary functions of the Master in the next slides
Transcript:
Just like Nodes, the OpenShift Master is installed on RHEL or Atomic. The Master is the orchestration and scheduling engine for OpenShift, and is responsible for knowing and maintaining the state of the OpenShift environment. The Master has four primary functions which you will now describe.
#24: Speaker:
* The Master provides the single API that all tooling and systems interact with. Everything must go through this API.
* All API requests are SSL-encrypted and authenticated. Authorizations are handled via fine-grained role-based access control (RBAC)
* The Master can be tied into external identity management systems, from LDAP and AD to OAuth providers like GitHub and Google
Discussion:
* All requests go through the Master’s API
* The Master evaluates requests for both AuthenticatioN (AuthN - you are who you say you are) and AuthoriZation (AuthZ - you're allowed to do what you requested)
* The Master can be tied into external identity management systems like LDAP/AD, OAuth, and more
* Apache modules can also be used for authentication in front of the API
Transcript:
The Master is the gateway to the OpenShift environment. All requests must go through the Master and must be both authenticated and authorized. A wide array of external identity management systems can be the source of authentication information in an OpenShift environment.
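A sketch of that authentication and authorization flow from the CLI (the master URL, user, and project names are illustrative):

# Authenticate against the master API
oc login https://master.example.com:8443 -u developer
oc whoami        # the authenticated identity
oc whoami -t     # the bearer token every CLI, IDE, and web console call carries

# Authorization is role-based: grant another user the 'edit' role,
# scoped to a single project
oc policy add-role-to-user edit alice -n s2i-demo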
#25: Speaker:
* The desired and current state of the OpenShift environment is held in the data store
* The data store uses etcd, a distributed key-value store
* Other things like the RBAC rules, application environment information, and non-application-user-data are kept in the data store
Discussion:
* etcd holds information about the desired and current state of OpenShift
* Additionally, user account info, RBAC rules, environment variables, secrets, and many other bits of OpenShift information are held in the data store
* More generally, any OpenShift data object is stored in etcd
Transcript:
Etcd is another of the critical components of the OpenShift architecture. It is the distributed key-value data store for state and other information within the OpenShift environment. Reads and writes of information generally are going to hit the data store.
#26: Speaker:
* The scheduler portion of the Master is the specific component that is responsible for determining Pod placement
* The scheduler takes the current memory, CPU and other environment utilization into account when placing Pods on the various nodes
Discussion:
* The scheduler is responsible for Pod placement
* Current CPU, Memory and other environment utilization is considered during the scheduling process
Transcript:
The OpenShift scheduler uses a combination of configuration and environment state to determine the best fit for running Pods across the Nodes in the environment.
#27: Speaker:
* The real-world topology of the OpenShift deployment (regions, zones, etc.) is used to inform the configuration of the scheduler
* Administrators can configure complex scenarios for scheduling workloads
Discussion:
* The scheduler is configured with a simple JSON file in combination with node labels to carve up the OpenShift environment to make it look like the real world topology
Transcript:
The topology of the real-world environment is used by platform administrators to determine how to configure the scheduler and the Node labels.
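A sketch of mirroring that topology with node labels and steering a workload onto part of the cluster (node names and labels are illustrative):

# Label nodes to reflect the real-world topology
oc label node node1.example.com region=primary zone=east
oc label node node2.example.com region=primary zone=west

# Constrain a deployment to that slice of the cluster with a node selector
oc patch dc/myapp -p '{"spec":{"template":{"spec":{"nodeSelector":{"region":"primary"}}}}}'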
#28: Speaker:
* OpenShift’s service layer enables application components to easily communicate with one another
* The service layer provides for internal load balancing as well as discovery and code reusability
Discussion:
* The service layer is used to connect application components together
* OpenShift automatically injects some service information into running containers to provide for ease of discovery
* Services provide for simple internal load balancing across application components
Transcript:
OpenShift's service layer is how application components communicate with one another. A front-end web service would connect to database instances by communicating with the database service. OpenShift would automatically handle load balancing across the database instances. Service information is injected into running containers and provides for ease of application discovery.
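Continuing the illustrative myapp example (and assuming oc new-app created a service for it), the service and the discovery information injected into containers can be inspected like this:

# The service fronts the application's pods with a stable virtual IP
oc get svc myapp
oc describe svc myapp    # lists the current healthy pod endpoints

# Other containers in the project can reach it by service name, and OpenShift
# injects variables such as MYAPP_SERVICE_HOST / MYAPP_SERVICE_PORT
POD=$(oc get pods -l app=myapp -o jsonpath='{.items[0].metadata.name}')
oc exec $POD -- env | grep -i myapp_service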
#29: Speaker:
* The Master is responsible for monitoring the health of Pods, and for automatically scaling them as desired
* Users configure Pod probes for liveness and readiness
* Currently pods may be automatically scaled based on CPU utilization
Discussion:
* The Master handles checking the health of pods by executing the defined liveness and readiness probes
* Probes can be easily defined by users
* Autoscaling is available today against CPU only
* The next slides will depict a health event for you to describe
Transcript:
The OpenShift Master is capable of monitoring application health via user-defined Pod probes. The Master is also capable of scaling out Pods based on CPU utilization metrics.
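A sketch of defining those probes and an autoscaler on the illustrative myapp deployment (the health endpoint, delays, and thresholds are made up):

# Attach user-defined readiness and liveness probes
oc set probe dc/myapp --readiness --get-url=http://:8080/healthz --initial-delay-seconds=5
oc set probe dc/myapp --liveness --get-url=http://:8080/healthz --initial-delay-seconds=30

# Autoscale on CPU utilization between 1 and 5 replicas
oc autoscale dc/myapp --min=1 --max=5 --cpu-percent=80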
#30: Speaker:
* What happens when the Master sees that a Pod is failing its probes?
* What happens if containers inside the Pod exited because of a crash or other issue?
Discussion:
* Here you are describing the beginning of how the Master detects failures and remediates them
Transcript:
This first slide allows you to ask the question “What happens when…?”. The next slide will describe the remediation process for failed Pods.
#31: Speaker:
* The Master will automatically restart Pods that have failed probes or possibly exited due to container crashes.
* Pods that fail too often are marked as bad and are temporarily not restarted.
* OpenShift’s service layer makes sure that it only sends traffic to healthy Pods, maintaining component availability -- all orchestrated by the Master automatically.
Discussion:
* The Master will restart Pods that are determined to be failing, for whatever reason.
* The Service layer is automatically updated with only healthy Pods as endpoints.
* All of this is done automatically.
* Pods that continually fail are not restarted for a while. This is a “crash loop back-off”.
Transcript:
The OpenShift Master is capable of remediating Pod failures automatically. It manages the traffic through the Service layer to ensure application availability and handles restarting Pods, all automatically and without any user intervention.
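One simple way to watch this remediation in a lab setting (the pod name below is a placeholder):

# Delete one pod and watch the Master schedule a replacement automatically
oc get pods -l app=myapp
oc delete pod <one-of-the-myapp-pods>
oc get pods -l app=myapp -w

# A pod that keeps failing shows CrashLoopBackOff in this listing, and
# 'oc describe pod <name>' lists the probe failures and restart events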
#32: Speaker:
* Applications are only as useful as the data they can manipulate.
* Docker containers are natively ephemeral - data is not saved when containers are restarted or created.
* OpenShift provides a persistent storage subsystem that will automatically connect real-world storage to the right Pods, allowing for stateful applications to be used on the platform.
* A wide array of persistent storage types are usable, from raw devices (iSCSI, FC) to enterprise storage (NFS) to cloud-type options (Gluster/Ceph, EBS, pDisk, etc)
Discussion:
* OpenShift’s persistent volume system automatically connects storage to Pods
* The persistent volume system allows stateful apps to be used on the platform
* OpenShift provides flexibility in the types of storage that Pods can consume
Transcript:
OpenShift’s persistent volume system allows end users to consume a wide array of storage types to enable stateful applications to be run on the platform. Whether OpenShift is running locally in the datacenter or in the cloud, there are storage options that can be used.
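A sketch of requesting storage with a persistent volume claim and mounting it into the illustrative myapp pods (the size and mount path are made up):

# Request storage through a persistent volume claim
oc create -f - <<'EOF'
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: myapp-data
spec:
  accessModes:
  - ReadWriteOnce
  resources:
    requests:
      storage: 1Gi
EOF

# Mount the claim into the application's pods
oc set volume dc/myapp --add --name=data --type=persistentVolumeClaim \
  --claim-name=myapp-data --mount-path=/var/lib/myapp-data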
#33: Speaker:
* Not every consumer of applications exists inside the OpenShift platform. External clients need to be able to access things running inside OpenShift
* The routing layer is a close partner to the service layer, providing automated load balancing to Pods for external clients
* The routing layer is pluggable and extensible if a hardware or non-OpenShift software router is desired
Discussion:
* The OpenShift router runs in pods on the platform itself, but receives traffic from the outside world and proxies it to the right pods
* The router uses the service endpoint information to determine where to load balance traffic, but it does not send traffic through the service layer
* The router is built with HAProxy, but is a pluggable solution. Red Hat currently supports F5 integration as another option.
Transcript:
OpenShift’s routing layer provides access for external clients to reach applications running on the platform. The routing layer runs in Pods inside OpenShift, and features similar load balancing and auto-routing around unhealthy Pods as the Service layer.
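A sketch of exposing the illustrative myapp service to external clients through the router (the hostname is made up and assumes DNS points at the router):

# Create a route so external clients can reach the service
oc expose svc/myapp --hostname=myapp.apps.example.com
oc get route myapp      # the DNS name the HAProxy router answers for

# From outside the cluster
curl http://myapp.apps.example.com/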
#34: Speaker:
* All users, either operators, developers, or application administrators, access OpenShift through the same standard interfaces
* Ultimately, the Web UI, CLI and IDEs all go through the authenticated and RBAC-controlled API
* Users do not need system-level access to any of the OpenShift hosts -- even for complicated debugging and troubleshooting.
* Continuous Integration (CI) and Continuous Deployment (CD) systems can be easily integrated with OpenShift through these interfaces, too.
* Operators and Administrators can utilize existing management and monitoring tooling in many ways.
Discussion:
* All interaction with the OpenShift environment and its tools goes through the API and is controlled by the defined RBAC settings.
* The tools and the API provide for ways to access application instances -- even for things like shells and terminals for debugging
* Existing RHEL management tooling and many existing monitoring suites can be integrated with OpenShift
* CI/CD solutions can be integrated with OpenShift to provide for complete automated lifecycle management.
Transcript:
Interacting with OpenShift boils down to interacting with the API, no matter what tools are being used. CI, CD, management, monitoring and other tooling can all go through the OpenShift API for automation purposes. And, since OpenShift is built on top of RHEL, existing systems management and systems monitoring tools can be used, too.
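To illustrate that every tool is just a client of the same API, the session token from the CLI can be reused to call the REST endpoints directly (the master URL and project name are illustrative):

# The web console, CLI and IDEs all go through the master's authenticated REST API;
# the same calls can be scripted with the session's bearer token.
TOKEN=$(oc whoami -t)
curl -k -H "Authorization: Bearer $TOKEN" https://master.example.com:8443/oapi/v1/projects
curl -k -H "Authorization: Bearer $TOKEN" https://master.example.com:8443/api/v1/namespaces/s2i-demo/pods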
#39: OCP Meetup:
Grant Shipley - director of OCP platform
Burr Sutter - director of developer experience
#40: DevFestMN Link Traffic Stats:
http://bit.do/devfestmn-