This is a presentation from the OpenStack Austin Summit. It discusses managing containers in an OpenStack-native way, where containers are treated as first-class citizens.
1. The document discusses integrating Magnum and Senlin for autoscaling containers across multiple levels. Magnum provisions and manages container clusters while Senlin provides clustering and autoscaling capabilities.
2. A design is proposed where Senlin policies and triggers drive autoscaling at both the container and cluster level. Scaling would be coordinated between applications and clusters through control flows in both directions.
3. Integration with existing autoscaling in container orchestration engines like Kubernetes is also considered, where Senlin could leverage autoscalers but also handle scaling requirements not met by the native solutions.
Senlin clustering service deep dive at the Austin summit. This slide deck introduces the background and overall architecture of the Senlin project. It also highlights features released in 1.0.0 and features planned for 2.0.0.
Exploring Magnum and Senlin integration for autoscaling containers - Ton Ngo
2. A key goal is to coordinate scaling at both the application and cluster level based on policies. This includes scaling containers within clusters managed by COEs like Kubernetes, as well as adding or removing nodes to clusters.
3. The demo shows autoscaling a Kubernetes cluster managed by Magnum using Senlin profiles, policies, and triggers to scale pods and nodes based on metrics from Ceilometer. This provides an integrated OpenStack solution for multi-level autoscaling of containers.
This document discusses autoscaling in Kubernetes. It describes horizontal and vertical autoscaling, and how Kubernetes can autoscale nodes and pods. For nodes, it proposes using Google Compute Engine's managed instance groups and cloud autoscaler to automatically scale the number of nodes based on resource utilization. For pods, it discusses using an autoscaler controller to scale the replica counts of replication controllers based on metrics from cAdvisor or Google Cloud Monitoring. Issues addressed include rebalancing pods and handling autoscaling during rolling updates.
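The pod autoscaling described above boils down to a proportional rule: scale the replica count by the ratio of observed to target utilization. A minimal sketch of that rule (an illustration only, not the actual Kubernetes autoscaler code; the function name and tolerance value are assumptions):

```python
import math

def desired_replicas(current_replicas, current_metric, target_metric, tolerance=0.1):
    """Proportional scaling rule: grow or shrink the replica count by the
    ratio of observed to target utilization, leaving it unchanged when the
    ratio is already within tolerance of 1.0."""
    ratio = current_metric / target_metric
    if abs(ratio - 1.0) <= tolerance:
        return current_replicas  # close enough to target: do nothing
    return max(1, math.ceil(current_replicas * ratio))

# Observed CPU at 90% against a 60% target: scale 4 replicas up to 6.
print(desired_replicas(4, current_metric=90, target_metric=60))
# Observed CPU at 30% against a 60% target: scale 4 replicas down to 2.
print(desired_replicas(4, current_metric=30, target_metric=60))
```

The tolerance band prevents the controller from thrashing on small metric fluctuations, which is the same reason production autoscalers add cooldown periods around scale-down decisions.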
Kubernetes is an open source container orchestration system that automates the deployment, maintenance, and scaling of containerized applications. It groups related containers into logical units called pods and handles scheduling pods onto nodes in a compute cluster while ensuring their desired state is maintained. Kubernetes uses concepts like labels and pods to organize containers that make up an application for easy management and discovery.
2016-08-30 Kubernetes talk for Waterloo DevOps - craigbox
This document discusses Kubernetes and container orchestration on Google Cloud Platform. It provides an overview of Kubernetes and how it allows users to manage applications and deploy containers across clusters. Key points include that Kubernetes was created at Google and is now open source, it provides tools for scheduling, load balancing and ensuring availability of containerized applications, and that adoption is growing rapidly across startups and enterprises due to benefits like portability and ease of updating clusters.
Why do containers suddenly matter so much when they have been around since 1998? Take a look at the potential of OpenStack's Magnum, Murano and Nova-Docker in the context leveraging the incredible interest in Linux Containers brought about by Docker.
Check out www.stackengine.com to learn more about our excellent container management solution.
This document discusses Kubernetes and container technologies. It provides an overview of Kubernetes architecture, components like pods and services, and tools for managing Kubernetes clusters. It also discusses running Kubernetes on bare metal and Oracle's Kubernetes installer for easily deploying Kubernetes on Oracle Cloud Infrastructure.
The document discusses challenges of deploying Kubernetes on-premise, including how load balancers are provisioned without cloud providers, using Nginx and Haproxy for load balancing on bare metal. It also covers how persistent volumes are provisioned with CSI drivers like Ember CSI to interface with storage backends, and tools for deploying and managing on-premise Kubernetes clusters like RKE.
The Operator Pattern - Managing Stateful Services in Kubernetes - QAware GmbH
Cloud Native Night, January 2018, Mainz: Talk by Jakob Karalus (@krallistic, IT Consultant at codecentric)
Join our Meetup: https://www.meetup.com/de-DE/Cloud-Native-Night
Abstract: While it's easy to deploy stateless applications with Kubernetes, it's harder for stateful software. Since applications often require custom functionality that Kubernetes can't provide, developers want to add more specialized patterns like automatic backups, failover, or rebalancing to their Kubernetes deployments. In this talk, we will look at the Operator Pattern and other possibilities to extend the functionality of Kubernetes, and how to use them to operate stateful applications.
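At the heart of the Operator Pattern described above sits a reconciliation loop: compare the desired state (declared in custom resources) against the observed state and emit the actions needed to converge. A minimal sketch of one reconciliation step (all names and data shapes here are illustrative, not a real client API):

```python
def reconcile(desired, observed):
    """One reconciliation step: diff desired state against observed state
    and return the (verb, name, spec) actions needed to converge them."""
    actions = []
    for name, spec in desired.items():
        if name not in observed:
            actions.append(("create", name, spec))   # missing entirely
        elif observed[name] != spec:
            actions.append(("update", name, spec))   # present but drifted
    for name in observed:
        if name not in desired:
            actions.append(("delete", name, None))   # no longer wanted
    return actions

desired = {"zk-0": {"replicas": 3}, "backup-job": {"schedule": "@daily"}}
observed = {"zk-0": {"replicas": 1}}
print(reconcile(desired, observed))
```

A real operator runs this loop continuously against the Kubernetes API, which is what lets it encode stateful behaviors like the automatic backups and failover the talk mentions.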
[Spark Summit 2017 NA] Apache Spark on Kubernetes - Timothy Chen
This document summarizes a presentation about running Apache Spark on Kubernetes. It discusses how Spark jobs can be scheduled and run on Kubernetes, including scheduling the driver and executor pods. Key points of the design include the Kubernetes scheduler backend for Spark and components like the file staging server. The roadmap outlines upcoming support for features like Spark Streaming and improvements to dynamic allocation.
How to integrate Kubernetes in OpenStack: you need to know these projects - inwin stack
Kubernetes is an open-source system for automating deployment, scaling, and management of containerized applications, while OpenStack is a free and open-source software platform for cloud computing, networking, and storage. The document discusses different ways to integrate Kubernetes and OpenStack, including using Zun to provide an OpenStack API for launching and managing containers, Magnum to offer container orchestration engines for deploying and managing containers, Kolla and Kolla Kubernetes to deploy OpenStack on Kubernetes, Kuryr Kubernetes to bridge networking models between containers and OpenStack, and Stackube which uses Kubernetes as the compute fabric controller instead of Nova.
Kubernetes is making good on the promise of changing the datacenter from being a group of computers to "a computer" itself. This presentation outlines the new features in the Kubernetes 1.1 and 1.2 releases.
This document discusses Google Kubernetes Engine (GKE). It introduces containers and Kubernetes, then summarizes GKE as a container platform that fully manages master nodes. GKE provides automated operations like cluster autoscaling and node auto-repair. It allows creating multiple node pools with different configurations. GKE also enables high availability clusters across zones and monitoring with Stackdriver. Demos show using GKE to run game servers and implementing continuous integration and delivery pipelines.
Kafka on Kubernetes: Keeping It Simple (Nikki Thean, Etsy) Kafka Summit SF 2019 - confluent
Cloud migration: it's practically a rite of passage for anyone who's built infrastructure on bare metal. When we migrated our 5-year-old Kafka deployment from the datacenter to GCP, we were faced with the task of making our highly mutable server infrastructure more cloud-friendly. This led to a surprising decision: we chose to run our Kafka cluster on Kubernetes. I'll share war stories from our Kafka migration journey, explain why we chose Kubernetes over arguably simpler options like GCP VMs, and present the lessons we learned while making our way toward a stable and self-healing Kubernetes deployment. I'll also go through some improvements in the more recent Kafka releases that make upgrades crucial for any Kafka deployment on immutable and ephemeral infrastructure. You'll learn what happens when you try to run one complex distributed system on top of another, and come away with some handy tricks for automating cloud cluster management, plus some migration pitfalls to avoid. And if you're not sure whether running Kafka on Kubernetes is right for you, our experiences should provide some extra data points that you can use as you make that decision.
1. The document discusses using OpenStack for a 4G core network, including performance issues and solutions when virtualizing the EPC network functions using OpenStack.
2. Key performance issues identified include high CPU usage, competing for CPU resources, latency, throughput, and packet loss. Solutions proposed are CPU pinning, NUMA awareness, hugepages, DPDK, SR-IOV, and offloading processing to smart NICs.
3. Going forward, the next steps discussed are using OVS-DPDK for offloading, SDN, containers, and cloud architectures for 5G.
StatefulSets in Kubernetes: implementation & use cases - Krishna-Kumar
This document summarizes a presentation on StatefulSets in Kubernetes. It discusses why StatefulSets are useful for running stateful applications in containers, the differences between stateful and stateless applications, how volumes are used in StatefulSets, examples of running single-instance and multi-instance stateful applications like Zookeeper, and the current status and future roadmap of StatefulSets in Kubernetes.
Being a cloud native developer requires learning some new language and new skills like circuit-breakers, canaries, service mesh, linux containers, dark launches, tracers, pods and sidecars. In this session, we will introduce you to cloud native architecture by demonstrating numerous principles and techniques for building and deploying Java microservices via Spring Boot, Wildfly Swarm and Vert.x, while leveraging Istio on Kubernetes with OpenShift.
OpenStack on Kubernetes (BOS Summit / May 2017 update) - rhirschfeld
This document discusses using Kubernetes as an underlay platform for OpenStack. Some key points:
1. Kubernetes is becoming more widely used and understood by operators compared to OpenStack. Using Kubernetes as an underlay could improve simplicity, stability, and upgrade processes for OpenStack.
2. There are still many technical challenges to address, such as networking, storage, tooling to manage OpenStack on Kubernetes, and ensuring containers meet Kubernetes' immutable infrastructure requirements.
3. Using Kubernetes as an underlay risks further confusing the messaging around OpenStack by implying Kubernetes is more stable or a replacement target. Clear communication will be important to avoid undermining OpenStack.
Kubernetes Day 2017 - Build, Ship and Run Your APP, Production!! - smalltown
This document summarizes a talk about building, shipping, and running applications in production using containers on AWS. It discusses migrating an existing service from an on-premise data center to AWS, refactoring the application into microservices and containerizing it using Docker. It then covers setting up a Kubernetes cluster on CoreOS to orchestrate the containers across AWS, addressing challenges like application state, updates and monitoring. Terraform is presented as a way to define infrastructure as code and provision AWS resources. Logging, metrics collection and monitoring the Kubernetes cluster are also discussed.
Implement Advanced Scheduling Techniques in Kubernetes - Kublr
Is advanced scheduling in Kubernetes achievable? Yes, however, how do you properly accommodate every real-life scenario that a Kubernetes user might encounter? How do you leverage advanced scheduling techniques to shape and describe each scenario in easy-to-use rules and configurations?
Oleg Chunikhin addressed those questions and demonstrated techniques for implementing advanced scheduling. For example, using spot instances and cost-effective resources on AWS, coupled with the ability to deliver a minimum set of functionalities that cover the majority of needs – without configuration complexity. You’ll get a run-down of the pitfalls and things to keep in mind for this route.
This document discusses methods for providing high availability services in Kubernetes including NodePort, cloud provider load balancers, Ingress, and Keepalived VIP. NodePort exposes services on each node's IP at a static port. Cloud provider load balancers rely on the cloud platform to provide an external IP for services. Ingress is for HTTP load balancing but does not fully support external networking. Keepalived VIP uses a virtual IP address, IP to service mapping, and daemonset to provide high availability services on bare metal clusters without a cloud provider.
The document discusses Kubernetes cluster autoscaler, including how it works, deployment steps, configuration, and limitations. It describes setting up the cluster autoscaler manager, preparing extra nodes, protecting nodes from scale down, and installing the autoscaler using Helm. Some key points are that it can automatically scale a cluster from 0 to 1000 nodes handling 30 pods each, but has restrictions like not supporting regional instance groups and taking 10 minutes to scale nodes down.
Kubernetes pods / container scheduling 201 - pod and node affinity and anti-affinity, node selectors, taints and tolerations, persistent volumes constraints, scheduler configuration and custom scheduler development and more.
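Of the scheduling constraints listed above, taints and tolerations have the simplest core rule: a pod may land on a node only if it tolerates every NoSchedule taint on that node. A simplified sketch of that check (real Kubernetes also supports operators like Exists, the NoExecute effect, and toleration seconds; the function name is illustrative):

```python
def pod_tolerates_node(node_taints, pod_tolerations):
    """Return True if the pod tolerates every NoSchedule taint on the node.
    Simplified equality-only matching on taint key and value."""
    for taint in node_taints:
        if taint.get("effect") != "NoSchedule":
            continue  # only NoSchedule taints block placement here
        matched = any(
            tol.get("key") == taint["key"] and tol.get("value") == taint.get("value")
            for tol in pod_tolerations
        )
        if not matched:
            return False
    return True

gpu_taint = [{"key": "gpu", "value": "true", "effect": "NoSchedule"}]
print(pod_tolerates_node(gpu_taint, []))                                 # blocked
print(pod_tolerates_node(gpu_taint, [{"key": "gpu", "value": "true"}]))  # allowed
```

Note the asymmetry: taints repel pods by default, while node selectors and affinity attract them, which is why the two mechanisms are usually combined for dedicated node pools.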
This presentation about Kubernetes, targeted at Java developers, was given for the first time (in French) at the Montreal Java User Group on May 2nd, 2018.
Kubernetes for Java developers - Tutorial at Oracle Code One 2018 - Anthony Dahanne
You’re a Java developer? Already familiar with Docker? Want to know more about Kubernetes and its ecosystem for developers? During this session, you’ll get familiar with core Kubernetes concepts (pods, deployments, services, volumes, and so on) before seeing the most-popular and most-productive Kubernetes tools in action, with a special focus on Java development. By the end of the session, you’ll have a better understanding of how you can leverage Kubernetes to speed up your Java deployments on-premises or to any cloud.
Get your Java application ready for Kubernetes! - Anthony Dahanne
In this demos loaded talk we’ll explore the best practices to create a Docker image for a Java app (it’s 2019 and new comers such as Jib, CNCF buildpacks are interesting alternatives to Docker builds !) - and how to integrate best with the Kubernetes ecosystem : after explaining main Kubernetes objects and notions, we’ll discuss Helm charts and productivity tools such as Skaffold, Draft and Telepresence.
Tells the history of containers, Docker, and Kubernetes, and shows their key elements.
After viewing this document, you will know the main features of containers, Docker, and Kubernetes.
Very basic information about how these techniques work together.
Dev opsec dockerimage_patch_n_lifecyclemanagement - kanedafromparis
In this presentation, we first recall what makes Docker different from a VM (PID, cgroups, etc.), explain the layer system and the difference between images and instances, then briefly introduce Kubernetes.
Next, we present a "standard" CI/CD process for propagating a version (development, pre-production, production) through Docker tags.
Finally, we describe the different components that make up a Docker application (base image, tooling, libraries, code).
With this introduction done, we cover the life cycle of an application through its development and BAU (business-as-usual) phases, highlighting that security flaws found during development are quickly fixed by new releases, but not necessarily in BAU, where releases are rarer. We discuss the various solutions (JFrog Xray, Clair, ...) for automatically tracking CVEs and automating updates. Finally, we give a brief experience report on the difficulties encountered and the organizational proposals put in place.
Although illustrated with technical implementations, this presentation is mainly organizational.
Kubernetes is a container cluster manager that aims to provide a platform for automating deployment, scaling, and operations of application containers across clusters of machines. It uses pods as the basic building block, which are groups of application containers that share storage and networking resources. Kubernetes includes control planes for replication, scheduling, and services to expose applications. It supports deployment of multi-tier applications through replication controllers, services, labels, and pod templates.
Federated Kubernetes: As a Platform for Distributed Scientific Computing - Bob Killen
A high level overview of Kubernetes Federation and the challenges encountered when building out a Platform for multi-institutional Research and Distributed Scientific Computing.
This document provides an overview of the OpenStack Magnum project, which aims to provide Container as a Service (CaaS) functionality. It discusses alternatives like Nova, Heat, and Magnum's advantages. Key features of Magnum include simplified multi-tenant containers, integration with OpenStack services, and out-of-box support for Kubernetes, Docker Swarm, and Mesos. The architecture and operation of Magnum are explained, along with its integration points within OpenStack.
1. Docker EE will include an unmodified Kubernetes distribution to provide orchestration capabilities alongside Docker Swarm.
2. When running mixed workloads across orchestrators, resource contention is a risk and it is recommended to separate workloads by orchestrator on each node for now.
3. Docker EE aims to address the shortcomings of running mixed workloads to better support this in the future.
This document provides an overview of Kubernetes including:
1) Kubernetes is an open-source platform for automating deployment, scaling, and operations of containerized applications. It provides container-centric infrastructure and allows for quickly deploying and scaling applications.
2) The main components of Kubernetes include Pods (groups of containers), Services (abstract access to pods), ReplicationControllers (maintain pod replicas), and a master node running key components like etcd, API server, scheduler, and controller manager.
3) The document demonstrates getting started with Kubernetes by enabling the master on one node and a worker on another node, then deploying and exposing a sample nginx application across the cluster.
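The Services mentioned above find the pods they route to by equality-based label selection: a Service selects every pod whose labels are a superset of the Service's selector. A minimal sketch of that matching rule (the data shapes are illustrative, not the Kubernetes API objects):

```python
def selector_matches(selector, pod_labels):
    """A Service selects a pod when every key/value pair in the Service's
    selector is present, with the same value, in the pod's labels."""
    return all(pod_labels.get(k) == v for k, v in selector.items())

pods = [
    {"name": "nginx-1", "labels": {"app": "nginx", "tier": "web"}},
    {"name": "nginx-2", "labels": {"app": "nginx"}},
    {"name": "db-1",    "labels": {"app": "postgres"}},
]
selector = {"app": "nginx"}

# The nginx Service from the demo would pick up both nginx pods but not db-1.
endpoints = [p["name"] for p in pods if selector_matches(selector, p["labels"])]
print(endpoints)  # ['nginx-1', 'nginx-2']
```

This is the same mechanism ReplicationControllers use to claim the pods they maintain, which is why labels are central to how Kubernetes organizes an application.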
Kubernetes is designed to be an extensible system. But what is the vision for Kubernetes Extensibility? Do you know the difference between webhooks and cloud providers, or between CRI, CSI, and CNI? In this talk we will explore what extension points exist, how they have evolved, and how to use them to make the system do new and interesting things. We’ll give our vision for how they will probably evolve in the future, and talk about the sorts of things we expect the broader Kubernetes ecosystem to build with them.
Robert Barr presents on Kubernetes for Java developers. He discusses Quarkus, Micronaut and Spring Boot frameworks for building cloud-native Java applications. He provides an overview of Docker and how it can package applications. Barr then explains why Kubernetes is useful for orchestrating containers at scale, describing its architecture and key concepts like pods, deployments and services. He demonstrates running a sample application on Kubernetes and integrating with its Java client.
Kubernetes is an open-source container management platform. It has a master-node architecture with control plane components like the API server on the master and node components like kubelet and kube-proxy on nodes. Kubernetes uses pods as the basic building block, which can contain one or more containers. Services provide discovery and load balancing for pods. Deployments manage pods and replicasets and provide declarative updates. Key concepts include volumes for persistent storage, namespaces for tenant isolation, labels for object tagging, and selector matching.
DevNetCreate - ACI and Kubernetes Integration - Hank Preston
This document provides an overview of Kubernetes and how it can be integrated with Cisco Application Centric Infrastructure (ACI) through the ACI Networking plugin for Kubernetes. It discusses Kubernetes concepts like pods, deployments, services and namespaces. It then explains how the ACI plugin maps these Kubernetes objects to ACI objects like endpoint groups, contracts and virtual device contexts to provide network isolation and policies. The rest of the document outlines a hands-on lab where users can set up their own Kubernetes cluster integrated with ACI and deploy applications with different levels of network isolation.
Recent momentum around the evolution of Containers are gradually increase in last two years.Containers virtualize an OS and applications running in each container believe that they have full access to their very own copy of that OS. This is analogous to what VMs do when they virtualize at a lower level, the hardware. In the case of containers, it’s the OS that does the virtualization and maintains the illusion.
Recent past many software companies have quickly adopted container technologies, including Docker Containers, aware of the threat and advantage of the approach. For example, Linux companies have also jumped into the ground, seeing as this as an opportunity to grow the Linux market. Also Microsoft is going to add features to support containers and VMware have made efforts in integrating support for Docker into virtual machine technology.
Recent momentum around the evolution of Containers are gradually increase in last two years.Containers virtualize an OS and applications running in each container believe that they have full access to their very own copy of that OS. This is analogous to what VMs do when they virtualize at a lower level, the hardware. In the case of containers, it’s the OS that does the virtualization and maintains the illusion.
Recent past many software companies have quickly adopted container technologies, including Docker Containers, aware of the threat and advantage of the approach. For example, Linux companies have also jumped into the ground, seeing as this as an opportunity to grow the Linux market. Also Microsoft is going to add features to support containers and VMware have made efforts in integrating support for Docker into virtual machine technology.
ContainerD is a daemon that controls the runC runtime to execute and manage containers according to the OCI specification. It has a gRPC API and a low-level CLI (ctr) for debugging. ContainerD is designed to be embedded in larger systems rather than directly used by end-users. It focuses on container execution, images, storage, and networking.
Recent momentum around the evolution of Containers are gradually increase in last two years.Containers virtualize an OS and applications running in each container believe that they have full access to their very own copy of that OS. This is analogous to what VMs do when they virtualize at a lower level, the hardware. In the case of containers, it’s the OS that does the virtualization and maintains the illusion.
This document provides an overview of Kubernetes concepts including:
- Kubernetes architecture with masters running control plane components like the API server, scheduler, and controller manager, and nodes running pods and node agents.
- Key Kubernetes objects like pods, services, deployments, statefulsets, jobs and cronjobs that define and manage workloads.
- Networking concepts like services for service discovery, and ingress for external access.
- Storage with volumes, persistentvolumes, persistentvolumeclaims and storageclasses.
- Configuration with configmaps and secrets.
- Authentication and authorization using roles, rolebindings and serviceaccounts.
It also discusses Kubernetes installation with minikube, and common networking and deployment
A short tech show on how to achieve VM HA by integrating Heat, Ceilometer and Nova; and another show about deploying a cluster of VMs across multiple regions then scale it.
An experience sharing of the OpenStack deployment at Suning.com, a large online retailer in China. The talk presents the challenges and opportunities on orchestrating the enterprise workloads using Heat.
Deploy an Elastic, Resilient, Load-Balanced Cluster in 5 Minutes with SenlinQiming Teng
This is a talk from the Austin OpenStack summit. It demonstrates how a resilient, elastic and load-balanced cluster can be deployed using senlin, heat, ceilometer, lbaas v2, nova.
Senlin is an OpenStack project that provides autonomic management and auto-scaling capabilities for collections of cloud applications. It allows users to define clusters of resources and attach policies to control their behavior. Senlin supports core concepts like profiles to define resource types, clusters to group resources, policies to control cluster behavior through scaling and placement rules, and events to notify of state changes. The project aims to provide a standalone service for auto-scaling that is not dependent on Heat but can still integrate with it and other OpenStack services.
The Evolution of Meme Coins A New Era for Digital Currency ppt.pdfAbi john
Analyze the growth of meme coins from mere online jokes to potential assets in the digital economy. Explore the community, culture, and utility as they elevate themselves to a new era in cryptocurrency.
Big Data Analytics Quick Research Guide by Arthur MorganArthur Morgan
This is a Quick Research Guide (QRG).
QRGs include the following:
- A brief, high-level overview of the QRG topic.
- A milestone timeline for the QRG topic.
- Links to various free online resource materials to provide a deeper dive into the QRG topic.
- Conclusion and a recommendation for at least two books available in the SJPL system on the QRG topic.
QRGs planned for the series:
- Artificial Intelligence QRG
- Quantum Computing QRG
- Big Data Analytics QRG
- Spacecraft Guidance, Navigation & Control QRG (coming 2026)
- UK Home Computing & The Birth of ARM QRG (coming 2027)
Any questions or comments?
- Please contact Arthur Morgan at [email protected].
100% human made.
DevOpsDays Atlanta 2025 - Building 10x Development Organizations.pptxJustin Reock
Building 10x Organizations with Modern Productivity Metrics
10x developers may be a myth, but 10x organizations are very real, as proven by the influential study performed in the 1980s, ‘The Coding War Games.’
Right now, here in early 2025, we seem to be experiencing YAPP (Yet Another Productivity Philosophy), and that philosophy is converging on developer experience. It seems that with every new method we invent for the delivery of products, whether physical or virtual, we reinvent productivity philosophies to go alongside them.
But which of these approaches actually work? DORA? SPACE? DevEx? What should we invest in and create urgency behind today, so that we don’t find ourselves having the same discussion again in a decade?
Role of Data Annotation Services in AI-Powered ManufacturingAndrew Leo
From predictive maintenance to robotic automation, AI is driving the future of manufacturing. But without high-quality annotated data, even the smartest models fall short.
Discover how data annotation services are powering accuracy, safety, and efficiency in AI-driven manufacturing systems.
Precision in data labeling = Precision on the production floor.
"Rebranding for Growth", Anna VelykoivanenkoFwdays
Since there is no single formula for rebranding, this presentation will explore best practices for aligning business strategy and communication to achieve business goals.
Mobile App Development Company in Saudi ArabiaSteve Jonas
EmizenTech is a globally recognized software development company, proudly serving businesses since 2013. With over 11+ years of industry experience and a team of 200+ skilled professionals, we have successfully delivered 1200+ projects across various sectors. As a leading Mobile App Development Company In Saudi Arabia we offer end-to-end solutions for iOS, Android, and cross-platform applications. Our apps are known for their user-friendly interfaces, scalability, high performance, and strong security features. We tailor each mobile application to meet the unique needs of different industries, ensuring a seamless user experience. EmizenTech is committed to turning your vision into a powerful digital product that drives growth, innovation, and long-term success in the competitive mobile landscape of Saudi Arabia.
Spark is a powerhouse for large datasets, but when it comes to smaller data workloads, its overhead can sometimes slow things down. What if you could achieve high performance and efficiency without the need for Spark?
At S&P Global Commodity Insights, having a complete view of global energy and commodities markets enables customers to make data-driven decisions with confidence and create long-term, sustainable value. 🌍
Explore delta-rs + CDC and how these open-source innovations power lightweight, high-performance data applications beyond Spark! 🚀
Dev Dives: Automate and orchestrate your processes with UiPath MaestroUiPathCommunity
This session is designed to equip developers with the skills needed to build mission-critical, end-to-end processes that seamlessly orchestrate agents, people, and robots.
📕 Here's what you can expect:
- Modeling: Build end-to-end processes using BPMN.
- Implementing: Integrate agentic tasks, RPA, APIs, and advanced decisioning into processes.
- Operating: Control process instances with rewind, replay, pause, and stop functions.
- Monitoring: Use dashboards and embedded analytics for real-time insights into process instances.
This webinar is a must-attend for developers looking to enhance their agentic automation skills and orchestrate robust, mission-critical processes.
👨🏫 Speaker:
Andrei Vintila, Principal Product Manager @UiPath
This session streamed live on April 29, 2025, 16:00 CET.
Check out all our upcoming Dev Dives sessions at https://community.uipath.com/dev-dives-automation-developer-2025/.
How Can I use the AI Hype in my Business Context?Daniel Lehner
𝙄𝙨 𝘼𝙄 𝙟𝙪𝙨𝙩 𝙝𝙮𝙥𝙚? 𝙊𝙧 𝙞𝙨 𝙞𝙩 𝙩𝙝𝙚 𝙜𝙖𝙢𝙚 𝙘𝙝𝙖𝙣𝙜𝙚𝙧 𝙮𝙤𝙪𝙧 𝙗𝙪𝙨𝙞𝙣𝙚𝙨𝙨 𝙣𝙚𝙚𝙙𝙨?
Everyone’s talking about AI but is anyone really using it to create real value?
Most companies want to leverage AI. Few know 𝗵𝗼𝘄.
✅ What exactly should you ask to find real AI opportunities?
✅ Which AI techniques actually fit your business?
✅ Is your data even ready for AI?
If you’re not sure, you’re not alone. This is a condensed version of the slides I presented at a Linkedin webinar for Tecnovy on 28.04.2025.
Buckeye Dreamin 2024: Assessing and Resolving Technical DebtLynda Kane
Slide Deck from Buckeye Dreamin' 2024 presentation Assessing and Resolving Technical Debt. Focused on identifying technical debt in Salesforce and working towards resolving it.
Procurement Insights Cost To Value Guide.pptxJon Hansen
Procurement Insights integrated Historic Procurement Industry Archives, serves as a powerful complement — not a competitor — to other procurement industry firms. It fills critical gaps in depth, agility, and contextual insight that most traditional analyst and association models overlook.
Learn more about this value- driven proprietary service offering here.
Learn the Basics of Agile Development: Your Step-by-Step GuideMarcel David
New to Agile? This step-by-step guide is your perfect starting point. "Learn the Basics of Agile Development" simplifies complex concepts, providing you with a clear understanding of how Agile can improve software development and project management. Discover the benefits of iterative work, team collaboration, and flexible planning.
2. OTSUKA, Motohiro/Yuanying
NEC Solution Innovators
OpenStack Magnum Core Reviewer
Haiwei Xu
NEC Solution Innovators
OpenStack Senlin Core Reviewer
Qiming Teng
IBM, Research Scientist
OpenStack Senlin PTL, OpenStack Heat Core Reviewer
3. Agenda
• Why containers if you already have OpenStack
• What are the use cases?
• The many roads leading to Roma
• Container as first-class citizens on OpenStack
• Deployment and management
• Technology Gaps
• Experience Sharing and Outlook
• What we can do today
• Things to expect in Newton cycle
5. Photographer: Captain Albert E. Theberge, NOAA Corps (ret.)
from http://www.photolib.noaa.gov/coastline/line3174.htm
6. X-ray: NASA/CXC/RIKEN/D.Takei et al; Optical: NASA/STScI; Radio: NRAO/VLA
from http://www.nasa.gov/sites/default/files/thumbnails/image/gkper.jpg
7. Advantages of container technology
[Diagram: two stacks side by side. Virtual machine: Server, Host OS, Hypervisor, then per-VM Guest OS, libs/bins, and Application. Container: Server, Host OS, then per-container libs/bins and Application, with no hypervisor or guest OS.]
8. Advantages of container technology
[Diagram: a Dockerfile builds a container image (libs/bins plus application) in Development on Server A; the image is pushed to a Docker Registry and pulled to Server B in Production, giving portable deployment and version management.]
10. Major Use Cases
• For application/service users
• IF self-serviced THEN
deploy/launch; simple configuration
ENDIF
• go...
• For application developers
• Develop, Commit, Test
• Build, Deploy, Push
• Pull, Patch, Push
• For cloud deployers/operators
• Build Infrastructure
• Install, Configure, Upgrade
• Monitor, Fix, Bill
11. All Roads Lead to Roma
• How many roads do we have?
• nova lxc
• nova docker
• heat docker
• heat deployment
• magnum bay
• docker swarm
• kubernetes
• mesos
• marathon, ...
• openstack ansible
• kolla
• kolla-mesos
• .....
12. Nova: Docker / LXC
[Diagram: Nova virt drivers across resource types: libvirt, VMware, and Xen manage vms (virtualization); Ironic manages bare metal; the nova-docker virt driver and the LXC (libvirt) driver manage containers.]
18. Balancing across the abstraction layer
• Container as another compute API?
• maybe pm, vm, lwVM
• so many backends
• An abstraction over all existing container management software?
• it is possible, but many questions to be answered, e.g. why?
• do you really need to switch between these software packages frequently?
• are you willing to develop a client to interact with all of them?
• So ... container clustering
• better integration with OpenStack
• ease of use
21. Senlin Features
• Profiles: A specification for the objects to be managed
• Policies: Rules to be checked/enforced before/after actions are performed
[Diagram: profiles are plugins (Nova manages VMs, Docker manages containers, Heat manages stacks, Ironic manages bare metal, and others) determining what a cluster's nodes are; policies are plugins too (placement, deletion, scaling, health, load-balance, affinity).]
22. Senlin Server Architecture
[Diagram: senlin-api (WSGI, middleware, API v1) talks over RPC to the engine (engine lock, scheduler, actions, cluster/node, service registry, receiver, parser), which persists state through the DB API. Four plugin extension points: receivers (webhook, message queue) for external monitoring services; policies (placement, deletion, scaling, health, load-balance, affinity) facilitating smarter cluster management; profiles (os.nova.server, os.heat.stack, others) talking to different endpoints for object CRUD operations; drivers (openstack via openstacksdk covering identity, compute, orchestration, network, etc.; dummy; others) for interfacing with different services or clouds.]
23. Senlin Server Architecture (for containers)
[Diagram: the same server architecture, with container-specific plugins swapped in: drivers become docker-py, lxc, and dummy; profiles become container.docker, container.lxc, and others. The receiver, policy, and engine extension points are unchanged.]
29. Container node and container cluster
[Diagram: a Senlin node's profile type determines what it is: a Nova server, a Heat stack, or a container, each created from its own template. A container cluster is a cluster whose nodes use the container profile type, parallel to clusters of Nova servers or Heat stacks.]
30. How to create a container cluster?
[Diagram: a container profile is used to create containers on the vms of an existing vm cluster (cluster1); the resulting containers form a separate container cluster (cluster2), with multiple containers per vm.]
31. The scalability of vm cluster and container cluster
[Diagram: the user attaches placement, deletion, and scaling policies to both the vm cluster (cluster1) and the container cluster (cluster2); each cluster scales out or in by adding or removing vms or containers according to its own policies.]
#5: The section I will talk about is "Why Containers, If You Already Have OpenStack?"
#6: Containers are a type of virtualization technology, and we can use them as a computing resource.
But a computing resource?
#7: OpenStack already has Nova, which is an abstraction layer over computing resources.
#8: Basically, Nova handles virtual machines and provides an abstraction layer for managing them.
So if a container is just a type of virtual machine, "Why Containers, If You Already Have OpenStack?"
This diagram shows the difference between the virtual machine and container models.
The left side is the traditional virtual machine model, and the right side is the container model.
A virtual machine requires a hypervisor, which emulates and translates the hardware, and each VM has its own OS.
Containers provide isolation for processes sharing compute resources.
They are similar to virtual machines, but they share the host kernel and avoid hardware emulation,
so you can use host resources more effectively than with virtual machines.
So in this case you can use a container like a virtual machine,
which means that Nova can manage containers.
#9: In addition, Docker provides simple tools and an ecosystem for containers, which has made container technology very popular.
You can create a container image easily using a Dockerfile,
and you can share container images using a Docker registry.
A container image carries all the additional dependencies an application needs beyond what the host provides,
so you can move an application from host to host easily.
#10: Furthermore, container scalability and elasticity are much better than those of virtual machines,
and thanks to management tools like Kubernetes and Docker Swarm, managing containers across different hosts has become much easier.
So OpenStack needs this technology to make cloud management easier.
#11: This slide shows the major use cases of container technology.
The first is application users, who only want the application to start quickly;
they don't care how the application is started.
Application developers care about application lifecycle, version management, and portability.
Cloud operators care about how to manage infrastructure effectively, how to upgrade the system, and so on.
#12: Let's see what container technology already exists in OpenStack.
We have nova-lxc, nova-docker, Magnum, and many other projects supporting container technology.
#13: For example, Nova has an LXC driver and a Docker driver that provide the same interface as for virtual machines.
Users can start a container just like a virtual machine.
This model doesn't expose all the advantages of container technology,
but it can meet the needs of application users who just want to deploy an application quickly.
#14: Next, Heat.
Heat has two ways to manage containers:
one is the Docker::Container resource, and the other is the SoftwareConfig or StructuredConfig resource.
This can also meet application users' needs, but it is limited in managing the containers after they are created.
#15: Next is Magnum.
Magnum is container-orchestration-engine-as-a-service: it deploys and manages COEs.
Once Magnum has deployed a COE, users get all the advantages of container technology through COE-specific tools such as kubectl or the Docker CLI.
This can meet the developer and operator use cases,
but you must manage the containers themselves in a non-OpenStack-native way.
#16: Next is Kolla.
Kolla is OpenStack-as-a-service:
it uses container technology to make managing OpenStack itself easier.
This is one of the operator use cases,
but it merely uses containers; it is not a way to manage containers.
#17: So in order to manage containers well in OpenStack, we need to find a new solution.
But we have some problems to solve, and the community has discussed these issues a lot.
Should we create a unified API that supports vms, bare metal, and containers?
The use cases of vms and containers are different, so we can't provide a unified API.
The next issue is how to create a unified abstraction API for container orchestration engines.
This has the same problem, namely the differences between Kubernetes and the other COEs,
and we couldn't reach an agreement on it.
#29: As introduced previously, Senlin is a project that provides a clustering service; currently it only supports vm clusters. When it supports container clustering, Senlin will do it in a way similar to vm clustering.
So first we need a new profile type, a container profile. In the profile we define the properties that will be used to create containers; all necessary properties can be defined there. The format is similar to the Nova server profile format.
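As a rough illustration of such a profile, here is a minimal Python sketch. The property names (`image`, `name`, `command`, `host_cluster`) and the validation rules are assumptions for illustration only, not Senlin's actual spec schema; only the `container.docker` type name comes from the slides.

```python
# Hypothetical sketch of a Senlin-style container profile spec.
# Property names below are illustrative, NOT the real Senlin schema.
container_profile_spec = {
    "type": "container.docker",      # profile type plugin, as on slide 23
    "version": "1.0",
    "properties": {
        "image": "nginx:latest",     # container image to run
        "name": "web",               # container name prefix
        "command": None,             # optional command override
        "host_cluster": "cluster1",  # vm cluster providing container hosts
    },
}

def validate_profile(spec):
    """Check that the minimal required fields are present."""
    required = {"type", "version", "properties"}
    missing = required - spec.keys()
    if missing:
        raise ValueError("missing fields: %s" % sorted(missing))
    if "image" not in spec["properties"]:
        raise ValueError("a container profile needs an image")
    return True

print(validate_profile(container_profile_spec))  # True
```

The point is simply that a container profile, like the existing os.nova.server profile, is a declarative spec the engine validates and then hands to a driver.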
#30: With the profile, we can create container nodes and container clusters. Creating containers requires host vms, and the host vms are also managed by Senlin, so Senlin can manage both the vm layer and the container layer.
#31: This is the workflow of creating a container cluster. We can create multiple containers on one vm, depending on the vm's resources.
Physically the containers run on vms, but logically the container cluster (cluster2) and the vm cluster (cluster1) are separate clusters, and Senlin can manage them separately.
That means end users who just want containers may see only the container cluster.
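The two-layer model above can be sketched in a few lines of Python. The `Cluster` class and node dicts here are illustrative stand-ins, not Senlin's actual object model; the sketch only shows that the vm cluster and the container cluster are separate collections, with each container recording the vm it physically runs on.

```python
# Toy model of slides 29-31: two logically separate clusters, where
# container nodes reference their host vms. Not Senlin's real objects.
class Cluster:
    def __init__(self, name):
        self.name = name
        self.nodes = []

    def add(self, node):
        self.nodes.append(node)

vm_cluster = Cluster("cluster1")         # the host layer
container_cluster = Cluster("cluster2")  # the application layer

vm_cluster.add({"id": "vm-1"})
vm_cluster.add({"id": "vm-2"})

# multiple containers may land on one vm
container_cluster.add({"id": "c-1", "host": "vm-1"})
container_cluster.add({"id": "c-2", "host": "vm-1"})
container_cluster.add({"id": "c-3", "host": "vm-2"})

# an end user who only wants containers sees cluster2;
# the operator manages cluster1 independently
print(len(vm_cluster.nodes), len(container_cluster.nodes))  # 2 3
```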
#32: Let's see how to manage the resources.
Scalability control is an advantage of Senlin. With a vm cluster and a container cluster, sometimes the resources are not enough and a cluster needs to scale out.
Senlin uses policies to tell a cluster how to scale out or in. As we saw, the policies are attached to the clusters. When resources run short, we get an alarm from Ceilometer and the policy is triggered: it tells the vm cluster to create a vm, and after that the scaling policy attached to the container cluster is triggered and a new container is created.
This is the scale-out model; conversely, when resources are idle, some vms or containers are deleted. This is how Senlin controls cluster scalability.
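The two-step scale-out described above can be sketched as follows. This is a simulation of the control flow only: the alarm source, the capacity-per-vm figure, and the naming scheme are all made up for illustration, and real Senlin actions run through policies, not a single function.

```python
# Sketch of the two-level scale-out: if no host vm has spare capacity,
# the vm-level scaling fires first, then the container-level scaling.
def scale_out(vm_cluster, container_cluster, capacity_per_vm=2):
    """Add a container; add a vm first if no host has spare capacity."""
    used = {}
    for c in container_cluster:
        used[c["host"]] = used.get(c["host"], 0) + 1
    # find a vm with free capacity
    free = [v for v in vm_cluster if used.get(v, 0) < capacity_per_vm]
    if not free:
        new_vm = "vm-%d" % (len(vm_cluster) + 1)
        vm_cluster.append(new_vm)            # vm-level scaling fires
        free = [new_vm]
    host = free[0]
    container_cluster.append({"id": "c-%d" % (len(container_cluster) + 1),
                              "host": host})  # container-level scaling fires
    return host

vms = ["vm-1"]
containers = [{"id": "c-1", "host": "vm-1"}, {"id": "c-2", "host": "vm-1"}]
host = scale_out(vms, containers)  # vm-1 is full, so a vm is added first
print(host, len(vms))              # vm-2 2
```

Scale-in would mirror this in reverse, with a deletion policy deciding which container or vm to remove.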
#35: About the design for container clusters, we still have some issues to think about, and every option has its own advantages. When starting a container, we need to determine which cluster and which node to start it on, so do we need a scheduler for this job? In Senlin we have a placement policy, in which we can define where to start nodes. It is a kind of scheduler, but a very simple one; it is not smart enough yet, and we still need to improve it to meet our needs. In any case, it is one solution to this issue.
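To make the "very simple scheduler" concrete, here is a toy least-loaded placement decision: pick the host vm with the fewest containers. Senlin's real placement policy considers zones and regions; this sketch is an assumption-laden illustration of the idea only.

```python
# Toy placement policy: choose the least-loaded host vm.
def place(vms, containers):
    counts = {vm: 0 for vm in vms}
    for c in containers:
        counts[c["host"]] += 1
    # least-loaded vm wins; ties broken by name for determinism
    return min(sorted(counts), key=lambda vm: counts[vm])

vms = ["vm-1", "vm-2", "vm-3"]
containers = [{"host": "vm-1"}, {"host": "vm-1"}, {"host": "vm-2"}]
print(place(vms, containers))  # vm-3
```

A smarter policy would weigh actual cpu/memory usage rather than container counts, which is exactly the improvement hinted at above.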
#36: So we hope we can use Kuryr to create container networks automatically, just like we create vm networks.
#37: That is all about container cluster support in Senlin. We have had some discussions and reached agreement on some issues, but we still want to hear more voices from the community. We need your ideas, your suggestions, and also new hands, so please join us if you are interested in this work. You can find us on the #senlin IRC channel, and you can also join our weekly meeting; any ideas are appreciated.