[OpenInfra Days Korea 2018] (Track 1) TACO (SKT All Container OpenStack): Clo... - OpenStack Korea Community
- Due to a font issue, please download the deck here: http://bit.ly/openinfradays-day1-skt-taco
- Speaker: Jaeseok Ahn, SK Telecom
- Details: https://event.openinfradays.kr/2018/session1/track_1_4
Spark Shuffle Deep Dive (Explained In Depth) - How Shuffle Works in Spark - Bo Yang
The slides explain how shuffle works in Spark and help people understand more details about Spark internals. They show how the major classes are implemented, including ShuffleManager (SortShuffleManager), ShuffleWriter (SortShuffleWriter, BypassMergeSortShuffleWriter, UnsafeShuffleWriter), and ShuffleReader (BlockStoreShuffleReader).
Performance optimization for all flash based on aarch64 v2.0 - Ceph Community
This document discusses performance optimization techniques for All Flash storage systems based on ARM architecture processors. It provides details on:
- The processor used, which is the Kunpeng920 ARM-based CPU with 32-64 cores at 2.6-3.0GHz, along with its memory and I/O controllers.
- Optimizing performance through both software and hardware techniques, including improving CPU usage, I/O performance, and network performance.
- Specific optimization techniques like data placement to reduce cross-NUMA access, multi-port NIC deployment, using multiple DDR channels, adjusting messaging throttling, and optimizing queue wait times in the object storage daemon (OSD).
This document discusses observability and its three pillars: logs, metrics, and traces. It introduces common observability tools like Elastic Stack, Prometheus, and Jaeger. Logs should be aggregated and indexed, metrics can use recording rules and alerting, and traces enable root cause analysis. Best practices include monitoring components, testing configurations, and retaining sufficient log data. Observability provides insight into systems from external outputs and context about internal states.
Everyone wants observability into their system, but many find themselves with too many vendors and tools, each with its own API, SDK, agent, and collectors.
In this talk I will present OpenTelemetry, an ambitious open source project with the promise of a unified framework for collecting observability data. With OpenTelemetry you could instrument your application in a vendor-agnostic way, and then analyze the telemetry data in your backend tool of choice, whether Prometheus, Jaeger, Zipkin, or others.
I will cover the current state of the various OpenTelemetry projects (across programming languages, exporters, receivers, and protocols), some of which are not yet GA, and provide useful guidance on how to get started with it.
OpenStack overview and use cases @ Community Open Camp with Microsoft - Ian Choi
Materials for the OpenStack Korea Community's first session at the Community Open Camp with Microsoft, April 9, 2016.
The second deck is available at the following URL:
http://www.slideshare.net/YooEdward/why-openstack-is-operating-system-60685165
1. What is Solr?
2. When should I use Solr vs. Azure Search?
3. Why is Solr great (and its downside)?
4. How does Solr compare to Azure Search?
5. Why SearchStax? (Solr is complex; SearchStax makes it as easy as Azure Search)
OpenTelemetry is a set of APIs, SDKs, tooling and integrations that are designed for the creation and management of telemetry data such as traces, metrics, and logs. It aims to enable effective observability by making high-quality, portable telemetry ubiquitous and vendor-agnostic. The OpenTelemetry Collector is an independent process that acts as a "universal agent" to collect, process, and export telemetry data in a highly performant and stable manner, supporting multiple types of telemetry through customizable pipelines consisting of receivers, processors, and exporters.
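The receiver/processor/exporter pipeline described above can be sketched as a Collector configuration. This is illustrative only: `otlp`, `batch`, and `logging` are standard Collector components, but the exact set of components available depends on the Collector distribution you run.

```yaml
# Minimal OpenTelemetry Collector pipeline sketch (illustrative):
receivers:
  otlp:                 # accept telemetry over the OTLP protocol
    protocols:
      grpc:
processors:
  batch:                # batch data before export for efficiency
exporters:
  logging:              # print received telemetry (swap for jaeger, prometheus, etc.)
service:
  pipelines:
    traces:
      receivers: [otlp]
      processors: [batch]
      exporters: [logging]
```

Each pipeline wires one or more receivers through processors into exporters; adding a `metrics:` or `logs:` pipeline follows the same shape.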
Stop the Guessing: Performance Methodologies for Production Systems - Brendan Gregg
Talk presented at Velocity 2013. Description: When faced with performance issues on complex production systems and distributed cloud environments, it can be difficult to know where to begin your analysis, or to spend much time on it when it isn’t your day job. This talk covers various methodologies, and anti-methodologies, for systems analysis, which serve as guidance for finding fruitful metrics from your current performance monitoring products. Such methodologies can help check all areas in an efficient manner, and find issues that can be easily overlooked, especially for virtualized environments which impose resource controls. Some of the tools and methodologies covered, including the USE Method, were developed by the speaker and have been used successfully in enterprise and cloud environments.
HDFS Federation addresses scalability limitations of HDFS by partitioning the namespace across multiple namenodes. It introduces block pools to provide a generic block storage service independent of the namespace. This allows isolation of namespaces and flexibility to run different applications directly on the block storage. The design changes little in existing HDFS while improving scalability, isolation, and innovation.
As technology and software design practices morph and change, Lowe’s Digital has had to do the same. Moving from a single monolithic web application to multiple mobile applications for both consumers and associates has forced us to look at how we manage our development lifecycle differently. This complex landscape has changed how we look at how we leverage Akamai and their array of solutions in both our lower and production level environments. In this presentation we will discuss where we started, the challenges we faced along the way, and how we are leveraging tools and Akamai API's to streamline our solutions delivery pipeline.
Docker Networking - Common Issues and Troubleshooting Techniques - Sreenivas Makam
This document discusses Docker networking components and common issues. It covers Docker networking drivers like bridge, host, overlay, topics around Docker daemon access and configuration behind firewalls. It also discusses container networking best practices like using user-defined networks instead of links, connecting containers to multiple networks, and connecting managed services to unmanaged containers. The document is intended to help troubleshoot Docker networking issues.
The document summarizes updates to CephFS in the Pacific release, including improvements to usability, performance, ecosystem integration, multi-site capabilities, and quality. Key updates include MultiFS now being stable, MDS autoscaling, cephfs-top for performance monitoring, scheduled snapshots, NFS gateway support, feature bits for compatibility checking, and improved testing coverage. Performance improvements include ephemeral pinning, capability management optimizations, and asynchronous operations. Multi-site replication between clusters is now possible with snapshot-based mirroring.
By Tom Wilkie, delivered at London Microservices User Group on 2/12/15
The rise of microservice-based applications has had many knock-on effects, not least on the complexity of monitoring your application. An order-of-magnitude increase in the number of moving parts and in the rate of change of the application requires us to reassess traditional monitoring techniques.
In this talk we will discuss some different approaches to monitoring, visualising and tracing containerised, microservices-based applications. We’ll present different techniques to some of the emergent problems, and try not to rant too much.
This document provides an overview and agenda for a meetup on distributed tracing using Jaeger. It begins with introducing the speaker and their background. The agenda then covers an introduction to distributed tracing, open tracing, and Jaeger. It details a hello world example, Jaeger terminology, and building a full distributed application with Jaeger. It concludes with wrapping up the demo, reviewing Jaeger architecture, and discussing open tracing's ability to propagate context across services.
This document summarizes a presentation about Ceph, an open-source distributed storage system. It discusses Ceph's introduction and components, benchmarks Ceph's block and object storage performance on Intel architecture, and describes optimizations like cache tiering and erasure coding. It also outlines Intel's product portfolio in supporting Ceph through optimized CPUs, flash storage, networking, server boards, software libraries, and contributions to the open source Ceph community.
Prometheus Design and Philosophy by Julius Volz at Docker Distributed System Summit
Prometheus - https://github.com/Prometheus
Liveblogging: http://canopy.mirage.io/Liveblog/MonitoringDDS2016
The document discusses Cilium and Istio with Gloo Mesh. It provides an overview of Gloo Mesh, an enterprise service mesh for multi-cluster, cross-cluster, and hybrid environments based on upstream Istio. Gloo Mesh focuses on ease of use, powerful built-in best practices, security, and extensibility. It provides a consistent API for multi-cluster north-south and east-west policy, offers team tenancy with service mesh as a service, and drives everything through GitOps.
Combining Logs, Metrics, and Traces for Unified Observability - Elasticsearch
Learn how Elasticsearch efficiently combines data in a single store and how Kibana is used to analyze it. Plus, see how recent developments help identify, troubleshoot, and resolve operational issues faster.
This document discusses disaggregating Ceph storage using NVMe over Fabrics (NVMeoF). It motivates using NVMeoF by showing the performance limitations of directly attaching multiple NVMe drives to individual compute nodes. It then proposes a design to leverage the full resources of a cluster by distributing NVMe drives across dedicated storage nodes and connecting them to compute nodes over a high performance fabric using NVMeoF and RDMA. Some initial Ceph performance measurements using this model show improved IOPS and latency compared to the direct attached approach. Future work could explore using SPDK and Linux kernel improvements to further optimize performance.
Distributed tracing allows requests to be tracked across multiple services in a distributed system. The Jaeger distributed tracing system was used with the HOTROD sample application to visualize and analyze the request flow. Key aspects like latency bottlenecks and non-parallel processing were identified. Traditional logs lack the request context provided by distributed tracing.
This document discusses upgrading an OpenStack network to SDN with Tungsten Fabric. It evaluates three solutions: 1) using the same database across regions, 2) hot-swapping Open vSwitch and virtual routers, and 3) using an ML2 plugin. The recommended solution is #3, as it provides minimum downtime. Key steps include installing the OpenContrail driver, synchronizing network resources between OpenStack and Tungsten Fabric, and live-migrating VMs. Topology 2 is also recommended, as it requires minimum changes. The upgrade migrated 80 VMs and 16 compute nodes to the SDN network without downtime. Issues discussed include synchronizing resources and migrating VMs between Open vSwitch and virtual routers.
End-to-end Data Governance with Apache Avro and Atlas - DataWorks Summit
This document discusses end-to-end data governance with Apache Avro and Apache Atlas at Comcast. It outlines how Comcast uses Avro for schema governance and Apache Atlas for data governance, including metadata browsing, schema registry, and tracking data lineage. Comcast has extended Atlas with new types for Avro schemas and customizations to better handle their hybrid environment and integrate platforms for comprehensive data governance.
Do you think of cheetahs, not RabbitMQ, when you hear the word Swift? Think a nova is just a giant exploding star, not a cloud compute engine? This deck (presented at the OpenStack Boston meetup) provides an introduction that will answer your many questions. It covers the basic components, including Nova, Swift, Cinder, Keystone, Horizon, and Glance.
Cgroups, namespaces and beyond: what are containers made from? - Docker, Inc.
Linux containers are different from Solaris Zones or BSD Jails: they use discrete kernel features like cgroups, namespaces, SELinux, and more. We will describe those mechanisms in depth, as well as demo how to put them together to produce a container. We will also highlight how different container runtimes compare to each other.
The document introduces AppliedMicro's X-Gene® processor technology. The X-Gene 1 and X-Gene 2 are server-on-a-chip solutions that integrate ARMv8 CPU cores, memory controllers, networking, storage and I/O interfaces while achieving high performance and low power. Benchmark results show the X-Gene processors providing competitive performance to Intel Xeon chips while using less power. The high-density, low-power X-Gene chips allow building scale-out servers that deliver significantly higher performance and lower costs than traditional scale-up servers for various workloads like web applications and databases.
This document provides an overview of Kubernetes and its components. It discusses the Go programming language features used in Kubernetes. It also describes how Kubernetes is architected, including the kube-apiserver, kube-scheduler, kubelet, the reconciliation process, and networking with Flannel. The presenter is Anseungkyu, who worked on OpenStack private clouds and is now the deputy representative of OpenStack Korea.
This document discusses software defined storage based on OpenStack. It provides background on the author's experience including medical image processing, Linux kernel development, and OpenStack components like Heat, SDN and OPNFV. It then discusses several OpenStack storage components - Cinder for block storage, Swift for object storage, and Manila for shared file systems. It explains how these components work, their APIs and plugins to interface with different backend storage systems. Finally, it compares Cinder, Swift and other technologies like Ceph.
Materials presented by Minseok Kim of IBM at the regular OpenStack seminar held with IBM on Thursday, May 25, 2017.
- If you have any questions about IBM Cloud, please contact the IBM representative.
(IBM Korea cloud marketing: Jihyun Lim, [email protected])
The document discusses configuring Broadcom-based network switches using OpenNSL. It provides an overview of the Open Compute Project (OCP), Facebook's Wedge switch hardware, the Open Network Linux (ONL) operating system, and the Broadcom Trident2 chip. It then demonstrates how to perform basic L2 switching and L3 routing functions using the OpenNSL API, such as learning MAC addresses, forwarding traffic, creating IP interfaces, and adding routes. OpenNSL provides an open-source hardware abstraction layer for programming Broadcom switching ASICs.
This document discusses Kubernetes and its integration with OpenStack. It begins with an introduction to Kubernetes and how it manages containerized applications across multiple hosts. It then compares virtualization and containers, describing the architecture and components of Kubernetes including pods, services, and rolling upgrades. The document outlines how Kubernetes is implemented in OpenStack using Nova Docker, Murano, and Magnum. It concludes with a Q&A section.
1. Virtual networks and cloud platforms need to collaborate as companies extend their networks across public clouds.
2. NSX supports major public clouds like AWS and Azure, allowing customers to consistently manage networks and security across private and public clouds.
3. NSX aims to connect and secure applications across multiple private and public clouds by creating private networks within or across clouds and defining logical networking and security policies.
IBM Bluemix is a cloud platform that allows developers to quickly setup and deploy applications. It uses containers and Kubernetes to provide an isolated runtime environment for applications. Developers can use the Bluemix command line interface to interact with the platform, deploying code with a single command. The platform handles configuration, builds applications using buildpacks, and runs them across a scalable container infrastructure.
IBM BlueMix Architecture and Deep Dive (Powered by CloudFoundry) - Animesh Singh
meetup.com/Bluemix
meetup.com/CloudFoundry/
In this meetup, we discussed the architecture of and demonstrated IBM BlueMix, a public Platform-as-a-Service offering based on Cloud Foundry.
A presentation on the Kubernetes-based multi-environment development server provisioning system used for Devsisters' Cookie Run: OvenBreak.
It covers why a container-orchestration-based development environment system was needed, why Kubernetes was chosen, and Kubernetes concepts and useful features. It also includes a demo of the system and a review of the work items involved.
*The NDC17 talk used a demo video; slide captures are used here instead.
Introduces how to apply autoscaling of Pods in the Kubernetes Service (NKS) of Naver Cloud Platform.
Learn more about the features and components of each service, starting with Docker and ending with Kubernetes.
A talk about a Kubernetes cluster on bare-metal servers used to serve internal services.
I share my experience setting up and managing an on-premises Kubernetes cluster:
starting with a single-master cluster and enhancing it with open-source software,
adding more master nodes to increase availability,
and using Rook, MetalLB, Ceph, mysql-operator, and other open-source projects.
[ CB-Ladybug - Multi-Cloud Application Execution Environment Integration Management ]
- CB-Ladybug overview and structure
- CB-Ladybug development direction and provided services
- CB-Ladybug features and expected benefits
- CB-Ladybug structure and technology status
- CB-Ladybug development roadmap
# Presentation video (YouTube): https://youtu.be/VDns6ZhiwLs
---------------------------------------------------------------------------------------------
# Cloud-Barista Community Homepage : https://cloud-barista.github.io
# Cloud-Barista Community GitHub : https://github.com/cloud-barista
# Cloud-Barista YouTube channel : https://youtube.com/@cloud-barista
# Cloud-Barista SlideShare : https://cloud-barista.github.io/slideshare
Windows Kubernetes Bootstrapping and Operations - Jung Hyun Nam
This slide deck was presented at the Kubernetes Korea User Group meetup on July 30, 2019.
4.
Kubernetes Terminology
[ Pod ]
• A vessel that holds containers (a Pod can contain multiple containers)
• Containers in the same Pod share the same network namespace and IP
(Apache -> (localhost, port) -> Tomcat)
• Containers in the same Pod see the same volumes.
[ ReplicaSet ]
• Manages the number of Pods
[ Deployment ]
• A unit that bundles a Pod and a ReplicaSet for deployment
• Manages deployment history by revision
[ Service ]
• Routes to Pods (using labels) - load balancing to Pods via an internal IP (default behavior)
• External access is possible using the two types below
• Types: LoadBalancer (GCE), NodePort (iptables)
[ ConfigMap and Secret ]
• ConfigMap: application configuration or shell scripts
• Secret: sensitive values
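A minimal sketch of the ConfigMap/Secret pair described above (hypothetical names and values; Secret `data` entries must be base64-encoded):

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: app-config
data:
  APP_MODE: production       # plain configuration value
  init.sh: |                 # a shell script carried as configuration
    #!/bin/sh
    echo "starting"
---
apiVersion: v1
kind: Secret
metadata:
  name: app-secret
type: Opaque
data:
  DB_PASSWORD: cGFzc3dvcmQ=  # base64("password")
```

Both objects can then be exposed to containers as environment variables or mounted as files.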
5.
Pod
• The unit of container deployment; a vessel holding containers
• A Pod can contain multiple containers
• Containers within one Pod share the same Docker IP
• A pause container is created for each Pod
• Containers inside a Pod communicate with each other over localhost and ports
• This uses Docker networking's mapped-container mode:
docker run -d --name pause pause_image
docker run -d --name web --net=container:pause web_image
• Containers within one Pod can see the same volumes.
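The same idea expressed as a Pod manifest (a sketch with hypothetical names): the two containers share localhost and a volume, matching the Apache -> (localhost, port) -> Tomcat pattern from the terminology slide.

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: web
spec:
  containers:
  - name: apache
    image: httpd             # can reach the tomcat container via localhost:8080
    volumeMounts:
    - name: shared
      mountPath: /data       # same files visible to both containers
  - name: tomcat
    image: tomcat
    volumeMounts:
    - name: shared
      mountPath: /data
  volumes:
  - name: shared
    emptyDir: {}             # Pod-scoped scratch volume shared by the containers
```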
6.
ReplicaSet
• Runs a specified number of Pods
• Always guarantees the number of running Pods
• Even if a Pod is deleted with a command, the ReplicaSet automatically restores it
• The Horizontal Pod Autoscaler uses the ReplicaSet when autoscaling
• ReplicaSet = Pod + replicas (the number of Pods)
apiVersion: autoscaling/v1
kind: HorizontalPodAutoscaler
metadata:
  name: frontend-scaler
spec:
  scaleTargetRef:
    kind: ReplicaSet
    name: frontend
  minReplicas: 3
  maxReplicas: 10
  targetCPUUtilizationPercentage: 50
7.
Deployment
• Deployment = ReplicaSet + history (revisions)
• Enables version management of Pod deployments
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: nginx-deployment
  namespace: default
spec:
  replicas: 3
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx:1.7.9
        ports:
        - containerPort: 80

$ kubectl create -f nginx.yaml
$ kubectl rollout history deployment/nginx-deployment
$ kubectl rollout history deployment/nginx-deployment --revision=2
$ kubectl rollout undo deployment/nginx-deployment --to-revision=2
8.
Service
• Types: ClusterIP (default), LoadBalancer, NodePort, ExternalName
• A DNS name that represents a set of Pods
• A ClusterIP (virtual IP) is allocated
• kube-proxy programs the ClusterIP into iptables
• Simple load balancing (default: round robin)
• Specifying a selector creates an Endpoints object
[Diagram: Pod A-1 reaches the Service by service name & port; the Service load-balances to Pod B-1 and Pod B-2 by Pod IP & port (Endpoints)]
apiVersion: v1
kind: Service
metadata:
  name: frontend
  labels:
    app: guestbook
    tier: frontend
spec:
  type: NodePort
  ports:
  - port: 80
    nodePort: 31000
  selector:
    app: guestbook
    tier: frontend
10.
What is Helm?
• Manages Kubernetes applications as Helm charts, making installation and upgrades easy
• Composed of a client (Helm) and a server (Tiller)
• A chart has at least two components:
- Chart.yaml, which describes the Helm package
- Template files containing Kubernetes manifests
(The Helm client talks to Tiller via gRPC, using port-forwarding into the Kubernetes cluster.)
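A minimal chart containing the two required components might be laid out as follows (hypothetical chart name; `values.yaml` is customary in practice but not listed on the slide):

```yaml
# mychart/Chart.yaml - describes the Helm package
name: mychart
version: 0.1.0
description: An example chart

# mychart/templates/deployment.yaml - a Kubernetes manifest,
# optionally templated with values such as {{ .Values.image }}
```

Running `helm install ./mychart` would then render the templates and hand the resulting manifests to Tiller for deployment.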
14.
OpenStack-Helm Project
https://github.com/openstack/openstack-helm
• The goal of OpenStack-Helm is to enable deployment, maintenance, and upgrading of loosely coupled OpenStack services and their dependencies, individually or as part of complex environments.
• A project started by AT&T in November 2016
• Manages OpenStack Kolla images as Kubernetes Helm charts
• Became an official OpenStack project on April 11, 2017
15.
OpenStack on Kubernetes
• Controller components are deployed to nodes labeled openstack-control-plane
• Compute and OVS-related components are deployed to nodes labeled openstack-compute-node
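The label-based placement above can be sketched as follows. The label value and node names are illustrative assumptions; the actual OpenStack-Helm charts wire this up through their values files.

```yaml
# Label the nodes first (illustrative node names):
#   kubectl label node ctl01 openstack-control-plane=enabled
#   kubectl label node cmp01 openstack-compute-node=enabled
#
# A controller pod spec fragment then pins itself to control-plane nodes:
spec:
  nodeSelector:
    openstack-control-plane: enabled
```

The scheduler will only place such pods on nodes carrying the matching label, which is how the control-plane and compute workloads stay separated.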