A study and practice of OpenStack Kilo HA deployment. The Kilo documentation contains some errors, and it is hard to find a detailed document describing how to deploy an HA cloud based on the Kilo release. Hopefully these slides can provide some clues.
Kubernetes has evolved from Borg at Google to provide an open source platform for automating deployment, scaling, and management of containerized applications. The presentation discusses how to use Jenkins, Fabric8, and other tools to achieve continuous integration and delivery (CI/CD) with Kubernetes. It provides examples of configuring Jenkins and Fabric8 to build, test, and deploy container images to a Kubernetes cluster, illustrating an end-to-end CI/CD workflow on Kubernetes.
Red Hat OpenShift 4 allows for automated and customized deployments. The Full Stack Automation method fully automates installation and updates of both the OpenShift platform and Red Hat Enterprise Linux CoreOS host operating system. The Pre-existing Infrastructure method allows OpenShift to be deployed on user-managed infrastructure, where the customer provisions resources like load balancers and DNS. Both methods use the openshift-install tool to generate ignition configs and monitor the cluster deployment.
Red Hat Satellite 5.7 2015.4 is a management system that allows users to control updates, compliance, provisioning, and remote control of up to thousands of Red Hat Enterprise Linux servers from a single console. It retrieves update packages from Red Hat Network and deploys them to target servers, installing the same versions across server groups. The system can also rollback servers to snapshots and provide crash information for troubleshooting. Related products include Spacewalk for community versions and Oracle Spacewalk for Oracle Linux, while SUSE Manager performs similar functions for SUSE Linux Enterprise Server.
MySQL Group Replication is a new 'synchronous', multi-master, auto-everything replication plugin for MySQL, introduced with MySQL 5.7. It is well suited for small 3-20 machine MySQL clusters that need high availability and high performance. It provides high availability because the failure of a replica does not stop the cluster. Failed nodes can rejoin the cluster and new nodes can be added fully automatically - no DBA intervention required. It provides high performance because multiple masters process writes, not just one as with classic MySQL Replication. Running applications on it is simple: no read-write splitting, no fiddling with eventual consistency and stale data. The cluster offers strong consistency (generalized snapshot isolation).
It is based on Group Communication principles, hence the name.
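A minimal my.cnf sketch shows the kind of settings involved; the group name, host names and ports below are placeholders, not values from the talk:

```ini
[mysqld]
server_id                = 1
gtid_mode                = ON
enforce_gtid_consistency = ON
plugin_load_add          = "group_replication.so"
# All members share one group name (a UUID); each node has its own local address.
group_replication_group_name          = "aaaaaaaa-bbbb-cccc-dddd-eeeeeeeeeeee"
group_replication_local_address       = "node1:33061"
group_replication_group_seeds         = "node1:33061,node2:33061,node3:33061"
# OFF enables the multi-master mode described above.
group_replication_single_primary_mode = OFF
```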
The document provides an overview of Red Hat OpenShift Container Platform, including:
- OpenShift provides a fully automated Kubernetes container platform for any infrastructure.
- It offers integrated services like monitoring, logging, routing, and a container registry out of the box.
- The architecture runs everything in pods on worker nodes, with masters managing the control plane using Kubernetes APIs and OpenShift services.
- Key concepts include pods, services, routes, projects, configs and secrets that enable application deployment and management.
A brief study on Kubernetes and its components (Ramit Surana)
Kubernetes is an open source orchestration system for Docker containers. It handles scheduling onto nodes in a compute cluster and actively manages workloads to ensure that their state matches the users declared intentions. Using the concepts of "labels" and "pods", it groups the containers which make up an application into logical units for easy management and discovery.
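The label/pod grouping can be illustrated with a small manifest; the names and image below are hypothetical examples, not from the slides:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: web-1
  labels:
    app: web            # the label that groups this pod into an application
spec:
  containers:
  - name: nginx
    image: nginx:1.25
    ports:
    - containerPort: 80
---
apiVersion: v1
kind: Service
metadata:
  name: web
spec:
  selector:
    app: web            # the service discovers pods by label, not by address
  ports:
  - port: 80
```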
The document discusses establishing a true DevOps culture and environment. It begins by describing the traditional battle between developers and operations staff. DevOps aims to resolve this conflict by having developers and operations work together across the entire application lifecycle. The document then outlines some of the challenges in implementing DevOps and presents steps for establishing a true DevOps environment, including having a common language, planning infrastructure and processes together, coding to DevOps best practices, coordinating deployments, and centralizing monitoring and logs. Key aspects are involving all teams early, sharing information transparently, and avoiding prioritizing specific tools over collaboration.
This document provides an overview of Kubernetes including:
- Kubernetes is an open source system for managing containerized applications and services across clusters of hosts. It provides tools to deploy, maintain, and scale applications.
- Kubernetes objects include pods, services, deployments, jobs, and others to define application components and how they relate.
- The Kubernetes architecture consists of a control plane running on the master including the API server, scheduler and controller manager. Nodes run the kubelet and kube-proxy to manage pods and services.
- Kubernetes can be deployed on AWS using tools like CloudFormation templates to automate cluster creation and management for high availability and scalability.
This talk outlines the features in containerd 1.1 smart client: I/O redirection from the client side, containerd namespaces to leverage a single runtime instance with a logical isolation from multiple clients (Kubernetes, Docker Engine, other systems), and containers as types in Golang when using containerd Go client library.
Additionally, it explains the performance improvements brought by BuildKit, and the capabilities its modular architecture opens up, enabling open source developers to create new build systems and front ends directly on top of BuildKit.
Containers are not virtual machines - they have fundamentally different architectures and benefits. Docker allows users to build, ship, and run applications inside containers. It provides tools and a platform to manage the lifecycle of containerized applications, from development to production. Containers use layers and copy-on-write to provide efficient application isolation and delivery.
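The layering and copy-on-write behavior can be seen in a simple Dockerfile; this is an illustrative sketch assuming a Python app with a requirements.txt and app.py, not an example from the document:

```dockerfile
# Each instruction produces an immutable image layer; unchanged layers are
# reused from cache, and a running container adds a thin copy-on-write
# layer on top of the image.
FROM python:3.12-slim
WORKDIR /app
COPY requirements.txt .
RUN pip install -r requirements.txt   # re-run only when requirements.txt changes
COPY . .
CMD ["python", "app.py"]
```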
How deep is your buffer – Demystifying buffers and application performance (Cumulus Networks)
Packet buffer memory is among the oldest topics in networking, and yet it never seems to fade in popularity. Starting from the days of buffers sized by the bandwidth delay product to what is now called "buffer bloat", from the days of 10Mbps to 100Gbps, the discussion around how deep should the buffers be never ceases to evoke opinionated responses.
In this webinar we will be joined by JR Rivers, co-founder and CTO of Cumulus Networks, a man who has designed many ultra-successful switching chips, switch products, and compute platforms, to discuss the innards of buffering. This webinar will cover data path theory, tools to evaluate network data path behavior, and the configuration variations that affect application visible outcomes.
Redis is an open source, advanced key-value store that can be used as a data structure server since it supports strings, hashes, lists, sets and sorted sets. It is written in C, works on most POSIX systems, and can be accessed from many programming languages. Redis provides options for data persistence like snapshots and write-ahead logging, and can be replicated for scalability and high availability. It supports master-slave replication, sentinel-based master detection, and sharding via Redis clusters. Redis has been widely adopted by many companies and is used in applications like microblogging services.
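The persistence and replication options mentioned map to a few redis.conf directives; the values and the commented master address below are illustrative, not from the document:

```
# RDB snapshots: dump to disk if at least 1 key changed in 900s,
# or at least 10 keys changed in 300s.
save 900 1
save 300 10

# Append-only file: write-ahead-log style persistence.
appendonly yes
appendfsync everysec

# On a replica, point at the master to enable master-replica replication:
# replicaof 10.0.0.1 6379
```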
The purpose of this session is to go over the Docker basics: containers, images, how they work, where to find them, the architecture (client, daemon), and the difference between Docker and VMs. We will also look at Docker and an image, and see some commands.
VM Autoscaling With CloudStack VR As Network Provider (ShapeBlue)
In this talk, Wei looks at the new VM autoscaling functionality in CloudStack (due for the 4.18 release) that gives VM autoscaling without relying on any external devices.
Wei Zhou is a committer and PMC member of Apache CloudStack project, and works for ShapeBlue as a Software Architect.
-----------------------------------------
CloudStack Collaboration Conference 2022 took place on 14th-16th November in Sofia, Bulgaria and virtually. The day saw a hybrid get-together of the global CloudStack community hosting 370 attendees. The event hosted 43 sessions from leading CloudStack experts, users and skilful engineers from the open-source world, which included: technical talks, user stories, new features and integrations presentations and more.
A comparison of Kubernetes and Kubernetes on OpenStack environments, and how to build each.
1. Cloud trends
2. Kubernetes vs Kubernetes on OpenStack
3. How to build Kubernetes on OpenStack
4. How to operate Kubernetes on OpenStack
Short Introduction to Docker. These slides show the basic idea behind the container technology Docker. The slides present the basic features for the daily use with Docker, Docker Compose, Docker Machine and Docker Swarm.
Docker is especially important for DevOps, because it gives software developers more control over their dependencies in different environments.
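A small Compose file illustrates that dependency control; this two-service stack is a hypothetical sketch, not one from the slides:

```yaml
# Pinning explicit image versions means every environment - developer
# laptop, CI, production - resolves exactly the same dependencies.
services:
  web:
    build: .
    ports:
      - "8000:8000"
    depends_on:
      - db
  db:
    image: postgres:16
    environment:
      POSTGRES_PASSWORD: example
```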
Helm - Application deployment management for Kubernetes (Alexei Ledenev)
Use Helm to package and deploy a composed application to any Kubernetes cluster. Manage your releases easily over time and across multiple K8s clusters.
This document provides an overview and introduction to Terraform, including:
- Terraform is an open-source tool for building, changing, and versioning infrastructure safely and efficiently across multiple cloud providers and custom solutions.
- It discusses how Terraform compares to other tools like CloudFormation, Puppet, Chef, etc. and highlights some key Terraform facts like its versioning, community, and issue tracking on GitHub.
- The document provides instructions on getting started with Terraform by installing it and describes some common Terraform commands like apply, plan, and refresh.
- Finally, it briefly outlines some key Terraform features and example use cases like cloud app setup and multi-cloud deployment.
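A minimal configuration gives a feel for the workflow the document describes; the provider, region and AMI ID below are placeholders:

```hcl
# Declare the desired infrastructure; Terraform computes and applies the diff.
provider "aws" {
  region = "us-east-1"
}

resource "aws_instance" "web" {
  ami           = "ami-12345678"   # placeholder AMI ID
  instance_type = "t3.micro"
}
```

With this saved, `terraform plan` previews the changes and `terraform apply` executes them.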
This document provides an overview of IT automation using Ansible. It discusses using Ansible to automate tasks across multiple servers like installing packages and copying files without needing to login to each server individually. It also covers Ansible concepts like playbooks, variables, modules, and vault for securely storing passwords. Playbooks allow defining automation jobs as code that can be run on multiple servers simultaneously in a consistent and repeatable way.
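A short playbook illustrates the "run on many servers at once" idea; the host group, package and file paths are hypothetical examples:

```yaml
# Installs nginx and deploys a site config on every host in the "web"
# group in a single run - no per-server logins needed.
- hosts: web
  become: true
  vars:
    site_name: example
  tasks:
    - name: Install nginx
      apt:
        name: nginx
        state: present
    - name: Copy site config
      copy:
        src: files/{{ site_name }}.conf
        dest: /etc/nginx/conf.d/{{ site_name }}.conf
```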
In this slide deck we explain what DevOps, Docker, Kubernetes and CI/CD are. We show problems of real-world development and their solutions. You can watch it live here https://www.facebook.com/devopsbkk/videos/294665554682243/ from minute 56.
DevOps BKK 2018 at Bitec Bangna on September 8, 2018
Mass Migrate Virtual Machines to Kubevirt with Tool Forklift 2.0 (Konveyor Community)
There are 6Rs that can help you get cloud-native workloads running in your Kubernetes deployments: Refactor, Replatform, Rehost, Retire, Retain or Repurchase.
Rehosting virtual machines causes less friction than the other approaches, while still providing some advantages.
One of those advantages is that workloads you don't want to, or cannot, containerize yet can sit alongside your containers through KubeVirt.
In this meetup, we'll show you how Forklift 2.0 makes it easy to move them to their new home, and explain why this is a small step for your workloads but a giant leap on your path to the cloud.
Presenters: Miguel Pérez Colino, Senior Principal Product Manager & Fabien Dupont, Manager, Software Engineering & Senior Principal Engineer.
YouTube recording: https://youtu.be/-w4Afj5-0_g
Docker allows for easy deployment and management of applications by wrapping them in containers. It provides benefits like running multiple isolated environments on a single server, easily moving applications between environments, and ensuring consistency across environments. The document discusses using Docker for development, production, and monitoring containers, and outlines specific benefits like reducing deployment time from days to minutes, optimizing hardware usage, reducing transfer sizes, and enhancing productivity. Future plans mentioned include using Kubernetes for container orchestration.
The primary requirement for OpenStack based clouds (public, private or hybrid) is that they must be massively scalable and highly available. There are a number of interrelated concepts which make the understanding and implementation of HA complex, and not implementing HA correctly could be disastrous.
This session was presented at the OpenStack Meetup in Boston Feb 2014. We discussed interrelated concepts as a basis for implementing HA and examples of HA for MySQL, Rabbit MQ and the OpenStack APIs primarily using Keepalived, VRRP and HAProxy which will reinforce the concepts and show how to connect the dots.
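The Keepalived/VRRP plus HAProxy pattern boils down to a virtual IP fronting redundant API back ends; this haproxy.cfg fragment is an illustrative sketch with placeholder addresses, shown here for the Keystone API:

```
# Load-balance the Keystone API across two controllers.
listen keystone_api
    bind 192.168.1.100:5000        # virtual IP held by Keepalived via VRRP
    balance roundrobin
    option httpchk
    server ctl1 192.168.1.11:5000 check
    server ctl2 192.168.1.12:5000 check
```

If the node holding the virtual IP fails, VRRP moves 192.168.1.100 to the standby HAProxy, and health checks remove any dead API back end from rotation.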
We repeat an introductory presentation on the OpenStack project, as many of our new members have asked to receive a complete overview. During this presentation we shall visit the different components and provide a high-level description on the architecture of OpenStack software. We shall also refer to the community around the project and as usual discuss any issues posed by the attendees.
This is a great chance to get to know the internals of OpenStack better, so I highly recommend sharing it with any interested party.
Liberating Your Data From MySQL: Cross-Database Replication to the Rescue! (Linas Virbalas)
Countless petabytes of data are sitting in MySQL databases where they are perfectly useless for PostgreSQL users. Fortunately there is a solution: Tungsten Replicator can move data from MySQL to PostgreSQL, and in real time, too. In this talk we'll describe how to design cross-database replication, then set it up using Tungsten Replicator. We will cover some of the pitfalls and corner cases like SQL dialect differences, data types, character sets, and MySQL bugs that make implementation both exciting and fun. We'll conclude with a demo of database updates moving in real time between databases.
Canonical transitioned its internal IT infrastructure to use OpenStack in their private cloud (CanoniStack) to practice what they preach about cloud technologies. This transition was challenging due to organizational expectations for increased efficiency, heterogeneous hardware, and decisions around OpenStack software configuration and service management. They eventually implemented a production-ready private cloud (ProdStack) using specific Ubuntu OpenStack releases, hardware resource management with MAAS and Juju, and further improvements are planned around high availability, live upgrades, and resilience testing.
The document discusses considerations for building a private cloud using OpenStack Folsom. It covers topics such as the definition of a private cloud, sizing instances and flavors, network architecture including multiple networks, image storage and performance, and architecture examples for different sizes of private clouds. The document provides guidance on capacity planning, performance bottlenecks, and best practices for building a private cloud with OpenStack.
Beth Cohen from Cloud Technology Partners presented recommendations for a client's OpenStack cloud project. The client, a $3 billion IT outsourcing company, wanted to build an internal cloud but faced challenges including inexperience with cloud technologies and a traditional IT organization. Cohen recommended a layered 3 network architecture with virtual networking to address scalability, availability, and security risks. Tools like Crowbar could help automate deployment. Key advice was to focus on in-house expertise, scale horizontally, and automate operations.
Deployment topologies for high availability (HA) (Deepak Mane)
The document discusses different deployment topologies for OpenStack high availability configurations. It describes the types of nodes in an OpenStack deployment including endpoint, controller, compute, and cinder volume nodes. It then examines several specific topology examples: one using a hardware load balancer with API services on compute nodes, another with a dedicated endpoint node and API services on controller nodes, and a third with simple controller redundancy and API services on controller nodes. Across all the examples, the key is distributing OpenStack services across nodes in a redundant and highly available manner.
This document discusses MySQL real-time database replication configuration for Abuja Electricity Distribution Company. It defines database replication as maintaining multiple copies of the same database across servers. Typically one server acts as the master database, while additional servers act as slave databases that copy data from the master. Replication improves availability by allowing slave databases to take over if the master fails, and helps scale operations by distributing data across server farms. It also allows long-running analytics jobs to use slave databases to avoid slowing down transactions on the master database.
This webinar gives a brief introduction to the OpenStack cloud, covering the topics:
- the OpenStack cloud platform,
- the Open Source community,
- OpenStack architecture and its main elements,
- overview of the compute, networking, block-storage and object-storage services.
If you want to know more about OpenStack, visit our website http://www.create-net.org/community/openstack-training.
MySQL High Availability Sprint: Launch the Pacemaker (hastexo)
This document provides instructions for a MySQL high availability sprint. It outlines setting up various components of the Linux HA stack including Pacemaker for cluster resource management, Corosync for cluster messaging, and DRBD for storage replication. It then provides step-by-step instructions for configuring resources like a floating IP address, DRBD device, filesystem, and MySQL, and grouping them together for high availability. The document concludes by providing further information and a way to provide feedback on the sprint.
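The grouping of resources the document describes can be sketched in crm shell syntax; the resource names, IP and device paths below are examples, not the sprint's actual values:

```
# Pacemaker (crm shell) sketch: floating IP, DRBD-backed filesystem, MySQL.
primitive p_ip ocf:heartbeat:IPaddr2 params ip=192.168.1.50 op monitor interval=30s
primitive p_drbd ocf:linbit:drbd params drbd_resource=mysql \
    op monitor interval=15s role=Master
primitive p_fs ocf:heartbeat:Filesystem \
    params device=/dev/drbd0 directory=/var/lib/mysql fstype=ext4
primitive p_mysql ocf:heartbeat:mysql op monitor interval=20s

# Start order inside the group: filesystem, then IP, then MySQL.
group g_mysql p_fs p_ip p_mysql
ms ms_drbd p_drbd meta master-max=1 clone-max=2 notify=true

# MySQL must run where DRBD is primary, and only after promotion.
colocation c_mysql_on_drbd inf: g_mysql ms_drbd:Master
order o_drbd_before_mysql inf: ms_drbd:promote g_mysql:start
```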
MySQL Database Replication - A Guide by RapidValue Solutions (RapidValue)
For many years, MySQL replication was based on binary log events: all a slave knew was the exact event and the exact position it had just read from the master. Any single transaction from a master could have ended up in different binary logs, and at different positions in those logs. GTID was introduced with MySQL 5.6 and brought along some major changes in the way MySQL operates. Every transaction has a unique identifier which identifies it in the same way on every server. It no longer matters in which binary log position a transaction was recorded; all you need to know is the GTID.
Database replication is used to maintain multiple copies of data automatically, from the master database server to slave database servers. If we change data or schema in the master database, the slave database is automatically updated. The main advantage of replication is that it prevents data loss: if the master database server crashes, an exact copy of the data will be there on the slave server. In MySQL, you can use MySQL Utilities for implementing database replication between master and slave. MySQL Utilities is a package used for maintenance and administration of MySQL servers. You can install it along with MySQL Workbench, or as a stand-alone package.
MySQL Replication: this article explains how it is implemented, with an example. In this example, two servers are used - one master and one slave. Both servers are configured in the same manner with MySQL server and MySQL Utilities.
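The core of such a master-slave setup fits in a few statements; this is a generic sketch (host name, user and password are placeholders), not the article's exact configuration:

```sql
-- On the master (my.cnf needs server_id=1 and log_bin; for GTID
-- replication also gtid_mode=ON and enforce_gtid_consistency=ON):
CREATE USER 'repl'@'%' IDENTIFIED BY 'repl_password';
GRANT REPLICATION SLAVE ON *.* TO 'repl'@'%';

-- On the slave (server_id=2), using GTID auto-positioning so no
-- binary log file name or position needs to be specified:
CHANGE MASTER TO
  MASTER_HOST = 'master.example.com',
  MASTER_USER = 'repl',
  MASTER_PASSWORD = 'repl_password',
  MASTER_AUTO_POSITION = 1;
START SLAVE;
```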
Slides for the webinar held on January 21st 2014
Repair & Recovery for your MySQL, MariaDB & MongoDB / TokuMX Clusters
Galera Cluster, NDB Cluster, VIP with HAProxy and Keepalived, MongoDB Sharded Cluster, etc. all have their own availability models. We are aware of these availability models and will demonstrate in this webinar how to take corrective action in case of failures via our cluster management tool, ClusterControl.
In this webinar, Severalnines CTO Johan Andersson will show you how to leverage ClusterControl to detect failures in your database cluster and automatically repair them to maximize the availability of your database services. And Codership CEO Seppo Jaakola will be joining Johan to provide a deep-dive into Galera recovery internals.
Agenda:
Redundancy models for Galera, NDB and MongoDB/TokuMX
Failover & Recovery (Automatic vs Manual)
Zooming into Galera recovery procedures
Split brains in multi-datacenter setups
This document discusses various approaches to implementing high availability (HA) in OpenStack including active/active and active/passive configurations. It provides an overview of HA techniques used at Deutsche Telekom and eBay/PayPal including load balancing APIs and databases, replicating RabbitMQ and MySQL, and configuring Pacemaker/Corosync for OpenStack services. It also discusses lessons learned around testing failures, placing services across availability zones, and having backups for HA infrastructures.
Pacemaker is a high availability cluster resource manager that can be used to provide high availability for MySQL databases. It monitors MySQL instances and replicates data between nodes using replication. If the primary MySQL node fails, Pacemaker detects the failure and fails over to the secondary node, bringing the MySQL service back online without downtime. Pacemaker manages shared storage and virtual IP failover to ensure connections are direct to the active MySQL node. It is important to monitor replication state and lag to ensure data consistency between nodes.
MySQL with DRBD/Pacemaker/Corosync on Linux (Pawan Kumar)
The document describes setting up a high availability MySQL cluster with DRBD, Corosync, and Pacemaker on Linux. DRBD is configured in active-passive mode to synchronize data between two nodes. Corosync and Pacemaker provide cluster management and failover capability. MySQL runs in active mode on one node, and the virtual IP and data are failed over to the other passive node if needed for high availability. The steps provided include installing and configuring DRBD, Corosync, Pacemaker, generating authentication keys, and configuring the DRBD resource and cluster.
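The active-passive DRBD synchronization is defined in a resource file; the node names, disk and addresses below are hypothetical placeholders:

```
# /etc/drbd.d/mysql.res - two-node resource backing /var/lib/mysql.
resource mysql {
  device    /dev/drbd0;    # block device MySQL's filesystem is created on
  disk      /dev/sdb1;     # underlying local disk on each node
  meta-disk internal;
  on node1 {
    address 10.0.0.1:7788; # replication link endpoint on node1
  }
  on node2 {
    address 10.0.0.2:7788; # replication link endpoint on node2
  }
}
```

Only the node Pacemaker promotes to DRBD primary mounts the filesystem and runs MySQL; the passive node keeps an up-to-date replica of the blocks.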
Webinar: container management in OpenStack (CREATE-NET)
This webinar covers containers in OpenStack: it offers an overview of what containers are (LXC, Docker, and Kubernetes), then looks at container support in OpenStack with specific examples (nova-docker, Murano, and Magnum), and closes with live demos of the elements covered earlier.
Test automation principles, terminologies and implementations (Steven Li)
General slides on test automation principles, terminology, and implementation.
The slides also present an example, PET, a test platform written in Perl but not limited to Perl; it provides a general framework to use.
Jakub Pavlik discusses high availability versus disaster recovery in OpenStack clouds. He describes four types of high availability in OpenStack: physical infrastructure, OpenStack control services, virtual machines, and applications. For each type, he outlines concepts like active/passive and active/active configurations, specific technologies used like Pacemaker, Corosync, HAProxy, and MySQL Galera, and considerations for shared and non-shared storage. Finally, he provides examples of high availability architectures and methods used by different OpenStack vendors.
This document provides an overview of how to create your own cloud using Apache CloudStack. It discusses the key characteristics of clouds, different cloud service and deployment models supported by CloudStack, and the core components that make up a CloudStack deployment including zones, pods, clusters, primary and secondary storage, virtual routers, hypervisors, and the management server. The document also touches on CloudStack's networking, security, high availability, resource allocation, and usage accounting features.
This document provides an overview of Apache CloudStack, an open source cloud computing platform. It describes CloudStack's key characteristics including on-demand self-service, broad network access, resource pooling, rapid elasticity, and API access. It outlines CloudStack's support for different cloud service models including SaaS, PaaS, and IaaS and discusses its hypervisor support, zone, pod, and cluster architecture. The document also summarizes CloudStack's management server, high availability features, networking, security groups, and usage accounting capabilities.
This document summarizes a presentation about the open source CloudStack cloud computing platform. CloudStack provides tools for provisioning and managing virtual infrastructure as a service, including APIs for self-service provisioning, distributed management of compute, storage and networking resources, and high availability features. The presentation outlines CloudStack's history and goals of multi-tenancy, broad hardware support, orchestration of resources behind firewalls, and scalability. It describes key CloudStack components and features such as the management server, domains and users, hypervisor and storage support, resource allocation policies, and networking functionality.
This presentation provides an overview of Apache CloudStack, an open source cloud computing platform. It discusses CloudStack's history and licensing, its ability to provide infrastructure as a service across multiple hypervisors, and how it enables multi-tenancy, high availability, scalability, and resource allocation. Key CloudStack components and concepts are also summarized, such as networking models, security groups, primary and secondary storage, usage tracking, and its management architecture.
Deploying Apache CloudStack from API to UI (Joe Brockmeier)
For most organizations with a large computing footprint, it's not a matter of if you'll need a private cloud - it's when, and what kind. One of the most mature and widely deployed options is Apache CloudStack, a robust, turnkey cloud that includes everything you need to set up a private, public, or hybrid cloud. We'll cover Apache CloudStack from API to UI, and a little of everything in between.
The document discusses OpenStack high availability (HA), performance tuning, and troubleshooting techniques. It covers HA concepts in OpenStack, including compute and controller node HA. It then discusses performance tuning and analyzing OpenStack logs for troubleshooting. It provides details on HA solutions for various OpenStack components like Nova, Glance, Keystone, Swift, Cinder and Neutron. It also covers techniques for optimizing performance in OpenStack like kernel tuning, huge pages, and KSM. Finally, it lists some common log locations for troubleshooting various OpenStack services.
This document provides an overview of the CloudStack architecture and its evolution from a developer's perspective. It describes the key components of CloudStack including hosts, primary storage, clusters, pods, networks, secondary storage, and zones. It also outlines the general architecture abstractions used in CloudStack like resource agents, message bus, and asynchronous job execution. Finally, it details some of the core CloudStack subsystems including the compute subsystem and management server deployment architecture.
CloudStack is an open source cloud computing platform that provides infrastructure as a service. It supports various hypervisors (KVM, Xen, VMware), has APIs for self-service provisioning, measures resource usage, and allows for rapid elasticity. CloudStack can be deployed as public, private or hybrid clouds and manages networks, storage, security and high availability of virtual machines.
The document provides a technical overview of the CLIMB OpenStack cloud including hardware, software, and configuration details. The key components are IBM servers and storage, xCAT for provisioning, SaltStack for configuration management, OpenStack for cloud services, and IBM Spectrum Scale (formerly GPFS) for parallel file storage. Spectrum Scale is integrated with OpenStack components like Cinder, Glance, and Swift to provide scalable block and object storage.
Getting started with Riak in the Cloud involves provisioning a Riak cluster on Engine Yard and optimizing it for performance. Key steps include choosing instance types like m1.large or m1.xlarge that are EBS-optimized, having at least 5 nodes, setting the ring size to 256, disabling swap, using the Bitcask backend, enabling kernel optimizations, and monitoring and backing up the cluster. Benchmarks show best performance from high I/O instance types like hi1.4xlarge that use SSDs rather than EBS storage.
CloudStack is an open-source cloud computing platform that provides infrastructure as a service. It supports various hypervisors and storage types, and allows for multi-tenancy and isolation between users/organizations. CloudStack provides tools for provisioning, managing, and monitoring virtual machines and cloud infrastructure resources.
OpenStack is an open source cloud computing platform that can manage large networks of virtual machines and physical servers. It uses a distributed architecture with components like Nova (compute), Swift (object storage), Cinder (block storage), and Quantum (networking). OpenStack has been successful due to its scalability, support for multiple hypervisors including Hyper-V, and compatibility with popular programming languages like Python. While OpenStack is best suited for large public and private clouds, its complex installation and lack of unified deployment tools can present challenges, especially for small to mid-sized clouds.
Tips, Tricks and Tactics with Cells and Scaling OpenStack - May 2015 (Belmiro Moreira)
The document discusses using cells in OpenStack to scale cloud infrastructure across multiple geographic locations. Key points include using cells to distribute OpenStack compute services around Australia, with over 6000 users, 700 hypervisors, and 30,000 cores spread across 8 sites and 14 cells. It also discusses strategies for operating, upgrading, and scheduling across multiple cells.
Session on CloudStack, intended for new users to CloudStack, provides an overview to varied audience levels information on usages, use cases, deployment and its architecture.
PLNOG 13: Michał Dubiel: OpenContrail software architecture (PROIDEA)
Michał Dubiel – TBD
Topic of Presentation: OpenContrail software architecture
Language: Polish
Abstract:
OpenContrail is a complete solution for Software Defined Networking (SDN). Its relatively new approach to network virtualization in data centers utilizes the overlay networking technology in order to achieve full decoupling of the physical infrastructure from the tenant’s logical configurations.
This presentation describes the software architecture of the system and its functional partitioning. Special emphasis is put on the compute node components: the vRouter kernel module and the vRouter agent. Selected implementation details are also presented in greater detail, along with an analysis of their impact on the system's scalability and performance.
4. Physical Infrastructure
• Redundant locations – minimize downtime due to power or network issues
• All components distributed across different labs
  • Controller, Network, Compute, Storage
• Minimize downtime
• Minimize data-loss risk
• Eliminate single points of failure
• Extensibility
5. Physical Infrastructure - example
[Diagram: two zones (Zone 1 and Zone 2), each with its own Compute, Network, and Storage; the APIs/Orchestration/Dashboard layer, the control-plane database (MySQL), and the message queue span both zones]
8. Stateless / Stateful Services
• Stateless: there is no dependency between requests, so no data replication/synchronization is needed; a failed request may simply need to be restarted on a different node.
  • Services: nova-api, nova-conductor, glance-api, keystone-api, neutron-api, nova-scheduler, Apache web server, cinder-scheduler, etc.
• Stateful: an action typically comprises multiple requests, and data needs to be replicated and synchronized between redundant services (to preserve state and consistency).
  • Services: MySQL, RabbitMQ, cinder-volume, Ceilometer central agent, Neutron L3 and DHCP agents, etc.
9. Active/Passive or Active/Active
• Active/Passive
  • There is a single master
  • Load-balance stateless services using a VIP and a load balancer such as HAProxy
  • For stateful services, a replacement resource can be brought online; a separate application monitors these services and brings the backup online as necessary
  • After a failover the system will encounter a "speed bump", since the passive node has to notice the fault in the active node before becoming active
• Active/Active
  • Multiple masters
  • Load-balance stateless services using a VIP and a load balancer such as HAProxy
  • Stateful services are managed in such a way that they are redundant and all instances have an identical state
  • Updates to one database instance propagate to all other instances
  • After a failover the system will function in a "degraded" state
10. Overall Philosophy – do not reinvent the wheel
• Leverage time-tested Linux utilities such as Keepalived, HAProxy, and virtual IPs (using VRRP)
• Leverage hardware load balancers
• Leverage replication services for RabbitMQ/MySQL, such as RabbitMQ clustering, MySQL master-master replication, Corosync, Pacemaker, DRBD, Galera, and so on
11. Keepalived, VRRP, HAProxy – for APIs (Active/Active – 2 nodes)
• Keepalived
  • Based on the Linux Virtual Server (IPVS) kernel module, providing layer-4 load balancing
  • Implements a set of checkers to maintain health and load balancing
  • HA is implemented using the VRRP protocol; used to load-balance API services
• VRRP (Virtual Router Redundancy Protocol)
  • Eliminates the SPOF in a static default-routed environment
• HAProxy
  • Load balancing and proxying for HTTP and TCP applications
  • Works over multiple connections
  • Used to load-balance API services
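The Keepalived/VRRP part of this setup can be sketched as a minimal keepalived.conf. The interface name, router ID, password, and VIP below are illustrative assumptions, not values taken from this deployment:

```conf
# /etc/keepalived/keepalived.conf on the primary node (illustrative values)
vrrp_instance VI_API {
    state MASTER              # BACKUP on the standby node
    interface eth0            # interface carrying the VIP
    virtual_router_id 51
    priority 101              # lower value (e.g. 100) on the standby
    advert_int 1
    authentication {
        auth_type PASS
        auth_pass secret
    }
    virtual_ipaddress {
        192.168.222.240/24    # the VIP that HAProxy binds to
    }
}
```

The standby node runs the same file with `state BACKUP` and a lower priority; when the master stops sending VRRP advertisements, the standby claims the VIP.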
12. Corosync, Pacemaker and DRBD – for APIs and MySQL (Active/Passive)
• Corosync
  • Totem single-ring ordering and membership protocol
  • Provides UDP- and InfiniBand-based messaging, quorum, and cluster membership to Pacemaker
• Pacemaker
  • High-availability and load-balancing stack for the Linux platform
  • Interacts with applications through Resource Agents (RAs)
• DRBD (Distributed Replicated Block Device)
  • Synchronizes data at the block-device level
  • Used underneath a journaling file system (such as ext3 or ext4)
13. MySQL Galera (Active/Active)
Synchronous multi-master cluster technology for MySQL/InnoDB
• MySQL patched for wsrep (Write Set REPlication)
• Active/active multi-master topology
• Read and write to any cluster node
• True parallel replication, at row level
• No slave lag or integrity issues
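The wsrep-based setup can be sketched as a small my.cnf fragment. Node names, the cluster address, and the SST method below are illustrative assumptions; the option names are standard wsrep provider options:

```ini
# /etc/mysql/conf.d/galera.cnf – a minimal sketch (illustrative values)
[mysqld]
binlog_format = ROW                  # Galera requires row-based replication
default_storage_engine = InnoDB      # only InnoDB tables are replicated
innodb_autoinc_lock_mode = 2         # interleaved autoincrement locking
wsrep_provider = /usr/lib/galera/libgalera_smm.so
wsrep_cluster_name = "openstack_db"
wsrep_cluster_address = "gcomm://node1,node2,node3"
wsrep_node_name = "node1"            # unique per node
wsrep_sst_method = rsync             # state snapshot transfer method
```

The first node is bootstrapped with an empty `gcomm://` address; the others join by pointing at the existing members.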
15. Data Redundancy (storage HA)
• Cinder (block storage) backend support
  • LVM driver
    • Default Linux iSCSI server
  • Vendor software plugins
    • Gluster, Ceph, VMware VMDK driver
  • Vendor storage plugins
    • EMC VNX, IBM Storwize, SolidFire, etc.
  • Local RAID support
• Swift (object storage) – done
  • Replication
  • Erasure coding (not enabled)
16. Networking – vanilla Neutron L3 agent
• No HA support is needed for L2 networking, which lives on the compute nodes
• Problems
  • Routing on a Linux server (max. bandwidth approximately 3-4 Gbit/s)
  • Limited distribution across multiple network nodes
  • East-west and north-south communication both pass through the network node
• High availability options
  • Pacemaker & Corosync
  • Keepalived VRRP
  • DVR + VRRP – available from the Juno release
Reference:
• Neutron/DVR
• L3 High Availability
• Configuring DVR in OpenStack Juno
17. HA methods in different vendors

Vendor      Cluster/Replication technique                        Characteristics
RackSpace   Keepalived, HAProxy, VRRP, DRBD, native clustering   Automatic (Chef); 2-controller installation
Red Hat     Pacemaker, Corosync, Galera                          Manual installation / Foreman
Cisco       Keepalived, HAProxy, Galera                          Manual installation; at least 3 controllers
tcp cloud   Pacemaker, Corosync, HAProxy, Galera, Contrail       Automatic SaltStack deployment
Mirantis    Pacemaker, Corosync, HAProxy, Galera                 Automatic (Puppet)
HP          Microsoft Windows based installation with Hyper-V    MS SQL Server and other Windows-based methods
Ubuntu      Juju charms, Corosync, Percona XtraDB                Juju + MAAS
18. Comparison

Method                    Replication basis                 Strengths                       Weaknesses/Limitations
Keepalived/HAProxy/VRRP   MySQL master-master replication   Simple to implement and         Master-master replication does
                                                            understand; works for any       not work beyond 2 nodes
                                                            storage system
Pacemaker/Corosync/DRBD   Mirroring of block devices        Well tested                     More complex to set up;
                                                                                            split-brain possibility
Galera                    Write-set replication (wsrep)     No slave lag                    Needs at least 3 nodes;
                                                                                            relatively new
Others                    MySQL Cluster, RHCS with          Well tested                     More complex setup
                          DAS/SAN storage
19. Sample OpenStack HA architecture - 1
• HAProxy for load balancing
• MySQL Galera – active/active
• RabbitMQ cluster
20. Sample OpenStack HA architecture - 2
• HAProxy for load balancing
• MySQL Galera – active/active
• RabbitMQ cluster
• DVR + VRRP for the network
[Diagram: Keepalived holds a VIP in front of two HAProxy nodes; two controllers each run keystone, glance, cinder, horizon, rabbitmq, and nova; MySQL runs on both controllers as a Galera cluster; block and object storage are duplicated across two storage nodes; combined network/compute nodes use DVR + VRRP]
21. Reference
• OpenStack High Availability Guide
• Ubuntu OpenStack HA wiki
• RackSpace OpenStack Control Plane High Availability
• TCP Cloud OpenStack High Availability
• Configuring DVR in OpenStack Juno
• OpenStack High Availability – Controller Stack, by Brian Seltzer
24. Equipment and Software
• OpenStack release: Kilo
• Host computers
  • Cisco UCS for controller, compute, and network nodes
  • SuperMicro computers for storage nodes
• Host OS: Ubuntu 14.04 Server
• Network switches: Cisco Nexus – N7K, N5K, N2K
• IP assignment
  • All hosts use lab-internal IP addresses to conserve the IP address pool
  • Used for the management/tunnel/storage/… cloud networks
• Jumpboxes are used to access all the cloud hosts from outside; 4 jumpboxes are set up for redundancy
• HAProxies provide internal load balancing and the dashboard portal for outside access
25. Set up Portal Hosts (Step 1)
• Two portal hosts for redundancy and load balancing
  • Same configuration on both
  • Each node hosts 3 VMs: 2 jumpboxes and 1 HAProxy
• Jumpboxes for cloud management
  • All IP addresses in the cloud are private, reachable via a jumpbox from outside
  • Applications: VNC, Java, Wireshark, …
• Repository mirroring for Linux (Ubuntu 14.04) and OpenStack (Kilo)
  • A mirror is required since the internal network cannot access the Internet directly
  • Located on the jumpboxes
• Dashboard portal (on the HAProxies)
• VIPs for load balancing
  • VIP1 for external network access
  • VIP2 for load balancing of all cloud APIs, database, message queue, …
  • Important: the two VIPs should be in one VRRP group
[Diagram: external network (Cisco) → VIP1 → PortalHost1/PortalHost2, each running an HAProxy and two jumpboxes, with Keepalived between the HAProxies; VIP2 faces the internal (cloud) network]
26. Portal Hosts Index Example
Assume 10.x.x.x are company-network IPs and 192.168.*.* is for lab-internal use.
• PortalHost1 – IPMI 10.10.10.6, host 10.10.10.9; PortalHost2 – IPMI 10.10.10.7, host 10.10.10.8
• External network (Cisco): VIP1 10.10.10.10 (gw 10.10.10.1); HAProxy external addresses 10.10.10.11 and 10.10.10.12
• Internal network (cloud): VIP2 192.168.222.240 (gw 192.168.222.1); HAProxy internal addresses 192.168.222.251 and 192.168.222.252
• Jumpboxes, external side: Ubuntu 10.10.10.13, Windows 10.10.10.14, Ubuntu 10.10.10.15, Windows 10.10.10.16
• Jumpboxes, internal side: 192.168.222.241, 192.168.222.242, 192.168.222.243, 192.168.222.244
27. HAProxy and Keepalived set up (Step 1.1)
• HAProxy configuration for VIP1 (external network)
• Keepalived configuration
  • VIP1 and VIP2 should be in one VRRP group
  • If one interface fails, the whole function is taken over by the other host
• HAProxy configuration for VIP2 (internal network)
• http://docs.openstack.org/high-availability-guide/content/ha-aa-haproxy.html
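The two HAProxy roles described in this step can be sketched as listen blocks. The VIPs follow the index example above; the backend controller addresses (192.168.222.11/.12), ports, and timings are illustrative assumptions:

```conf
# /etc/haproxy/haproxy.cfg – illustrative listen blocks
listen dashboard_external
    bind 10.10.10.10:443              # VIP1: external dashboard access
    balance source                    # keep a client on the same backend
    server portal1 192.168.222.251:443 check inter 2000 rise 2 fall 5
    server portal2 192.168.222.252:443 check inter 2000 rise 2 fall 5

listen keystone_api_internal
    bind 192.168.222.240:5000         # VIP2: internal API load balancing
    balance roundrobin
    option tcpka                      # keep idle API connections alive
    server controller1 192.168.222.11:5000 check inter 2000 rise 2 fall 5
    server controller2 192.168.222.12:5000 check inter 2000 rise 2 fall 5
```

One such listen block is added per load-balanced service; the `check` options make HAProxy drop a backend out of rotation when its health check fails.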
28. Jumpbox and Repository Mirroring (Step 1.2)
• 4 jumpboxes set up: 2 Windows and 2 Linux
• Software installed: VNC, Wireshark, vmclient, PuTTY, …
• Repository mirrors for Ubuntu 14.04 and OpenStack Kilo set up on the 2 Linux jumpboxes
  • Hosts on the internal network fetch packages directly from the jumpboxes
29. NIS set up on the HAProxy hosts (optional)
• NIS servers for the cloud infrastructure, used for
  • Host configuration
  • Authentication
  • …
• Two NIS servers set up on the HAProxy hosts: one master and one slave
30. Cloud hosts (Step 2)
• 3 UCS hosts for the controller, database, and message-queue roles
  • Located in two racks
• It is better to have the network/compute nodes all located on UCS-B hosts
• Make sure the MAC pools are set differently on different FIs; otherwise there will be MAC address conflicts
• Complete all cabling and network configuration on the UCSes and upstream switches
• Verify network connectivity of all IPMI ports
• Write down the whole configuration in a detailed document
Similar settings apply when using other compute hosts.
31. VLANs / IP design
• 2 portal hosts (2 HAProxy, 4 jumpbox VMs), for each:
  • IPMI: VLAN aaa (external network)
  • eth0: 7 external IPs – VLAN eee
  • eth1: 7 internal IPs – VLAN mmm (VLAN access port)
• All cloud hosts
  • IPMI VLAN/network (lab-internal), accessible via jumpbox from the external network
  • Management VLAN/network (lab-internal), accessible via jumpbox from the external network
  • Tunnel VLAN/network (lab-internal) – not accessible from outside
  • Storage VLAN/network (lab-internal) – not accessible from outside
  • Other internal networks (e.g. internal VLAN network)
32. Host system preparation (check on each host)
• Network configuration for each node
  • Hosts on the same network can reach each other
  • Each host can reach the HAProxies and jumpboxes via the management interface
  • Each host can reach HAProxy VIP2 via the management interface
• Host setup: controller-vip is used for the APIs
• Install the Ubuntu Cloud archive keyring and repository
  • Use the mirror address instead of the standard one
• Update packages on each system via the mirror on the jumpbox
• Verification
  1. NTP: ntpq -c peers
  2. Connectivity: can reach the HAProxies and jumpboxes
  3. Repository setup: /etc/apt/sources.list.d/…
  4. Packages upgraded
33. NTP set up (Step 3)
• NTP source
  • Select a stable NTP source on the external network as the reference time server
  • The VMs on the portal hosts (jumpboxes and HAProxies) are configured to follow that reference server
• All internal hosts in the cloud follow the HAProxy hosts, using VIP2
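On an internal cloud host the NTP client configuration is then just one server line pointing at the VIP. The address follows the example IP plan; note that this works because VIP2 is a VRRP address owned by one HAProxy host at a time (NTP is UDP and is not proxied by HAProxy itself):

```conf
# /etc/ntp.conf on an internal cloud host – a sketch
server 192.168.222.240 iburst     # VIP2: follow the active HAProxy host
driftfile /var/lib/ntp/ntp.drift
```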
34. MySQL/MariaDB Galera Setup (Step 4)
• The MySQL/MariaDB Galera cluster is deployed on 3 hosts: the 2 controllers plus one more
• Make sure InnoDB is configured
• Configure HAProxy to listen on the Galera cluster port (3306) and load-balance it
• Verification
  • A table created on one node can be accessed and manipulated from another
  • MySQL works through VIP2 and tolerates a single node failure
  • Access from a jumpbox works fine
• References
  • http://docs.openstack.org/high-availability-guide/content/ha-aa-db-mysql-galera.html
  • Product webpage: http://www.codership.com/content/using-galera-cluster/
  • Download: http://www.codership.com/downloads/download-mysqlgalera
  • Documentation: http://www.codership.com/wiki
  • More information about wsrep: https://launchpad.net/wsrep
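The HAProxy listener for Galera can be sketched as below. Backend addresses are illustrative assumptions; marking two nodes as `backup` is one common design choice (not stated in the slides) that sends all writes to a single node at a time to avoid multi-master write conflicts:

```conf
# haproxy.cfg fragment for the Galera cluster – a sketch
listen galera_cluster
    bind 192.168.222.240:3306         # VIP2
    balance source
    option tcpka
    server db1 192.168.222.11:3306 check inter 2000 rise 2 fall 5
    server db2 192.168.222.12:3306 check inter 2000 rise 2 fall 5 backup
    server db3 192.168.222.13:3306 check inter 2000 rise 2 fall 5 backup
```

For verification, create a table through VIP2, stop mysqld on db1, and confirm queries still succeed while HAProxy fails over to a backup node.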
35. RabbitMQ Cluster Setup (Step 5)
• Deploy on 3 nodes, including the two controllers
• Configure them as a cluster; all nodes are disc nodes
• Instead of configuring HAProxy for load balancing (port 5672), configure multiple rabbit_hosts in the services
• Verification
  • rabbitmqadmin tool
  • rabbitmqctl status
• References
  • http://docs.openstack.org/high-availability-guide/content/ha-aa-rabbitmq.html
  • http://88250.b3log.org/rabbitmq-clustering-ha
  • OpenStack High Availability: RabbitMQ
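The rabbit_hosts approach translates into a small fragment in each service configuration (nova.conf, cinder.conf, neutron.conf, …). The host names below are illustrative; the option names are Kilo-era oslo.messaging options:

```ini
# Service configuration fragment – a sketch for the rabbit_hosts approach
[DEFAULT]
rabbit_hosts = controller1:5672,controller2:5672,controller3:5672
rabbit_retry_interval = 1      # seconds before the first reconnect attempt
rabbit_retry_backoff = 2       # back off between retries
rabbit_max_retries = 0         # 0 = retry forever
rabbit_ha_queues = true        # use mirrored queues across the cluster
```

With this, each OpenStack service fails over between brokers itself, so no TCP load balancer sits in front of RabbitMQ.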
36. Control services set up
• Services covered
  • All OpenStack API services
  • All OpenStack schedulers
  • Memcached (multiple instances can be configured; consider later)
• API services
  • Use VIP2 when configuring the Keystone endpoints
  • All configuration files should refer to VIP2
• Schedulers use RabbitMQ as the message system, with the hosts configured as above
• The Telemetry central agent can also be load-balanced:
  • http://docs.openstack.org/admin-guide-cloud/content/section_telemetry-cetral-compute-agent-ha.html
• See also: http://docs.openstack.org/high-availability-guide/content/ha-aa-controllers.html
37. Identity Service - Keystone (Step 6)
• Installation
  • Add the database to MySQL and grant privileges (once)
  • Install the Keystone components on each node
  • Configure keystone.conf
    • Configure the backend to the SQL database
    • Disable caching if needed (needs testing)
  • Configure HAProxy for the API
  • Configure the Keystone token backend to sql (the default is memcached)
• Services/endpoints/users/roles/projects and verification
  • Create the admin, demo, and service projects with the corresponding users and roles, …
  • Create on one node and verify on another
  • Verify everything works through VIP2
• See also:
  • http://docs.openstack.org/high-availability-guide/content/s-keystone.html
  • http://docs.openstack.org/kilo/config-reference/content/section_keystone.conf.html
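The two keystone.conf changes above (SQL database via the VIP, SQL token backend) can be sketched as follows. The VIP address follows the example IP plan and the password is a placeholder; the token driver path is the Kilo-era name:

```ini
# keystone.conf fragment – a sketch for HA operation
[database]
# point every node at the Galera cluster through VIP2
connection = mysql://keystone:KEYSTONE_DBPASS@192.168.222.240/keystone

[token]
# store tokens in SQL so any node can validate a token issued by another
driver = keystone.token.persistence.backends.sql.Token
```

With memcached tokens, a token issued by one controller would be invisible to the other; the SQL backend makes token state shared cluster-wide.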
38. Image Service - Glance (Step 7)
• Shared storage is required for Glance HA
  • In the pilot cloud the controller's local file system was used as image storage; for HA that will not work
• Use Swift as the Glance backend
  • Swift itself needs to be HA
    • At least two storage nodes
    • At least two Swift proxy nodes, installed on the controllers alongside Glance
  • Use Keystone for authentication instead of Swauth
[Diagram: HAProxy1/HAProxy2 with VIP2 in front of two controllers, each running keystone, glance, MySQL, and a Swift proxy, backed by the Swift storage cluster]
39. Object Storage Installation (Step 7.1)
• Installation
  • Install two Swift proxies
    • The proxies can be located on the controller nodes; configure VIP2 for them for load balancing
  • Install two storage nodes on B-series nodes, two disks each (4 in total)
  • Configure 3 replicas for HA
  • "No account" error fix – upgrade python-swiftclient to 2.3.2
    • There is a bug, fixed in 2.3.2 (not in the Kilo release)
• Verification
  • A file put into storage via one proxy can be fetched via the other
  • Object write/read via VIP2
  • Failure cases
• See also:
  • http://docs.openstack.org/kilo/install-guide/install/apt/content/ch_swift.html
  • https://bugs.launchpad.net/python-swiftclient/+bug/1372465
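A Keystone-authenticated proxy on each controller roughly takes the shape below. This is a sketch with Kilo-era middleware names; the port and roles are illustrative, and the `[filter:authtoken]` section (keystonemiddleware settings pointing at VIP2) is omitted for brevity:

```conf
# /etc/swift/proxy-server.conf fragment – a sketch
[DEFAULT]
bind_port = 8080

[pipeline:main]
pipeline = catch_errors healthcheck cache authtoken keystoneauth proxy-server

[filter:keystoneauth]
use = egg:swift#keystoneauth
operator_roles = admin, user          # Keystone roles allowed to operate on accounts

[app:proxy-server]
use = egg:swift#proxy
account_autocreate = true             # create accounts on first use
```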
40. Image Service - Glance (Step 7.2)
• Install Glance on each controller
  • Use the file system as the backend first and verify it works with the local file system
• Configure HAProxy for the Glance API and Glance Registry services
  • There may be "unknown version" warnings in the Glance API log
  • Change the HAProxy httpchk setting to fix it:
    • option httpchk GET /versions
• See also:
  • http://docs.openstack.org/juno/config-reference/content/section_glance-api.conf.html
  • https://bugzilla.redhat.com/show_bug.cgi?id=1245572
  • http://docs.openstack.org/high-availability-guide/content/s-glance-api.html
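The httpchk fix sits inside the Glance listen block. Backend addresses below are illustrative assumptions; the `GET /versions` check is the fix named on this slide:

```conf
# haproxy.cfg fragment for the Glance API – a sketch
listen glance_api
    bind 192.168.222.240:9292         # VIP2
    balance source
    option httpchk GET /versions      # health-check an unversioned endpoint,
                                      # avoiding "unknown version" log warnings
    server controller1 192.168.222.11:9292 check inter 2000 rise 2 fall 5
    server controller2 192.168.222.12:9292 check inter 2000 rise 2 fall 5
```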
41. • Prerequisites:
• Swift object store had been installed and verified
• Glance installed on controllers and verified with local file system as the backend
• Integration:
• Configure the swift store as the glance backend
• Configure the keystone token backend to sql (important)
• Or configure multiple memcached hosts in the configuration file
• Verification
• Upload image and list images successfully in each controller node
• See also:
• http://behindtheracks.com/2014/05/openstack-high-availability-glance-and-swift/
• http://thornelabs.net/2014/08/03/use-openstack-swift-as-a-backend-store-for-glance.html
Integration of Glance and Swift
Step
7.3
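The two configuration bullets above can be sketched as config fragments; the option names follow the Kilo documentation, while the VIP address and password are placeholders:

```
# Hypothetical /etc/glance/glance-api.conf fragment: Swift as the image backend
[glance_store]
stores = glance.store.swift.Store
default_store = swift
swift_store_auth_address = http://10.0.0.100:5000/v2.0/
swift_store_user = service:glance
swift_store_key = GLANCE_PASS
swift_store_create_container_on_put = True

# Hypothetical /etc/keystone/keystone.conf fragment: persist tokens in SQL
# so every controller sees the same tokens (the "important" bullet above)
[token]
driver = keystone.token.persistence.backends.sql.Token
```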
42. • Install Nova related packages
• On the two controller nodes and on one compute node
• Compute nodes need to be set up as a “Virtualization Host”
• Otherwise, the installation will fail later due to a dependency issue
• Configure HAProxy for the Nova services
• List the Nova services
• Verify that RabbitMQ works in the HA environment
• There should be redundant Nova APIs, Schedulers, Conductors, … listed
• Further verification needs the network nodes set up
Install Compute Services
Step 8
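The HAProxy bullet can be sketched as a listen block for the main Nova API endpoint; addresses are hypothetical, and the other Nova endpoints (metadata on 8775, VNC proxy on 6080) get analogous blocks:

```
# Hypothetical haproxy.cfg fragment for the Nova compute API (port 8774)
listen nova_compute_api_cluster
    bind 10.0.0.100:8774              # VIP (example address)
    balance source
    option tcpka
    server controller1 10.0.0.11:8774 check inter 2000 rise 2 fall 5
    server controller2 10.0.0.12:8774 check inter 2000 rise 2 fall 5
```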
43. • DVR + 3 network nodes (distributed SNAT / DHCP redundancy)
• Multiple options, but none is perfect
• Pacemaker & Corosync
• Keepalived VRRP
• DVR + VRRP – should be available in the Juno/Kilo releases
• References:
• http://docs.openstack.org/networking-guide/scenario_dvr_ovs.html
• https://blogs.oracle.com/ronen/entry/diving_into_openstack_network_architecture
• http://assafmuller.com/2014/08/16/layer-3-high-availability/
Network Redundancy
Step 9
44. Service Layout for DVR mode
• Network Node
• Services required the same as the
central mode
• Compute Node
• The compute node performs networking too
• L3 services are added on the compute node
• It takes over most networking functions
45. • Support GRE/VXLAN/VLAN/FLAT
network
• In our system, GRE is used for
tunneling between instances and SNAT
• A VLAN network is not required since we do not use it
• Network Node is mainly for network
central services like DHCP,
Metadata, and SNAT
• Only north/south traffic with fixed IPs needs network node forwarding
• Compute nodes handle DNAT
• East/west traffic and north/south traffic with a floating IP will not go through the network node
DVR General Architecture
46. • This is only load balancing of networking, not true HA
• Install all services listed in the service layout picture
• On Network node and Compute node respectively
• Configuration
• router_distributed = True
• ……
• Create routers, networks and instances
• Verification
• North/south for instances with a fixed IP address (SNAT, via Network
node)
• North/south for instances with a floating IP address (DNAT, via Compute
node only)
• East/west for instances using different networks on the same router (via
Compute node only)
Install one Network and two Compute node
Step
9.1
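The `router_distributed` bullet expands to agent-mode settings on each node type. A sketch using the Kilo option names from the networking guide:

```
# Hypothetical /etc/neutron/neutron.conf fragment on controllers:
# new routers are created as distributed by default
[DEFAULT]
router_distributed = True

# /etc/neutron/l3_agent.ini on the network node: handles central SNAT
[DEFAULT]
agent_mode = dvr_snat

# /etc/neutron/l3_agent.ini on compute nodes: handles DNAT and east/west
[DEFAULT]
agent_mode = dvr
```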
47. • Add one more network node
• DHCP agent redundancy
• Neutron L3 agent redundancy
• Neutron metadata agent redundancy
• Kilo does not support combining the DVR and L3 HA mechanisms
• This is not implemented in our practice, but it should be feasible to implement
• The key is to keep all configuration (static/dynamic) in sync
• Two ways to go:
• Pacemaker + Corosync …
• VRRP + Keepalived (the network node needs a reboot when one goes down)
L3 network redundancy - TBD
Step
9.2
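For the VRRP + Keepalived path, Neutron's built-in L3 HA can be switched on in neutron.conf. A sketch with the Kilo option names (keeping in mind the caveat above that Kilo cannot combine this with DVR):

```
# Hypothetical /etc/neutron/neutron.conf fragment: VRRP-based L3 HA
[DEFAULT]
l3_ha = True
max_l3_agents_per_router = 3   # e.g. one agent per network node
min_l3_agents_per_router = 2
```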
48. • Cinder Services installation
• Install Cinder services in each controller
• Configure HAProxy for the API
• Storage Nodes set up
• SuperMicro equipment as storage
• Linux soft-raid for disk redundancy
• GlusterFS for node redundancy
• Verification
• Create and access a volume through the client on the jumpbox – try both controllers
• Do failover cases on disk level
• Do failover cases on node level
Volume redundancy
Step 10
(Diagram: two controllers, each running keystone, MySQL, and Cinder, behind HAProxy1/HAProxy2 reached via VIP2; a SuperMicro storage node with software-RAID disks exporting GlusterFS)
Any other storage node (e.g. a commodity server) works the same way; we use SuperMicro because it provides >100 TB of storage on one node
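The GlusterFS backend can be sketched as a cinder.conf fragment; the share host name below is a placeholder:

```
# Hypothetical /etc/cinder/cinder.conf fragment: GlusterFS volume backend
[DEFAULT]
volume_driver = cinder.volume.drivers.glusterfs.GlusterfsDriver
glusterfs_shares_config = /etc/cinder/glusterfs_shares

# /etc/cinder/glusterfs_shares lists one "host:/volume" share per line, e.g.:
# storage1:/cinder-volumes
```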
49. • Horizon Services installation
• Install Horizon services in each controller
• Configure Horizon services
• Use the external URL name for the console setting (instead of “controller”)
• Configure memcached
• /etc/openstack-dashboard/local_settings.py – change the CACHES LOCATION to VIP2
• /etc/memcached.conf – change 127.0.0.1 to the controller IP
• Configure HAProxy for the API
• Configure VIP1 to proxy to the internal controllers
• Make sure the Dashboard is accessible from external network
• Verification
• From Jumpbox to access the dashboard
• From external network to access the dashboard
Dashboard redundancy
Step 11
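The memcached CACHES change can be sketched as a local_settings.py fragment (the file is plain Python). Addresses are examples; point LOCATION at VIP2, or list both controllers explicitly:

```python
# Hypothetical fragment of /etc/openstack-dashboard/local_settings.py
SESSION_ENGINE = 'django.contrib.sessions.backends.cache'

CACHES = {
    'default': {
        'BACKEND': 'django.core.cache.backends.memcached.MemcachedCache',
        # VIP2, or both controllers so either memcached can serve:
        'LOCATION': ['10.0.0.11:11211', '10.0.0.12:11211'],
    }
}
```

Listing both controllers lets the Django memcached client hash keys across them, so the dashboard survives one memcached instance going away.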
50. • HEAT Services installation
• Install HEAT services in each controller
• Configure HEAT services
• Configure HAProxy for the HEAT API
• Verification
Orchestration redundancy
Step 12
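The HAProxy bullet follows the same pattern as the other API services; a fragment for the main Heat API with hypothetical addresses (heat-api-cfn on port 8000 gets an analogous block):

```
# Hypothetical haproxy.cfg fragment for the Heat API (port 8004)
listen heat_api_cluster
    bind 10.0.0.100:8004              # VIP (example address)
    balance source
    option tcpka
    server controller1 10.0.0.11:8004 check inter 2000 rise 2 fall 5
    server controller2 10.0.0.12:8004 check inter 2000 rise 2 fall 5
```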
51. • Ceilometer Services installation
• Install Ceilometer services in each controller
• Configure Ceilometer services
• Configure HAProxy for the Ceilometer API
• Verification
Telemetry redundancy - TBD
Step 13