OVN databases high availability with scale test
Ref: https://www.youtube.com/watch?v=sjjMoFs8_QQ&t=0s&list=PLaJlRa-xItwCzuAL3mP6n02vmXab4Bwu-&index=34
Referenced talk: "Large scale overlay networks with OVN: problems and solutions" by Han Zhou.
Han Zhou presents problems and solutions for scaling Open Virtual Network (OVN) components in large overlay networks. The key challenges addressed are:
1. Scaling the OVN controller by moving from recomputing all flows to incremental processing based on changes.
2. Scaling the southbound OVN database by increasing probe intervals, enabling fast resync on reconnect, and improving performance of the clustered mode.
3. Further work is planned to incrementally install flows, reduce per-host data, and scale out the southbound database with replicas.
2. What components can be improved with scale test?
● OVN-Controller on computes/GWs – ongoing discussions and WIP upstream
● OVS-vSwitchd on computes/GWs – performance improved with help from the community.
● OVN-Northd on central nodes – ongoing discussions and WIP upstream
3. Why scale test?
● To see how OVN behaves when deployed at scale.
● To ensure an entire availability zone can be simulated reliably, as in big cloud deployments.
● To find bugs as early as possible and improve OVN.
4. What to use for scale test?
● OVN Scale test
○ When something fails, performs slowly or doesn't scale, it is really hard to answer "what", "why" and "where" without a solid scalability testing framework.
○ Since OpenStack Rally is a very convenient benchmarking tool, OVN scale test leverages it: it is implemented as a plugin of OpenStack Rally.
○ It is open source and maintained under the same umbrella project, Open vSwitch.
○ It is intended to provide the community with an OVN control-plane scalability test tool capable of performing specific, complicated and reproducible test cases on simulated scenarios.
○ Rally must be installed, and the workflow is also similar to Rally's.
○ Upstream scale test repo @ https://github.com/openvswitch/ovn-scale-test
○ User guide @ http://ovn-scale-test.readthedocs.org/en/latest/
5. Rally OVS
● To run OVN scale test, you don't need OpenStack installed; you only need Rally installed (see the command sketch below).
● Main keywords:
○ Deployment = a cloud deployment consisting of all network and compute components.
○ Task = any CRUD operations on compute, farm and network components such as lports, lswitches, lrouters, etc.
○ Farm = a collection of sandboxes.
○ Sandbox = a chassis (hypervisor/compute node/OVS sandbox).
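The end-to-end rally-ovs workflow, assembled from the commands used on the later slides of this deck (the JSON file names are those same examples):

rally-ovs deployment create --file ovn-multihost.json --name ovn-overlay   # slide 12: install OVS/OVN bits on the BMs
rally-ovs task start create_sandbox.farm1.json                             # slide 13: turn farm BMs into simulated chassis
rally-ovs task start create_routers_bind_ports.json                        # slide 14: create lrouters/lswitches/lports and bind them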
6. Base counters considered for an availability zone
● 8 lrouters
● 5 lswitches per router
● 250 lports per lswitch
● Total 10k lports
● Total chassis: 1k
● Total BMs that host the chassis: 20
● Total control plane nodes: 3
● 10 lports (VMs) per chassis
● OS: Ubuntu 16.04 with 4.4 kernel
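These counters are mutually consistent; a quick arithmetic check using only the numbers from this slide (shell):

echo $(( 8 * 5 * 250 ))   # 8 lrouters x 5 lswitches x 250 lports = 10000 lports
echo $(( 10000 / 10 ))    # at 10 lports per chassis -> 1000 chassis
echo $(( 1000 / 20 ))     # 1000 chassis on 20 BMs -> 50 sandboxes per farm (matches the task on slide 13)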
7. OVSdb service models
● OVSDB supports three service models for databases:
○ Standalone
○ Active-Backup
○ Clustered
● The service models provide different compromises among consistency, availability, and partition tolerance.
● They also differ in the number of servers required and in terms of performance.
● The standalone and active-backup database service models share one on-disk format, and clustered databases use a different format [1].
1. https://github.com/openvswitch/ovs/blob/80c42f7f218fedd5841aa62d7e9774fc1f9e9b32/Documentation/ref/ovsdb.7.rst
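For the clustered model, a minimal bootstrap sketch with ovsdb-tool; the 6644 Raft port, file locations and addresses are illustrative assumptions (the addresses follow the 192.168.220.x central nodes used later in this deck), and recent ovn-ctl versions can drive the same steps through its --db-sb-cluster-local-addr/--db-sb-cluster-remote-addr options:

# First central node: create the clustered SB database
ovsdb-tool create-cluster /etc/openvswitch/ovnsb_db.db \
    /usr/share/openvswitch/ovn-sb.ovsschema tcp:192.168.220.101:6644
# Remaining central nodes: join the existing cluster
ovsdb-tool join-cluster /etc/openvswitch/ovnsb_db.db OVN_Southbound \
    tcp:192.168.220.102:6644 tcp:192.168.220.101:6644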
8. OVN DBs Active-standby using pacemaker
[Diagram: Node1, Node2 and Node3 each run the NB DB, northd and the SB DB and form a Pacemaker cluster; one node is Active, the others Standby. The CMS/Neutron and the hypervisors (HVs) reach the databases through an LB VIP.]
Alternatively, this LB VIP can be replaced by:
● Option 2: BGP advertising the VIP on each node
● Option 3: put all 3 nodes on the same rack and use pacemaker to manage the VIP too
9. Start OVN DBs using pacemaker
● Let pacemaker manage the VIP resource.
● Using LB VIP:
○ set listen_on_master_ip_only=no
○ The active node will then listen on 0.0.0.0 so that traffic arriving via the LB VIP can reach the respective SB and NB DB ports
pcs resource create ip-192.168.220.108 ocf:heartbeat:IPaddr2 ip=192.168.220.108 op monitor interval=30s
pcs resource create ovndb_servers ocf:ovn:ovndb-servers manage_northd=yes master_ip=192.168.220.108 nb_master_port=6641 sb_master_port=6640 --master
pcs resource meta ovndb_servers-master notify=true
pcs constraint order start ip-192.168.220.108 then promote ovndb_servers-master
pcs constraint colocation add ip-192.168.220.108 with master ovndb_servers-master
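Once the resources are created, a quick sanity check of which node was promoted and that the NB DB answers on the VIP (a sketch using the VIP and port from the commands above):

pcs status                                    # shows the node running the promoted ovndb_servers instance
ovn-nbctl --db=tcp:192.168.220.108:6641 show  # confirms the NB DB is reachable through the VIP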
10. OVN DBs – Raft Clustering
[Diagram: Node1, Node2 and Node3 each run the NB DB, northd and the SB DB as members of a Raft cluster, with one node acting as cluster leader. The CMS/Neutron and the hypervisors (HVs) reach the databases through an LB VIP. Northd uses an OVSDB named lock to ensure only one northd instance is active.]
11. Starting OVN DBs using clustering
● For LB VIP:
○ Set the connection table to listen on 0.0.0.0 on all nodes (see the sketch below)
● For chassis:
○ Point it to either the VIP, e.g. tcp:<vip_ip>:6642
○ Or to all central node IPs, e.g. "tcp:192.168.220.101:6642,tcp:192.168.220.102:6642,tcp:192.168.220.103:6642"
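A minimal sketch of both steps, assuming the SB port 6642 used above, the LB VIP from slide 9 and the central-node addresses used elsewhere in this deck:

# On each central node: make the SB DB listen on all interfaces
ovn-sbctl set-connection ptcp:6642:0.0.0.0
# On each chassis: point ovn-controller either at the VIP ...
ovs-vsctl set open . external_ids:ovn-remote="tcp:192.168.220.108:6642"
# ... or at all three cluster members
ovs-vsctl set open . external_ids:ovn-remote="tcp:192.168.220.101:6642,tcp:192.168.220.102:6642,tcp:192.168.220.103:6642"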
12. How to set up scale test env ?
• Create a deployment, which installs the necessary packages/binaries on a BM:
– rally-ovs deployment create --file ovn-multihost.json --name ovn-overlay
{
  "type": "OvnMultihostEngine",
  "controller": {
    "type": "OvnSandboxControllerEngine",
    "deployment_name": "ovn-new-controller-node",
    "ovs_repo": "https://github.com/openvswitch/ovs.git",
    "ovs_branch": "branch-2.9",
    "ovs_user": "root",
    "net_dev": "eth0",
    "controller_cidr": "192.168.10.10/16",
    "provider": {
      "type": "OvsSandboxProvider",
      "credentials": [
        {
          "host": "10.x.x.x",
          "user": "root"
        }
      ]
    }
  },
  "nodes": [
    {
      "type": "OvnSandboxFarmEngine",
      "deployment_name": "ovn-farm-node-31",
      "ovs_repo": "https://github.com/openvswitch/ovs.git",
      "ovs_branch": "branch-2.9",
      "ovs_user": "root",
      "provider": {
        "type": "OvsSandboxProvider",
        "credentials": [
          {
            "host": "10.x.x.x",
            "user": "root"
          }
        ]
      }
    }
  ]
}
[Diagram: the rally-ovs host SSHes into the OVN central node and into each OVN farm (Farm1 ... Farm20) over the TOR switch.]
13. How to set up scale test env ?
• The rally-ovs task create_sandbox is equivalent to converting the BM into a compute node with OVS installed.
• rally-ovs task start create_sandbox.farm1.json
{
  "version": 2,
  "title": "Create sandbox",
  "description": "Creates 50 sandboxes on each farm",
  "tags": ["ovn", "sandbox"],
  "subtasks": [
    {
      "title": "Create sandbox on farm 1",
      "group": "ovn",
      "description": "",
      "tags": ["ovn", "sandbox"],
      "run_in_parallel": false,
      "workloads": [
        {
          "name": "OvnSandbox.create_sandbox",
          "args": {
            "sandbox_create_args": {
              "farm": "ovn-farm-node-1",
              "amount": 50,
              "batch": 10,
              "start_cidr": "192.230.64.0/16",
              "net_dev": "eth0",
              "tag": "TOR1"
            }
          },
          "runner": {
            "type": "constant",
            "concurrency": 4,
            "times": 1,
            "max_cpu_count": 4
          },
          "context": {
            "ovn_multihost": {
              "controller": "ovn-new-controller-node"
            }
          }
        }
      ]
    }
  ]
}
[Diagram: rally-ovs SSHes into the OVN central node and into OVN Farm1 over the TOR switch; Farm1 now hosts sandboxes HV1, HV2, ... HV50.]
14. How to set up scale test env ?
• Finally, create lrouters, lswitches and lports, and bind the lports to the chassis.
• rally-ovs task start create_routers_bind_ports.json
{
  "OvnNetwork.create_routers_bind_ports": [
    {
      "runner": {
        "type": "serial",
        "times": 1
      },
      "args": {
        "port_create_args": {
          "batch": 100
        },
        "router_create_args": {
          "amount": 8,
          "batch": 1
        },
        "network_create_args": {
          "start_cidr": "172.145.1.0/24",
          "batch": 1
        },
        "networks_per_router": 5,
        "ports_per_network": 250,
        "port_bind_args": {
          "wait_up": true,
          "wait_sync": "none"
        }
      },
      "context": {
        "sandbox": {},
        "ovn_multihost": {
          "controller": "ovn-new-controller-node"
        }
      }
    }
  ]
}
[Diagram: the same rally-ovs / central node / Farm1 topology as before, with lports (lport1 ... lport500) now bound across the sandboxes HV1 ... HV50.]
16. OVN scale test with HA
● OVN scale test by default sets up a single, standalone active OVN DB.
● Hence, we need to set up an HA cluster separately.
○ TODO: support for deploying an HA cluster is to be added to ovn-scale-test to avoid the manual setup.
● For testing HA, the chassis must be pointed at the HA setup; this can be set to the respective OVN DB HA VIP in create_sandbox.json using the parameter below:
○ "controller_cidr": "192.168.10.10/16"
17. Scenarios – Active-standby using pacemaker
Scenario: Standby node reboot
– Impact on control plane: No
– Impact on data plane: No
Scenario: Active node reboot
– Impact on control plane: Yes (~5+ minutes, as the SB DB is running super hot resyncing the data)
– Impact on data plane: Only newly created VMs/lports until the SB DB cools down
Scenario: All active and standby nodes reboot
– Impact on control plane: Yes (a few minutes, depending on how soon the new node is up and the data sync is finished)
– Impact on data plane: No*
• *In one run the entire NB DB data got flushed/lost, causing both control and data plane impact.
• *Discussion @ https://mail.openvswitch.org/pipermail/ovs-discuss/2018-August/047161.html
• *Fix rolled out with help from upstream; no issues reported so far.
• *Commit ecf44dd3b26904edf480ada1c72a22fadb6b1825
18. OVN DBs HA – Active-backup with pacemaker
● Current status
○ Basic functionality tested
○ Scale testing is ongoing; findings are reported and some major issues have been fixed with help from upstream.
○ Detailed scale test scenarios reported and shared with the community on the mailing list: https://mail.openvswitch.org/pipermail/ovs-discuss/2018-September/047405.html
○ Feedback and improvements requested from upstream.
19. Scenarios – Clustered DBs
Scenario: Any active node reboot
– Impact on control plane: No
– Impact on data plane: No
Scenario: All active nodes reboot
– Impact on control plane: Yes (a few minutes, depending on how soon the new node is up, along with leader selection and data sync completion)
– Impact on data plane: Not fully verified
20. Raft with scale test summary
● Current status
○ Basic functionality tested.
○ Scale testing is ongoing; problems were found when using rally-ovs (OVN scale test) with around 2k+ lports. For example, an ovn-nbctl run against db="tcp:192.168.220.101:6641,tcp:192.168.220.102:6641,tcp:192.168.220.103:6641" issuing a series of "-- wait-until Logical_Switch_Port lport_061655_SKbDHz up=true" commands fails with:
ovn-nbctl: tcp:192.168.220.101:6641,tcp:192.168.220.102:6641,tcp:192.168.220.103:6641: database connection failed (End of file)
(rally reports the failure from /ebay/home/aginwala/rally-repo/rally/rally/task/runner.py, line 66, in _run_scenario_once)
○ Following up with the community to get it fixed soon; discussion @ https://mail.openvswitch.org/pipermail/ovs-dev/2018-May/347260.html
○ Upstream also has a Raft torture test among the test cases in the OVS repo for testing locally.
21. Some tunings for both clustered and non-clustered setups
• Netfilter TCP params on all central nodes (see the consolidated sketch after this list):
– The default tcp_max_syn_backlog and net.core.somaxconn values are too small, so increase them to avoid TCP SYN flood messages in syslog:
• net.ipv4.tcp_max_syn_backlog = 4096
• net.core.somaxconn = 4096
• Pacemaker configuration
– When the SB DB starts on the new active node, it is very busy syncing data to all HVs.
– During this time, the pacemaker monitor can time out, so the timeout value for "op monitor" needs to be set large enough; otherwise the resource keeps restarting/failing over.
– Hence, configure the pacemaker monitor for the ovndb-servers resource as: op monitor interval=60s timeout=50s
• Inactivity probe settings on all chassis
– Set the inactivity probe to 3 min so that the central SB DB is not overloaded with probe handling, while the chassis will still notice the change if a failover happens.
• Upstart settings on all central nodes when using pacemaker:
– Disable the ovn-central and openvswitch-switch upstart jobs; otherwise, when a node reboots, pacemaker sees an already-running pid, all the nodes end up acting as standalone nodes, and the LB gets confused and sends traffic to a standby node.
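A consolidated sketch of these tunings; the sysctl drop-in file name and the pcs resource update syntax are illustrative assumptions, while ovn-remote-probe-interval is the external_ids knob ovn-controller reads (in milliseconds):

# Central nodes: enlarge the TCP SYN/accept backlogs (values from this slide)
cat > /etc/sysctl.d/99-ovn-central.conf <<'EOF'
net.ipv4.tcp_max_syn_backlog = 4096
net.core.somaxconn = 4096
EOF
sysctl --system

# Central nodes: give the pacemaker monitor enough headroom during SB resync
pcs resource update ovndb_servers op monitor interval=60s timeout=50s

# Chassis: raise the inactivity probe towards the central SB DB to 3 minutes
ovs-vsctl set open . external_ids:ovn-remote-probe-interval=180000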
22. Promising outcome and more to go
• OVS-vswitchd CPU utilization was running super high on the chassis.
• Performance was improved by making ofproto faster, and the results are impressive: the test completed in 3+ hours vs 8+ hours.
• Discussion @ https://mail.openvswitch.org/pipermail/ovs-discuss/2018-February/046140.html
• Commit c381bca52f629f3d35f00471dcd10cba1a9a3d99
23. CPU/Mem stats for active-standby
• Active central node:
– OVN NB DB: CPU 0.12, Mem 97392000
– OVN SB DB: CPU 0.92, Mem 777028000
– OVN Northd: CPU 6.78, Mem 825836000
• Chassis:
– OVSDB server: CPU 0.02, Mem 11672000
– OVS-vSwitchd: CPU 3.75, Mem 152812000
– OVN-controller: CPU 0.94, Mem 839188000
Note:
• Mem: RES memory, expressed in bytes (regardless of whether the tool displays MB, GB or TB).
• CPU: total CPU time the task has used since it started, converted to an integer.
For example, if the total CPU time for a running ovn-controller process is 6:26.90 (6 min 26.90 s),
we convert it into hundredths of a second with: 6 * 6000 + 26 * 100 + 90 = 38690.
• The values are then converted into deltas (rate per second).
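The same conversion as a small shell helper (a sketch; it parses the M:SS.hh format from the example above):

t="6:26.90"                                # total CPU time as reported by top
min=${t%%:*}; rest=${t#*:}                 # "6" and "26.90"
sec=${rest%%.*}; hund=${rest#*.}           # "26" and "90"
echo $(( min * 6000 + sec * 100 + hund ))  # 6 * 6000 + 26 * 100 + 90 = 38690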
24. Stuck?
• Reach out to the OVS community, as it's super interactive and responsive.
• For any generic OVS queries/tech discussions, use [email protected] so that a wide variety of engineers can respond.