Ultimate VMware NSX for Professionals: Leverage Virtualized Networking, Security, and Advanced Services of VMware NSX for Efficient Data Management and Network Excellence
About this ebook
Key Features
● Gain a profound understanding of the core principles of network virtualization with VMware NSX.
● Step-by-step explanations accompanied by screenshots for seamless deployments and configurations.
● Explore the intricate architecture of vital concepts, providing a thorough understanding of the underlying mechanisms.
● Coverage of the latest networking and security features in VMware NSX 4.1.1, ensuring you're up-to-date with the most advanced capabilities.
● Reinforce your understanding of core concepts with convenient reviews of key terms at the end of each chapter, solidifying your knowledge.
Book Description
"Embark on a transformative journey into the world of network virtualization with 'Ultimate VMware NSX for Professionals.' This comprehensive guide crafted by NSX experts, starts with an exploration of Software Defined Networking, NSX architecture, and essential components in a systematic approach. It then dives into the intricacies of deploying and configuring VMware NSX, unraveling key networking features through detailed packet walks. The book then ventures into advanced security realms—from Micro-segmentation to IDS/IPS, NTA, Malware Prevention, NDR, and the NSX Application Platform. Traverse through Datacenter Services, mastering NAT, VPN, and Load Balancing, with insights into the fundamentals of NSX Advanced Load Balancer. The exploration extends into NSX Multisite and NSX Federation, offering a detailed examination of onboarding, configuration, and expert tips for monitoring and managing NSX environments. To enrich your practical knowledge, immerse yourself in hands-on experiences with NSX Labs or VMware's complimentary Hands-on Labs, link provided in the book.
What you will learn
● Master the foundational concepts of VMware NSX Datacenter.
● Explore logical switching, logical routing, VRF, EVPN, and bridging.
● Enhance network security with Micro-segmentation and advanced threat prevention mechanisms.
● Understand and configure NSX Datacenter services such as NAT, VPN, DHCP, and DNS.
● Implement NSX Advanced Load Balancer for efficient load balancing solutions.
● Dive into NSX Multisite and Federation for managing deployments across multiple locations.
● Acquire monitoring and management skills, covering authentication, authorization, backups, and more.
● Explore VMware's free Hands-on Labs for practical experience.
Table of Contents
1. Introduction to NSX Datacenter
2. Deploying NSX Infrastructure
3. Logical Switching
4. Logical Routing – NSX Edge Nodes
5. Logical Routing – NSX Gateways
6. Logical Routing – VRF and EVPN
7. Logical Bridging
8. Security – Micro-segmentation
9. Security – Advanced Threat Prevention
10. Security – Network Detection and Response
11. NSX DataCenter Services – 1
12. NSX DataCenter Services – 2
13. NSX Multisite Deployment
14. Monitoring and Managing NSX
Index
Book preview
Ultimate VMware NSX for Professionals - Vinay Aggarwal
CHAPTER 1
Introduction to NSX Datacenter
Introduction
Before we dive into the world of SDN (Software Defined Networking) and VMware's SDN offering, NSX, let's take a step back and understand the challenges of traditional physical networking. Imagine that you, as a Network Engineer, are asked by an application developer to prepare the underlying network infrastructure for a multi-tier application (monolithic or container based): each component in its own broadcast domain, a firewall applied to every component with centralized management, and the ability to provision network services automatically, on demand, as and when required. Requirements like these from the application team quickly become very tough and challenging to meet.
But what if we had a magic wand we could wave to solve all these challenges? Well, cheesy lines aside, virtualization is the key. We have already solved similar challenges for compute and storage in the datacenter, and by moving physical networking into a software construct we can achieve the same level of automation and flexibility for our networking as well. So, fasten your seatbelt as we drive down the path of virtualizing networks. This chapter discusses the current challenges with physical networks, how SDN can answer these challenges and, more specifically, VMware's SDN offering, NSX.
Structure
In this chapter, we will discuss the following topics:
Challenges with Physical Networks
Introducing SDN
Different models of SDN
Introducing NSX – A VMware SDN Solution
NSX Architecture
Management Plane
Control Plane
Data Plane
Challenges with Physical Networks
Networks are analogous to veins in the human body: just as veins carry blood back and forth between different organs, networks, as their core functionality, provide connectivity between the different components in a datacenter. Apart from connecting components together, networks also provide other key services in a datacenter, including but not limited to:
Layer 2 communication
Layer 3 connectivity
Firewall services
NAT and VPN services
Load Balancing services, and so on.
Initially, we started with different applications running on various physical servers housed in a datacenter. If a developer needed to run an application on different hardware, it would take time to procure the hardware, set it up in the datacenter and get the operating system ready to run the application. At that pace, physical networks were able to keep up with the speed and requirements of the application. But with the evolution of virtualization and the rise of the SDDC (Software Defined Datacenter), physical networks are lagging behind. Make no mistake, we still require physical networks to connect components together even in an SDDC, but, as in the example from the Introduction, new challenges have arisen with physical networks.
Traffic Hairpinning
Traffic hairpinning, or pinning, is forcing traffic to flow through a single point. This can create a bottleneck that affects the performance of the system. In networking, devices are assigned IP addresses so they can communicate with each other. If two IP addresses belong to the same subnet, the devices can communicate with each other directly. But if they belong to different subnets, we need a router or another L3 device, known as the default gateway, that can forward the packet to the correct subnet. For physical workloads, it is fine to keep the default gateway on one or more routers, but this becomes an issue for virtual workloads.
Imagine two Virtual Machines sitting on the same physical server: in order to talk to each other, they have to traverse the whole network path because their traffic is hairpinned to the default gateway. Figure 1.1 shows an example where two virtual machines running on the same hypervisor, but belonging to different subnets, have their traffic hairpinned to the default gateway.
Figure 1.1: Traffic Hairpinning in Virtual Workload
Using firewalls or other security appliances such as IDS/IPS results in similar hairpinning of traffic, affecting application performance when the network design is not optimal.
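The subnet check behind this behavior is simple enough to sketch. The following snippet, a minimal illustration using only Python's standard ipaddress module and hypothetical addresses, shows why two VMs on the same hypervisor still need the default gateway, and therefore the physical network, as soon as they sit in different subnets.

```python
import ipaddress

# Hypothetical VMs running on the same hypervisor (illustration only).
vm_a = ipaddress.ip_interface("10.10.10.11/24")   # app tier VM
vm_b = ipaddress.ip_interface("10.10.20.12/24")   # db tier VM

if vm_a.network == vm_b.network:
    print("Same subnet: the VMs can exchange frames locally on the host.")
else:
    # Different subnets: each VM sends the packet to its default gateway,
    # so the traffic leaves the hypervisor and hairpins through the
    # physical router even though both VMs share the same host.
    print(f"{vm_a.ip} -> default gateway of {vm_a.network} -> {vm_b.ip}")
```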
Speed and Agility
Physical networks are not capable of keeping up with the speed and agility of the SDDC (Software Defined Datacenter). In today's world, businesses have ever-changing requirements to roll out new features and stay ahead of the competition. This requires faster provisioning of applications and the ability to freely move an application from one environment to another. Server virtualization, storage virtualization and containers have enabled IT to provision applications faster and move them freely, but physical networks are holding them back. Let's take the same example from the introduction of the chapter: I need to create new subnets and broadcast domains on demand for my application components, create new firewall rules and provision new load balancing services for the application to be up and running. How long do you think it would take to fulfil all these requirements: days, weeks or months? That would not be an acceptable answer for businesses; they need it done now. What about using different devices and systems? They may use different protocols and standards, which can make it difficult for the administrator to manage all these devices. And then there are hard limits on how far physical networks can expand; for example, the 12-bit VLAN ID field allows only 4096 VLANs per LAN.
Security
Security is the most important aspect of any network infrastructure; it is what keeps hackers away and protects your business. In traditional networks, security is provided by firewalls, which can perform Layer 3, Layer 4 and Layer 7 inspection. Other appliances provide advanced security features such as IDS/IPS (Intrusion Detection System/Intrusion Prevention System). However, these systems have a few inherent flaws. First, they require network traffic to be hairpinned because of their non-distributed architecture and their placement in the North-South traffic path. One could counter this by asking why not run firewall services directly on the workload itself, but different workloads have different operating systems and consoles, and it becomes very difficult to manage rules without centralized management. Second, most of these firewalls and intelligent systems are very good at acting as perimeter firewalls: they can stop most security attacks, but not all.
Most cyber-attacks in recent years share a common characteristic: lateral movement. Once a malicious actor gets access to a system inside a datacenter, they exploit various vulnerabilities to move malicious code from server to server, also known as East-West or lateral movement.
A perimeter-only strategy makes it extremely difficult to prevent lateral movement, and traditional firewalling makes it exceedingly expensive to safeguard traffic across all workloads in a datacenter.
Introducing SDN
SDN, or Software Defined Networking, is a networking approach in which the Control Plane (the decision maker) is separated from the Forwarding or Data Plane (which transmits the data). This approach differs from traditional networking in that, instead of using dedicated physical devices such as switches or routers to control traffic, it uses software either to create and control a new network infrastructure on top of the physical network, known as a virtual network, or to control the traditional hardware.
The key difference between SDN and traditional networking is the infrastructure. While traditional networking is a hardware-based approach, SDN is a software-based approach. In an SDN architecture, the control plane is centralized and implemented in software, while the data/forwarding plane is implemented in network devices or servers. This separation allows for greater flexibility and programmability, as the control plane can be easily updated or modified without requiring changes to the physical network infrastructure. It also lets network engineers manage the entire network infrastructure from a centralized user interface, enabling them to control the network, change configuration settings, provision resources and increase network capacity.
Different models of SDN
The key aspect of SDN is the separation of the Control Plane and Data Plane. This can be achieved in different ways which constitute different models of SDN.
Open SDN
Network administrators use a protocol like OpenFlow to control the behavior of virtual and physical switches at the data plane level.
SDN by APIs
Instead of using an open protocol, APIs (Application Programming Interfaces) control the flow of data over the network across multiple devices.
SDN Overlay Model
Another type of software-defined networking runs a virtual network on top of an existing hardware infrastructure, creating dynamic tunnels to different on-premise and remote data centers. The virtual network allocates bandwidth over a variety of channels and assigns devices to each channel, leaving the physical network untouched.
Hybrid SDN
This model combines software-defined networking with traditional networking protocols in one environment to support different functions on a network. Standard networking protocols continue to direct some traffic, while SDN takes on responsibility for other traffic, allowing network administrators to introduce SDN in stages to a legacy environment.
Introducing NSX – A VMware SDN Solution
VMware NSX is part of the wider VMware Virtual Cloud Network framework. In essence, VCN (Virtual Cloud Network) is a VMware framework for applying consistent network and security policies to different types of workloads, be they virtual machines, containers or applications, running on different types of platforms such as hypervisors, bare metal servers or cloud platforms. VCN is entirely software based and requires no dedicated or specialized hardware to run; as long as the minimum requirements are met, VCN can be deployed as a software layer. This software layer provides connectivity between different datacenters, cloud platforms and edge infrastructure and helps deploy consistent security policies, which helps overcome the challenges posed by traditional physical networks. Apart from providing networking and security, VCN includes several other solutions that are key to providing extensibility, integration, automation and consistency. Figure 1.2 captures the key solutions that are part of the VCN portfolio.
Figure 1.2: VMware Virtual Cloud Network Portfolio
Key solutions that are part of the VMware Virtual Cloud Network portfolio are:
NSX Data Center – VMware's solution for providing consistent networking and security policies across multiple applications and workloads running on different platforms
NSX Advanced Load Balancer (ALB) – VMware's distributed load balancing solution for virtual machines, containers, bare metal servers and workloads running in the cloud
Hybrid Cloud Extension (HCX) – Provides the capability to migrate workloads from legacy infrastructure to the cloud with minimal downtime
NSX Network Detection & Response (NDR) – Advanced threat detection and analysis services provided by VMware for proactive response
NSX IDS/IPS – VMware's advanced, fully distributed Layer 7 intelligent threat detection and prevention services, part of the NSX solution
NSX Cloud – SaaS offering for NSX services that delivers networking and security for applications running natively in the public cloud
Tanzu Service Mesh – Provides consistent networking and security management across different Kubernetes platforms from a central console
Aria Operations for Networks – Provides visibility and monitoring for virtual and physical networks and, based on the captured flows, provides distributed firewall recommendations for the NSX environment
Note: Of the key solutions listed above, NSX IDS/IPS, NDR, and ALB are covered in this book as part of NSX and not as standalone products. The other key solutions are outside the scope of this book.
NSX Data Center
NSX Data Center is a VMware SDN offering for networking and security and is based on four fundamental attributes:
Policy and Consistency – NSX allows the user to provide desired state configuration through the API or UI, which enables automation for fast-paced business requirements. It has various controls and inventories in place to keep the system in a consistent state across different platforms.
Connectivity – It provides logical switching and distributed routing capabilities, separated from the management plane and control plane, without being tied to a single compute domain. For example, multiple vCenter Servers can be integrated with a single NSX deployment to provide consistent networking across different clusters. Connectivity can be further extended to different sites, public clouds or containers via specific implementations.
Security – It allows for distributed security across multiple workloads while still providing centralized management for security policies. This helps apply a consistent security policy to different workloads, be they virtual machines, containers or applications running in a private or public cloud, to maintain the correct security posture.
Visibility – It provides a wide variety of toolsets such as Traceflow, Live Traffic Analysis, Spanning (port mirroring), metric collection, and events, all available in a single place, greatly reducing operational complexity.
NSX Architecture
VMware NSX Data Center is based on the overlay SDN model and provides consistent networking and security services in the IT environment. It implements these services, including but not limited to switching, routing, NAT, VPN and firewalling, in software, bringing networking services into a software construct. Further, these services can be provisioned on demand, in different combinations, to create different network environments as per application requirements with tremendous agility. NSX implements all these features with the help of three different functional planes. These are:
Management Plane
Control Plane
Data Plane
Figure 1.3 shows the different components that make up these functional planes; each is covered in depth in the upcoming sections.
Figure 1.3: NSX High-Level Architecture Components
Before NSX-T 2.4 (before version 4.x, NSX was named NSX-T), the Management plane and Control plane were deployed as separate appliances. Starting with NSX-T 2.4, however, the Management plane and Control plane are merged into a single appliance known as the Manager Appliance, and the Data Plane is implemented with the help of transport nodes – more on this in the upcoming sections.
Note: Starting NSX 4.0, KVM is no longer supported as a transport node. Also, VIO is the only supported OpenStack implementation with NSX 4.0 and above.
Management Plane
The management plane is the point of entry into the NSX environment. This is where cloud platforms, management tools or API clients interact with NSX through API calls, and where we as users interact either via the Graphical User Interface or the API. The primary responsibilities of the management plane are to store user configurations, handle queries and perform operational tasks such as pushing policies to the control plane for realization and storing configurations in the database.
The management plane is implemented by the NSX Manager appliance in an NSX environment. NSX Manager provides a centralized interface to view and manage NSX deployments. As the NSX Manager appliance is the single point of entry into an NSX environment, it is deployed as a group of three nodes forming the NSX Management Cluster, avoiding a single point of failure and providing high availability and scalability.
Key functionalities of the NSX Management Cluster include:
Serves as the entry point for user configurations, either via RESTful APIs or the GUI (Graphical User Interface); a sketch of an API call follows this list
Stores the desired user configuration in a distributed database
Pushes the desired user configuration to the control plane for realization
Communicates with data plane nodes to retrieve metrics as well as the realized configuration
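To illustrate the cluster as an API entry point, here is a minimal sketch that queries the management cluster status over the REST API using Python's requests library. The manager hostname, the credentials and the exact response fields are assumptions for illustration only; consult the NSX API guide for the authoritative endpoint and schema.

```python
import requests

# Hypothetical NSX Manager address and credentials (illustration only).
NSX_MANAGER = "https://nsx-mgr.corp.local"
AUTH = ("admin", "VMware1!VMware1!")

# Query the management cluster status through the REST entry point.
# verify=False is used here only because lab deployments often rely on
# self-signed certificates; validate certificates properly in production.
resp = requests.get(
    f"{NSX_MANAGER}/api/v1/cluster/status",
    auth=AUTH,
    verify=False,
)
resp.raise_for_status()

status = resp.json()
# Field names are assumed from typical NSX responses and may differ.
print(status.get("mgmt_cluster_status"))
print(status.get("control_cluster_status"))
```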
NSX Management Cluster
As stated earlier, the NSX Management Cluster is formed by grouping three NSX Manager appliances in a cluster. Prior to NSX-T 2.4, NSX Manager and the NSX Controllers were separated by role and deployed as separate appliances. In total, four appliances were needed, one for management and three for controllers, yet the management plane was still a single point of failure. Starting with NSX-T 2.4, the Management Plane and Control Plane components are merged into a single appliance, and clustering three of them eliminates the single point of failure for the management plane.
NSX Manager cluster primarily runs the following functions:
Manager Role
Policy Role
Controller Role
Distributed Persistent Database
The Manager and Policy roles provide management plane functionality, whereas the Controller role, as the name suggests, serves control plane functions. All desired configuration is saved in a distributed database that is replicated across all three nodes, giving every node in the cluster the same configuration view. Figure 1.4 depicts the different roles that make up the Management Plane and their cluster groups within an NSX Management Cluster.
Figure 1.4: NSX Management Cluster Components
The NSX Manager appliance is available in different sizes to suit different requirements. The following table highlights the options available for deploying an NSX Manager appliance:
Table 1.1: NSX Manager Appliance Size
Note: Extra Small VM resource size is only for the Cloud Services Manager appliance.
Small VM appliance is for lab or Proof-of-Concept deployment and should not be used for production deployments.
NSX Consumption Model
NSX manager provides two different models to interact with the NSX environment or consume NSX services. These two different models are handled by two different roles:
Policy Role
Manager Role
Primarily, you will be working with the Policy role, which is a declarative configuration model, whereas the Manager role is an imperative configuration model. Let's take an example to understand the Policy and Manager roles better. Imagine going to a pizzeria and ordering a margherita pizza. We tell the pizzeria our choice of pizza and the size of the order, whether small, medium or large. We are providing the desired state of our order, and we leave it up to the pizzeria to figure out the recipe, ingredients and step-by-step details to create the desired pizza. The Policy role handles the desired end-state order from the user and then hands it over to the Manager role to figure out the step-by-step process. So, we order through the Policy role, and the backend of the pizzeria, the Manager role, figures out how to achieve the desired end state. To summarize, in the Policy model (Policy role) we provide the desired state, whereas in the Manager role we need to provide step-by-step configurations.
NSX Policy: NSX Manager's default UI mode is Policy mode. Some of the key functions provided by the NSX Policy role are:
It provides a centralized interface for configuring networking and security policies across the environment
It takes the desired state configuration from users in the NSX UI or via the API URI /policy/api/ (see the sketch after this list)
It enables the user to specify the final desired state of the system without worrying about the current state or underlying implementation steps
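As a concrete illustration of the declarative model, the following sketch PATCHes a desired-state segment definition to the Policy API. The manager address, credentials, segment name and transport zone path are hypothetical; the point is the pattern of describing only the end state under /policy/api/v1/infra and letting NSX work out the realization steps.

```python
import requests

# Hypothetical NSX Manager address and credentials (illustration only).
NSX_MANAGER = "https://nsx-mgr.corp.local"
AUTH = ("admin", "VMware1!VMware1!")

# Desired state: a logical segment named "web-seg" with one subnet.
# We describe only WHAT we want; the Policy role hands it to the
# Manager role (Proton), which figures out HOW to realize it.
segment = {
    "display_name": "web-seg",
    "subnets": [{"gateway_address": "10.10.10.1/24"}],
    # Policy path of an overlay transport zone, assumed to exist already.
    "transport_zone_path": (
        "/infra/sites/default/enforcement-points/default/"
        "transport-zones/overlay-tz"
    ),
}

resp = requests.patch(
    f"{NSX_MANAGER}/policy/api/v1/infra/segments/web-seg",
    json=segment,
    auth=AUTH,
    verify=False,  # lab-style self-signed certificate
)
resp.raise_for_status()
print("Desired state accepted, HTTP", resp.status_code)
```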
NSX Manager: The NSX Manager UI is disabled by default and has been deprecated since NSX-T 3.2. It is used temporarily to address deployments created via Manager mode or upgraded from older NSX-T versions. NSX-T 3.2 introduced a policy promotion tool to migrate configuration from Manager mode to Policy mode. Other key functions provided by the NSX Manager role are:
It installs and prepares the data plane components
It retrieves and validates configurations from NSX Policy and pushes the configuration to the control plane
Also, it retrieves the metrics from data plane components
Using Policy vs. Manager UI
It is always recommended to use Policy UI for any new deployments as Manager UI is deprecated and new features are implemented in Policy UI only. However, for a few use cases, the use of Manager UI might be required. The following table highlights such use cases where Manager UI is required.
Table 1.2: Using Policy UI or Manager UI
NSX Manager Communication Workflow
We have discussed the different roles and components of the NSX Manager appliance at length, so let's map them out and see how they interact with each other. Figure 1.5 shows the key services that are part of NSX Manager and how they communicate with each other to take the desired state configuration from the user, validate it and save it in the database.
Figure 1.5: NSX Manager Communication Workflow
As a first step, the user accesses the UI or REST API, and the request comes through the Reverse Proxy on NSX Manager. The Reverse Proxy is the first point of entry and provides authentication and authorization. The configuration is then sent to Policy, which in turn updates CorfuDB, the persistent distributed database that stores the desired configuration. Policy then sends the configuration to Proton, which is our Manager role. Proton validates the configuration provided by Policy and updates CorfuDB. After validation and updating CorfuDB, Proton sends the configuration to the Control Plane.
Proton is one of the core components of NSX Manager and is responsible for key functionality such as logical switching, logical routing and the distributed firewall. Both Proton and Policy save their data in CorfuDB, which is replicated to the other nodes in the cluster to provide a consistent view across the NSX cluster.
Control Plane
The Control Plane consists of the Controller role, whose primary function is to maintain the realized state of the system based on the desired state configuration received from the Manager role (Proton). The Controller role performs this function by computing the runtime state based on the configuration from Proton and then pushing the stateless configuration to the data plane. Figure 1.6 highlights the components of the control plane.
Figure 1.6: NSX Control Plane
Another key function of the Controller is to distribute topology information reported by data plane nodes and maintain the realized state configuration for the system. In NSX, the control plane functionality is achieved through a multi-tier approach and it is divided into two main parts, Central Control Plane and Local Control Plane as shown in Figure 1.7:
Figure 1.7: NSX Control Plane Architecture
Central Control Plane (CCP): The Central Control Plane or CCP resides on the NSX Manager nodes and is part of the Controller role. The CCP is also implemented in a cluster form factor, with the Controller role running on all three NSX Manager nodes in the cluster. This provides both high availability and load distribution within the control plane. As stated earlier, one of the primary functions of the Controller role is to compute the runtime state configuration, and this is handled by the CCP on the NSX Manager nodes. The CCP also distributes topology information reported by the LCP running on data plane nodes to the other CCP nodes, so the same realized state is maintained across the environment.
As the CCP is logically separated from the data plane, a failure in the control plane does not affect data plane traffic; nor is any user traffic ever passed through the CCP, which avoids hairpinning.
Local Control Plane (LCP): The Local Control Plane or LCP exists on each and every data plane node, which can be an ESXi host, Edge node or bare metal server (more on data nodes in the Data Plane section). Its primary function is to program or configure the kernel modules on the data node and push stateless configuration to the forwarding engines. It monitors the link status on the local data node and reports any update in the forwarding engine to the CCP.
Figure 1.8: Information distribution from LCP to CCP
An important point to note here is that each data plane node communicates with a single control node. Any time a transport node (data node) is initialized, it is assigned to a control node. The CCP receives configuration updates from NSX Manager and pushes the information to the LCP on the transport nodes. In Figure 1.8, the transport node on the left has a configuration update; how the transport node on the right receives that update is shown as a multi-step process.
If an update or change occurs on a transport node, as a first step the LCP running on that transport node updates its assigned control node, that is, the CCP instance running on a specific control node. In the second step, the CCP distributes these changes to the other control nodes with the help of the distributed database; any transport nodes assigned to the same CCP node also receive the changes. In the last step, the CCP instances running on the other control nodes push the changes to their respectively assigned LCPs. In this way, changes on any node are propagated to the entire NSX environment to keep it consistent. This distribution of workload is achieved with the help of a process called sharding.
Sharding
Sharding is the process of distributing workload across multiple control plane nodes. The NSX Management Cluster is made up of three nodes, each running a Controller role and, more specifically, a CCP instance. With the help of sharding, transport nodes (data nodes) are divided equally among the CCP nodes, and each CCP node is responsible for maintaining the state information of the data nodes assigned to it. This divides the load across the NSX Management Cluster and avoids high load or high resource usage on any single node. Figure 1.9 shows transport nodes assigned to specific controller nodes to share the load.
Figure 1.9: Controller Sharding
On a high level:
The transport node is assigned to a specific CCP node for L2, L3 and distributed firewall rule configuration and distribution
Each CCP node receives updates from both the Manager role and Data plane node but maintains state configuration on the specific transport node it has been assigned to.
But a question arises here: what if a controller fails? What happens to the transport nodes assigned to the failed controller node? Will it affect data plane traffic?
The answer is pretty simple: we have separated the control plane from the data plane, so even if a controller node fails, data plane traffic remains unaffected and continues to flow. A heartbeat runs between all controller nodes, so if one controller node fails, the other controller nodes become aware of the failure and the sharding table is recalculated to redistribute the load among the remaining controller nodes (a conceptual sketch of this behavior follows the note below).
Note: If two controller nodes (NSX Manager appliances) fail in a three-node cluster, the Management Plane goes into a read-only state and no configuration changes can be performed.
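To make the sharding and recalculation behavior concrete, here is a minimal, purely conceptual sketch of transport nodes being distributed evenly across three controller nodes and reassigned when one fails. This is not the actual NSX sharding algorithm, only an illustration of the load distribution described above; all node names are hypothetical.

```python
def build_sharding_table(transport_nodes, controllers):
    """Assign each transport node to a controller in round-robin fashion."""
    table = {ctrl: [] for ctrl in controllers}
    for i, tn in enumerate(transport_nodes):
        table[controllers[i % len(controllers)]].append(tn)
    return table

controllers = ["ccp-1", "ccp-2", "ccp-3"]
transport_nodes = [f"esxi-{n:02d}" for n in range(1, 10)]

# Each CCP node owns roughly a third of the transport nodes.
print(build_sharding_table(transport_nodes, controllers))

# If a controller fails, the heartbeat detects it and the sharding table
# is recalculated across the surviving controllers; data plane traffic
# keeps flowing throughout.
surviving = [c for c in controllers if c != "ccp-2"]
print(build_sharding_table(transport_nodes, surviving))
```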
Data Plane
The data plane is the most important component in the NSX environment, as this is where the actual packet processing is performed. It is completely distributed and stateless in nature: if a data node fails, for example an ESXi host, all Virtual Machines running on that host fail over to other hosts.
The data plane consists of Transport Nodes, where a TN (Transport Node) is a host running LCP (Local Control Plane) daemons and forwarding engines. NSX supports different types of transport nodes, which can be classified into two broad categories:
Host Transport Nodes
Host transport nodes are either hypervisors or bare metal servers that are prepared and configured for NSX to provide networking and security to the workloads or applications running on them. The most common host transport nodes are:
ESXi Host: It provides data plane functions for different types of workloads such as VMs and containers. NSX implements the data plane on the ESXi host with the help of the N-VDS (NSX Virtual Distributed Switch). Starting with NSX-T 3.0, NSX can be installed directly on top of VDS v7 (vSphere Distributed Switch), and from NSX 4.0 the N-VDS is no longer supported on ESXi.
KVM: Starting with NSX 4.0, KVM is no longer supported. In earlier releases, NSX-T installed an N-VDS based on OVS (Open vSwitch) to provide data plane functions to VMs running on KVM.
Bare Metal Server: The NSX Agent or NSX third-party packages can be installed on bare metal servers running Windows or Linux to secure the applications running on them. A bare metal server can also be prepared as an Edge Transport Node to provide routing and network services to the NSX environment.
Edge Transport Nodes
Edge Nodes are special NSX appliances dedicated to running stateful and centralized network services that cannot be distributed, such as the Gateway Firewall, North-South routing, NAT and VPN. Edge TNs are grouped into a cluster to provide high availability and scalability for network services, abstracting and presenting their compute resources as a pool for the centralized network services. An Edge Transport Node can be deployed in two forms:
Edge VM Node: As the name suggests, the Edge transport node is deployed as a virtual machine in a vSphere environment. This is the most common deployment and can cater to most business requirements.
Bare Metal Edge Node: Instead of deploying a virtual machine, a bare metal server can be instantiated as an Edge transport node. Bare metal edge nodes are usually deployed where bandwidth requirements are much higher than a VM form factor can provide.
Some of the key functions provided by Data Plane are:
It forwards the packets based on the configuration provided by the local control plane which in turn is provided by the central control plane based on the user-desired configuration
It processes packets based on various flow tables and rules which are populated by the control plane
It reports the topology information back to the control plane with help of the local control plane which is further distributed to other data plane nodes
It monitors the status of links and tunnels and, in case of a failure, performs failover
Last but not least, it maintains statistics and metrics at the packet level which helps in determining resource utilization and different events
Transport Node Communication Path
As stated earlier in the Management Plane and Control Plane sections, part of their function is to communicate with data plane nodes: to collect metrics and statistics, and to push stateless configurations, respectively. Instead of the Manager (from the Management plane) and the CCP (from the Control plane) communicating directly with the data node, or more appropriately the transport node, they use a proxy called the APH (Appliance Proxy Hub). Figure 1.10 highlights the ports and services responsible for communication between an NSX Manager node and a transport node.
Figure 1.10: Transport Node communication with Manager Node
The APH (Appliance Proxy Hub) runs as a service on the NSX Manager node and provides secure connectivity to the transport node based on the NSX-RPC protocol. Its counterpart, NSX-Proxy, runs on the transport node and communicates with the APH to provide statistics or receive configurations. NSX preparation, or initial configuration, is handled by the Manager running on the NSX Manager node: when a transport node is added to NSX, the Manager sends the configuration to the APH, which in turn talks to NSX-Proxy over TCP port 1234 to configure the data plane on the local node. After the data node has been prepared for NSX, NSX-Proxy collects stats and metrics from the local node and sends them to the APH over TCP port 1234 for further analysis by the Manager.
Similarly, the CCP pushes stateless configurations, computed from the desired user configuration, to the APH, which in turn sends them to NSX-Proxy over TCP port 1235 to be applied on the forwarding engines. Likewise, any topology reported by the local node is communicated by NSX-Proxy to the APH over TCP port 1235, which then forwards the information to the CCP.
As there are multiple NSX Manager nodes and Transport Nodes, each establishes its own communication channel over a specified port and all traffic passing between different nodes is secure and encrypted in nature.
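When troubleshooting manager-to-transport-node communication, a quick reachability check of the ports described above can help. Below is a minimal sketch, meant to be run from a transport node, using Python's standard socket module; the manager hostname is hypothetical, and a successful TCP connection only shows that a port is reachable, not that NSX-RPC itself is healthy.

```python
import socket

# Hypothetical NSX Manager appliance address (illustration only).
NSX_MANAGER = "nsx-mgr.corp.local"

# Ports used between NSX-Proxy on the transport node and the APH on the
# Manager node, as described above: 1234 (management) and 1235 (CCP).
for port in (1234, 1235):
    try:
        with socket.create_connection((NSX_MANAGER, port), timeout=5):
            print(f"TCP {port} reachable on {NSX_MANAGER}")
    except OSError as exc:
        print(f"TCP {port} NOT reachable on {NSX_MANAGER}: {exc}")
```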
Conclusion
In this chapter, we discussed the challenges with traditional physical networks and how VMware NSX can overcome those challenges, without the need for any specific hardware, with the help of Software Defined Networking. Like any network device, NSX can be divided into three functional planes that are logically separated from each other, so a failure in one plane does not affect another. The Policy and Manager roles are key components of the Management Plane, whereas the CCP and LCP constitute the Control Plane. NSX supports a wide array of platforms that can be configured as data plane nodes, or transport nodes, to provide networking and security services to the workloads and applications running on them.
The real fun with NSX begins with hands-on experience. In the next chapter, we will begin with the deployment of NSX Manager and cover how to prepare the infrastructure for NSX.
Key Terms
VLAN (Virtual Local Area Network): A group of devices logically segmented into the same broadcast domain on a switched network
NAT (Network Address Translation): A method of mapping one or more private IP addresses to a public IP address before passing traffic over the network
SDN (Software Defined Networking): A networking approach that separates the control plane from the data plane with the help of software; it can be implemented in different models
Overlay: A virtual network created on top of the physical network to deliver different network services with the help of encapsulation and tunnels
NSX Manager Node: A specialized appliance provided by VMware to implement virtual networking in the datacenter; it serves management and control plane functions
NSX Manager Cluster: A group of three NSX Manager nodes that provides high availability and scalability for the management plane and control plane
Policy Role: A declarative approach that takes user input in the form of desired state configurations
Manager Role: An imperative approach that requires step-by-step configuration to achieve the desired state
CorfuDB: A persistent distributed database used to save configurations and inventory
CCP (Central Control Plane): The Controller role running on the NSX Manager nodes; it computes stateless configurations from the desired user configuration provided by the Manager and pushes them to the local control plane
LCP (Local Control Plane): The control plane component running on transport nodes; it pushes stateless configuration to the forwarding engine to achieve the realized state
Transport Node: Any host, such as a hypervisor or bare metal server, prepared for NSX and running LCP daemons and forwarding engines
APH (Appliance Proxy Hub): A service running on the NSX Manager node, responsible for communication between the NSX Manager node and the transport nodes
CHAPTER 2
Deploying NSX Infrastructure
Introduction
In the previous chapter, we reviewed the NSX architecture at a high level and the key components in an NSX environment, such as the Manager, Controller and Transport Nodes. That is all good, but the real fun begins with deployments. This chapter covers deploying NSX Manager, attaching it to a vCenter server and, most importantly, preparing the data nodes. The key steps in preparing NSX data nodes are:
Creating Transport Zone
Creating uplink profiles
Preparing ESXi host for NSX
This chapter covers all the preceding steps to prepare your NSX environment, in addition to deploying NSX Manager. Starting with NSX 4.x, KVM is no longer supported; hence, this chapter covers NSX infrastructure deployment and preparation for the vSphere environment only. KVM preparation for the NSX environment is out of the scope of this book, but documentation for it is referenced in the References section.
Structure
In this chapter, we will cover the following topics:
Deploying NSX Manager Cluster on vSphere
Deploying NSX Manager
NSX Manager Base Configuration
Deploying Additional NSX Manager Node
Creating NSX Management Cluster using CLI
Configuring Virtual IP for Cluster
Validating NSX Management Cluster
Preparing NSX Data Plane
Architecture of Transport Nodes
Transport Zones
Uplinks or pNICs
Uplink Profile
Transport Node Profile
Preparing ESXi Host as Transport Node
Deploying NSX Manager Cluster on vSphere
Before configuring or utilizing NSX networking in the environment, it is important to set up the management plane, as it provides the entry point into the NSX environment. As covered earlier, the management plane consists of three Manager appliances grouped in a cluster, which are deployed as Open Virtualization Appliance (OVA) appliances either on standalone ESXi hosts or on ESXi hosts managed by a vCenter server. The recommended way to deploy NSX Manager is through the vCenter server, as vCenter can deploy any virtual machine that NSX Manager directs it to, removing manual overhead. In a nutshell, the first NSX Manager is deployed using the OVA file provided by VMware, and after its successful deployment, the vCenter server is registered with NSX Manager as a compute manager. Subsequent NSX Managers are deployed from within the NSX UI to create a three-node management cluster. As a next step, the prerequisites for Data Plane preparation, such as Transport Zones and uplink profiles, are configured, and the ESXi hosts are prepared as Transport Nodes. Last but not least, Edge nodes are deployed and configured in an Edge Cluster to provide centralized network services. Figure 2.1 shows the steps to implement NSX in the