SZ-D-5.1-ConfigurationGuide-RevA-20181109
Ruckus SmartZone
Data Plane (vSZ-D/SZ100-D)
Configuration Guide
Export Restrictions
These products and associated technical data (in print or electronic form) may be subject to export control laws of the United
States of America. It is your responsibility to determine the applicable regulations and to comply with them. The following notice
is applicable for all products or technology subject to export control:
These items are controlled by the U.S. Government and authorized for export only to the country of ultimate destination for use by the
ultimate consignee or end-user(s) herein identified. They may not be resold, transferred, or otherwise disposed of, to any other country
or to any person other than the authorized ultimate consignee or end-user(s), either in their original form or after being incorporated
into other items, without first obtaining approval from the U.S. government or as otherwise authorized by U.S. law and regulations.
Disclaimer
THIS CONTENT AND ASSOCIATED PRODUCTS OR SERVICES ("MATERIALS"), ARE PROVIDED "AS IS" AND WITHOUT WARRANTIES OF
ANY KIND, WHETHER EXPRESS OR IMPLIED. TO THE FULLEST EXTENT PERMISSIBLE PURSUANT TO APPLICABLE LAW, ARRIS
DISCLAIMS ALL WARRANTIES, EXPRESS OR IMPLIED, INCLUDING, BUT NOT LIMITED TO, IMPLIED WARRANTIES OF
MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE, TITLE, NON-INFRINGEMENT, FREEDOM FROM COMPUTER VIRUS,
AND WARRANTIES ARISING FROM COURSE OF DEALING OR COURSE OF PERFORMANCE. ARRIS does not represent or warrant
that the functions described or contained in the Materials will be uninterrupted or error-free, that defects will be corrected, or
are free of viruses or other harmful components. ARRIS does not make any warranties or representations regarding the use of
the Materials in terms of their completeness, correctness, accuracy, adequacy, usefulness, timeliness, reliability or otherwise. As
a condition of your use of the Materials, you warrant to ARRIS that you will not make use thereof for any purpose that is unlawful
or prohibited by their associated terms of use.
Limitation of Liability
IN NO EVENT SHALL ARRIS, ARRIS AFFILIATES, OR THEIR OFFICERS, DIRECTORS, EMPLOYEES, AGENTS, SUPPLIERS, LICENSORS
AND THIRD PARTY PARTNERS, BE LIABLE FOR ANY DIRECT, INDIRECT, SPECIAL, PUNITIVE, INCIDENTAL, EXEMPLARY OR
CONSEQUENTIAL DAMAGES, OR ANY DAMAGES WHATSOEVER, EVEN IF ARRIS HAS BEEN PREVIOUSLY ADVISED OF THE
POSSIBILITY OF SUCH DAMAGES, WHETHER IN AN ACTION UNDER CONTRACT, TORT, OR ANY OTHER THEORY ARISING FROM
YOUR ACCESS TO, OR USE OF, THE MATERIALS. Because some jurisdictions do not allow limitations on how long an implied
warranty lasts, or the exclusion or limitation of liability for consequential or incidental damages, some of the above limitations
may not apply to you.
Trademarks
ARRIS, the ARRIS logo, Ruckus, Ruckus Wireless, Ruckus Networks, Ruckus logo, the Big Dog design, BeamFlex, ChannelFly,
EdgeIron, FastIron, HyperEdge, ICX, IronPoint, OPENG, SmartCell, Unleashed, Xclaim, ZoneFlex are trademarks of ARRIS
International plc and/or its affiliates. Wi-Fi Alliance, Wi-Fi, the Wi-Fi logo, the Wi-Fi CERTIFIED logo, Wi-Fi Protected Access (WPA),
the Wi-Fi Protected Setup logo, and WMM are registered trademarks of Wi-Fi Alliance. Wi-Fi Protected Setup™, Wi-Fi Multimedia™,
and WPA2™ are trademarks of Wi-Fi Alliance. All other trademarks are the property of their respective owners.
Network Architecture........................................................................................................................................................................29
Communication Workflow................................................................................................................................................................ 31
Hardware Requirements................................................................................................................................................................... 41
Important Notes About Hardware Requirements................................................................................................................................. 41
Supported Modes of Operation...............................................................................................................................................................42
vSZ-D with DirectI/O.......................................................................................................................................................................... 44
vSZ-D with Hypervisor vSwitch Installed......................................................................................................................................... 45
vSZ-D and vSZ with Hypervisor vSwitch Installed.......................................................................................................................... 46
Hypervisor Configuration..................................................................................................................................................................49
Supported Hypervisors............................................................................................................................................................................. 49
General Configuration.............................................................................................................................................................................. 49
VMware Specific Configuration................................................................................................................................................................ 49
KVM Specific Configuration...................................................................................................................................................................... 54
Hypervisor Detail ............................................................................................................................................................................ 54
CPU Type............................................................................................................................................................................................. 55
Memory Allocation.............................................................................................................................................................................56
Disk Configuration............................................................................................................................................................................. 57
NIC Configuration in Direct IO Mode............................................................................................................................................... 60
NIC Configuration in vSwitch Mode................................................................................................................................................. 61
Adding a PCI Device to a VM on Virt-Manager ...............................................................................................................................63
NIC Card Setting................................................................................................................................................................................. 64
Deployment of vSZ............................................................................................................................................................................. 65
Deploy vSZ-D with 40GB NIC on ESXi Server..........................................................................................................................................65
Deploy vSZ-D with 40GB NIC on Linux Server........................................................................................................................................80
Document Conventions
The following table lists the text conventions that are used throughout this guide.
NOTE
A NOTE provides a tip, guidance, or advice, emphasizes important information, or provides a reference to related
information.
ATTENTION
An ATTENTION statement indicates some information that you must read before continuing with the current action or
task.
CAUTION
A CAUTION statement alerts you to situations that can be potentially hazardous to you or cause damage to
hardware, firmware, software, or data.
DANGER
A DANGER statement indicates conditions or situations that can be potentially lethal or extremely hazardous to you.
Safety labels are also attached directly to products to warn of these conditions or situations.
Convention Description
bold text Identifies command names, keywords, and command options.
italic text Identifies a variable.
[] Syntax components displayed within square brackets are optional.
Default responses to system prompts are enclosed in square brackets.
{x|y|z} A choice of required parameters is enclosed in curly brackets separated by vertical bars. You must select one of the
options.
x|y A vertical bar separates mutually exclusive elements.
<> Nonprinting characters, for example, passwords, are enclosed in angle brackets.
... Repeat the previous element, for example, member[member...].
\ Indicates a “soft” line break in command examples. If a backslash separates two lines of a command input, enter the
entire command at the prompt without the backslash.
Document Feedback
Ruckus is interested in improving its documentation and welcomes your comments and suggestions.
When providing feedback, include the document title, part number, and page number, for example:
• Ruckus SmartZone Upgrade Guide, Release 5.0
• Part number: 800-71850-001 Rev A
• Page 7
White papers, data sheets, and other product documentation are available at https://www.ruckuswireless.com.
For product support information and details on contacting the Support Team, go directly to the Ruckus Support Portal using
https://support.ruckuswireless.com, or go to https://www.ruckuswireless.com and select Support.
Open a Case
When your entire network is down (P1), or severely impacted (P2), call the appropriate telephone number listed below to get
help:
• Continental United States: 1-855-782-5871
• Canada: 1-855-782-5871
• Europe, Middle East, Africa, Central and South America, and Asia Pacific: toll-free numbers are available at
https://support.ruckuswireless.com/contact-us, and Live Chat is also available.
• Worldwide toll number for our support organization. Phone charges will apply: +1-650-265-0903
We suggest that you keep a physical note of the appropriate support number in case you have an entire network outage.
Self-Service Resources
The Ruckus Support Portal at https://support.ruckuswireless.com offers a number of tools to help you to research and resolve
problems with your Ruckus products, including:
• Technical Documentation—https://support.ruckuswireless.com/documents
• Community Forums—https://forums.ruckuswireless.com/ruckuswireless/categories
• Knowledge Base Articles—https://support.ruckuswireless.com/answers
• Software Downloads and Release Notes—https://support.ruckuswireless.com/#products_grid
• Security Bulletins—https://support.ruckuswireless.com/security
Using these resources will help you to resolve some issues, and will provide TAC with additional data from your troubleshooting
analysis if you still require assistance through a support case or RMA. If you still require help, open and manage your case at
https://support.ruckuswireless.com/case_management.
This guide is written for service operators and system administrators who are responsible for managing, configuring, and
troubleshooting Ruckus devices. Consequently, it assumes a basic working knowledge of local area networks, wireless
networking, and wireless devices.
NOTE
If release notes are shipped with your product and the information there differs from the information in this guide,
follow the instructions in the release notes.
Most user guides and release notes are available in Adobe Acrobat Reader Portable Document Format (PDF) or HTML on the
Ruckus Support Web site at https://support.ruckuswireless.com/contact-us.
With the introduction of the Virtual Data Plane (vSZ-D) in the SZ 3.2 release and the SZ100-D in release 5.1, the SmartZone
platform launched sophisticated data plane capabilities. This is a truly differentiated offering that provides compelling
business benefits for varied deployment scenarios.
NOTE
You can create a maximum of 2047 multicast groups on vSZ-D/SZ100-D.
Secure data plane tunneling: Manages the creation of aggregated user data streams through a secure tunnel.
Multiple hypervisor support: Supports the most widely deployed VMware and KVM hypervisors (applicable only to vSZ-D).
Dynamic data plane scaling: Supports 1 Gbps, 10 Gbps, or higher throughput capacities for all types of enterprise and
carrier deployments, and can be dynamically tuned without software updates.
With the vSZ-D/SZ100-D, it is now possible to support tunneled WLANs on Ruckus APs that are managed by a vSZ controller. In
addition, both the Ruckus APs and the vSZ-D/SZ100-D support encryption on tunnels for data protection. This is especially
important when tunneling guest traffic and in use cases where the service provider or enterprise operator does not have
control of the backhaul links.
NOTE
vSZ-D/SZ100-D does not support IPv6 addresses for northbound soft-GRE tunnels.
This is especially useful for managed service providers and ISPs who manage remote distributed sites through a central or
regional data center. In this architecture, the vSZ is in the provider’s data center managing APs across all remote distributed sites.
On sites where there is a need for tunneling, they can introduce the vSZ-D/SZ100-D and bind it to that particular site so that
all APs on that site tunnel traffic locally to the vSZ-D/SZ100-D on that site.
NOTE
The DHCP Server/NAT function, if enabled, is supported only for wireless client IPv4 address assignment.
NOTE
DHCP Server and NAT service configuration is supported through the AP and the web user interface. Refer to the
Administrator Guide for configuring the DHCP server and NAT service on the web interface.
DHCP Server
The DHCP Server is designed in-line in the data plane and provides extreme scale in terms of IP address assignment to clients.
This feature is especially useful in high-density and dynamic deployments, such as stadiums and train stations, where large
numbers of clients continuously move in and out of Wi-Fi coverage. The DHCP server in the network needs to scale to meet these
challenging requirements. The DHCP server on the vSZ-D/SZ100-D provides high-scale IP assignment and management with
minimal impact on forwarding latency. The DHCP server supports 440K IP addresses and 64 pools with profile support.
NOTE
The DHCP service can scale to a maximum of 100K IP leases per data plane. Licenses can be added incrementally on a
per-group basis (two DPs per group).
NAT Service
With the NAT service enabled, all Wi-Fi client traffic is NATed by the vSZ-D/SZ100-D before being forwarded to the core network.
Each vSZ-D/SZ100-D supports up to 990K ports and 16 public IP addresses for NAT. This significantly reduces network overhead
by reducing MAC-table pressure on the upstream switches, which is especially useful in high-density deployments.
NOTE
Only a single subnet is supported.
NOTE
The NAT service scales to a maximum of 2 million sessions/flows per data plane. Licenses can be added incrementally on
a per-data-plane basis.
DHCP/NAT
DHCP/NAT functionality on SZ-managed APs and DPs (data planes) allows customers to reduce costs and complexity by removing
the need for a DHCP server/NAT router to provide IP addresses to clients. For data traffic aggregation and services delivery, you
can choose the appropriate user profile for DHCP and NAT services on the vSZ-D/SZ100-D.
AP-based DHCP/NAT
In highly distributed environments, particularly those with only a few APs per site, the ability for an AP or a set of APs to provide
DHCP/NAT support to local client devices simplifies deployment by providing all-in-one functionality on the AP, which eliminates
the need for a separate router and DHCP server for each site. It also eases site management by providing central control and
monitoring of the distributed APs and their clients.
Profile-based DHCP
The DHCP Server is designed in-line in the data plane and provides extreme scale in terms of IP address assignment to clients.
This feature is especially useful in high-density and dynamic deployments, such as stadiums and train stations, where large
numbers of clients continuously move in and out of Wi-Fi coverage. The DHCP server in the network needs to scale to meet these
challenging requirements. The DHCP server on the vSZ-D/SZ100-D provides high-scale IP assignment and management with
minimal impact on forwarding latency. The DHCP server supports 440K IP addresses and 64 pools with profile support.
NOTE
The DHCP Server/NAT function, if enabled, is supported only for wireless client IPv4 address assignment.
Profile-based NAT
With the NAT service enabled, all Wi-Fi client traffic is NATed by the vSZ-D/SZ100-D before being forwarded to the core network.
Each vSZ-D/SZ100-D supports up to 990K ports and 16 public IP addresses for NAT. This significantly reduces network overhead
by reducing MAC-table pressure on the upstream switches, which is especially useful in high-density deployments.
NOTE
To edit or remove the license assignment on the data plane, select the assignment from the list and click Configure or
Delete respectively.
L3 Roaming
The Ruckus vSZ and vSZ-D/SZ100-D architecture now supports L3 roaming without the need for additional mobility controllers.
The key use cases for L3 roaming are well understood: typically, a large WLAN network where APs are separated on different
VLAN segments and there is a need for IP address preservation and, potentially, session persistence. The most common
deployments are large campus networks designed with multiple switches and VLANs that need to support L3 roaming.
On the vSZ-D/SZ100-D, Ruckus Wi-Fi can now support L3 roaming with IP address preservation. The following high-level use
case describes the feature: a large network that is broken up into multiple campuses and needs to support L3 roaming. The
figure below depicts two campuses that are L2-separated but require L3 roaming.
The APs in campus A set up a tunneled WLAN to the vSZ-D in their campus (using Zone Affinity), and the APs in campus B set up
a tunneled WLAN to the vSZ-D in their campus.
Each vSZ-D/SZ100-D can be configured to run a DHCP server and NAT the traffic, or be set up as a DHCP relay.
When a client roams from an AP in campus A to an AP in campus B, the vSZ-D/SZ100-D in campus B detects the roaming event
and forwards the traffic (or assigns the same IP back to the client) to the vSZ-D in campus A (the home or anchor
vSZ-D/SZ100-D) to ensure that service to the client is not interrupted.
One additional unique benefit of this architecture over other L3 roaming solutions is that the roaming client can still access
its home network resources (similar to mobile roaming on 3G/4G networks).
NOTE
Traffic between inter-vSZ-D/SZ100-D tunnels in Figure 6 can be encrypted by enabling tunnel encryption. Refer to Enabling
Tunnel Encryption on page 26.
NOTE
If the IP address of the UE changes, then the session breaks.
You have successfully enabled L3 roaming, and also set the roaming criteria based on which DPs would connect within
the network.
Lawful Intercept
An important carrier-class feature introduced on the vSZ-D/SZ100-D is support for Lawful Intercept (LI) requirements. These
requirements are becoming mandatory and stringent in SP Wi-Fi deployments, where service providers need to meet CALEA
standard requirements.
The Ruckus vSZ-D/SZ100-D now supports the ability to identify a device that has an LI warrant issued against it and to mirror
the client data traffic over L2oGRE to a Lawful Intercept Gateway (LIG) hosted in the SP’s data center.
The figure below illustrates the high-level architecture supported for Lawful Intercept capabilities. It also depicts an
architecture in which smaller sites (with fewer APs) do not need data tunneling to the vSZ-D/SZ100-D but need Lawful
Intercept (depicted as multi-AP and single-AP sites), while a large enterprise site with many APs needs both tunneling and
Lawful Intercept (depicted as an enterprise site with vSZ-D/SZ100-D on premises).
NOTE
As mentioned in this document, the flexibility of the Ruckus vSZ/vSZ-D architecture means that Wi-Fi service providers can
deploy the vSZ-D/SZ100-D only on premises where there is a need for tunneling (typically larger venues).
The Ruckus architecture simply involves spinning up a vSZ-D/SZ100-D instance at the central data center and designating that
instance as a CALEA mirroring agent. All of this configuration is centrally managed through the vSZ. Once the network is set
up appropriately, when a client device with a matching MAC address that has a warrant is detected on any of the access sites,
the APs (or the vSZ-D/SZ100-D) mirror the packets to the vSZ-D (the CALEA mirroring agent) in the data center, which then
forwards the traffic to the LIG (Lawful Intercept Gateway) in the same data center or in the SP data center.
NOTE
The Flexi-VPN option is available only if the Access VLAN ID is 1 and the VLAN Pooling, Dynamic VLAN, and
Core Network VLAN options are disabled.
NOTE
You can apply a maximum of 1024 WLAN IDs to a Flexi-VPN profile.
Flexi-VPN supports IPv4 addressing formats and the Ruckus GRE tunnel protocol. It does not support IPv6
addressing formats.
3. Select a virtual data plane for which you want to enable the Flexi-VPN feature, and then select the Enable Flexi-VPN
check-box.
4. Click OK.
You have successfully enabled the Flexi-VPN feature on the selected vSZ-D/SZ100-D.
MTU is the size of the largest protocol data unit that can be passed on the controller network.
• Set the maximum transmission unit (MTU) for the tunnel using one of the Tunnel MTU options:
– Click the Auto radio button. This is the default option.
– Click the Manual radio button and enter the maximum number of bytes. For IPv4 traffic, the range is from
850 to 1500 bytes; for IPv6 traffic, the range is from 1384 to 1500 bytes.
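To confirm that a manually configured MTU is actually usable end to end, a don't-fragment ping is a common check. This is a sketch, not a vendor procedure: the gateway address 192.168.0.1 is a placeholder, and the payload arithmetic assumes IPv4 (20-byte IP header plus 8-byte ICMP header).

```shell
# ICMP payload for a 1500-byte MTU: 1500 - 20 (IPv4 header) - 8 (ICMP header)
PAYLOAD=$((1500 - 20 - 8))
echo "DF-bit ping payload for MTU 1500: ${PAYLOAD} bytes"

# Illustrative check (gateway address is an assumption for this example):
#   ping -M do -s "${PAYLOAD}" -c 3 192.168.0.1
# If replies fail with "Frag needed", a link on the path has a smaller MTU
# and the tunnel MTU should be lowered accordingly.
```

The same check with `-s 1472` failing but `-s 1422` succeeding, for example, would suggest a path MTU of 1450 bytes.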
4. Click OK.
The control/management interface is used for communication with the vSZ controller, as well as the command line interface. The
data plane interface is used to tunnel user data traffic from the APs.
The access layer (southbound) is used to tunnel traffic to and from managed APs. The following connections exist on the access
layer.
2. vSZ to and from vSZ-D/SZ100-D: control plane, for vSZ to manage the vSZ-D/SZ100-D.
The core layer (northbound) is used by vSZ-D/SZ100-D to forward traffic to and from the core network.
1. Update the vSZ controller to the latest release, or perform a fresh install of the vSZ controller with the latest release.
NOTE
If you are upgrading both the vSZ controller and the vSZ-D/SZ100-D, Ruckus recommends updating the vSZ
controller before updating the vSZ-D/SZ100-D.
2. Install vSZ-D/SZ100-D and point it to the vSZ-E or vSZ-H controller by using the following options:
• Set the vSZ-E or vSZ-H control interface IP address or FQDN, or configure the controller IP address via DHCP option 43.
• For a vSZ-E or vSZ-H configured with three (3) IP interfaces, use the vSZ control interface IP address.
3. The vSZ-D/SZ100-D management interface connects to the vSZ-E or vSZ-H controller control interface.
4. The vSZ-E or vSZ-H controller administrator approves the vSZ-D/SZ100-D connection request.
7. The AP establishes a Ruckus GRE tunnel to the vSZ-D/SZ100-D data interface when a tunneling WLAN is configured.
Figure 15 depicts the logical network architecture. In real-world deployments, there may be network routers, gateways,
firewalls, and other devices; these typical network devices are omitted from the figure to focus on the vSZ-D/SZ100-D
interfaces and the communication protocols between the various entities.
It is also important to note that distributed or centralized deployment topologies may introduce NAT routers/gateway
devices. The communication interfaces between Ruckus APs, vSZ, and vSZ-D/SZ100-D are designed to support NAT traversal in
such deployments. Refer to NAT Deployment Topologies on page 33.
When the vSZ-D/SZ100-D is behind NAT, it is assumed to sit on the private side and communicate with APs on the public side
through the NAT device. In this case, you must configure the NAT IP (public IP) and port number pair during the
vSZ-D/SZ100-D setup process. The vSZ picks up this public address and the associated port number and informs the AP that
this (public-IP, port) pair is the vSZ-D/SZ100-D address to connect to.
You must also configure the NAT device with the port mapping, (public-IP, port) <-> (private-IP, 23233), in its rule table.
When the NAT device receives a packet from the AP bound for the vSZ-D/SZ100-D (sent to the public IP and port), it translates
the destination to (private-IP, 23233) based on the rule table before forwarding the packet to the vSZ-D/SZ100-D. Conversely,
for packets from the vSZ-D/SZ100-D, the NAT router matches the source (private-IP, 23233) and rewrites it to the public IP
address and port based on the rule table before sending the packet to the AP.
NOTE
Both TCP and UDP on port 23233 must be forwarded, because both protocols are used (TCP for tunnel
establishment and UDP for client data).
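On a Linux-based NAT device, the (public-IP, port) <-> (private-IP, 23233) mapping described above could be expressed with iptables DNAT rules. This is a sketch under stated assumptions, not a vendor procedure: the interface name eth0, the public address 203.0.113.10, and the private address 10.0.0.5 are placeholders, and a commercial NAT gateway would use its own port-forwarding UI instead.

```shell
# Run on the NAT router (requires root).
# eth0            = public-facing interface (placeholder)
# 203.0.113.10    = public IP configured during vSZ-D/SZ100-D setup (placeholder)
# 10.0.0.5        = vSZ-D/SZ100-D private address (placeholder)

# Forward TCP 23233 (tunnel establishment) to the vSZ-D private address
iptables -t nat -A PREROUTING -i eth0 -p tcp -d 203.0.113.10 --dport 23233 \
    -j DNAT --to-destination 10.0.0.5:23233

# Forward UDP 23233 (client data) to the same private address
iptables -t nat -A PREROUTING -i eth0 -p udp -d 203.0.113.10 --dport 23233 \
    -j DNAT --to-destination 10.0.0.5:23233
```

Return traffic from (private-IP, 23233) is rewritten back to the public pair by the router's normal NAT/masquerade handling, matching the behavior described above.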
NOTE
The minimum memory and CPU requirements for vSZ have changed in this release. You may need to upgrade your
infrastructure before upgrading; please read carefully. These are the minimum recommended requirements. Refer to the
Release Notes or the vSZ Getting Started Guide.
The following table lists the minimum hardware requirements recommended for running an instance of vSZ-D.
Hypervisor support (required by the management interface): VMware ESXi 6.7 and later, or KVM (CentOS 7.4 64-bit)
Processor: Intel Xeon E55xx and above; recent Intel E5-2xxx chips are recommended
CPU cores:
• A minimum of 3 to 6 cores per instance, dedicated to data plane processing.
• Direct I/O mode for best data plane performance.
NOTE
Actual throughput numbers will vary depending on infrastructure and traffic type.
For best performance, Ruckus recommends using Direct I/O mode. SR-IOV mode is unsupported. Refer to the table below for the
modes of operation.
NOTE
NICs assigned to direct IO cannot be shared. Moreover, VMware features such as vMotion, DRS, and HA are
unsupported.
The hardware configuration for a single vSZ-D instance specified in this guide will scale to handle 10K tunnels (10K APs) and up
to 10 Gbps of throughput (unencrypted) with appropriate underlying Intel NICs (10G interfaces) in Direct I/O mode. This aligns
with the number of Ruckus APs that a vSZ controller supports. Refer to the dimensioning table below.
NOTE
Refer to the vSZ-D Performance Recommendations on page 107 chapter for encryption and vSwitch impacts.
NOTE
* vSZ-D needs 6 CPUs to sustain the 10Gb line rate with random-byte traffic when encryption is enabled. Encrypted traffic
requires 6 cores; unencrypted traffic requires 3 cores.
The figure below depicts a sample configuration in Direct I/O mode. This is the recommended deployment model for the vSZ-D for
best performance. In this setup, both the CPU cores and the NICs are dedicated to the vSZ-D VM.
NOTE
In this setup, the vSZ-D data plane interfaces directly with the DPDK NIC, completely bypassing the vSwitch.
It also depicts a vSZ controller instance running as a separate VM. These VMs can be running on the same underlying host or
potentially different hosts.
NOTE
The figure below depicts multiple virtual data plane instances for reference. It also depicts a vSZ controller instance
running as a separate VM.
Direct I/O mode (Intel NICs): 82599ES, X520
vSwitch mode, VMware: VMXNET3
vSwitch mode, KVM: Virtio
Supported Hypervisors
Unlike the vSZ controller, vSZ-D can only be installed on specific versions of VMware and KVM.
The tables below list the hypervisors and versions on which vSZ and vSZ-D can and cannot be installed.
General Configuration
Ruckus offers the following general configuration recommendations.
• Deploy vSZ-D on a machine that has at least two physical NICs. Alternatively, deploy to two vSwitch instances with
dedicated physical NICs.
• When deploying an instance of vSZ-D using an OVA file, you must assign the management and data interfaces to two
different network groups (vSwitch) on different subnets.
• After the vSZ-D instance is ready, modify the number of CPU cores (if needed) before powering on vSZ-D.
• For advanced CPU and memory resource configuration recommendations, refer to the vSphere Resource Management
Guide, which is available on the VMware website.
Hypervisor Detail
You can view the details of the hypervisor.
CPU Type
When selecting the CPU model, make sure you select one that is higher than Intel Core 2 Duo. On Linux, you can find this
information in /proc/cpuinfo.
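For instance, on a Linux KVM host you can inspect /proc/cpuinfo to confirm the CPU model and the hardware virtualization flags (vmx on Intel, svm on AMD) before creating the VM:

```shell
# Show the CPU model reported by the host (first core only)
grep -m1 'model name' /proc/cpuinfo || echo "model name not reported"

# Count logical CPUs advertising hardware virtualization support;
# a nonzero count means VT-x/AMD-V is exposed to the kernel
grep -c -E 'vmx|svm' /proc/cpuinfo || echo "No virtualization extensions found"
```

If the second command reports zero, check that virtualization is enabled in the host BIOS before proceeding.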
Memory Allocation
You must allocate a minimum of 6 GB (6144 MB) of memory for vSZ-D.
Disk Configuration
Ruckus recommends using Virtio as the disk bus and qcow2 as the storage format.
1. Enable VT-d (for Intel processors) in the motherboard BIOS. Intel VT-d ("Intel Virtualization Technology for
Directed I/O") is available on most i7-family processors.
2. Add kernel boot parameters via GRUB to enable the IOMMU (see the figure below). To enable the IOMMU in the kernel on
Intel processors, pass the intel_iommu=on boot parameter on Linux.
$ sudo -e /etc/default/grub
GRUB_CMDLINE_LINUX_DEFAULT="...... intel_iommu=on"
3. Then run the following command to generate the GRUB config file.
$ sudo update-grub
$ sudo -e /etc/default/grub
GRUB_CMDLINE_LINUX="...... intel_iommu=on"
3. Then run the following command to generate the GRUB config file.
• For CentOS
1. Edit GRUB config template at /boot/grub/grub.conf.
2. In the config file, the entry "default=N" at the top indicates which entry is the default image. On the
next line, add a kernel parameter as "name=value" to the kernel /vmlinuz- line.
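Whichever distribution you use, you can verify after rebooting that the boot parameter actually reached the running kernel by checking its command line. This is a generic sanity check, not a Ruckus-specific step:

```shell
# Verify that the intel_iommu=on parameter is active in the running kernel
if grep -q 'intel_iommu=on' /proc/cmdline; then
    echo "intel_iommu=on is active"
else
    echo "intel_iommu=on is NOT active - re-check the GRUB configuration and reboot"
fi
```

If the parameter is missing, the edited GRUB file was likely not regenerated into the actual boot configuration.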
1. From the VNC Viewer, click Add Hardware > PCI Host Device.
2. Choose a PCI device to assign from the PCI device list, and click Finish.
3. Power on the guest; the host PCI device will be visible in the guest VM.
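If you prefer the command line to virt-manager, the same PCI assignment can be done with virsh. This is a sketch under assumptions: the domain name "vSZ-D" and the PCI bus/slot/function values below are placeholders you would replace with real values discovered on your host.

```shell
# List host PCI devices that libvirt can assign
virsh nodedev-list --cap pci

# Describe the device as a hostdev fragment
# (bus/slot/function values are placeholders for this example)
cat > pci-device.xml <<'EOF'
<hostdev mode='subsystem' type='pci' managed='yes'>
  <source>
    <address domain='0x0000' bus='0x03' slot='0x00' function='0x0'/>
  </source>
</hostdev>
EOF

# Attach the device persistently to the guest named "vSZ-D" (placeholder name)
virsh attach-device vSZ-D pci-device.xml --config
```

With --config the device is attached on the next boot of the guest, matching the virt-manager workflow above.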
10G: ixgbe
VMware with vSwitch: e1000, e1000e, VMXNET3
VMware with PCI passthrough: e1000, e1000e (1G: igb, 10G: ixgbe)
Hardware Requirement
7. 1 or 2 vNICs
8. 8 GB memory
Prerequisite
• A hypervisor on ESXi to install vSZ-D. The recommended version is ESXi 6.7 or later.
• Download the vSZ-D package (.OVA file) from the Ruckus support site.
• The IP address, netmask, gateway, DNS, and DHCP/NAT settings for vSZ-D.
• Before you power on vSZ-D, ensure that networking is configured on ESXi.
• Ruckus recommends using static network addresses assigned to vSZ-D during setup.
NOTE
Because server and NIC models differ, the deployment procedure in this section is provided for reference only.
Topology
The following network topologies apply to vSZ-D deployment on an ESXi 6.7 server.
These are basic topologies for setting up vSZ-D. Based on your requirements, you can choose any alternative from a single
IP domain up to three separate IP domains for deployment.
The topology below shows different IP addresses for the domains.
The topology below shows the same IP address for all the interfaces.
Deployment Procedure
The following are basic instructions for setting up the controller on the ESXi server.
For this deployment, two different IP address domains are considered for the controller interfaces. Refer to Topology on page 65.
The vSphere Client management page appears as shown in the following figure.
NOTE
You must deploy the controller directly from the .ova file. Copying an instance of the controller from another
controller template might not function properly.
5. Click Browse to select the source location, and upload the .ova file.
6. Click Next.
7. Enter the vSZ datastore name and choose the disk format as shown below.
8. Click Next.
10. Select the destination network for each network source and click Add as shown in the following image.
11. For 40 Gbps throughput performance, set the total number of CPU cores to 8.
1. At the login prompt, log in using the administrator username and password. At the > prompt, enter the
enable (en) command and the admin password to change to Privileged-exec mode.
2. Run the setup command to configure the IP addresses for the management and data interfaces. It is recommended to add a
new host if you have multiple hosts for various configurations.
3. Choose the IP address setup (IPv4 only, or IPv4 and IPv6) for the Management and Data interfaces by selecting either manual
or DHCP. Once the IP setup is defined, the process of connecting the vSZ-D/SZ100-D to the vSZ controller starts.
4. Enter the DNS setting, and press Enter to skip the NAT IP setting.
5. Enter the vSZ control interface IP address. Follow the sequence shown below for the vSZ-D/SZ100-D to connect to
the vSZ controller. This changes the mode for the vSZ-D/SZ100-D as well as for the vSZ.
7. To view and approve the vSZ-D/SZ100-D, log in to the web interface. Navigate to Clusters > Data Planes. Select the vSZ-
D/SZ100-D and click Approve. On approval, the status is greyed.
You have successfully added the vSZ-D/SZ100-D image to the vSZ controller.
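The CLI steps above reduce to a short console sequence. Only the commands named in this section are shown; the exact prompts and interactive questions vary by release, so treat this as an illustrative sketch rather than verbatim output:

```
login: <admin username>
Password: <admin password>
> en                 # enable (en) + admin password: enter Privileged-exec mode
# setup              # interactive prompts follow for Management/Data IP
                     # (manual or DHCP), DNS, optional NAT IP, and the
                     # vSZ control interface IP
```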
Hardware Requirement
• Linux CentOS 7
Prerequisite
• A Linux host with KVM enabled on which to install the vSZ-D VM. CentOS 7 or later is preferred.
• Download the vSZ-D package (.qcow2 file) from Ruckus support.
• IP addresses, netmask, gateway, DNS, and DHCP/NAT support for vSZ-D.
• Two network interfaces to support vSZ-D.
• Before you power on vSZ-D, ensure that networking is configured on the Linux host.
• Static network addresses assigned to vSZ-D during setup are recommended.
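Before importing the .qcow2 image, it is worth confirming that the host can actually run KVM guests. A minimal check (standard Linux, not a Ruckus-specific procedure) is whether /dev/kvm exists and the kvm modules are loaded:

```shell
# /dev/kvm is created once the kvm/kvm_intel (or kvm_amd) modules are loaded
if [ -e /dev/kvm ]; then
  echo "/dev/kvm present: KVM acceleration available"
else
  echo "/dev/kvm missing: enable VT-x/VT-d in BIOS and load the kvm modules"
fi
# Show loaded kvm modules (prints nothing on hosts without KVM)
lsmod 2>/dev/null | grep kvm || true
```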
gedit /etc/default/grub
GRUB_TIMEOUT=5
GRUB_DISTRIBUTOR="$(sed 's, release .*$,,g' /etc/system-release)"
GRUB_DEFAULT=saved
GRUB_DISABLE_SUBMENU=true
GRUB_TERMINAL_OUTPUT="console"
GRUB_CMDLINE_LINUX="crashkernel=auto rd.lvm.lv=centos/root rd.lvm.lv=centos/swap rhgb quiet intel_iommu=on"
GRUB_DISABLE_RECOVERY="true"
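After saving the file above, the GRUB2 configuration must be regenerated for the change to take effect on the next boot. On CentOS 7 that is normally done with grub2-mkconfig (shown commented, since it must run as root on the target host); the grep check below can be run as-is:

```shell
# Regenerate the GRUB2 config and reboot (run on the CentOS 7 host as root):
#   grub2-mkconfig -o /boot/grub2/grub.cfg
#   reboot
# Pre-reboot sanity check that the IOMMU flag was actually saved
# (-s suppresses the error if the file does not exist on this machine):
if grep -qs 'intel_iommu=on' /etc/default/grub; then
  echo "ok: intel_iommu=on is set in /etc/default/grub"
else
  echo "missing: append intel_iommu=on to GRUB_CMDLINE_LINUX"
fi
```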
NOTE
Because server and NIC models differ, the deployment procedure in this section is provided for reference only.
Topology
The following network topologies apply to vSZ-D deployment on Linux CentOS 7.
These are basic topologies for setting up vSZ-D. Based on your requirements, you can choose any alternative from a single
IP domain up to three separate IP domains for deployment.
The topology below shows different IP addresses for the domains.
The topology below shows the same IP address for all the interfaces.
Deployment Procedure
The following are basic instructions for setting up vSZ-D on LINUX KVM.
For this deployment, two different IP address domains are considered for the Data Plane interfaces. Refer to Topology on page 81.
1. Download the Data Plane package (.qcow2 file) from Ruckus support.
2. From the VNC Viewer, click System Tools > Virtual Machine Manager to open the Virtual Machine Manager tool. The
Data Plane status must appear as Running, as shown in the following figure.
b) In the New VM dialog box, choose the disk format option as shown in the following figure.
c) Click Forward.
d) Choose the destination storage path and storage volume. Click Browse Local as shown in the following figure.
e) Select the Data Plane file and click Open as shown in the following figure.
f) To select the storage path, click Browse as shown in the following figure.
g) Click Forward.
h) Enter the Memory (RAM) and CPUs setting as shown in the following figure.
NOTE
Memory (RAM) must be 6 GB and CPUs must be 8 cores.
i) Click Forward.
j) To confirm the installation process, click Finish as shown in the following figure.
NOTE
The sequence for the network interfaces must be Management first, and then Data.
4. Add another NIC, since the Data Plane needs two interfaces: Management and Data. From the VNC Viewer, click Add
Hardware > PCI Host Device > PCI Device to add another NIC as shown in the following figure.
5. Select PCI Host Device > PCI Device and click Finish to add another NIC as shown in the following figure.
6. Select the NIC and choose the Device model to update the management interface association as shown in the following
figure.
7. From PCI, select the ROM BAR check box to define the Data IP domain as shown in the following figure.
8. Define the CPU Configuration. Select the Copy host CPU configuration check box as shown in the following figure.
9. Define the IDE Disk Configuration. Choose the Disk bus option as shown in the following figure.
11. The Data Plane setup is complete as shown in the following image.
1. At the login prompt, log in using the administrator username and password. At the > prompt, enter the
enable (en) command and the admin password to change to Privileged-exec mode.
2. Run the setup command to configure the IP addresses for the management and data interfaces. It is recommended to add a
new host if you have multiple hosts for various configurations.
3. Choose the IP address setup (IPv4 only, or IPv4 and IPv6) for the Management and Data interfaces by selecting either manual
or DHCP. Once the IP setup is defined, the process of connecting the vSZ-D/SZ100-D to the vSZ controller starts.
4. Enter the DNS setting, and press Enter to skip the NAT IP setting.
5. Enter the vSZ control interface IP address. Follow the sequence shown below for the vSZ-D/SZ100-D to connect to
the vSZ controller. This changes the mode for the vSZ-D/SZ100-D as well as for the vSZ.
You have successfully added the vSZ-D/SZ100-D image to the vSZ controller.
Upgrade Procedure
Procedure for upgrading to a new vSZ-D version.
The table below shows the compatibility matrix. In general, Ruckus supports N-2 vSZ-D releases with vSZ.
NOTE
vSZ-D version 3.2 must be upgraded to 3.4 or 3.5 before upgrading to 3.6 or later. vSZ supports APs starting
from version 3.4.
NOTE
Before starting this procedure, you should have already obtained a valid software upgrade file from Ruckus® Support or
an authorized reseller.
NOTE
If you are upgrading both vSZ and vSZ-D/SZ100-D, Ruckus® recommends upgrading vSZ first before vSZ-D/SZ100-D.
There is no required order for upgrading the AP zones or the vSZ-D/SZ100-D. During the vSZ upgrade, all tunnels stay up except the main tunnel,
which moves to the vSZ. Once the upgrade procedure is completed, allow ten minutes for the vSZ-D/SZ100-D to settle.
Upgrading to R5.0 does not support data migration (statistics, events, administrator logs). Existing system and network
configuration is preserved. For further clarification, contact Ruckus support.
4. In the Patch File Upload section, click Browse, and then browse to the location of the software upgrade file.
The controller automatically identifies the type of DP (vSZ-D or SZ-D) and switches to the corresponding tab page. It uploads the
file to its database and then performs file verification. After the file is verified, the Patch Available for Upgrade section
is populated with information about the upgrade file.
6. In Data Planes, identify the data plane you want to upgrade, and then choose a patch file version under Upgrade to.
7. Click Apply to apply the patch file version to the virtual data plane.
The following information about the virtual data plane is displayed after the patch file upgrade is completed.
• Name: Displays the name of the virtual data plane.
• DP MAC Address: Displays the MAC address of the data plane.
• Current Firmware: Displays the current version of the data plane that has been upgraded.
• Backup Firmware: Displays the backup version of the data plane.
• Last Backup Time: Displays the date and time of last backup.
• Process State: Displays the completion state of the patch file upgrade for the virtual data plane.
• DP Status: Displays the DP status.
NOTE
To keep a copy of the data plane firmware, or to move back to the older version, select the data plane from the list
and click Backup or Restore, respectively.
The fast path processing of the vSZ-D/SZ100-D is engineered to scale to the underlying NIC capacity profiles, whether at 1G or
10G speeds. vSZ-D/SZ100-D is designed to scale and handle data tunnels and data forwarding capabilities at high scale.
The following are some important observations and recommendations related to the vSZ-D/SZ100-D performance:
• To obtain the best throughput, Ruckus recommends operating vSZ-D/SZ100-D in directIO mode. This recommended
mode of operation applies whether the hypervisor used is VMware or KVM.
• vSZ-D/SZ100-D supports the vSwitch mode of operation for added flexibility in deployments where vSZ-D/SZ100-D may be
co-located with other VMs for service chaining on the same underlying hardware. Note that current observations indicate
that vSwitch mode induces a performance impact in comparison with the directIO mode of operation, which may be due to
latency or a performance bottleneck in virtIO and vSwitch sharing. This is still being researched at the Ruckus R&D labs.
• There is an expected performance impact when enabling encryption (AES 128-bit and AES 256-bit) on the Ruckus GRE
tunnels, due to the overhead of per-packet encryption and decryption on the Ruckus AP and vSZ-D/SZ100-D. The
vSZ-D/SZ100-D software is designed to introduce minimal latency and packet-processing overhead, and it takes advantage
of the underlying Intel chip's crypto module for packet encryption and decryption, so the associated impact is primarily
bounded at the hardware level.
For specific recommendations and calibrations that may be needed for your deployment, contact Ruckus.