Networking the Oracle PCA and PCC
DISCLAIMER
This document in any form, software or printed matter, contains proprietary information that is the exclusive property of Oracle. Your access
to and use of this confidential material is subject to the terms and conditions of your Oracle software license and service agreement, which has
been executed and with which you agree to comply. This document and information contained herein may not be disclosed, copied,
reproduced or distributed to anyone outside Oracle without prior written consent of Oracle. This document is not part of your license
agreement nor can it be incorporated into any contractual agreement with Oracle or its subsidiaries or affiliates.
This document is for informational purposes only and is intended solely to assist you in planning for the implementation and upgrade of the
product features described. It is not a commitment to deliver any material, code, or functionality, and should not be relied upon in making
purchasing decisions. The development, release, and timing of any features or functionality described in this document remains at the sole
discretion of Oracle.
Due to the nature of the product architecture, it may not be possible to safely include all features described in this document without risking
significant destabilization of the code.
TABLE OF CONTENTS
Purpose Statement
Disclaimer
Disclaimers For Pre-Release, Pre-GA Products
Introduction
Advantages of the Private Cloud Appliance
Management nodes
Integrated ZS7-2 Storage
Compute Nodes
Network Infrastructure
Conclusion
INTRODUCTION
The Oracle Private Cloud Appliance (PCA) has emerged as the premier platform for WebLogic, Fusion
Middleware and general application-tier software, often in conjunction with Oracle Exadata, the premier
database platform. Many of those applications, previously deployed on Oracle Exalogic or commodity x86
servers, are now being deployed on PCA for increased performance, scale, and manageability. This white
paper describes the network connectivity of the Oracle PCA, deployment methods, and best practices.
ADVANTAGES OF THE PRIVATE CLOUD APPLIANCE
The Oracle Private Cloud Appliance (PCA) is an Oracle Engineered System designed for the application tier. PCA is an integrated
hardware and software system that reduces infrastructure complexity and deployment time for virtualized workloads in private
clouds. It is a complete platform providing excellent performance and other system properties for a wide range of application types
and workloads, with built-in management, compute, storage and networking resources.
The Private Cloud Appliance is also available as the Private Cloud at Customer (PCC), a solution for on-premises private cloud
that includes the PCA and Oracle services. Customers acquire PCC on a subscription basis, with Oracle operating the
infrastructure so the customer can focus on applications. Except where noted, ‘PCA’ will be used in this document to describe
either form of the product.
The PCA platform is valuable for many application types, bringing benefits to any application-tier product. There are several
reasons why it is so effective:
• Private Cloud Appliance provides 'quick time to value' for a robust virtualization platform, going from first power-up to operational VMs in a matter of hours. PCA automatically discovers hardware components and configures them to work with one another, reducing design and administrative effort, eliminating potential errors, and speeding time to application deployment. PCA's automated configuration implements Oracle best practices for optimal performance and availability.
• Private Cloud Appliance provides high-speed 100Gb Ethernet, a ZS7-2 midrange storage array, and up to 25 Oracle Server X8-2 compute nodes, providing performance and scale improvements over previous product generations. See the sections below for further description of the physical infrastructure.
• Private Cloud Appliance design avoids single points of failure on management, network, storage, and compute resources, and permits 'zero-downtime' rolling upgrades to system infrastructure.
• Pre-built Oracle VM virtual appliances and templates quickly stand up application instances, complementing the Private Cloud Appliance's rapid provisioning of physical infrastructure. A list of pre-built virtual appliances is available at https://www.oracle.com/virtualization/technologies/virtual-appliances.html
• High performance inter-VM networking using the Private Cloud Appliance internal networks permits low-latency, high-bandwidth, private communication between VMs in a clustered application. This is especially useful for clustered applications like WebLogic and Coherence, and frameworks like Kubernetes. Multiple private networks can be established using VLANs or custom PCA networks. This provides independent isolated networks, and is ideal for hosting multiple application clusters on the same PCA. Each network carries traffic private to its cluster, without the need to prevent IP address collisions or data leakage between applications.
• Private Cloud Appliance provides load balancing (Dynamic Resource Scheduling, DRS) and High Availability (HA) features that improve performance and automate recovery from outages.
• Private Cloud Appliance can provide Infrastructure as a Service (IaaS) cloud functions via Oracle Enterprise Manager 13c. Application orchestration and automated workload deployment can be performed with Ansible or the Oracle VM API and scriptable command line interface, as illustrated in the sketch after this list.
• Customers can use Trusted Partitioning (PCA only) or Hard Partitioning (PCC) to manage software license costs.
• Oracle Private Cloud Appliance and Oracle Private Cloud at Customer fully support Oracle Linux Cloud Native Environment, including Oracle Container Runtime for Docker and Oracle Container Services for Use with Kubernetes.
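As an illustration of scripted control, the Oracle VM Manager command line interface is reachable over SSH on port 10000. A minimal sketch, assuming a management VIP hostname of pca-vip.example.com and a VM named appvm01 (both names are illustrative):

# ssh -l admin pca-vip.example.com -p 10000
OVM> list Vm
OVM> start Vm name=appvm01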
Management nodes
PCA uses two latest generation Oracle Server X8-2 systems as the management nodes for Oracle Private Cloud Appliance X8.
They operate in an active-passive cluster for management operations, providing resiliency in case of planned outage or server
failure. Oracle VM Manager and other management functions run on the active management node. When a management node
assumes the active role, it takes over a virtual IP (VIP) address, so clients of the management interface don't need to know
which management node is currently active.
Compute Nodes
Oracle Server X8-2 compute nodes in the Private Cloud Appliance provide the virtualization platform. Compute nodes run Oracle
VM Server and provide processing power and memory capacity for virtual machines under Oracle VM Manager's control.
Each X8-2 compute node server has two 24-core Intel Xeon 8260 processors, and can be ordered in three different memory
configurations - 384 GB, 768 GB, and 1.5 TB. With a 45% performance improvement over the previous compute node generation,
Oracle Server X8-2 provides the optimal balance of CPU cores, memory, and I/O throughput for mission-critical enterprise
applications. Customers can scale from 2 to 25 compute nodes in the same rack.
An automated provisioning process orchestrated by the active management node configures compute nodes into the Oracle VM
environment. Private Cloud Appliance software installs Oracle VM Server software on each compute node, defines their network
configurations, and places all compute nodes into an Oracle VM server pool.
PCA administrators can optionally define “tenant groups”, which isolate compute, network, and storage resources in separate
Oracle VM server pools that can be assigned to different customers, to provide dedicated resources. PCC administrators can
achieve this functionality through the use of ‘VM Groups’.
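For example, tenant groups are created and populated through the PCA command line interface. A minimal sketch, with illustrative group and node names:

PCA> create tenant-group myTenantGroup
Status: Success
PCA> add compute-node ovcacn09r1 myTenantGroup
Status: Success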
Network Infrastructure
The PCA X8 network is an important differentiator from previous systems. The Private Cloud Appliance relies on “wire once”
Software Defined Networking (SDN) that permits multiple isolated virtual networks to be created on the same physical network
hardware components. The physical networking consists of a pair of redundant network fabrics, each containing a single Cisco
9336 spine switch with a corresponding Cisco 9336 leaf switch. In addition, there is an administrative network utilizing a single
Cisco 9348 switch. None of this switching is managed by or integrated into the customer data center network.
The 100Gb Ethernet network, as stated, uses a spine-and-leaf topology. Each compute node has a connection to both leaf switches.
Each leaf is, in turn, connected to the spine switches. Each spine switch has connectivity to the storage nodes and the
management nodes, as well as a group of ports for external connectivity. Ports 1-4 are reserved for custom networking
requirements specified by the customer, while port 5 is for the default uplink connection. Note: when referring to a port, such as
'Port 5', we are actually referring to both port 5 on the first spine switch, located at rack unit 22, and port 5 on the second spine
switch, located at rack unit 23.
Each of the four customer-reserved ports, numbered 1 through 4, may be configured in a number of ways: as a single 100Gb or
40Gb port, or broken out with a splitter cable into four 25Gb or four 10Gb subports (see the breakout modes described under
"Creating Custom Networks" below).
The default uplink ports, located on Ports 5/1 and 5/2, are configured as 10Gb Ethernet ports and cannot be changed. Ports 5/3 and
5/4 are reserved for future use and may not be used at this time.
PCA uses redundant physical network hardware components, pre-cabled at the factory, to help ensure continuity of service
during maintenance or in case of a failure.
Network connectivity
The Private Cloud Appliance provides external network access for connectivity to a datacenter’s networks. The PCA connects to
the datacenter network via a pair of next-level switches, also referred to as ToR (top of rack) switches. This provides resiliency
against a single point of failure. Software Defined Networks (SDN) based on the physical network devices connect virtual
machines to networks, storage and other virtual machines, maintaining the traffic separation traditionally provided by hard-wired
connections. Optional custom external networks further isolate traffic and maximize bandwidth.
The PCA uses private, “internal” networks that are not exposed to the customer’s datacenter network. This provides isolation,
security, and the ability to use pre-defined IP address ranges for each networked component without conflict with existing
datacenter network addresses. PCA uses internal networks for appliance management, storage access, and inter-VM
communication. Every PCA rack component has a predefined IP address. Oracle storage, management and compute nodes have a
second IP address for Oracle Integrated Lights Out Manager (ILOM) connectivity.
Compute nodes connect to the internal networks and to the customer datacenter networks. Oracle VM Server on each compute
node communicates over Private Cloud Appliance internal networks for management, storage, heartbeat and live migration. By
default, compute nodes do not have IP addresses on the customer datacenter network, which increases their isolation and reduces
attack surface. Custom networks can be created to give compute nodes IP addresses on the customer network, for additional
bandwidth, traffic separation, and to present Ethernet-based storage to each compute node.
Subnets:
192.168.4.0/24 – internal machine administration network: connects ILOMs and physical hosts
192.168.32.0/21 – internal management network: traffic between management and compute nodes
192.168.64.0/21 – underlay network for east/west traffic within the appliance environment
192.168.40.0/21 – storage network: traffic between the servers and the ZFS storage appliance
Note
Each /21 subnet comprises the range of eight /24 subnets, or 2,048 IP addresses. For example, 192.168.32.0/21
corresponds to all IP addresses from 192.168.32.0 to 192.168.39.255.
In practice, this means that the data center should not use any of the aforementioned RFC 1918 non-routable addresses (those
starting with 192.168 in the list above) for any service that the management nodes need to reach. These address ranges are
neither routed to the PCA nor leaked from it: they are filtered at ingress and egress by the management nodes, as is appropriate
per RFC 1918, and remain internal only. Consequently, if any of these ranges must be reached on the customer data center
network by the PCA, the PCA will be unable to reach them due to the pre-configured routing and egress filtering within the
appliance. They may, of course, be used elsewhere in the enterprise data center, but be aware of this limitation.
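To check the exact boundaries of these ranges when auditing for collisions, a subnet calculator such as the ipcalc utility shipped with Oracle Linux can be used (a sketch; output format may vary by ipcalc version):

# ipcalc -b 192.168.32.0/21
BROADCAST=192.168.39.255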
The Installation Guide makes a similar statement about the following VLANs:
Note
VLANs 3090-3093 are already in use for tagged traffic over the /21 subnets listed above.
These VLANs are not exposed outside of the PCA in any way; however, avoiding the use of these VLANs inside the PCA is
strongly advised. The PCA uses a default VPC ID of 1, but this too is not exposed outside of the PCA.
PCA administrators can define VLANs on top of the interfaces used for these networks, to comply with a datacenter’s network
standards and to permit traffic isolation. For example, a datacenter standard might require VM traffic be on VLANs 100 to 150,
and separate networks could be defined with those VLAN tags. Private VLANs can also be built to isolate traffic between different
virtual machines.
Additional Networks
The PCA contains a number of additional networks that are useful to understand:
Administration Network
The administration network provides internal access to the management interfaces of all appliance components. These
components have Ethernet connections to the Cisco Nexus 9348GC-FXP Switch, and all have a predefined IP address in the
192.168.4.0/24 range. In addition, all management and compute nodes have a second IP address in this range, which is used
for Oracle Integrated Lights Out Manager (ILOM) connectivity. The administration network is only accessible through physical
connections to the administrative switch, and by design cannot be accessed from the VMs or from outside the PCA.
While the appliance is initializing, the data network is not accessible, which means that the internal administration network is
temporarily the only way to connect to the system. Therefore, the administrator should connect a workstation to the reserved
Ethernet port 48 in the Cisco Nexus 9348GC-FXP Switch, and assign the fixed IP address 192.168.4.254 to the workstation.
From this workstation, the administrator opens a browser connection to the web server on the master management node at
https://192.168.4.216, in order to monitor the initialization process and perform the initial configuration steps when
the appliance is powered on for the first time.
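For example, on a Linux workstation the temporary address can be assigned and verified as follows (eth0 is an assumed interface name):

# ip addr add 192.168.4.254/24 dev eth0
# ping -c 3 192.168.4.216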
If desired, a bastion host may be placed on the Administration network at the fixed IP address of 192.168.4.199. More detail on
this may be found in the Installation and Administration Guides for the PCA.
Management Network
The management network provides outside access to the PCA management interfaces, namely the PCA Dashboard and the OVM
Manager interface. The PCA Dashboard is used to set the base system configuration, such as the IP addresses for the management
nodes, the IP addresses for DNS and NTP servers, and the VLAN on which to place the management network. The PCA Dashboard
is initially accessed through the Administration Network by Oracle Field Service personnel to perform the initial configuration.
Once it has been used to configure the IP addresses on the customer data center network, the Dashboard and OVM interface are
available through the external-facing ports 5/1 and 5/2 of the spine switches. More on this below.
Data Network
The appliance data connectivity is built on redundant Cisco Nexus 9336C-FX2 Switches in a leaf-spine design. In this two-layer
design, the leaf switches interconnect the rack hardware components, while the spine switches form the backbone of the network
and perform routing tasks. Each leaf switch is connected to all the spine switches, which are also interconnected. The main
benefits of this network architecture are extensibility and path optimization. An Oracle Private Cloud Appliance rack contains
two leaf and two spine switches.
The Cisco Nexus 9336C-FX2 Switch offers a maximum throughput of 100Gbit per port. The spine switches use 5 interlinks
(500Gbit); the leaf switches use 2 interlinks (200Gbit) and 2x2 crosslinks to the spines. Each compute node is connected to both
leaf switches in the rack, through the bond1 interface that consists of two 100Gbit Ethernet ports in link aggregation mode. The
two storage controllers are connected to the spine switches using 4x40Gbit connections.
For external connectivity, 5 ports are reserved on each spine switch: four ports are available for custom network configurations,
and one port is required for the default uplink. This default external uplink requires that port 5 on both spine switches be split
using a QSFP-to-SFP+ four-way splitter or breakout cable. Two of those four 10GbE SFP+ breakout ports per spine switch, ports
5/1 and 5/2, must be connected to a pair of next-level data center switches, also called top-of-rack or ToR switches. Ports 5/3 and
5/4 are reserved for future expansion.
• The Internal Storage Network is a redundant 40Gbit Ethernet connection from the spine switches to the ZFS storage
appliance. All four storage controller interfaces are bonded using LACP into one datalink. Management and compute
nodes can reach the internal storage over the 192.168.40.0/21 subnet on VLAN 3093. This network also fulfills
the heartbeat function for the clustered Oracle VM server pool.
• The Internal Management Network provides connectivity between the management nodes and compute nodes in the
subnet 192.168.32.0/21 on VLAN 3092. It is used for all network traffic inherent to Oracle VM Manager, Oracle
VM Server and the Oracle VM Agents.
• The Internal Underlay Network provides the infrastructure layer for data traffic between compute nodes. It uses the
subnet 192.168.64.0/21 on VLAN 3091. On top of the internal underlay network, internal VxLAN overlay
networks are built to enable virtual machine connectivity where only internal access is required.
One such internal VxLAN is configured in advance: the default internal VM network, to which all compute nodes are connected
with their vx2 interface. Untagged traffic is supported by default over this network. Customers can add VLANs of their choice to
the Oracle VM network configuration, and define the subnet(s) appropriate for IP address assignment at the virtual machine level.
• The External Underlay Network provides the infrastructure layer for data traffic between Oracle Private Cloud
Appliance and the data center network. It uses the subnet 192.168.72.0/21 on VLAN 3090. On top of the
external underlay network, VxLAN overlay networks with external access are built to enable public connectivity for the
physical nodes and all the virtual machines they host.
One such public VxLAN is configured in advance: the default_external network, to which all compute nodes and
management nodes are connected with their vx13040 interface. Both tagged and untagged traffic are supported by default over
this network. Customers can add VLANs of their choice to the Oracle VM network configuration, and define the subnet(s)
appropriate for IP address assignment at the virtual machine level.
The default_external network also provides access to the management nodes from the data center network and allows the
management nodes to run a number of system services. The management node external network settings are configurable through
the Network Settings tab in the Oracle Private Cloud Appliance Dashboard. If this network is a VLAN, its ID or tag must be
configured in the Network Setup tab of the Dashboard.
For the appliance default networking to be configured successfully, the default external uplink must be in place before the
initialization of the appliance begins. At the end of the initialization process, the administrator assigns three reserved IP addresses
from the data center (public) network range to the management node cluster of the Oracle Private Cloud Appliance: one for each
management node, and an additional Virtual IP shared by the clustered nodes. From this point forward, the Virtual IP is used to
connect to the master management node's web server, which hosts both the Oracle Private Cloud Appliance Dashboard and the
Oracle VM Manager web interface.
These ports may also carry 'host network' traffic, used to present storage from storage devices external to the PCA containing
iSCSI LUNs or OVM Repositories to the Compute Nodes. The use of host networks is beyond the scope of this document;
however, it is detailed in the Installation and Administration Guides for the PCA.
The default configuration of the PCA places both the default network for VM traffic and the PCA management traffic on the
physical ports 5/1 and 5/2 of the two spine switches, with no overlying VLANs to segregate the traffic, as previously
mentioned. This configuration is almost certain to function in any environment; hence it is the default. However, it is not
recommended or best practice, because the co-mingling of administrative traffic and VM 'user' traffic on the same network
potentially opens the management channel to attack or other compromise.
These choices present us with six possible scenarios.
1. Ports 5/1 and 5/2 are used for unsegregated VM and Management traffic. No VLANs are assigned to either. Not
Recommended.
2. Ports 5/1 and 5/2 are used for segregated VM and management traffic by assigning a single VLAN for management traffic
through the dashboard. Recommended.
3. Ports 5/1 and 5/2 are used for segregated VM and management traffic by assigning one or more VLANs for VM client traffic
on top of the default_external network through OVM Manager. Recommended.
4. Ports 5/1 and 5/2 are used for segregated VM and management traffic by assigning both a single VLAN for management
traffic through the dashboard and assigning one or more VLANs for VM client traffic on top of the default_external
network through OVM Manager. Best Practice.
5. Ports 5/1 and 5/2 are used for management traffic only without a VLAN. All customer network traffic is through a new
custom network, with or without VLANs on a port group residing on one or more of the physical ports 1-4. The
default_external network is unused. Recommended.
6. Ports 5/1 and 5/2 are used for management traffic only with a VLAN assigned through the dashboard. All customer network
traffic is through a new custom network, with or without VLANs on a port group residing on one or more of the physical
ports 1-4. The default_external network is unused. Best Practice.
The upstream connection can take several forms; the two most common are a pair of upstream ToR switches supporting MLAG
between them, or a single upstream ToR switch with 4 ports in a single LAG configuration.
Note that the upstream connections are in a Port Channel or Virtual Port Channel (vPC) configuration, utilize LAG, are cross-
connected in the case of the two-ToR-switch scenario, and are fixed at 10Gb.
Once connected, a simple ‘ping’ from a management node should reach the default gateway, and a browser on a system connected
to the data center management network should be able to see the PCA dashboard at:
https://<management-vip-ip>:7002/dashboard
Once management connectivity is verified, OVM Manager, which may be opened at
https://<management-vip-ip>:7002/ovm/console
can be used to create additional VLAN based networks as required through the ‘Networks’ tab.
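A quick verification sequence from a connected workstation might look like the following sketch (the gateway and VIP addresses are placeholders):

# ping -c 3 <default-gateway-ip>
# curl -kI https://<management-vip-ip>:7002/dashboard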
Creating Custom Networks
If additional physical networks are required, they may be built by first connecting to a pair of ports on the spine switches. Ports 1
through 4 are set aside for this purpose. Each of these four ports may be configured to operate as a single 100Gb or 40Gb port, or
split into four 25Gb or four 10Gb subports using a breakout cable.
When operating at 100Gb or 25Gb, QSFP28 transceivers, such as the Oracle part number 7119728 QSFP28 short-range
transceiver, are appropriate, in conjunction with either a splitter cable (for 25Gb) or an MPO-to-MPO cable (for 100Gb) such as
those listed below in Table 1.
When operating at 40Gb or 10Gb, the X2124A transceiver may be used with the cables listed in Table 1: MPO-to-MPO for 40Gb,
and splitter cables for 10Gb.
Table 1. Optical cable assemblies (part number and description)
7102869 Optical cable assembly: 10 meters, MT ferrule terminated, 12-fiber, multimode, MPO connectors
7102870 Optical cable assembly: 20 meters, MT ferrule terminated, 12-fiber, multimode, MPO connectors
7102871 Optical cable assembly: 50 meters, MT ferrule terminated, 12-fiber, multimode, MPO connectors
7119868 Optical cable assembly: 100 meters, 12-fiber, OM4, multimode, MPO connectors
• The speeds supported for uplink port connectivity are 10G, 25G, 40G and 100G. Ports 5/1 and 5/2 must be configured at
10Gb.
• The maximum MTU size for the uplink port configuration is 9216, to support jumbo frames.
• The PCA spine switches operate in MST (Multiple Spanning Tree) mode, which is backward compatible with legacy
spanning tree protocols such as RSTP and PVST.
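As an example, an upstream Cisco ToR switch can aggregate the four 10Gb breakout connections from port 5 of both spine switches into a single port channel, as in the following sample configuration (interface numbers and the channel-group ID are site-specific):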
interface Ethernet1/19
description "Rack22 9336 RU22 P5"
channel-group 221 mode active
interface Ethernet1/20
description "Rack22 9336 RU22 P5"
channel-group 221 mode active
interface Ethernet1/21
description "Rack22 9336 RU23 P5"
channel-group 221 mode active
interface Ethernet1/22
description "Rack22 9336 RU23 P5"
channel-group 221 mode active
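An equivalent configuration on a Juniper upstream switch places the same four ports into an 802.3ad aggregated Ethernet bundle (ae2 in this sample):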
et-0/0/20 {
ether-options {
802.3ad ae2;
}
inactive: unit 0 {
family ethernet-switching {
vlan {
members default;
}
storm-control default;
}
}
}
et-0/0/21 {
ether-options {
802.3ad ae2;
}
inactive: unit 0 {
family ethernet-switching {
vlan {
members default;
}
storm-control default;
}
}
}
et-0/0/22 {
ether-options {
802.3ad ae2;
}
inactive: unit 0 {
family ethernet-switching {
vlan {
members default;
}
storm-control default;
}
}
}
et-0/0/23 {
ether-options {
802.3ad ae2;
}
inactive: unit 0 {
family ethernet-switching {
vlan {
members default;
}
storm-control default;
}
}
}
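Before the PCA can be attached to the data center network, the installer gathers the following information: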
• IP address and external facing domain name for Management Node 05.
• IP address and external facing domain name for Management Node 06.
• IP address for the floating, or virtual IP, of the management interface.
• The default gateway and subnet mask to be used.
• The DNS domain to be used.
• A VLAN ID, if so chosen, for the management VLAN.
• IP addresses for a minimum of one, preferably two or more DNS servers.
• IP address of an NTP server used by the datacenter.
Once this information is gathered and used by the installer to configure the PCA through the PCA dashboard, the machine may be
physically connected to the datacenter network.
The PCA dashboard may also be used if it is necessary to change the networking information at a later time.
Warning
Custom networks must never be deleted in Oracle VM Manager. Doing so would leave the environment in an error state that is
extremely difficult to repair. To avoid downtime and data loss, always perform custom network operations in the Oracle Private
Cloud Appliance CLI.
Caution
• The maximum number of custom external networks is 7 per tenant group or per compute node.
• The maximum number of custom internal networks is 3 per tenant group or per compute node.
• The maximum number of VLANs is 256 per tenant group or per compute node.
• Only one host network can be assigned per tenant group or per compute node.
Caution
When configuring custom networks, make sure that no provisioning operations or virtual machine environment modifications
take place. This might lock Oracle VM resources and cause your Oracle Private Cloud Appliance CLI commands to fail.
Creating custom networks requires use of the CLI. The administrator chooses among three types: a network internal to the
appliance, a network with external connectivity, or a host network. Custom networks appear automatically in Oracle VM
Manager. The internal and external networks take the virtual machine network role, while a host network may have the virtual
machine and storage network roles.
The host network is a particular type of external network: its configuration contains additional parameters for subnet and routing.
The servers connected to it also receive an IP address in that subnet, and consequently can connect to an external network device.
The host network is particularly useful for direct access to storage devices.
For all networks with external connectivity the spine Cisco Nexus 9336C-FX2 Switch ports must be specified so that these are
reconfigured to route the external traffic. These ports must be cabled to create the physical uplink to the next-level switches in the
data center.
If your custom network requires public connectivity, you need to use one or more spine switch ports. Verify the number of ports
available and carefully plan your network customizations accordingly. The following example shows how to retrieve that
information from your system:
PCA> list network-port
Status: Success
For a custom network with external connectivity, configure an uplink port group with the uplink ports you wish to use for this
traffic, and select the appropriate breakout mode:
PCA> create uplink-port-group MyUplinkPortGroup '1:1 1:2' 10g-4x
Status: Success
Note
The port arguments are specified as 'x:y', where x is the switch port number and y is the number of the breakout port, in case a
splitter cable is attached to the switch port. The list network-port example above shows how to retrieve port information.
You must set the breakout mode of the uplink port group. When a 4-way breakout cable is used, all four ports must be set to
either 10Gbit or 25Gbit. When no breakout cable is used, the port speed for the uplink port group should be either 100Gbit or
40Gbit, depending on connectivity requirements. See the section “create uplink-port-group” for command details.
Network ports cannot be part of more than one network configuration.
The create network command accepts one of three network types:
• rack_internal_network
• external_network
• host_network
For a network internal to the appliance, specify only a network name and the internal network type, for example:
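PCA> create network MyInternalNetwork rack_internal_network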
Status: Success
For an external network, specify a network name and the spine switch port group to be configured for external traffic.
PCA> create network MyPublicNetwork external_network MyUplinkPortGroup
Status: Success
For a host network, specify a network name, the spine switch ports to be configured for external traffic, the subnet, and optionally
the routing configuration.
PCA> create network MyHostNetwork host_network MyUplinkPortGroup \
10.10.10 255.255.255.0 10.1.20.0/24 10.10.10.250
Status: Success
Note
In this example the additional network and routing arguments for the host network are specified as follows, separated by spaces:
10.10.10 = subnet prefix
255.255.255.0 = netmask
10.1.20.0/24 = route destination (as subnet or IPv4 address)
10.10.10.250 = route gateway
The subnet prefix and netmask are used to assign IP addresses to servers joining the network. The optional route destination and
gateway parameters are used to configure a static route in the server's routing table. The route destination is treated as a single IP
address by default, so you must specify a netmask (CIDR notation, as in the example) if the traffic could be intended for different
IP addresses in a subnet.
Note:
PCA does not support host networks smaller than /24.
Caution
Network and routing parameters of a host network cannot be modified. To change these settings, delete the custom network and
re-create it with updated settings.
Connect the required servers to the new custom network. You must provide the network name and the names of the servers to
connect.
PCA> add network MyPublicNetwork ovcacn07r1
Status: Success
PCA> add network MyPublicNetwork ovcacn08r1
Status: Success
PCA> add network MyPublicNetwork ovcacn09r1
Status: Success
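Verify the result with the show network command:
PCA> show network MyPublicNetwork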
----------------------------------------
Network_Name MyPublicNetwork
Trunkmode None
Description None
Ports ['1:1', '1:2']
vNICs None
Status ready
Network_Type external_network
Compute_Nodes ovcacn07r1, ovcacn08r1, ovcacn09r1
Prefix None
Netmask None
Route Destination None
Route Gateway None
----------------------------------------
Status: Success
As a result of these commands, a VxLAN interface is configured on each of the servers to connect them to the new custom
network. These configuration changes are reflected in the Networking tab and the Servers and VMs tab in Oracle VM Manager.
Note
If the custom network is a host network, the compute node is assigned an IP address based on the prefix and netmask parameters
of the network configuration, and the final octet of the compute node’s internal management IP address.
For example, if the compute node with internal IP address 192.168.4.9 were connected to the host network used for illustration
purposes in this procedure, it would receive the address 10.10.10.9 in the host network.
The resulting custom network, MyPublicNetwork, is VLAN-capable and uses the compute node's vx13041 interface.
To disconnect servers from the custom network use the remove network command.
Warning
Before removing the network connection of a server, make sure that no virtual machines are relying on this network.
When a server is no longer connected to a custom network, make sure that its port configuration is cleaned up in Oracle VM.
PCA> remove network MyPublicNetwork ovcacn09r1
************************************************************
WARNING !!! THIS IS A DESTRUCTIVE OPERATION.
************************************************************
Are you sure [y/N]:y
Status: Success
Before deleting a custom network, make sure that all servers have been disconnected from it first.
Using SSH and an account with superuser privileges, log into the active management node.
Note
The default root password is Welcome1. For security reasons, you must set a new password at your earliest convenience.
# ssh [email protected]
[email protected]'s password:
[root@ovcamn05r1 ~]#
Launch the Oracle Private Cloud Appliance command line interface.
# pca-admin
Welcome to PCA! Release: 2.4.2
PCA>
Verify that all servers have been disconnected from the custom network. No vNICs or nodes should appear in the network
configuration.
Caution
Related configuration changes in Oracle VM must be cleaned up as well.
PCA> show network MyPublicNetwork
----------------------------------------
Network_Name MyPublicNetwork
Trunkmode None
Description None
Ports ['1:1', '1:2']
vNICs None
Status ready
Network_Type external_network
Compute_Nodes None
Prefix None
Netmask None
Route_Destination None
Route_Gateway None
----------------------------------------
Delete the custom network.
PCA> delete network MyPublicNetwork
************************************************************
WARNING !!! THIS IS A DESTRUCTIVE OPERATION.
************************************************************
Are you sure [y/N]:y
Status: Success
Caution
If a custom network is left in an invalid or error state and the delete command fails, you may use the --force option and retry.
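For example (continuing with the network used above):
PCA> delete network MyPublicNetwork --force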
Creating and Managing VLAN Interfaces
The VLAN Interfaces tab allows you to define and manage VLAN interfaces on the Compute Nodes inside your PCA. If your data
center network is configured to support VLANs, you should define interfaces for your different VLAN IDs on each of your
compute nodes, so that you are able to define Oracle VM networks that take advantage of VLAN separation.
The VLAN Interfaces tab is divided into two sections, a Navigation Pane and a Management Pane displaying a tabular view of the
VLAN Interfaces associated with the item selected in the navigation pane.
Navigation Pane
The Navigation Pane displayed on the VLAN Interfaces tab is used to control the VLAN interfaces that are displayed in the
Management Pane. The Navigation Pane presents a tree view of the compute nodes that have been discovered within your
environment. Compute Nodes are grouped by Server Pools or are automatically presented beneath the node titled Unassigned
Servers.
Expanding the Server Pools node allows you to see each of the server pools configured within your environment. Selecting a
particular server pool from the list presented updates the Management Pane to show only the VLAN interfaces configured for
that server pool.
Clicking on the Unassigned Servers node is equivalent to selecting an individual server pool. The Management Pane is updated to
show only the VLAN interfaces configured for servers that do not belong to a server pool.
Both the Unassigned Servers node and any of the server pool nodes can be further expanded to list the servers that belong to these
nodes. The VLAN interfaces displayed in the Management Pane can be further limited to each individual server listed.
Finally, you are able to expand server nodes to list the ports or bonds that are available for each server. By selecting a particular
port for any server in the Navigation Pane, the list of VLAN interfaces presented in the Management Pane is limited to those
interfaces configured for that specific port or bond.
Management Pane
The Management Pane on the VLAN Interface tab allows you to create, edit or delete VLAN interfaces configured within Oracle
VM Manager. VLAN interfaces are listed in a tabular view that includes a toolbar providing options to manage interfaces. The
VLAN Interfaces that are displayed in the table at any point in time are controlled using the Navigation Pane.
The VLAN Interface tab includes a toolbar in the management pane that consists of the following options:
Create VLAN Interface...: Displays the Create VLAN Interface(s) dialog box. Use this option to create a new VLAN Interface
within Oracle VM Manager.
Edit Selected VLAN Interface...: Displays the Edit VLAN Interface(s) dialog box. Use this option to change the description, MTU
settings or IP address assignment options for an existing VLAN Interface.
Delete Selected VLAN Interface: Displays the Delete Confirmation dialog box. Use this option to delete the selected VLAN
Interface.
The table view in the VLAN Interface subtab lists the configured VLAN interfaces and their properties. A Filter By VLAN ID
drop-down selector is provided at the top of the table, to allow you to quickly limit the VLAN interfaces displayed in the table
and make changes in a more orderly fashion.
9. Click Finish when you have finished editing VLAN interfaces to save the changes, or click Cancel to exit.
Delete VLAN Interfaces
4. Click the Delete VLAN Interface(s) icon at the top of the table displayed in the management pane.
5. The Delete Confirmation dialog box is displayed. Click OK to delete the VLAN interface from Oracle VM Manager.
Create New Network...: Displays the Create Network dialog box. Use this option to create a new network within Oracle VM
Manager.
Edit Selected Network...: Displays the Edit Network dialog box. Use this option to change the network configuration for an
existing network.
Delete Selected Network: Displays the Delete Confirmation dialog box. Use this option to delete the selected network.
The table view in the Networks subtab includes the following fields:
• ID: The UUID for the network.
• Name: The name defined for the network within Oracle VM Manager.
• Intra-Network Server: The name of the server on which a local network is defined, if the network is of this type.
• Network Channels: A grouping of the different network channels or roles that a network may support.
• Server Management: A check box item to indicate whether or not the network is a member of the Server Management
channel.
• Cluster Heartbeat: A check box item to indicate whether or not the network is a member of the Cluster Heartbeat
channel.
• Live Migrate: A check box item to indicate whether or not the network is a member of the Live Migrate channel.
• Storage: A check box item to indicate whether or not the network is a member of the Storage channel.
• Virtual Machine: A check box item to indicate whether or not the network is a member of the Virtual Machine channel.
• Description: A field to present the text description defined for the network within Oracle VM Manager.
Caution
Note that in PCA, you should only edit or add networks of type Virtual Machine. All others are pre-defined and should never be changed.
2. Click the Create New Network icon. A dialog is displayed; choose the following option:
• Create a Network with Ports/Bond Ports/VLAN Interfaces: Use this option to create a normal network that
makes use of physical ports, bond ports or VLAN Interfaces defined on your servers.
Note
If you have not already defined any VLAN interfaces as described in “VLAN Interfaces”, you are unable to add any VLAN
interfaces to the network. In other words, you must define your VLAN interfaces on the VLAN Interfaces subtab before
attempting to add any VLAN interfaces to a network.
3. Click Next and select: Create a Network with Ports/Bond Ports/VLAN Interfaces
1. After you select a network configuration, the Create Network wizard displays the following fields to configure your
network:
• Name: The name of the network in Oracle VM Manager.
• Description: Optional information you would like to add about this network.
• Network Uses: A set of check boxes that allow you to define the various network roles or channels that are
enabled on the network.
o Management: Used to manage the physical Oracle VM Servers in a server pool, for example, to update
the Oracle VM Agent on the different Oracle VM Servers. This network function is assigned to at
least one network by default.
o Live_Migrate: Used to migrate virtual machines from one Oracle VM Server to another in a server
pool, without changing the status of the virtual machine.
o Cluster_Heartbeat: Used to verify if the Oracle VM Servers in a clustered server pool are up and
running.
o Virtual_Machine: Used for the network traffic between the different virtual machines in a server
pool.
o Storage: Enables you to associate specific networks with storage use.
Warning:
The only network type valid for creation in PCA is the Virtual Machine type. Do not attempt to create any other type of network.
Click Next to continue to the next stage of the wizard, or click Cancel to exit the wizard without making any changes.
2. The Select Ports stage of the wizard is displayed. This dialog allows you to define which ports or bonds on each Oracle
VM Server are attached to the network and presents a tabular view of ports and bonds that are attached to the network.
Warning:
PCA networks should only contain VLAN interfaces; there is no requirement for you to add any network ports to the network in
this dialog. You are able to add VLAN interfaces to the network in the next stage of the wizard.
1. Click the Add New Ports icon at the top of the table.
2. The Add Ports to Network dialog is displayed.
3. PCA does not use the underlying physical ports for virtual networking. DO NOT add networks to
physical ports.
4. Click Next to continue to the next stage of the wizard. Note that your network on PCA should only
contain VLAN interfaces; therefore, there is no requirement for you to add any network ports to the
network in this dialog, as you are able to add VLAN interfaces to the network in the next stage of the
wizard.
3. The Select VLAN Interfaces stage of the wizard is displayed. This dialog allows you to define which VLAN
interfaces on each Compute Node are attached to the network and presents a tabular view of VLAN interfaces that
are attached to the network. Note that in order to add a VLAN interface to a network it must have been defined
prior to starting this wizard, as described in “VLAN Interfaces”.
Note
If you have not already defined any VLAN interfaces as described in “VLAN Interfaces”, no VLAN interfaces are displayed in this
view and you are unable to add any VLAN interfaces to the network. In other words, you must define your VLAN interfaces on
the VLAN Interfaces subtab, before attempting to add any VLAN interfaces to a network.
a. Identify the VLAN interfaces that you wish to add to the network and check the check box alongside
them.
b. Click OK to add the selected VLAN interfaces to the network, or click Cancel to exit without making any
changes.
c. Select the VLAN interface that you wish to edit within the network.
h. Select the VLAN interfaces that you wish to remove from the network.
When you have finished adding VLAN interfaces to the network, you are able to click Finish to save the new network
configuration. Alternatively, you can click Cancel to exit the wizard without saving the network changes. Note that if you edited
the configuration for a Port, Bond or VLAN interface in any of the sub-wizards, those changes have already taken effect.
1. On the Networking tab, click the Networks subtab link and select the network that you wish to edit.
Note
If the network is used by many servers, modifying this parameter may cause the edit operation to take several minutes to
complete.
▪ Management: Used to manage the physical Oracle VM Servers in a server pool, for example,
to update the Oracle VM Agent on the different Oracle VM Servers. This network function
is assigned to at least one network by default.
▪ Live_Migrate: Used to migrate virtual machines from one Oracle VM Server to another in a
server pool, without changing the status of the virtual machine.
▪ Cluster_Heartbeat: Used to verify if the Oracle VM Servers in a clustered server pool are up
and running.
▪ Virtual_Machine: Used for the network traffic between the different virtual machines in a
server pool.
▪ Storage: Reserved for future use and currently has no practical function or application.
Warning:
The only network type valid for creation in PCA is the Virtual Machine type. Do not attempt to create any other type of network.
• Ports: A tab allowing you to manage which ports or bonds on each Oracle VM Server are attached to the
network. This feature is not used on PCA. Do not edit or add physical ports to networks.
• VLAN Interfaces: A tab allowing you to manage which VLAN interfaces are attached to the network. This tab
contains a tabular view of VLAN interfaces that are already attached to the network. At the top of the table is a
toolbar containing management options.
Note:
If you have not already defined any VLAN interfaces as described in “VLAN Interfaces”, no VLAN interfaces are displayed in this
view and you are unable to add any VLAN interfaces to the network. In other words, you must define your VLAN interfaces on
the VLAN Interfaces subtab, before attempting to add any VLAN interfaces to a network.
c. Identify the VLAN interfaces that you wish to add to the network and check the check box alongside
them.
d. Click OK to add the selected VLAN interfaces to the network, or click Cancel to exit without making
any changes.
e. Select the VLAN interface that you wish to edit within the network.
j. Select the VLAN interfaces that you wish to remove from the network.
To delete a network:
1. On the Networking tab, click the Networks subtab link and select the network that you wish to delete.
2. Click the Delete Selected Network icon. The Delete Confirmation dialog box is displayed; click OK to delete the network.
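The following networking best practices apply to PCA deployments: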
• Keep software levels up to date with current versions, to close security exposures and to avoid encountering bugs that have already been fixed.
• Install the Oracle VM Guest Additions to improve operational flexibility and to assist in tracking networking information within the VMs themselves. The Guest Additions are described at https://docs.oracle.com/cd/E64076_01/E64083/html/vmadm-guestadd.html
• Segregate VM traffic from PCA management traffic using VLANs assigned to the VM traffic and the Management Network.
• Consider the use of a bastion host to connect the administrative network of the PCA to the administration and/or maintenance network of the data center for out-of-band access to the PCA, if required. Do not, however, connect the administration network directly to any data center network.
• Use PCA internal networks for inter-VM communication. This provides better performance and isolation. Use different VLANs for independent applications so they can operate without interference or the need to coordinate network design.
• All network traffic placed on a network with external connections (the default_external network, a custom external network, or VLANs on top of either of those) will exit the PCA. If that traffic was intended for another VM within the PCA, the external switch will simply return it to the PCA, an unnecessary use of external-facing bandwidth. Use the default_internal network instead.
• Consider implementing Dynamic Resource Scheduler (DRS) policies based not only on the CPU utilization of your PCA compute nodes, but also on networking thresholds, so that VMs migrate to compute nodes with lower network utilization and more available bandwidth.
• The 100Gb Ethernet networks on PCA provide substantially higher performance for applications with high network traffic between members of an application cluster. Isolate networks and traffic to as few compute nodes as possible to preserve backplane bandwidth.
• Use VLAN-based networks on top of the default_external, default_internal, or custom external networks.
• The PCA provides up to 256 VLANs per Tenant Group (PCA) or VM Zone (PCC). This allows for highly complex networking configurations, which can in turn lead to complex routing and debugging. Simplify wherever possible, but make the design no simpler than necessary.
• VLANs and custom internal/external networks make it possible to internalize functions such as load balancers and firewalls that were previously implemented principally as black-box physical appliances. By using VMs as firewalls, load balancers, and even routers (all functions that can be implemented in a simple Oracle Linux VM or with third-party virtual appliances), significant cost reductions can be realized.
• Applications and VMs with high or dedicated bandwidth requirements to other VMs and their associated applications should be considered candidates for dedicated inter-VM VLANs.
• For highly secure point-to-point inter-VM traffic, use a VLAN with only the required endpoint VMs on it. A single application-VM-to-database-VM connection is but one example.
• Applications with high message traffic to external hosts benefit from the fast external connectivity PCA has to datacenter networks. PCA network connections can be 100, 40, 25 or 10 Gb Ethernet, and can use LACP for load balancing. If throughput between PCA and other hosts is a gating factor for performance, PCA provides the opportunity for custom networks that may be dedicated to a single network or group of VLANs. Possible uses include dedicated connectivity to database servers like Exadata, to backup and restore traffic, or to external firewalls or load balancers.
Restrictions
This paper has described best practices for application networking on the Oracle Private Cloud Appliance. While these best
practices are drawn from years of experience with the PCA, they may not apply in all situations; when in doubt, consult your
Oracle Systems Engineer or Advanced Customer Services for additional information.
CONCLUSION
The Oracle Private Cloud Appliance has become the ideal platform to host applications that were formerly targeted for Exalogic
and commodity servers. This paper has described the advantages of PCA, its network connectivity, and the deployment methods
and best practices for networking applications on the platform.
CONNECT WITH US
Call +1.800.ORACLE1 or visit oracle.com.
Outside North America, find your local office at oracle.com/contact.
Copyright © 2020, Oracle and/or its affiliates. All rights reserved. This document is provided for information purposes only, and the contents hereof are subject to change without notice. This
document is not warranted to be error-free, nor subject to any other warranties or conditions, whether expressed orally or implied in law, including implied warranties and conditions of
merchantability or fitness for a particular purpose. We specifically disclaim any liability with respect to this document, and no contractual obligations are formed either directly or indirectly by
this document. This document may not be reproduced or transmitted in any form or by any means, electronic or mechanical, for any purpose, without our prior written permission.
Oracle and Java are registered trademarks of Oracle and/or its affiliates. Other names may be trademarks of their respective owners.
Intel and Intel Xeon are trademarks or registered trademarks of Intel Corporation. All SPARC trademarks are used under license and are trademarks or registered trademarks of SPARC
International, Inc. AMD, Opteron, the AMD logo, and the AMD Opteron logo are trademarks or registered trademarks of Advanced Micro Devices. UNIX is a registered trademark of The Open
Group. 0120