1-FlashStack With Red Hat OpenShift Container and Virtualization Platform Using Cisco UCS X-Series - Cisco
https://www.cisco.com/c/en/us/td/docs/unified_computing/ucs/UCS_CVDs/flashstack_ocp_baremetal_imm.html 1/148
6/28/25, 2:25 PM FlashStack with Red Hat OpenShift Container and Virtualization Platform using Cisco UCS X-Series - Cisco
In partnership with:
Executive Summary
The FlashStack solution is a validated, converged infrastructure developed jointly by Cisco and Pure
Storage. The solution offers a predesigned data center architecture that incorporates compute, storage,
and network to reduce IT risk by validating the architecture and helping ensure compatibility among the
components. The FlashStack solution is successful because of its ability to evolve and incorporate both
technology and product innovations in the areas of management, compute, storage, and networking. This
document covers the deployment details of Red Hat OpenShift Container Platform (OCP) and Red Hat
OpenShift Virtualization on FlashStack Bare Metal infrastructure. Some of the most important advantages
of FlashStack with Red Hat OpenShift Container Platform and Red Hat OpenShift Virtualization on Bare Metal are:
● Simplify IT operations with a unified platform: With Red Hat OpenShift Container Platform and Virtualization, containers and virtual machines can run side-by-side within a single cluster, avoiding the operational complexity and challenges of maintaining separate platforms for these workloads.
● Consistent infrastructure configuration: Cisco Intersight and Cisco UCS help bring up the entire server farm with standardized methods and consistent configuration tools that help improve compute availability, avoid human configuration errors, and achieve a higher Return on Investment (ROI).
● Simpler and programmable Infrastructure: The entire underlying infrastructure can be configured
using infrastructure as code delivered using Red Hat Ansible.
● End-to-End 100Gbps Ethernet: This solution offers 100Gbps connectivity between the servers and storage using 5th Gen Cisco UCS VICs, Fabric Interconnects, and 100Gbps adapters on the storage controllers.
● Single storage platform for both virtual and containerized workloads: Using the Pure Storage FlashArray as backend storage, Portworx Enterprise by Pure Storage provides a persistent, container-native data platform with enterprise-grade features such as snapshots, clones, replication, compression, and deduplication for the workloads running inside containers and virtual machines.
In addition to the compute-specific hardware and software innovations, integration of the Cisco Intersight
cloud platform with Pure Storage FlashArray and Cisco Nexus delivers monitoring, orchestration, and
workload optimization capabilities for different layers of the FlashStack solution.
If you are interested in understanding the FlashStack design and deployment details, including
configuration of various elements of design and associated best practices, refer to Cisco Validated
Designs for FlashStack here: https://www.cisco.com/c/en/us/solutions/design-zone/data-center-design-
guides/data-center-design-guides-all.html#FlashStack
This document serves as the deployment guide for the solution. The design guide for the solution
will be available soon.
Solution Overview
● Introduction
● Audience
● Purpose of this Document
● Highlights of this Solution
Introduction
The FlashStack solution with Red Hat OpenShift on Bare Metal configuration represents a cohesive and flexible infrastructure solution that combines computing hardware, networking, and storage resources into a single, integrated architecture. Designed as a collaborative effort between Cisco and Pure Storage, this converged infrastructure platform is engineered to deliver high levels of efficiency, scalability, and performance, suitable for a multitude of data center workloads. By standardizing on a validated design, organizations can accelerate deployment, reduce operational complexity, and confidently scale their IT operations to meet evolving business demands. The FlashStack architecture leverages Cisco Unified Computing System (Cisco UCS) servers, Cisco Nexus networking, and Pure Storage's innovative storage systems, providing a robust foundation for containerized, virtualized, and non-virtualized environments.
Audience
The intended audience for this document includes, but is not limited to, IT architects, sales engineers, field consultants, professional services, IT managers, IT engineers, partners, and customers who are interested in taking advantage of an infrastructure built to deliver IT efficiency and enable IT innovation.
Purpose of this Document
This document provides deployment guidance for bringing up the FlashStack solution with Red Hat OpenShift Container Platform and Virtualization on Bare Metal infrastructure. It introduces various design elements and explains the considerations and best practices for a successful Red Hat OpenShift deployment.
Technology Overview
This chapter contains the following:
● FlashStack Components
● Benefits of Portworx Enterprise with OpenShift Virtualization
● Benefits of Portworx Enterprise with FlashArray
FlashStack Components
The FlashStack architecture was jointly developed by Cisco and Pure Storage. All FlashStack components
are integrated, allowing customers to deploy the solution quickly and economically while eliminating many
of the risks associated with researching, designing, building, and deploying similar solutions from the
foundation. One of the main benefits of FlashStack is its ability to maintain consistency at scale. Figure 1
illustrates the series of hardware components used for building the FlashStack architectures.
Each of the component families shown in Figure 1 (Cisco UCS, Cisco Nexus, Cisco MDS, Portworx by Pure Storage, and Pure Storage FlashArray systems) offers platform and resource options to scale up or scale out the infrastructure while supporting the same features and functions.
Refer to the Appendix for more detailed information of the above components used in this solution.
Benefits of Portworx Enterprise with OpenShift Virtualization
Portworx with Red Hat OpenShift Virtualization and KubeVirt enhances data management for virtual machines and containers by offering integrated, enterprise-grade storage. It includes simplified storage operations through Kubernetes, high availability and resiliency across environments, advanced disaster recovery options, and automated scaling capabilities. This integration supports a unified infrastructure where traditional and modern workloads coexist, providing flexibility in deployment across diverse infrastructures and ensuring robust data security.
Portworx with Stork offers capabilities such as VM migration between clusters, synchronous disaster recovery, and the ability to back up and restore VMs running on Red Hat OpenShift to comply with service-level agreements.
Benefits of Portworx Enterprise with FlashArray
Portworx on FlashArray enhances Kubernetes environments with robust data reduction, resiliency, simplicity, and support. It lowers storage costs through deduplication, compression, and thin provisioning, providing 2-10x data reduction. FlashArray's reliable infrastructure ensures high availability, reducing
server-side rebuilds. Portworx simplifies Kubernetes deployment with minimal configuration and end-to-end visibility via Pure1. Additionally, unified support, powered by Pure1 telemetry, offers centralized,
proactive assistance for both storage hardware and Kubernetes services, creating an efficient and
scalable solution for enterprise needs.
● Design Considerations
● Requirements
Design Considerations
The FlashStack Datacenter with Cisco UCS and Cisco Intersight meets the following general design
requirements:
● Resilient design across all the layers of infrastructure with no single point of failure
● Scalable design with the flexibility to add compute capacity, storage, or network bandwidth as
needed
● Modular design that can be replicated to expand and grow as the needs of the business grow
● Flexible design that can support different models of various components with ease
● Simplified design with the ability to integrate and automate with external automation tools
● AI-Ready design to support required NVIDIA GPUs for running AI/ML based workloads
● Cloud-enabled design which can be configured, managed, and orchestrated from the cloud using
GUI or APIs
To deliver a solution which meets all these design requirements, various solution components are
connected and configured as explained in later sections.
Requirements
Physical Topology
FlashStack with Cisco UCS X-Series supports both Ethernet and Fibre Channel (FC) storage access. This Red Hat OpenShift Bare Metal deployment is built on an Ethernet-based design. For this solution, Cisco Nexus 93600CD-GX switches provide the connectivity between the servers and storage. iSCSI configuration on the Cisco UCS and Pure Storage FlashArray is used to set up storage access. The physical components and connectivity details for the Ethernet-based design are covered below.
Figure 2 shows the physical topology and network connections used for this Ethernet-based FlashStack
design.
● One Cisco UCS X9508 chassis, equipped with a pair of Cisco UCS X9108 100G IFMs, contains six Cisco UCS X210c M7 compute nodes and two Cisco UCS X440p PCIe nodes, each with two NVIDIA L40S GPUs. Other server configurations, with and without GPUs, are also supported. Each compute node is equipped with a fifth-generation Cisco VIC 15231 providing 100G Ethernet connectivity on each side of the fabric. A pair of X-Fabric Modules installed at the rear of the chassis enables connectivity between the X440p PCIe nodes and the X210c M7 compute nodes.
● Cisco fifth-generation 6536 fabric interconnects are used to provide connectivity to the compute
nodes installed in the chassis.
● High-speed Cisco NX-OS-based Nexus C93600CD-GX switching design supporting up to 100- and 400-GE connectivity.
● Pure Storage FlashArray//XL170 with 100 Gigabit Ethernet connectivity. FlashArray introduces native block and file architectures built on a single global storage pool, simplifying management and treating both services as equal citizens: the first truly Unified Block and File Platform in the market.
● Cisco Intersight platform to deploy, maintain, and support the FlashStack components.
● Cisco Intersight Assist virtual appliance to help connect the Pure Storage FlashArray and Cisco
Nexus Switches with the Cisco Intersight platform to enable visibility into these platforms from
Intersight.
● Red Hat OpenShift Container Platform for providing a consistent hybrid cloud foundation for
building and scaling containerized and virtualized applications.
● Portworx by Pure Storage (Portworx Enterprise) data platform for providing enterprise grade
storage for containerized and virtualized workloads hosted on OpenShift platform.
Each worker node has two additional vNICs with the iSCSI-A and iSCSI-B VLANs configured as native VLANs to allow iSCSI persistent storage attachment and future iSCSI boot. Each worker node is also configured with three additional vNICs for virtual machine management traffic and direct storage access using in-guest iSCSI. The following sections provide more details on the network configuration of the worker nodes.
FlashStack Cabling
The information in this section is provided as a reference for cabling the physical equipment in a
FlashStack environment. Figure 3 illustrates how all the hardware components are connected.
This document assumes that out-of-band management ports are plugged into an existing
management infrastructure at the deployment site. These interfaces will be used in various
configuration steps.
Figure 3 details the cable connections used in the validation lab for the FlashStack topology based on the 5th generation Cisco UCS 6536 fabric interconnect. On each side of the fabric, two 100G ports on each Cisco UCS 9108 100G IFM are used to connect the Cisco UCS X9508 chassis to the Fabric Interconnects. Two 100G ports on each FI are connected to the pair of Cisco Nexus 93600CD-GX switches that are configured in a vPC domain. Each Pure Storage FlashArray//XL170 controller is connected to the pair of Nexus 93600CD-GX switches over 100G ports. Additional 1Gb management
connections will be needed for one or more out-of-band network switches that sit apart from the
FlashStack infrastructure. Each Cisco UCS fabric interconnect and Cisco Nexus switch is connected to the
out-of-band network switches, and each Pure Storage FlashArray controller has a connection to the out-
of-band network switches. Layer 3 network connectivity is required between the Out-of-Band (OOB) and
In-Band (IB) Management Subnets.
The control plane (master) nodes have just one vNIC, eno5, with the Fabric Failover option enabled.
VLAN Configuration
Table 2 lists the VLANs configured for setting up the FlashStack environment along with their usage.
Table 2 lists the infrastructure services, running on either virtual machines or bare metal servers, that are required for the deployment outlined in this document. All these services are hosted on pre-existing infrastructure within the FlashStack environment.
Software Revisions
The FlashStack Solution with Red Hat OpenShift on Bare Metal infrastructure configuration is built using
the following components.
Table 3 lists the required software revisions for various components of the solution.
Layer    Device    Image Bundle Version    Comments
Physical Connectivity
Physical cabling should be completed by following the diagram and table references in section FlashStack
Cabling.
The following procedures describe how to configure the Cisco Nexus 93600CD-GX switches for use in a FlashStack environment. This procedure assumes the use of Cisco Nexus 9000 release 10.1(2), the Cisco-suggested Nexus switch release at the time of this validation.
The procedure includes the setup of NTP distribution on both the mgmt0 port and the in-band
management VLAN. The interface-vlan feature and ntp commands are used to set this up. This
procedure also assumes that the default VRF is used to route the in-band management VLAN.
This document assumes that initial day-0 switch configuration is already done using switch
console ports and ready to use the switches using their management IPs.
config t
feature nxapi
feature udld
feature interface-vlan
feature netflow
feature hsrp
feature lacp
feature vpc
feature lldp
ntp master 3
clock timezone <timezone> <hour-offset> <minute-Offset>
It is important to configure the local time so that logging time alignment and any backup
schedules are correct. For more information on configuring the timezone and daylight savings
time or summer time, please see https://www.cisco.com/c/en/us/td/docs/dcn/nx-
os/nexus9000/102x/configuration/fundamentals/cisco-nexus-9000-nx-os-fundamentals-
configuration-guide-102x/m-basic-device-management.html#task_1231769
Sample clock commands for the United States Eastern timezone are:
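For the US Eastern timezone, a representative pair of commands (standard NX-OS clock syntax; verify the offsets and daylight-savings rules for your site) is:

```
clock timezone EST -5 0
clock summer-time EDT 2 Sunday March 02:00 1 Sunday November 02:00 60
```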
vlan <oob-mgmt-vlan-id>
name OOB-Mgmt-VLAN
vlan <ib-mgmt-vlan-id>
name IB-Mgmt-VLAN
vlan <native-vlan-id>
name Native-VLAN
vlan <ocp-iscsi-a-vlan-id>
name OCP-iSCSI-A
vlan <ocp-iscsi-b-vlan-id>
name OCP-iSCSI-B
vlan <vm-mgmt-vlan-id>
name VM-Mgmt-VLAN
Step 1. From the global configuration mode, run the following commands:
no shut
exit
Cisco Nexus - B
Step 2. From the global configuration mode, run the following commands:
no shut
exit
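Because this procedure distributes NTP over the in-band management VLAN, the `no shut` and `exit` commands in the steps above act on the in-band management SVI. A minimal sketch of that interface configuration, using placeholder values in this guide's convention (the variable names shown here are assumptions), is:

```
interface Vlan<ib-mgmt-vlan-id>
  ip address <switch-ib-mgmt-ip>/<prefix-length>
  no shutdown
exit
```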
Step 1. From the global configuration mode, run the following commands:
interface port-channel 10
interface port-channel 20
interface port-channel 30
mtu 9216
## Optional: The port channel below connects the Nexus switches to the existing customer network
interface port-channel 106
description connecting-to-customer-Core-Switches
mtu 9216
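The port channels above are shown with only their MTU settings; in a configuration of this kind each one also carries trunk settings. A sketch for the vPC peer-link port channel (the allowed-VLAN list is an assumption based on the VLANs defined earlier in this guide) is:

```
interface port-channel 10
  description vPC-Peer-Link
  switchport mode trunk
  switchport trunk native vlan <native-vlan-id>
  switchport trunk allowed vlan <oob-mgmt-vlan-id>,<ib-mgmt-vlan-id>,<ocp-iscsi-a-vlan-id>,<ocp-iscsi-b-vlan-id>,<vm-mgmt-vlan-id>
```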
Step 1. From the global configuration mode, run the following commands:
peer-switch
role priority 10
peer-gateway
auto-recovery
ip arp synchronize
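The commands in this step are subcommands of the vPC domain context; a fuller sketch including the domain ID and peer-keepalive link (both are placeholders, not values from this guide) is:

```
vpc domain <vpc-domain-id>
  peer-switch
  role priority 10
  peer-keepalive destination <nexus-b-mgmt0-ip> source <nexus-a-mgmt0-ip>
  peer-gateway
  auto-recovery
  ip arp synchronize
```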
Cisco Nexus - B
Step 1. From the global configuration mode, run the following commands:
peer-switch
role priority 20
peer-gateway
auto-recovery
ip arp synchronize
Step 1. From the global configuration mode, run the following commands:
interface Ethernet1/1
description FI6536-A-uplink-Eth1
channel-group 20 mode active
no shutdown
interface Ethernet1/2
description FI6536-B-uplink-Eth1
channel-group 30 mode active
no shutdown
interface Ethernet1/33
description Nexus-B-33
channel-group 10 mode active
no shutdown
interface Ethernet1/34
description Nexus-B-34
channel-group 10 mode active
no shutdown
## Optional: Configuration for interfaces that connect to the customer's existing management network
interface Ethernet1/35/1
description customer-Core-1:Eth1/37
channel-group 106 mode active
no shutdown
interface Ethernet1/35/2
description customer-Core-2:Eth1/37
channel-group 106 mode active
no shutdown
Cisco Nexus-B
Step 1. From the global configuration mode, run the following commands:
interface Ethernet1/1
description FI6536-A-uplink-Eth2
channel-group 20 mode active
no shutdown
interface Ethernet1/2
description FI6536-B-uplink-Eth2
channel-group 30 mode active
no shutdown
interface Ethernet1/33
description Nexus-A-33
channel-group 10 mode active
no shutdown
interface Ethernet1/34
description Nexus-A-34
channel-group 10 mode active
no shutdown
## Optional: Configuration for interfaces that connect to the customer's existing management network
interface Ethernet1/35/1
description customer-Core-1:Eth1/38
channel-group 106 mode active
no shutdown
interface Ethernet1/35/2
description customer-Core-2:Eth1/38
channel-group 106 mode active
no shutdown
Step 1. From the global configuration mode, run the following commands:
interface port-channel 10
vpc peer-link
interface port-channel 20
vpc 20
interface port-channel 30
vpc 30
interface port-channel 106
vpc 106
Step 2. The following commands can be used to check for correct switch configuration:
show run
show vpc
show port-channel summary
show ntp peer-status
show cdp neighbors
show lldp neighbors
Procedure 1. Configure Interfaces for Pure Storage on Cisco Nexus A and Cisco Nexus B
Cisco Nexus - A
Step 1. From the global configuration mode, run the following commands:
interface Ethernet1/27
description PureXL170-ct0-eth19
mtu 9216
no shutdown
interface Ethernet1/28
description PureXL170-ct1-eth19
mtu 9216
no shutdown
Cisco Nexus - B
Step 1. From the global configuration mode, run the following commands:
interface Ethernet1/27
description PureXL170-ct0-eth18
mtu 9216
no shutdown
interface Ethernet1/28
description PureXL170-ct1-eth18
mtu 9216
no shutdown
copy run start
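Before the FlashArray ports can pass iSCSI traffic, the storage-facing interfaces above also need VLAN membership. A sketch for Nexus A, assuming access-mode ports on the iSCSI-A VLAN (an assumption; trunk-mode ports are also possible), is:

```
interface Ethernet1/27-28
  switchport access vlan <ocp-iscsi-a-vlan-id>
```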
This section provides the steps to claim the Cisco Nexus switches using Cisco Intersight Assist. This procedure assumes that a Cisco Intersight Assist appliance is already deployed outside the OpenShift cluster and claimed into Intersight.com.
Procedure 1. Claiming Cisco Nexus Switches into Cisco Intersight using Cisco
Intersight Assist
Cisco Nexus - A
Step 1. Log into the Nexus switches and confirm the nxapi feature is enabled.
show nxapi
nxapi enabled
NXAPI timeout 10
Certificate Information:
Issuer: issuer=C = US, ST = CA, L = San Jose, O = Cisco Systems Inc., OU = dcnxos, CN = nxos
Step 2. Log into Cisco Intersight with your login credentials. From the drop-down list located on the left
top, select System.
Step 3. Under Admin, click Target and click Claim a New Target. Under Categories, select Network,
then click Cisco Nexus Switch and then click Start.
Step 4. Select the Cisco Intersight Assist name which is already deployed and configured. Provide the Cisco Nexus switch management IP address, username, and password details and click Claim.
Step 6. When the switch is successfully claimed, from the top left drop-down list, select Infrastructure Services. Under Operate, click the Networking tab. On the right, you will find the newly claimed
Cisco Nexus switch details and browse through the Switches for viewing the inventory details.
The following screenshot shows the L2 neighbors of the Cisco Nexus Switch-A:
● Create Pools
● vNIC Templates and vNICs
● Ethernet Adapter Policy for iSCSI Traffic
● Storage Policy
● Compute Configuration Policies
● Management Configuration Policies
The procedures in this section describe how to configure a Cisco UCS domain for use in a base FlashStack environment. A Cisco UCS domain is defined as a pair of Cisco UCS Fabric Interconnects and all the servers connected to them. These can be managed using two methods: UCSM and IMM. The procedures outlined below are for Cisco UCS Fabric Interconnects running in Intersight Managed Mode (IMM).
The Cisco Intersight platform is a management solution delivered as a service with embedded analytics
for Cisco and third-party IT infrastructures. The Cisco Intersight Managed Mode (also referred to as Cisco
IMM or Intersight Managed Mode) is an architecture that manages Cisco Unified Computing System
(Cisco UCS) fabric interconnect–attached systems through a Redfish-based standard model. Cisco
Intersight managed mode standardizes both policy and operation management for Cisco UCS C-Series
M7 and Cisco UCS X210c M7 compute nodes used in this deployment guide.
This deployment guide assumes an Intersight account is already created, configured with the required licenses, and ready to use. The Intersight Default Resource Group and Default Organization
are used for claiming all the physical components of the FlashStack solution.
This deployment guide assumes that the initial day-0 configuration of Fabric Interconnects is
already done in the IMM mode and claimed into the Intersight account.
Log into the Intersight portal and select Infrastructure Service. On the left select Profiles then under
Profiles select UCS Domain Profiles.
Click Create UCS Domain Profile to create a new domain profile for Fabric Interconnects. Under the
General tab, select the Default Organization, enter name and descriptions of the profile.
Under VLAN & VSAN Configuration > VLAN Configuration, click Select Policy and click Create New. On the Create VLAN page, under the General tab, enter a name (AA06-FI-VLANs) and click Next to go to Policy Details.
For the Prefix, enter the VLAN name as OOB-Mgmt-VLAN. For the VLAN ID, enter the VLAN id 1061.
Leave Auto Allow on Uplinks enabled and Enable VLAN Sharing disabled.
Under Multicast Policy, click Select Policy and select Create New to create a Multicast policy.
On the Create Multicast Policy page, enter the name of the policy (AA06-FI-MultiCast) and click Next to go to Policy Details. Leave the Snooping State and Source IP Proxy State checked/enabled and click Create. Now select the newly created Multicast policy.
Step 11. Repeat steps 1 through 10 to add all the required VLANs to the VLAN policy.
Step 12. After adding all the VLANs, click Set Native VLAN ID and enter the native VLAN ID (for example, 2)
and click Create. The following screenshot shows the VLANs used for this solution:
Step 13. Select the newly created VLAN policy for both Fabric Interconnects A and B. Click Next to go
to Port Configuration.
Step 14. Enter the name of the policy (AA06-FI-PortConfig) and click Next twice to go to the Port Roles page.
Step 15. In the right pane, under Ports, select ports 1 and 2 and click Configure.
Step 16. Set Role as Server and leave Auto Negotiation enabled and click Save.
Step 17. In the right pane click Port Channel tab and click Create Port Channel.
Step 18. Select Ethernet Uplink Port Channel for the Role. Enter 201 as Port Channel ID. Set Admin
speed as 100Gbps and FEC as Cl91.
Step 19. Under Link Control, create a new link control policy with the following options. Once created,
select the policy.
Step 20. Select Ports 1 and 2 for the Uplink Port Channel and click Create to complete the Port Roles
policy.
The following Management and Network related policies are created and used.
Weight: 5, MTU: 9216
Step 22. When the UCS Domain profile is created with the above-mentioned policies, edit the profile and assign it to the Fabric Interconnects.
Intersight will go through the discovery process and discover all the Cisco UCS C-Series and X-Series compute nodes attached to the Fabric Interconnects.
The server profile templates captured in this deployment guide support Cisco UCS X210c M7 compute nodes with 5th Generation VICs and can be modified to support other Cisco UCS blade and rack mount servers.
Create Pools
The following pools need to be created before proceeding with server profile template creation.
MAC Pools
The following two MAC pools are created for the vNICs that will be configured in the templates.
UUID pool
In this deployment, separate server profile templates are created for Worker and Master Nodes where
Worker Nodes have storage network interfaces to support workloads, but Master Nodes do not. The vNIC
layout is covered below. While most of the policies are common across various templates, the LAN
connectivity policies are unique and will use the information in the tables below.
The following vNIC templates are used for deriving the vNICs for OpenShift worker nodes for host management, VM management, and iSCSI storage traffic.
Switch ID: A, A, B, B
CDN Source setting: vNIC Name (for each of the four vNICs)
If you are going to have many VMs added to the OpenShift cluster, then AA06-OCP-EthAdapter-
16RXQs-5G adapter policy with MTU set to 1500 can be used for AA06-VMMgmt-vNIC template
as well. This will provide more receive and transmit queues to the vNIC that carries the Virtual
Machine management traffic.
You can optionally configure a tweaked ethernet adapter policy for additional hardware receive queues
handled by multiple CPUs in scenarios where there is a lot of traffic and multiple flows. In this deployment,
a modified ethernet adapter policy, AA06-EthAdapter-16RXQs-5G, is created and attached to storage
vNICs. Non-storage vNICs will use the default Linux-v2 Ethernet Adapter policy. Table 9 lists the settings
that are changed from defaults in the Adapter policy used for the iSCSI traffic. The remaining settings are
left at defaults.
Interrupt Settings: Interrupts: 19, Interrupt Mode: MSIx, Interrupt Timer: 125
Using the templates listed in Table 9, separate LAN connectivity policies are created for control and
worker nodes.
Control nodes are configured with one vNIC, which is derived from the AA06-OCP-Mgmt-vNIC template.
The following screenshot shows the LAN connectivity policy (AA06-OCP-master-LANCon) created with one vNIC for a control node.
Worker nodes are configured with six vNICs, which are derived from the templates discussed above.
The following screenshot shows the LAN connectivity policy (AA06-OCP-Worker-LANConn) created with six vNICs for a worker node.
vNICs eno5, eno6, and eno7 will be used to carry the management, iSCSI-A, and iSCSI-B traffic of the worker nodes, while eno8, eno9, and eno10 will carry the virtual machines' management, iSCSI-A, and iSCSI-B traffic.
vNICs eno9 and eno10 are derived using the same vNIC templates as eno7 and eno8.
Storage Policy
For this solution, Cisco UCS X210c nodes are configured to boot from local M.2 SSDs. Two M.2 disks are used in a RAID-1 configuration. The boot-from-SAN option will be supported in future releases. The following screenshot shows the storage policy (AA06-OCP-Storage-M2R1) and the settings used for configuring the M.2 disks in RAID-1 mode.
Boot Policy
To facilitate the automatic boot from the Red Hat CoreOS Discovery ISO image, the CIMC Mapped DVD boot option is used. The following boot policy is used for both control and worker nodes.
It is critical not to enable UEFI Secure Boot. Secure Boot needs to be disabled for the proper functionality of Portworx Enterprise and the NVIDIA GPU Operator GPU driver initialization.
Placing the Local Disk boot option at the top ensures that the nodes always boot from the M.2 disks once CoreOS is installed. The CIMC Mapped DVD option in the second position is used to install CoreOS using the Discovery ISO, which is mapped using a Virtual Media policy (CIMCMap-ISO). The KVM Mapped DVD option is used if you want to manually mount an ISO to the KVM session of the server and install the OS. This option is used when installing CoreOS during the OpenShift cluster expansion by adding an additional worker node.
The file share server must be accessible from the OOB-Mgmt network. In this solution, the HTTP file share service is used to share the Discovery ISO over the network.
Do not add Virtual Media at this time; the policy can be modified later and used to map an OpenShift Discovery ISO to the CIMC Mapped DVD boot option.
Step 1. Create BIOS policy and select pre-defined policy as shown above and click Next.
Step 2. Expand the Server Management and set Consistent Device Name (CDN) to enabled for
Consistent Device Naming within the Operating System.
Step 3. The remaining BIOS tokens and their values mentioned here are based on the best practices guide for the Cisco UCS M7 platform. For more details, go to: Performance Tuning Best Practices Guide for Cisco UCS M7 Platform
Select the server model as UCSX-210C-M7 and set Firmware Version to the latest version. The following screenshot shows the firmware policy used in this solution.
Step 1. Select All-Platform (unless you want to create a dedicated power policy for FI-attached servers), select the following option, and leave the rest of the settings at their defaults. When you apply this policy through the server profile template, the system will derive the appropriate settings and apply them to the server.
Since certain features are not yet enabled for Out-of-Band configuration (accessed using the Fabric Interconnect mgmt0 ports), you need to access the OOB-MGMT VLAN (1060) through the In-Band configuration of the IMC Access policy applied to the bare-metal servers.
The following table provides the list of policies and pools used for creating the server profile template (AA06-OCP-Worker-M.2) for worker nodes:
The following screenshot shows the two server profile templates created for control and worker nodes:
When the server profiles are created, associate these server profiles with the control and worker nodes as shown below.
Now the Cisco UCS X210c M7 blades are ready, and OpenShift can be installed on these machines.
In this solution, Pure Storage FlashArray//XL170 is used as the storage provider for all the application pods and virtual machines provisioned on the OpenShift cluster using Portworx Enterprise. The Pure Storage FlashArray//XL170 array will be used as the Cloud Storage Provider for Portworx, which allows us to store data on-premises with FlashArray while benefiting from Portworx Enterprise cloud drive features.
This section describes the high-level steps to configure the Pure Storage FlashArray//XL170 network interfaces required for storage connectivity over iSCSI. For this solution, the Pure Storage FlashArray was loaded with Purity//FA version 6.6.10.
This document is not intended to explain every day-0 initial configuration step needed to bring the array up and running. For detailed day-0 configuration steps, see:
https://www.cisco.com/c/en/us/td/docs/unified_computing/ucs/UCS_CVDs/flashstack_ucs_xser
ies_e2e_5gen.html#FlashArrayConfiguration
The compute nodes are redundantly connected to the storage controllers through 4 x 100Gb connections
(2 x 100Gb per storage controller module) from the redundant Cisco Nexus switches.
The Pure Storage FlashArray network settings were configured with three subnets across three VLANs.
Storage Interfaces CT0.Eth0 and CT1.Eth0 were configured to access management for the storage on
VLAN 1063. Storage Interfaces (CT0.Eth18, CT0.Eth19, CT1.Eth18, and CT1.Eth19) were configured to
run iSCSI Storage network traffic on the VLAN 3010 and VLAN 3020.
The following tables provide the IP addressing configured on the interfaces used for storage access.
Step 4. Click Enable and add the IP information from Table 10 and Table 11 and set the MTU to 9000.
Step 6. Repeat steps 1 through 5 to configure the remaining interfaces CT0.eth19, CT1.eth18, and CT1.eth19.
This procedure assumes that a Cisco Intersight Assist appliance is already deployed outside the OpenShift cluster and claimed into Intersight.com.
Step 1. Log into Cisco Intersight with your login credentials. From the drop-down list located on the left
top, select System.
Step 2. Under Admin, select Targets and click Claim a New Target. Under Categories, select Storage, click Pure Storage FlashArray, and then click Start.
Step 3. Select the Cisco Intersight Assist name which is already deployed and configured. Provide the Pure Storage FlashArray management IP address, username, and password details and click Claim.
Step 4. When the storage is successfully claimed, from the top left drop-down list, select Infrastructure Services. Under Operate, click Storage. On the right you will find the newly claimed Pure Storage FlashArray; browse through it to view the inventory details.
The Red Hat OpenShift Assisted Installer provides support for installing OpenShift Container Platform on bare metal nodes. This guide provides a methodology for achieving a successful installation using the Assisted Installer.
Prerequisites
The FlashStack for OpenShift utilizes the Assisted Installer for OpenShift installation. Therefore, when provisioning and managing the FlashStack infrastructure, you must provide all the supporting cluster infrastructure and resources, including an installer VM or host, networking, storage, and individual cluster machines.
The following supporting cluster resources are required for the Assisted Installer installation:
● The control plane and compute machines that make up the cluster
● Cluster networking
● Storage for the cluster infrastructure and applications
● The Installer VM or Host
Network Requirements
The following infrastructure services need to be deployed to support the OpenShift cluster. During the validation of this solution, these services were provided by VMs; you can run them on a hypervisor of your choice or use existing DNS and DHCP services available in the data center.
There are various infrastructure services prerequisites for deploying OpenShift 4.16. These prerequisites
are as follows:
● DNS and DHCP services – these services were configured on Microsoft Windows Server VMs in
this validation
● NTP Distribution was done with Nexus switches
● Specific DNS entries for deploying OpenShift – added to the DNS server
● A Linux VM for initial automated installation and cluster management – a Rocky Linux / RHEL VM
with appropriate packages
NTP
Each OpenShift Container Platform node in the cluster must have access to at least two NTP servers.
NICs
NICs configured on the Cisco UCS servers based on the design previously discussed.
DNS
Clients access the OpenShift Container Platform cluster nodes over the bare metal network. Configure a
subdomain or subzone where the canonical name extension is the cluster name.
The following domain and OpenShift cluster names are used in this deployment guide:
The DNS domain name for the OpenShift cluster should be the cluster name followed by the base domain, for example fs-ocp1.flashstack.local.
Table 12 lists the information for fully qualified domain names used during validation. The API and
Nameserver addresses begin with canonical name extensions. The hostnames of the control plane and
worker nodes are exemplary, so you can use any host naming convention you prefer.
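As an illustration, the required DNS records could be expressed in a BIND-style zone file for the fs-ocp1.flashstack.local subdomain as follows. All IP addresses and hostnames here are hypothetical placeholders; substitute the values you reserved for your environment.

```text
$ORIGIN fs-ocp1.flashstack.local.
api       IN A 10.106.1.10   ; cluster API VIP (hypothetical)
*.apps    IN A 10.106.1.11   ; wildcard for application routes (hypothetical)
master1   IN A 10.106.1.21
master2   IN A 10.106.1.22
master3   IN A 10.106.1.23
worker1   IN A 10.106.1.24
worker2   IN A 10.106.1.25
```

The api and *.apps records must resolve before installation begins; the per-node records can also be populated dynamically by DHCP if your DNS server supports it.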
DHCP
For the bare metal network, a network administrator must reserve several IP addresses, including:
The KVM IP address also needs to be gathered for the master and worker nodes from the server
profiles.
Step 2. Select Infrastructure Service > Profiles > UCS Server Profile (for example, AA06-OCP-
Worker-M.2_3).
Step 3. In the center pane, select Inventory > Network Adapters > Network Adapter (for example,
UCSX-ML-V5D200G).
Step 6. Select the General tab and select Identifiers in the center pane.
Table 13 lists the IP addresses used for the OpenShift cluster, including bare metal network IPs and UCS KVM management IPs for IPMI or Redfish access.
Step 8. From Table 13, enter the hostnames, IP addresses, and MAC addresses as reservations in your
DHCP and DNS server(s) or configure the DHCP server to dynamically update DNS.
Step 9. You will also need to pipe VLAN interfaces for both storage VLANs (3010 and 3020) and both management VLANs (1061 and 1062) into your DHCP server(s) and assign IPs in the storage networks on those interfaces. Then create a DHCP scope for each management and storage VLAN with appropriate subnets. Ensure that the IPs assigned by the scopes do not overlap with already-consumed IPs (such as the FlashArray//XL170 iSCSI interface IPs and the OpenShift reserved IPs). Either enter the nodes in the DNS server or configure the DHCP server to forward entries to the DNS server. For the cluster nodes, create reservations to map the hostnames to the desired IP addresses.
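The exact reservation syntax depends on your DHCP server. As a sketch only, an ISC dhcpd configuration implementing a scope plus a per-node reservation might look like the following; the subnet, MAC, and IP values are hypothetical placeholders:

```text
# dhcpd.conf fragment (all values hypothetical)
subnet 10.106.1.0 netmask 255.255.255.0 {
  option routers 10.106.1.254;
  option domain-name-servers 10.106.0.23;
  range 10.106.1.100 10.106.1.150;      # keep the range clear of reserved IPs
}
host worker1 {
  hardware ethernet 00:25:b5:a6:0a:05;  # vNIC MAC from the Intersight MAC pool
  fixed-address 10.106.1.24;            # matches the worker1 DNS record
  option host-name "worker1";
}
```

On Microsoft Windows Server (as used in this validation), the equivalent is a DHCP scope per VLAN with per-node reservations keyed on the vNIC MAC addresses.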
Step 10. Set up either a VM (installer/bastion node) or a spare server with a network interface connected to the Bare Metal VLAN, install either Red Hat Enterprise Linux (RHEL) 9.4 or Rocky Linux 9.4 "Server with GUI," and create an administrator user. Once the VM or host is up and running, update it and install and configure XRDP. Connect to this host with a Windows Remote Desktop client as the admin user.
Step 11. ssh into the installer node VM, open a terminal session and create an SSH key pair to use to
communicate with the OpenShift hosts:
cd
ssh-keygen -t ed25519 -N '' -f ~/.ssh/id_ed25519
Step 12. Copy the public SSH key to the user directory and add the key to the SSH agent:
cp ~/.ssh/id_ed25519.pub ~/
ssh-add ~/.ssh/id_ed25519
Procedure 2. Install Red Hat OpenShift Container Platform using the Assisted
Installer
Step 1. Launch Firefox and connect to https://console.redhat.com/openshift/cluster-list. Log into your Red Hat account.
Step 6. Select the latest OpenShift version, scroll down and click Next.
Step 10. Under provisioning type, from the drop-down list select Full image file. Under SSH public key, click Browse; then browse to, select, and open the id_ed25519.pub file. The contents of the public key should now appear in the box. Click Generate Discovery ISO and then click Download Discovery ISO to download the Discovery ISO.
Step 11. Copy the Discovery ISO to an HTTP or HTTPS file share server and use a web browser to get a copy of the URL for the Discovery ISO.
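One way to share the Discovery ISO, sketched below under the assumption that the installer VM itself hosts the file share, is to stage the ISO in a directory and serve it with Python's built-in HTTP server. The download path and port are assumptions; any reachable HTTP file share works.

```shell
# Stage the downloaded Discovery ISO (assumed download location) in a share directory
SHARE_DIR="/tmp/isoshare"
mkdir -p "${SHARE_DIR}"
cp ~/Downloads/discovery*.iso "${SHARE_DIR}/" 2>/dev/null || true
# Serve the directory on port 8080; the ISO URL then becomes
# http://<installer-vm-ip>:8080/<iso-name>. Uncomment to run in the foreground:
# python3 -m http.server 8080 --directory "${SHARE_DIR}"
```

Whatever server you use, the resulting URL must be reachable from the UCS servers' management network for the CIMC Mapped DVD mount to succeed.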
Step 12. Log into Cisco Intersight and update the Virtual Media policy with the Discovery ISO URL as shown below. This Discovery ISO image will be mapped to the server using the CIMC Mapped DVD boot option.
To demonstrate the OpenShift cluster expansion (adding an additional worker node), only the first five nodes (3 master/control and 2 worker) will be used for the initial OpenShift cluster deployment. The sixth node is reserved for now and will be used for the cluster expansion discussed in the following sections.
Step 13. Reset the first five UCSX-210c M7 servers by selecting Operate > Power > Reset System.
Step 14. When all five servers have booted “RHEL CoreOS (Live)” from the Discovery ISO, they will
appear in the Assisted Installer. Use the drop-down lists under Role to assign the appropriate
server roles. Scroll down and click Next.
Step 15. Expand each node and confirm the role of the M.2 disk is set to Installation disk. Click Next.
Step 16. Under Network Management, make sure Cluster-Managed Networking is selected. Under Machine network, from the drop-down list, select the subnet for the BareMetal VLAN. Enter the API IP for the api.cluster.basedomain entry in the DNS servers. For the Ingress IP, enter the IP for the *.apps.cluster.basedomain entry in the DNS servers.
Step 17. Scroll down. All nodes should have a status of Ready.
If you see an insufficient warning message for the nodes due to missing NTP server information, expand one of the nodes, click Add NTP Sources, and provide NTP server IPs separated by commas.
You may see a warning message on each worker node about having multiple network devices on the L2 network. To resolve this, ssh into each worker and deactivate the eno8, eno9, and eno10 interfaces using the nmtui utility.
Step 18. When all the nodes are in ready status, click Next.
Step 19. Review the information and click Install cluster to begin the cluster installation.
Step 20. On the Installation progress page, expand the Host inventory. The installation will take 30-45 minutes. When the installation is complete, all nodes will show a Status of Installed.
Step 21. Select Download kubeconfig to download the kubeconfig file. In a terminal window, setup a
cluster directory and save credentials:
cd
mkdir <clustername> # for example, ocp
cd <clustername>
mkdir auth
cd auth
mv ~/Downloads/kubeconfig ./
mkdir ~/.kube
cp kubeconfig ~/.kube/config
Step 22. In the Assisted Installer, click the icon to copy the kubeadmin password:
Step 23. Click Open console to launch the OpenShift Console. Use kubeadmin and the kubeadmin password to log in. Click the ? mark located at the top right corner of the page; links for various tools are provided on that page. Under Command Line Tools, download oc for Linux for x86_64 and virtctl for Linux for x86_64.
cd ..
mkdir client
cd client
ls ~/Downloads
mv ~/Downloads/oc.tar.gz ./
mv ~/Downloads/virtctl.tar.gz ./
tar xvf oc.tar.gz
tar xvf virtctl.tar.gz
Step 24. To enable oc tab completion for bash, run the following:
oc completion bash | sudo tee /etc/bash_completion.d/oc_bash_completion
Step 25. In Cisco Intersight, edit the Virtual Media policy and remove the link to the Discovery ISO. Click
Save & Deploy then click Save & Proceed. Do not select “Reboot Immediately to Activate.”
Click Deploy. The virtual media mount will be removed from the servers without rebooting them.
Step 26. In Firefox, in the Assisted Installer page, click Open console to launch the OpenShift Console.
Use kubeadmin and the kubeadmin password to login. On the left, select Compute > Nodes to
see the status of the OpenShift nodes.
Step 27. In the Red Hat OpenShift console, select Compute > Bare Metal Hosts. For each Bare Metal
Host, click the three dots to the right of the host and select Edit Bare Metal Host. Select Enable
power management.
Step 28. From Table 13, fill in the BMC Address. Also, make sure the Boot MAC Address matches the MAC address in Table 13. For the BMC Username and BMC Password, use what was entered in the Cisco Intersight IPMI over LAN policy. Click Save to save the changes. Repeat this step for all Bare Metal Hosts.
Step 29. Select Compute > Bare Metal Hosts. When all hosts have been configured, the Status displays "Externally provisioned," and the Management Addresses are populated. You can now manage the power state of these hosts from the OpenShift console.
For an IPMI connection to the server, use the BMC IP address. However, for Redfish to connect to the server, use this format for the BMC address: redfish://&lt;BMC IP&gt;/redfish/v1/Systems/&lt;server serial number&gt;, and make sure to check Disable Certificate Verification. For instance, for the master1.fs-ocp1.flashstack.local Bare Metal node, the Redfish BMC management address will be redfish://10.106.0.21/redfish/v1/Systems/FCH270978H0.
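The Redfish address can be assembled mechanically from the node's KVM management IP and its server serial number; the short sketch below uses the master1 example values from the text:

```shell
# Build the Redfish BMC address from the KVM management IP (Table 13) and the
# server serial number (from the Intersight server inventory)
BMC_IP="10.106.0.21"
SERIAL="FCH270978H0"
BMC_ADDRESS="redfish://${BMC_IP}/redfish/v1/Systems/${SERIAL}"
echo "${BMC_ADDRESS}"
# → redfish://10.106.0.21/redfish/v1/Systems/FCH270978H0
```

Repeat per node, substituting each node's own KVM IP and serial number.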
It is recommended to reserve enough resources (CPUs and memory) for system components like the kubelet and kube-proxy on the nodes. OpenShift Container Platform can automatically determine the optimal system-reserved CPU and memory resources for nodes associated with a specific machine config pool and update the nodes with those values when the nodes start.
Step 30. To automatically determine and allocate the system-reserved resources on nodes, create a
KubeletConfig custom resource (CR) to set the autoSizingReserved: true parameter as shown
below and apply the machine configuration files:
cat dynamic-resource-alloc-workers.yaml
apiVersion: machineconfiguration.openshift.io/v1
kind: KubeletConfig
metadata:
  name: dynamic-node-worker
spec:
  autoSizingReserved: true
  machineConfigPoolSelector:
    matchLabels:
      pools.operator.machineconfiguration.openshift.io/worker: ""
cat dynamic-resource-alloc-master.yaml
apiVersion: machineconfiguration.openshift.io/v1
kind: KubeletConfig
metadata:
  name: dynamic-node-master
spec:
  autoSizingReserved: true
  machineConfigPoolSelector:
    matchLabels:
      pools.operator.machineconfiguration.openshift.io/master: ""
oc apply -f dynamic-resource-alloc-workers.yaml
oc apply -f dynamic-resource-alloc-master.yaml
To manually configure the resources for the system components on the nodes, go to:
https://docs.openshift.com/container-platform/4.16/nodes/nodes/nodes-nodes-resources-
configuring.html#nodes-nodes-resources-configuring-setting_nodes-nodes-resources-configuring
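For reference, a manually sized KubeletConfig (instead of autoSizingReserved) follows the pattern below. The CPU and memory values are placeholders only and should be sized per the linked guidance for your node shapes:

```yaml
apiVersion: machineconfiguration.openshift.io/v1
kind: KubeletConfig
metadata:
  name: set-allocatable-workers
spec:
  machineConfigPoolSelector:
    matchLabels:
      pools.operator.machineconfiguration.openshift.io/worker: ""
  kubeletConfig:
    systemReserved:
      cpu: 500m       # placeholder value
      memory: 1Gi     # placeholder value
```

Use either autoSizingReserved or explicit systemReserved values for a given machine config pool, not both.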
If you have GPUs installed in your Cisco UCS servers, you need to install the Node Feature Discovery
(NFD) Operator to detect NVIDIA GPUs and the NVIDIA GPU Operator to make these GPUs available to
containers and virtual machines.
Step 1. In the OpenShift Container Platform web console, click Operators > OperatorHub.
Type Node Feature in the Filter box and then click the Node Feature Discovery Operator provided by Red Hat. In the upper right corner, click Install.
Step 4. When the Install operator is ready for use, click View Operator.
Step 8. When the nfd-instance has a status of Available, Upgradeable, select Compute > Nodes.
Step 9. Select a node that has one or more GPUs and then select Details.
Step 12. Type NVIDIA in the Filter box and then click the NVIDIA GPU Operator. Click Install.
Step 14. When the Install operator is ready for use, click View Operator.
Step 17. Do not change any settings and scroll down and click Create. This will install the latest GPU
driver.
Step 19. Connect to a terminal window on the OpenShift installer machine and type the following commands. The output shown is for two servers that are equipped with GPUs:
oc project nvidia-gpu-operator
oc get pods
Step 20. Connect to one of the nvidia-driver-daemonset containers and view the GPU status (substitute the pod name from the previous command's output):
oc exec -it &lt;nvidia-driver-daemonset-pod-name&gt; -- nvidia-smi
[Truncated nvidia-smi output: the table lists each detected GPU with its memory usage, utilization, and MIG mode, followed by a Processes section.]
This section assumes that a new server profile has already been derived from the existing worker template and associated with the additional server.
Step 4. On the Add hosts wizard, select x86_64 for the CPU architecture and DHCP Only for the Host’s
network configuration. Click Next.
Step 5. For the provision type, select Full image file from the drop-down list. For SSH public key, browse
to or copy/paste the contents of the id-ed25519.pub file. Click Generate Discovery ISO and, when the
file is generated, click Download Discovery ISO file.
Copy the Discovery ISO to an HTTP or HTTPS file share server and use a web browser to obtain the URL of
the new Discovery ISO.
Step 7. Log into Cisco Intersight and update the vMedia policy as explained in the previous
section. The Discovery ISO image is mapped to the server using the CIMC Mapped DVD option. Now
reset the sixth UCS X210c M7 server by selecting Power > Reset System.
Step 8. When the server has booted “RHEL CoreOS (live)” from the newly generated Discovery ISO, it will be discovered and listed on the Add hosts page.
If you see an insufficient warning message for the node due to missing NTP server information,
expand the node, click Add NTP Sources, and provide the NTP server IPs separated by
commas.
If a warning message appears stating that you have multiple network devices on the L2 network, SSH
into the worker node and deactivate the eno8, eno9, and eno10 interfaces using the nmtui utility.
Step 9. When the node status shows Ready, click Install ready hosts. After a few minutes, the required
components will be installed on the node and its status will finally show Installed.
Step 10. When the server successfully boots into the installed CoreOS, log into Cisco Intersight, edit the
vMedia policy, and remove the virtual media mount. Go to the Profiles > Server Profiles page and
deploy the profile to the newly added worker node without rebooting the host. The Inconsistent
state on the remaining profiles should be cleared.
Step 11. Log into the cluster as the kubeadmin user, go to Compute > Nodes, select the newly
added worker node, and approve the worker node's cluster join request and its
server certificate signing request.
Step 12. Wait a few seconds; the node will become Ready and pods will be scheduled on the newly
added worker node.
Step 13. Create the Secret and BareMetalHost objects in the openshift-machine-api namespace by
applying the following manifest (bmh-worker3.yaml):
cat bmh-worker3.yaml
---
apiVersion: v1
kind: Secret
metadata:
  name: ocp-worker3-bmc-secret
  namespace: openshift-machine-api
type: Opaque
data:
  username: aXBtaXVzZXIK
  password: SDFnaFYwbHQK
---
apiVersion: metal3.io/v1alpha1
kind: BareMetalHost
metadata:
  name: worker3.fs-ocp1.flashstack.local
  namespace: openshift-machine-api
spec:
  online: True
  bootMACAddress: 00:25:B5:A6:0A:0B
  bmc:
    address: redfish://10.106.0.26/redfish/v1/Systems/FCH27477BZU
    credentialsName: ocp-worker3-bmc-secret
    disableCertificateVerification: True
  customDeploy:
    method: install_coreos
  externallyProvisioned: true
The username and password shown in the above file are base64-encoded values.
In this case, a Redfish connection is used for connecting to the server: 00:25:B5:A6:0A:0B is the
MAC address of the eno5 interface, 10.106.0.26 is the out-of-band management IP, and FCH27477BZU is
the serial number of the newly added worker node. These values were recorded in Table 5. If you
would like to use IPMI over LAN instead of Redfish, put the server's out-of-band management
IP in the bmc address field.
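The encoded values can be generated (or checked) with the base64 utility; as a quick sketch, the username above encodes and decodes as follows (note that echo appends the trailing newline these encoded values include):

```shell
# Encode the BMC username for the Secret; echo adds a trailing newline,
# matching the encoded values used in bmh-worker3.yaml
echo 'ipmiuser' | base64          # aXBtaXVzZXIK
# Decode to verify the round trip
echo 'aXBtaXVzZXIK' | base64 -d   # ipmiuser
```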
A new entry will be created for the newly added worker node under Compute > Bare Metal Hosts.
The node field is not yet populated for this bare metal host as it is not yet logically linked to any
OpenShift Machine.
Since there are only two machines (workers) in the cluster, the worker MachineSets count needs
to be increased from 2 to 3.
Step 14. To increase the worker machine count, go to Compute > MachineSets. Click the ellipses
of the worker-0 MachineSet, select Edit Machine Count, and increase the count from 2 to 3.
Click Save.
A new worker machine will be provisioned to match the worker machine count of 3. It will remain in the
Provisioning state until the node is logically mapped to the Bare Metal Host.
Procedure 2. Link the Machine and Bare Metal Host, and the Node and Bare Metal Host
Step 1. To logically link Bare Metal Host to Machine, gather the name of the newly created machine from
its manifest file or by executing oc get machine -n openshift-machine-api.
Step 2. Update the machine name in the Bare Metal Host's manifest file under spec.consumerRef as
shown below:
consumerRef:
  apiVersion: machine.openshift.io/v1beta1
  kind: Machine
  name: fs-ocp1-wqpbg-worker-0-lct6q
  namespace: openshift-machine-api
After updating the machine name in the Bare Metal Host YAML manifest, the newly created machine
transitions from the Provisioning state to the Provisioned as node state.
The Bare Metal Host ProviderID needs to be generated and updated in the newly added worker
(worker3.fs-ocp1.flashstack.local).
The ProviderID is a combination of the name and UUID of the Bare Metal Host, in the form
providerID: ‘baremetalhost:///openshift-machine-api/<Bare Metal Host name>/<Bare Metal Host UUID>’.
Using this information, the providerID of the newly added Bare Metal Host is
providerID: ‘baremetalhost:///openshift-machine-api/worker3.fs-ocp1.flashstack.local/6410a65b-6fb1-
4f34-84c2-6649e1aabba9’.
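As a sketch, the providerID string can be assembled in the shell. The values below are the ones from this deployment; on a live cluster the UID would be read with oc get bmh <name> -n openshift-machine-api -o jsonpath='{.metadata.uid}':

```shell
# Build the providerID from the Bare Metal Host name and UID (example values from this guide)
BMH_NAME=worker3.fs-ocp1.flashstack.local
BMH_UID=6410a65b-6fb1-4f34-84c2-6649e1aabba9
PROVIDER_ID="baremetalhost:///openshift-machine-api/${BMH_NAME}/${BMH_UID}"
echo "$PROVIDER_ID"
```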
Step 3. Copy the providerID of the Bare Metal Host into the third node yaml manifest file under spec. as
shown below.
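For reference, the relevant fragment of the node manifest (using the values from this deployment) would look like this:

```yaml
spec:
  providerID: 'baremetalhost:///openshift-machine-api/worker3.fs-ocp1.flashstack.local/6410a65b-6fb1-4f34-84c2-6649e1aabba9'
```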
When the providerID of worker3 is updated, the node details are automatically populated for the newly
added Bare Metal Host.
Install and Configure Portworx Enterprise on OpenShift with Pure Storage FlashArray
This chapter contains the following:
● Storage Architecture
● Prerequisites
● Configure Physical Environment
● Portworx Enterprise Console Plugin for OpenShift
Portworx by Pure Storage is fully integrated with Red Hat OpenShift; hence, you can install and manage
Portworx Enterprise from the OpenShift web console itself. Portworx Enterprise can be installed with Pure
Storage FlashArray as a cloud storage provider. This allows you to store your data on-premises on Pure
Storage FlashArray while benefiting from Portworx Enterprise cloud drive features.
Portworx Enterprise will create and manage the underlying storage pool volumes on the registered arrays.
Pure Storage recommends installing Portworx Enterprise with Pure Storage FlashArray Cloud
Drives before using Pure Storage FlashArray Direct Access volumes.
Storage Architecture
This section provides the steps for installing Portworx Enterprise on OpenShift Container Platform running
on Cisco UCSX-210C M7 bare metal servers. In this solution, Pure Storage FlashArray//XL170 is used as
backend storage, connected over Ethernet, to provide the Cloud Drives used by Portworx
Enterprise. Figure 5 shows the high-level logical storage architecture of the Portworx Enterprise deployment
on Pure Storage FlashArray.
This is the high-level summary of the Portworx Enterprise implementation of distributed storage on a
typical Kubernetes based Cluster:
● Portworx Enterprise runs on each worker node as a DaemonSet pod and, based on the configuration
information provided in the StorageClass spec, Portworx Enterprise provisions one or more
volumes on Pure Storage FlashArray for each worker node.
● All these Pure Storage FlashArray volumes are pooled together to form one or more Distributed
Storage Pools.
● When a user creates a PVC, Portworx Enterprise provisions the volume from the storage pool.
● The PVCs consume space on the storage pool, and if space begins to run low, Portworx
Enterprise can add or expand drive space from Pure Storage FlashArray.
● If a worker node goes down for less than two minutes, Portworx Enterprise will reattach the Pure
Storage FlashArray volumes when it recovers. If a node goes down for more than two minutes, a
storageless node in the same zone or fault domain will take over the volumes and assume the
identity of the downed storage node.
Prerequisites
These prerequisites must be met before installing the Portworx Enterprise on OpenShift with Pure Storage
FlashArray:
● SecureBoot mode option must be disabled.
● The Pure Storage FlashArray should be time-synced with the same time service as the
Kubernetes cluster.
● Pure Storage FlashArray must be running Purity//FA version 4.8 or later. Refer to
the Supported models and versions topic for more information.
● Both multipath and iSCSI, if being used, should have their services enabled in systemd so that
they start after reboots. These services are already enabled in systemd within the Red Hat
CoreOS Linux.
Before you install Portworx Enterprise, ensure that your physical network is configured appropriately and
that you meet the prerequisites. You must provide Portworx Enterprise with your Pure Storage FlashArray
configuration details during installation.
● Each Pure Storage FlashArray management IP address can be accessed by each node.
● Your cluster contains an up-and-running Pure Storage FlashArray with an existing dataplane
connectivity layout (iSCSI, Fibre Channel).
● If you're using iSCSI, the storage node iSCSI initiators are on the same VLAN as the Pure Storage
FlashArray iSCSI target ports.
● You have an API token for a user on your Pure Storage FlashArray with at
least storage_admin permissions.
Step 2. Apply the following MachineConfig to the cluster; it configures each worker node with the
following:
● Enables and starts the multipathd.service with the specified multipath.conf configuration file.
● Enables and starts the iscsid.service.
● Applies the queue settings with udev rules.
The settings of multipath and Udev rules are defined as shown below:
cat multipath.conf
blacklist {
      devnode "^pxd[0-9]*"
      devnode "^pxd*"
      device {
        vendor "VMware"
        product "Virtual disk"
      }
}
defaults {
 user_friendly_names no
 find_multipaths yes
 polling_interval  10
}
devices {
    device {
        vendor                   "PURE"
        product                  "FlashArray"
        path_selector            "service-time 0"
        hardware_handler         "1 alua"
        path_grouping_policy     group_by_prio
        prio                     alua
        failback                 immediate
        path_checker             tur
        fast_io_fail_tmo         10
        user_friendly_names      no
        no_path_retry            0
        features                 0
        dev_loss_tmo             600
    }
}
cat udevrules.txt
# Recommended settings for Pure Storage FlashArray.
# Use none scheduler for high-performance solid-state storage for SCSI devices
ACTION=="add|change", KERNEL=="sd*[!0-9]", SUBSYSTEM=="block", ENV{ID_VENDOR}=="PURE", ATTR{queue/scheduler}="none"
ACTION=="add|change", KERNEL=="dm-[0-9]*", SUBSYSTEM=="block", ENV{DM_NAME}=="3624a937*", ATTR{queue/scheduler}="none"

# Reduce CPU overhead due to entropy collection
ACTION=="add|change", KERNEL=="sd*[!0-9]", SUBSYSTEM=="block", ENV{ID_VENDOR}=="PURE", ATTR{queue/add_random}="0"
ACTION=="add|change", KERNEL=="dm-[0-9]*", SUBSYSTEM=="block", ENV{DM_NAME}=="3624a937*", ATTR{queue/add_random}="0"

# Spread CPU load by redirecting completions to originating CPU
ACTION=="add|change", KERNEL=="sd*[!0-9]", SUBSYSTEM=="block", ENV{ID_VENDOR}=="PURE", ATTR{queue/rq_affinity}="2"
ACTION=="add|change", KERNEL=="dm-[0-9]*", SUBSYSTEM=="block", ENV{DM_NAME}=="3624a937*", ATTR{queue/rq_affinity}="2"

# Set the HBA timeout to 60 seconds
ACTION=="add|change", KERNEL=="sd*[!0-9]", SUBSYSTEM=="block", ENV{ID_VENDOR}=="PURE", ATTR{device/timeout}="60"
The following is the MachineConfig file that contains the base64-encoded contents of the above two files
and copies them to the corresponding directories on each worker node. It also enables and starts the
iscsid and multipathd services:
cat multipathmcp.yaml
apiVersion: machineconfiguration.openshift.io/v1
kind: MachineConfig
metadata:
  creationTimestamp:
  labels:
    machineconfiguration.openshift.io/role: worker
  name: 99-worker-multipath-setting
spec:
  config:
    ignition:
      version: 3.2.0
    storage:
      files:
        - contents:
            source: data:text/plain;charset=utf-8;base64,YmxhY2tsaXN0IHsKICAgICAgZGV2bm9kZSAiXnB4ZFswLTldKiIKICAgICAgZGV2bm9kZSAiXnB4ZCoiCiAgICAgIGRldmljZSB7CiAgICAgICAgdmVuZG9yICJWTXdhcmUiCiAgICAgICAgcHJvZHVjdCAiVmlydHVhbCBkaXNrIgogICAgICB9Cn0KZGVmYXVsdHMgewogdXNlcl9mcmllbmRseV9uYW1lcyBubwogZmluZF9tdWx0aXBhdGhzIHllcwogcG9sbGluZ19pbnRlcnZhbCAgMTAKfQpkZXZpY2VzIHsKICAgIGRldmljZSB7CiAgICAgICAgdmVuZG9yICAgICAgICAgICAgICAgICAgICJQVVJFIgogICAgICAgIHByb2R1Y3QgICAgICAgICAgICAgICAgICAiRmxhc2hBcnJheSIKICAgICAgICBwYXRoX3NlbGVjdG9yICAgICAgICAgICAgInNlcnZpY2UtdGltZSAwIgogICAgICAgIGhhcmR3YXJlX2hhbmRsZXIgICAgICAgICAiMSBhbHVhIgogICAgICAgIHBhdGhfZ3JvdXBpbmdfcG9saWN5ICAgICBncm91cF9ieV9wcmlvCiAgICAgICAgcHJpbyAgICAgICAgICAgICAgICAgICAgIGFsdWEKICAgICAgICBmYWlsYmFjayAgICAgICAgICAgICAgICAgaW1tZWRpYXRlCiAgICAgICAgcGF0aF9jaGVja2VyICAgICAgICAgICAgIHR1cgogICAgICAgIGZhc3RfaW9fZmFpbF90bW8gICAgICAgICAxMAogICAgICAgIHVzZXJfZnJpZW5kbHlfbmFtZXMgICAgICBubwogICAgICAgIG5vX3BhdGhfcmV0cnkgICAgICAgICAgICAwCiAgICAgICAgZmVhdHVyZXMgICAgICAgICAgICAgICAgIDAKICAgICAgICBkZXZfbG9zc190bW8gICAgICAgICAgICAgNjAwCiAgICB9Cn0K
          filesystem: root
          mode: 0644
          overwrite: true
          path: /etc/multipath.conf
        - contents:
            source: data:text/plain;charset=utf-8;base64,IyBSZWNvbW1lbmRlZCBzZXR0aW5ncyBmb3IgUHVyZSBTdG9yYWdlIEZsYXNoQXJyYXkuCiMgVXNlIG5vbmUgc2NoZWR1bGVyIGZvciBoaWdoLXBlcmZvcm1hbmNlIHNvbGlkLXN0YXRlIHN0b3JhZ2UgZm9yIFNDU0kgZGV2aWNlcwpBQ1RJT049PSJhZGR8Y2hhbmdlIiwgS0VSTkVMPT0ic2QqWyEwLTldIiwgU1VCU1lTVEVNPT0iYmxvY2siLCBFTlZ7SURfVkVORE9SfT09IlBVUkUiLCBBVFRSe3F1ZXVlL3NjaGVkdWxlcn09Im5vbmUiCkFDVElPTj09ImFkZHxjaGFuZ2UiLCBLRVJORUw9PSJkbS1bMC05XSoiLCBTVUJTWVNURU09PSJibG9jayIsIEVOVntETV9OQU1FfT09IjM2MjRhOTM3KiIsIEFUVFJ7cXVldWUvc2NoZWR1bGVyfT0ibm9uZSIKCiMgUmVkdWNlIENQVSBvdmVyaGVhZCBkdWUgdG8gZW50cm9weSBjb2xsZWN0aW9uCkFDVElPTj09ImFkZHxjaGFuZ2UiLCBLRVJORUw9PSJzZCpbITAtOV0iLCBTVUJTWVNURU09PSJibG9jayIsIEVOVntJRF9WRU5ET1J9PT0iUFVSRSIsIEFUVFJ7cXVldWUvYWRkX3JhbmRvbX09IjAiCkFDVElPTj09ImFkZHxjaGFuZ2UiLCBLRVJORUw9PSJkbS1bMC05XSoiLCBTVUJTWVNURU09PSJibG9jayIsIEVOVntETV9OQU1FfT09IjM2MjRhOTM3KiIsIEFUVFJ7cXVldWUvYWRkX3JhbmRvbX09IjAiCgojIFNwcmVhZCBDUFUgbG9hZCBieSByZWRpcmVjdGluZyBjb21wbGV0aW9ucyB0byBvcmlnaW5hdGluZyBDUFUKQUNUSU9OPT0iYWRkfGNoYW5nZSIsIEtFUk5FTD09InNkKlshMC05XSIsIFNVQlNZU1RFTT09ImJsb2NrIiwgRU5We0lEX1ZFTkRPUn09PSJQVVJFIiwgQVRUUntxdWV1ZS9ycV9hZmZpbml0eX09IjIiCkFDVElPTj09ImFkZHxjaGFuZ2UiLCBLRVJORUw9PSJkbS1bMC05XSoiLCBTVUJTWVNURU09PSJibG9jayIsIEVOVntETV9OQU1FfT09IjM2MjRhOTM3KiIsIEFUVFJ7cXVldWUvcnFfYWZmaW5pdHl9PSIyIgoKIyBTZXQgdGhlIEhCQSB0aW1lb3V0IHRvIDYwIHNlY29uZHMKQUNUSU9OPT0iYWRkfGNoYW5nZSIsIEtFUk5FTD09InNkKlshMC05XSIsIFNVQlNZU1RFTT09ImJsb2NrIiwgRU5We0lEX1ZFTkRPUn09PSJQVVJFIiwgQVRUUntkZXZpY2UvdGltZW91dH09IjYwIgo=
          filesystem: root
          mode: 0644
          overwrite: true
          path: /etc/udev/rules.d/99-pure-storage.rules
    systemd:
      units:
        - enabled: true
          name: iscsid.service
        - enabled: true
          name: multipathd.service
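The base64 payloads embedded above can be regenerated from the plain files. As a sketch (using a one-line stand-in for the real multipath.conf from this guide), the Ignition source data URL is built like this:

```shell
# Write a sample config file (stand-in for the real multipath.conf from this guide)
printf 'defaults {\n  user_friendly_names no\n}\n' > /tmp/sample.conf
# Build the Ignition 'source' value: a data URL with the base64-encoded file body
# (-w0 disables line wrapping in GNU coreutils base64)
SOURCE="data:text/plain;charset=utf-8;base64,$(base64 -w0 /tmp/sample.conf)"
echo "$SOURCE"
```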
Step 3. This MachineConfig is applied to each worker node one by one; monitor the status of this rollout
until it completes on all nodes.
Step 4. After the MachineConfig has been applied to all the worker nodes, SSH into one of the worker
nodes and verify.
Step 5. From each worker node, ensure the Pure Storage FlashArray storage IPs are reachable (for
example, with the ping command).
Step 6. Create a Kubernetes Secret object containing the Pure Storage FlashArray API endpoints and API
tokens that Portworx Enterprise needs in order to communicate with and manage the Pure Storage
FlashArray storage device.
Step 7. Log into Pure Storage FlashArray and go to Settings > Users and Policies. Create a dedicated
user (for instance, ocp-user) with the Storage Admin role for Portworx Enterprise authentication.
Step 8. Click the ellipses of the user previously created and select Create API Token. On the Create API
Token wizard, set the number of weeks (for instance, 24) for the API key to expire and click Create.
https://www.cisco.com/c/en/us/td/docs/unified_computing/ucs/UCS_CVDs/flashstack_ocp_baremetal_imm.html 85/148
6/28/25, 2:25 PM FlashStack with Red Hat OpenShift Container and Virtualization Platform using Cisco UCS X-Series - Cisco
A new API token for the ocp-user will be created and shown to you. Copy the API key and
preserve it, as it will be used later to create the Kubernetes Secret.
Step 9. Create a Kubernetes Secret with the above API key using the following manifest:
## Create a json file with the FlashArray management IP address and the API key created in the above
step.
cat pure.json
{
  "FlashArrays": [
    {
      "MgmtEndPoint": "10.103.0.55",
      "APIToken": "<API key created in Step 8>"
    }
  ]
}
If multiple arrays configured with Availability Zone (AZ) labels are available, you can use
these AZ topology labels in pure.json to distinguish the arrays. For more information, refer to the
Portworx documentation.
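As a sketch of the secret creation itself (px-pure-secret is the secret name Portworx looks for; the px namespace comes from the Operator installation below), pure.json is written and then loaded into a Secret. The oc command is shown as a comment since it needs a live cluster:

```shell
# Write pure.json with the array management endpoint and an API token placeholder
cat > /tmp/pure.json <<'EOF'
{
  "FlashArrays": [
    {
      "MgmtEndPoint": "10.103.0.55",
      "APIToken": "<api-token-from-step-8>"
    }
  ]
}
EOF
# On the cluster (requires oc access; not executed here):
#   oc create secret generic px-pure-secret --namespace px --from-file=pure.json=/tmp/pure.json
echo "wrote $(wc -c < /tmp/pure.json) bytes"
```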
Step 2. On the right-side pane, enter Portworx Enterprise to filter the available operators in the Operator
Hub. Select Portworx Enterprise and click Install.
Step 3. On the Operator Installation screen, from the Installed Namespace drop-down list, select Create
Project, create a new project (for instance, px), and select the newly created project to install
the Portworx Operator.
Step 4. Install the Portworx plugin for OpenShift by clicking Enable under the Console Plugin. Click
Install.
The Portworx Console Plugin for OpenShift will be activated and shown only after the
StorageCluster is installed. Follow the steps below to create the Portworx StorageCluster.
Step 5. When the Portworx Operator is successfully installed, the StorageCluster needs to be created.
The StorageCluster specification (manifest file) can be generated by logging into
https://central.portworx.com/. Log into the portal using your credentials and click Get Started.
Step 7. On the Generate Spec page, select the latest Portworx version (version 3.1 was the latest
when this solution was validated). Select Pure FlashArray as the platform and OpenShift 4+ from
the Distribution drop-down list, and provide px in the Namespace field. Click the Customize option
located at the bottom of the page.
Step 8. Get the Kubernetes version by running kubectl version | grep -i 'Server Version'. Click Next.
Step 9. From the Storage tab, select iSCSI for Storage Area Network. Provide the size of the Cloud drive
and click plus (+) to add additional disks. Click Next.
Step 10. On the Network tab, set Auto for both the Data and Management Network Interfaces. Click
Next.
Step 11. On the Customize tab, select Auto for both the Data and Management Network Interfaces. Click
Next.
Ensure that you enter both iSCSI network subnets here. This enables the iSCSI volumes on the
worker nodes to leverage all the available data paths to access target volumes.
Step 12. Click Advanced Settings and enter the name of the Portworx cluster (for instance, ocp-pxclus).
Click Finish, then click Download.yaml to download the StorageCluster specification file.
Step 13. From the OpenShift console, go to Operators > Installed Operators > Portworx Enterprise.
Click the StorageCluster tab and click Create StorageCluster. This
opens the YAML view of the StorageCluster.
Step 14. Copy the contents of the spec file previously downloaded and paste it into the YAML body. Verify
that both iSCSI subnet networks are listed under env: as shown below. Click Create to
create the StorageCluster.
Step 15. Verify the cluster status by executing the following command on any worker node: sudo
/opt/pwx/bin/pxctl status.
Step 16. Run the sudo multipath -ll command on one of the worker nodes to verify that all four paths from
the worker node to the storage target are being used; there are four active running
paths for each volume.
For instance, the following manifest files are used to create two StorageClasses (SCs): one for provisioning
a shared volume with the ReadWriteMany (RWX) attribute, to share a volume among multiple pods at the
same time for read-write access, and the other for provisioning volumes for OpenShift Virtual Machines.
For provisioning volumes for typical application pods, the predefined SCs can be leveraged.
## This SC is used to provision sharedv4 Service volumes (exposed as a ClusterIP service) with two
replicas.
## cat sharedv4-sc-svc.yaml
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: px-sharedv4-svc
provisioner: pxd.portworx.com
parameters:
  repl: "2"
  sharedv4: "true"
  sharedv4_svc_type: "ClusterIP"
reclaimPolicy: Retain
allowVolumeExpansion: true
## This SC is used to provision sharedv4 Service volumes (with specific NFS settings) with two replicas.
## cat px-rwx-kubevirt.yaml
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: px-sharedv4-kubevirt
provisioner: pxd.portworx.com
parameters:
  repl: "2"
  sharedv4: "true"
  sharedv4_mount_options: vers=3.0,nolock
  sharedv4_svc_type: "ClusterIP"
volumeBindingMode: Immediate
reclaimPolicy: Retain
allowVolumeExpansion: true
The following example shows multiple WordPress pods accessing a sharedv4 service volume (consumed
by multiple WordPress pods) and a ReadWriteOnce volume (consumed by one MySQL pod).
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: mysql-wordpress-pvc-rwo
  annotations:
spec:
  storageClassName: px-csi-db
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 4Gi
apiVersion: apps/v1
kind: Deployment
metadata:
  name: wordpress-mysql
  labels:
    app: wordpress
spec:
  selector:
    matchLabels:
      app: wordpress
  strategy:
    type: Recreate
  template:
    metadata:
      labels:
        app: wordpress
        tier: mysql
    spec:
      # Use the Stork scheduler to enable more efficient placement of the pods
      schedulerName: stork
      containers:
      - image: mysql:5.6
        imagePullPolicy:
        name: mysql
        env:
        - name: MYSQL_ROOT_PASSWORD
          valueFrom:
            secretKeyRef:
              name: mysql-pass
              key: password.txt
        ports:
        - containerPort: 3306
          name: mysql
        volumeMounts:
        - name: mysql-persistent-storage
          mountPath: /var/lib/mysql
      volumes:
      - name: mysql-persistent-storage
        persistentVolumeClaim:
          claimName: mysql-wordpress-pvc-rwo
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: wordpress-pvc-rwx
  labels:
    app: wordpress
spec:
  storageClassName: px-sharedv4-rwx
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 7Gi
apiVersion: apps/v1
kind: Deployment
metadata:
  name: wordpress
  labels:
    app: wordpress
spec:
  selector:
    matchLabels:
      app: wordpress
  replicas: 3
  strategy:
    type: Recreate
  template:
    metadata:
      labels:
        app: wordpress
        tier: frontend
    spec:
      # Use the Stork scheduler to enable more efficient placement of the pods
      schedulerName: stork
      containers:
      - image: wordpress:4.8-apache
        name: wordpress
        imagePullPolicy:
        env:
        - name: WORDPRESS_DB_HOST
          value: wordpress-mysql
        - name: WORDPRESS_DB_PASSWORD
          valueFrom:
            secretKeyRef:
              name: mysql-pass
              key: password.txt
        ports:
        - containerPort: 80
          name: wordpress
        volumeMounts:
        - name: wordpress-persistent-storage
          mountPath: /var/www/html
      volumes:
      - name: wordpress-persistent-storage
        persistentVolumeClaim:
          claimName: wordpress-pvc-rwx
The following screenshot shows that three WordPress pods are created accessing the same sharedv4
volume with the ReadWriteMany access mode, and one MySQL pod is created with a single volume with
the ReadWriteOnce access mode.
Sharedv4 Volume with ReadWriteMany (RWX) Access: Three WordPress pods are accessing the same
sharedv4 volume with the ReadWriteMany access mode, meaning multiple pods can concurrently read
from and write to this single volume. This is especially useful for applications like WordPress that may
need to scale out to handle load but still rely on a shared data volume.
ReadWriteOnce (RWO) Access for MySQL: There is also a single MySQL pod accessing a volume with the
ReadWriteOnce access mode, which allows only one pod to mount the volume at a time. This setup is
common for databases, where concurrent access could lead to data inconsistencies.
This setup highlights Portworx's ability to support various access modes for different use cases, allowing
flexible storage configurations that suit both shared applications (like WordPress) and single-instance
databases (like MySQL). By providing the ReadWriteMany access mode with sharedv4, Portworx enables
efficient scaling and resource usage, allowing applications to use shared storage across multiple pods
and hosts.
Snapshots and Clones: Portworx Enterprise offers data protection of volumes using volume snapshots,
which can be restored for point-in-time recovery of the data. Any StorageClass that implements the
Portworx CSI driver pxd.portworx.com supports volume snapshots.
Step 2. Run the following to create a VolumeSnapshotClass, then create a snapshot of a PVC and
restore the snapshot as a new PVC. The following manifest creates a
VolumeSnapshotClass for taking local snapshots of the PVCs:
## For the OpenShift platform, the px-csi-account service account needs to be added to the privileged
security context.
## cat VolumeSnapshotClass.yaml
apiVersion: snapshot.storage.k8s.io/v1
kind: VolumeSnapshotClass
metadata:
  name: px-csi-snapclass
  annotations:
    snapshot.storage.kubernetes.io/is-default-class: "true"
driver: pxd.portworx.com
deletionPolicy: Delete
parameters:
  csi.openstorage.org/snapshot-type: local
## cat px-snaptest-pvc.yaml
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: px-snaptest-pvc
spec:
  storageClassName: px-csi-db   # Any StorageClass that implements the Portworx CSI driver can be used.
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 2Gi
## Assume this pvc is attached to a pod and the pod has written some data into the pvc.
## Now create a snapshot of the above volume. It can also be created using the UI.
## cat create-snapshot-snaptest-pvc.yaml
apiVersion: snapshot.storage.k8s.io/v1
kind: VolumeSnapshot
metadata:
  name: px-snaptest-pvc-snap1
spec:
  volumeSnapshotClassName: px-csi-snapclass
  source:
    persistentVolumeClaimName: px-snaptest-pvc
The following screenshot shows px-snaptest-pvc-snap1 is the snapshot of the PVC px-snaptest-pvc:
Step 4. You can now restore this snapshot as a new PVC, which can then be mounted to any other pod.
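The UI-based restore in the following steps has a CLI equivalent; as a sketch (assuming the px-snaptest-pvc-snap1 snapshot created above), a new PVC can reference the VolumeSnapshot as its dataSource:

```yaml
## cat restore-snaptest-pvc.yaml (illustrative; names taken from the examples above)
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: px-snaptest-pvc-restore
spec:
  storageClassName: px-csi-db
  dataSource:
    name: px-snaptest-pvc-snap1
    kind: VolumeSnapshot
    apiGroup: snapshot.storage.k8s.io
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 2Gi
```

Applying this manifest provisions a new volume whose contents are copied from the snapshot.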
Step 5. Click the ellipsis of the snapshot and select Restore as new PVC. In the Restore as new PVC dialog, provide a name for the new PVC and click Restore.
Step 6. You can view the original and restored PVC under PVC list.
Step 7. To clone a PVC (px-snaptest-pvc), click the ellipsis of the PVC and select Clone PVC. Click Clone.
Once the Portworx OpenShift console plugin is installed and running, administrators will see a message in the OpenShift web console to refresh their browser window for the Portworx tabs to show up in the UI.
With this plugin, Portworx has built three different UI pages, including a Portworx Cluster Dashboard that
shows up in the left navigation menu, a Portworx tab under Storage > Storage Class section, and another
Portworx tab under Storage > Persistent Volume Claims.
To obtain detailed inventory information of the Portworx Cluster, click the Drives and Pools tabs.
This eliminates the need for administrators to use multiple "kubectl get" and "kubectl describe" commands to get these details; instead, they can use a simple UI to monitor their storage classes.
OpenShift Virtualization
Red Hat® OpenShift® Virtualization, an included feature of Red Hat OpenShift, provides a modern platform
for organizations to run and deploy their new and existing virtual machine (VM) workloads. The solution
allows for easy migration and management of traditional virtual machines onto a trusted, consistent, and
comprehensive hybrid cloud application platform.
OpenShift Virtualization is an operator included with any OpenShift subscription. It enables infrastructure
architects to create and add virtualized applications to their projects from OperatorHub in the same way
they would for a containerized application.
Existing virtual machines can be migrated from other platforms onto the OpenShift application platform
through the use of free, intuitive migration tools. The resulting VMs will run alongside containers on the
same Red Hat OpenShift nodes.
The following sections and procedures provide detailed steps to create custom network policies for the management and iSCSI networks used by virtual machines, to deploy Red Hat virtual machines using pre-defined templates, and to create a custom Windows Server template and a Windows virtual machine from that template.
For the OpenShift Virtualization operator, ensure that the Operator recommended namespace option is selected. This installs the Operator in the mandatory openshift-cnv namespace, which is created automatically if it does not exist.
Step 2. When the operator is installed successfully, go to Operators > Installed Operators, type virtualization in the Name filter, and verify that the operator installed successfully.
The procedures in this section are typically performed after OpenShift Virtualization is installed. You can
configure the components that are relevant for your environment:
● Node placement rules for OpenShift Virtualization Operators, workloads, and controllers: The
default scheduling for virtual machines (VMs) on bare metal nodes is appropriate. Optionally, you
can specify the nodes where you want to deploy OpenShift Virtualization Operators, workloads,
and controllers by configuring node placement rules. For detailed options on VM scheduling and
placement options, see: https://docs.openshift.com/container-
platform/4.16/virt/post_installation_configuration/virt-node-placement-virt-
components.html#virt-node-placement-virt-components
● Storage Configuration: Storage profile must be configured for OpenShift virtualization. A storage
profile provides recommended settings based on the associated storage class. When the
Portworx Enterprise is deployed on the OpenShift Cluster, it deploys several storage classes with
different settings for different use-cases. OpenShift Virtualization automatically creates a storage
profile with the recommended storage settings based on the associated storage class. Hence
there is no need for additional configuration.
● A default storage class must be configured for the OpenShift cluster. Otherwise, the cluster cannot receive automated boot source updates. To configure an existing storage class (created by Portworx) as the default storage class, run the following command:
kubectl patch storageclass <StorageClassName> -p '{"metadata": {"annotations": {"storageclass.kubernetes.io/is-default-class":"true"}}}'
● Network configuration: By default, OpenShift Virtualization is installed with a single, internal pod
network. After you install OpenShift Virtualization, you can install networking Operators and
configure additional networks.
The following sections provide more details on the network configurations validated in this solution.
In this solution, the NMState operator and its CRDs are used to achieve the following requirements for the virtual machines hosted on the Red Hat OpenShift cluster:
● Allow the virtual machines to access external resources and services such as Active Directory,
DNS, NFS shares and so on.
● To access the VMs (using RDP, SSH, and so on) from the already existing management network.
● Allow the VMs to access the iSCSI storage targets directly using an "In-Guest" iSCSI initiator.
To meet these requirements, custom Node Network Configuration Policies are defined using the NMState operator to create the following networks on each worker node.
Each network is described by the following attributes: type of network, worker interface used, VLAN, subnets and gateway, purpose, and the network created on each worker node.
Step 2. When the NMState operator is installed, go to Operators > Installed Operators, click Details on the NMState operator, click Create Instance, and create an NMState instance with the default settings.
When the NMState instance is created, refresh the browser to see the new options (NodeNetworkConfigurationPolicy, NetworkAttachmentDefinitions, NodeNetworkState) under the Network tab.
Step 3. Use the following manifests to create NodeNetworkConfigurationPolicy policies for each type of
network:
## NodeNetworkConfiguration policy for creating a linux bridge for the Vm-Management network by using the eno8 physical interface of the worker.
## cat vm-network-bridge.yaml
apiVersion: nmstate.io/v1
kind: NodeNetworkConfigurationPolicy
metadata:
name: br-vm-network-policy
spec:
nodeSelector:
node-role.kubernetes.io/worker: ''
desiredState:
interfaces:
- name: br-vm-network
type: linux-bridge
state: up
ipv4:
enabled: false
bridge:
options:
stp:
enabled: false
port:
- name: eno8
#### NodeNetworkConfiguration policy for creating a linux bridge for Vm-iSCSI-A network by using
eno9 physical interface of the worker.
## cat iscsi-a-bridge-fs.yaml
apiVersion: nmstate.io/v1
kind: NodeNetworkConfigurationPolicy
metadata:
name: br-iscsi-a-eno9-policy
spec:
nodeSelector:
node-role.kubernetes.io/worker: ''
desiredState:
interfaces:
- name: br-iscsi-a
type: linux-bridge
state: up
ipv4:
enabled: false
bridge:
options:
stp:
enabled: false
port:
- name: eno9
#### NodeNetworkConfiguration policy for creating a linux bridge for Vm-iSCSI-B network by using
eno10 physical interface of the worker.
## cat iscsi-b-bridge-fs.yaml
apiVersion: nmstate.io/v1
kind: NodeNetworkConfigurationPolicy
metadata:
name: br-iscsi-b-eno10-policy
spec:
nodeSelector:
node-role.kubernetes.io/worker: ''
desiredState:
interfaces:
- name: br-iscsi-b
type: linux-bridge
state: up
ipv4:
enabled: false
bridge:
options:
stp:
enabled: false
port:
- name: eno10
When these policies are created, they will be applied on each worker node as shown below:
When these policies are applied on the worker nodes, you will no longer see the eno8, eno9, and eno10 interfaces on the worker nodes. Instead, the corresponding Linux bridges will be created on each worker node.
Step 4. Use the following manifest files to create NetworkAttachmentDefinitions using each of the
network configuration policies:
## cat vmnw-vlan1062-attachment.yml
apiVersion: k8s.cni.cncf.io/v1
kind: NetworkAttachmentDefinition
metadata:
annotations:
description: VM Management NW
k8s.v1.cni.cncf.io/resourceName: bridge.network.kubevirt.io/br-vm-network
name: vmnw-vlan1062
namespace: default
spec:
config: '{"name":"vmnw-vlan1062","type":"bridge","bridge":"br-vm-network","macspoofchk":false}'
## vm-iscsi-a-vlan3010-attachment.yaml
apiVersion: k8s.cni.cncf.io/v1
kind: NetworkAttachmentDefinition
metadata:
annotations:
k8s.v1.cni.cncf.io/resourceName: bridge.network.kubevirt.io/br-vm-iscsi-a
name: vm-iscsi-a-vlan3010
spec:
config: '{"name":"vm-iscsi-a-vlan3010","type":"bridge","bridge":"br-vm-iscsi-
a","mtu":9000,"macspoofchk":false}'
## cat vm-iscsi-b-vlan3020-attachment.yaml
apiVersion: k8s.cni.cncf.io/v1
kind: NetworkAttachmentDefinition
metadata:
annotations:
k8s.v1.cni.cncf.io/resourceName: bridge.network.kubevirt.io/br-vm-iscsi-b
name: vm-iscsi-b-vlan3020
spec:
config: '{"name":"vm-iscsi-b-vlan3020","type":"bridge","bridge":"br-vm-iscsi-
b","mtu":9000,"macspoofchk":false}'
Step 5. Set the mtu to 9000 for iSCSI traffic, as shown in the above manifests.
Step 6. When these polices are created, you can view them under Network >
NetworkAttachmentDefinitions.
For additional virtual machine management traffic on different VLANs (for instance, 1063, 1064, and so on), additional NetworkAttachmentDefinitions can be defined on the same br-vm-network bridge and attached to the VMs.
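As an illustrative sketch of such an additional definition (VLAN 1063 is an assumed example, and the vlan tag assumes that VLAN is trunked to the worker uplinks behind the bridge):

```yaml
## cat vmnw-vlan1063-attachment.yaml (illustrative example, not part of the validated configuration)
apiVersion: k8s.cni.cncf.io/v1
kind: NetworkAttachmentDefinition
metadata:
  annotations:
    description: VM Management NW VLAN 1063
    k8s.v1.cni.cncf.io/resourceName: bridge.network.kubevirt.io/br-vm-network
  name: vmnw-vlan1063
  namespace: default
spec:
  config: '{"name":"vmnw-vlan1063","type":"bridge","bridge":"br-vm-network","vlan":1063,"macspoofchk":false}'
```

The bridge CNI plugin tags traffic from this attachment with the given VLAN ID while reusing the existing br-vm-network bridge.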
## This SC is used to provision sharedv4 Service volumes (with specific NFS settings) with two replicas.
## cat px-rwx-kubevirt.yaml
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
name: px-sharedv4-kubevirt
annotations:
storageclass.kubernetes.io/is-default-class: "true"
provisioner: pxd.portworx.com
parameters:
repl: "2"
sharedv4: "true"
sharedv4_mount_options: vers=3.0,nolock
sharedv4_svc_type: "ClusterIP"
volumeBindingMode: Immediate
reclaimPolicy: Retain
allowVolumeExpansion: true
Step 4. The Automatic Images Download option, located under Virtualization > Overview > Settings > General Settings, is enabled by default. The boot images for Red Hat 8/9, CentOS 9, and Fedora are downloaded automatically. The predefined templates are automatically updated with these images as boot volumes.
Step 5. If the Automatic Images Download option is turned off, you can download the required Red Hat, Fedora, and CentOS qcow2 (or KVM guest) boot images from the following URLs and use the following steps to create boot volumes from the downloaded boot image files.
Step 7. In the Add Volume screen, set Source type to Upload Volume. For Upload PVC Image, browse and select one of the QCOW2 images downloaded in the previous step. For this example, the RHEL 9 qcow2 file is selected. Select px-sharedv4-kubevirt for the StorageClass and provide the details for the rest of the fields as shown below. Click Save after updating all the fields.
When you upload the qcow2 image, a CDI (Containerized Data Importer) pod is created to convert the qcow2 image to a KVM-compatible format. The pod disappears after the image is successfully imported.
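As an alternative sketch (not part of the validated steps), a CDI DataVolume manifest can import a qcow2 image directly from an HTTP URL; the URL and names below are placeholders:

```yaml
apiVersion: cdi.kubevirt.io/v1beta1
kind: DataVolume
metadata:
  name: rhel9-boot-volume            # placeholder name
spec:
  source:
    http:
      url: "http://example.com/rhel-9-kvm.qcow2"   # placeholder URL to the qcow2 image
  storage:
    storageClassName: px-sharedv4-kubevirt
    resources:
      requests:
        storage: 30Gi
```

CDI launches an importer pod that downloads and converts the image into the target PVC, equivalent to the console upload described above.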
Step 8. Repeat steps 1 through 7 to create boot volumes for all required Linux-based operating systems. You can view the list of all boot volumes under Catalog as shown below. The following image shows three boot volumes for Fedora, CentOS 9, and Red Hat 9.
Based on the versions of the uploaded OS boot images, the corresponding predefined OpenShift virtual machine templates will be updated to use these volumes as source volumes for booting into the operating system. All templates updated with these boot source volumes are marked Source available in dark blue.
Step 9. To create a VM, go to Virtualization > Virtual Machines > Create > From template. Select one of the templates that has the Source available label on it.
Step 10. Provide a name to the VM and change the Disk size to the required size. Click Customize
Virtual Machine.
Step 11. From the Network Interface tab, remove the preconfigured Pod network interface and add a new interface on the vmnw-vlan1062 network.
Step 12. From the Scripts tab, click the pen icon near Public SSH key. Click the Add New radio button and upload the public key of your installer VM. Provide a name for the public key and select the checkbox to use this key for all the VMs you create in the future. Make a note of the default user name (cloud-user) and the default password. You can click the pen icon to change the default user and password.
Step 13. Optionally, from the Overview tab, change the CPUs and memory resources to be allocated to
the VM. Click Create Virtual Machine.
Step 14. In a few seconds a new virtual machine will be provisioned. The interface of the VM will get a DHCP IP from the vmnw-vlan1062 network, and the VM can be accessed directly from the RHEL installer VM (aa06-rhel9) using its public SSH key, without a password, as shown below.
Step 15. Repeat steps 1 through 14 to create the CentOS and Fedora Linux VMs.
This section provides detailed steps to create a Windows VM from a Windows Server ISO image and then configure the VM to directly access Pure Storage FlashArray volumes using in-guest iSCSI.
Follow this procedure to create a temporary Windows Server 2022 virtual machine from the Windows Server ISO, then install and configure the VM with all the required software components. Use this VM to create a sysprep image, and then use that sysprep image as the gold image for all future Windows Server 2022 VMs.
Step 1. Download the Windows Server 2022 ISO and upload it to a PVC using the Upload Data to PVC option in the console, or use the virtctl utility to do so. The following screenshot shows uploading the ISO to a PVC named win2022-iso.
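As a sketch of the virtctl route mentioned above (the file name, PVC size, and flags below are illustrative assumptions, not taken from the validated setup):

```shell
## Upload the Windows Server 2022 ISO into a new PVC named win2022-iso
virtctl image-upload pvc win2022-iso \
  --size=10Gi \
  --image-path=./windows-server-2022.iso \
  --storage-class=px-sharedv4-kubevirt \
  --insecure
```

virtctl creates the PVC, starts a CDI upload proxy, and streams the ISO into the volume.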
Step 2. Using the win2022-iso PVC as the boot volume, create a temporary Windows VM. Go to Virtualization > VirtualMachines > Create > Using template. Select Windows Server 2022.
Step 3. On the Create VM window, turn on the Boot from CD check box and select the win2022-iso PVC as shown below.
Step 4. Scroll down, set Blank for Data Source, and change the Disk size from the default 30 GiB to 60 GiB. Ensure the Windows Drivers check box is selected. Click Customize VirtualMachine.
Step 5. From the Network Interface tab, remove the preconfigured Pod network interface and add new
interface on vmnw-vlan1062 network as shown below:
Step 6. From the Disks tab, click the ellipsis of the rootdisk and set the StorageClass to px-sharedv4-kubevirt.
Step 7. When the Windows VM starts, press any key to start the Windows Server 2022 installation.
Step 8. The Windows installer cannot detect the rootdisk (60 GiB) due to the missing virtio drivers. Click Browse, navigate to the mounted virtio drivers, and select the 2k22 folder. Click OK, and then select the Red Hat VirtIO SCSI controller driver to load it.
Step 9. The Windows installer now detects the rootdisk. Select the disk and complete the Windows Server installation.
Step 10. When Windows is installed successfully, go to the E:\ drive and install the virtio drivers by double-clicking virtio-win-gt-x64.msi. Complete the driver installation with the default options.
Step 11. When the drivers are installed, the network interfaces will come up and get DHCP IP addresses. Turn off the firewalls temporarily to check whether the VM can reach outside services such as AD/DNS. Enable Remote Desktop.
You can install the required software, tools, and drivers (such as MPIO) before the VM is converted into a gold image.
Step 12. Convert this temporary VM into a sysprep image by executing the sysprep command as shown below. When sysprep completes, the VM tries to restart. Before it restarts, stop the VM from the OpenShift console.
Step 13. Delete this temporary VM and ensure you DO NOT delete the PVCs by deselecting the
checkbox as shown below.
Step 14. Optionally, delete the PVC CDROM disk which was created during the temporary VM creation.
Now you can use the remaining PVC (win-2k22-template), which holds the sysprepped Windows Server 2022 gold image, to map to an existing Windows Server 2022 template, or you can create a new template from the existing Windows Server 2022 template.
Step 15. To create a new custom Windows Server 2022 template from the existing templates, go to Virtualization > Templates.
Step 16. Select the existing Windows Server 2022 template, click the ellipsis, and select Clone.
Step 17. In the Clone Template window, provide a name for the template (windows2k22-template), change the project to default, provide a name for the Template Provider, and click Clone.
Step 18. Go to the newly created windows2k22-template custom template and set the boot source volume to the PVC that has the Windows sysprep image.
Step 19. Go to Virtualization > Templates > Default Project, click the template, go to the Disks tab, and click Add disk to add a new disk. Provide a name for the disk and select the PVC (win-2k22-template) that holds the sysprep image.
Step 20. Delete the existing boot disk and set win2k22-templ-osdisk as the boot disk.
Step 21. Go to Network Interfaces and add two network interfaces for iSCSI traffic as shown below.
This template is configured to boot from the sysprep image and has a total of three interfaces: one for management and two for iSCSI traffic.
Step 22. Create a fresh Windows Server 2022 virtual machine using the windows2k22-template template.
Step 23. Go to Virtualization > VirtualMachines > Create > From template. Click User templates.
Select the new Windows2k22-template.
Step 24. Provide a name for the VM and click Quick Create VirtualMachine. Because the template is pre-configured with everything according to our requirements, nothing has to be changed to create the VM.
Step 25. When the VM is fully provisioned, verify that all three interfaces get their corresponding DHCP IP addresses.
You should now be able to reach the Pure Storage FlashArray target IP addresses with large packet sizes without fragmentation, since jumbo frames (MTU 9000) are already configured on the iSCSI interfaces.
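One way to verify this from inside the Windows guest is a don't-fragment ping sized for a 9000-byte MTU (9000 minus 28 bytes of IP/ICMP headers leaves 8972 bytes of payload); the target IP below is a placeholder:

```shell
## From the Windows VM: -f sets Don't Fragment, -l sets the payload size in bytes
ping -f -l 8972 <FlashArray-iSCSI-target-IP>
```

If the ping succeeds without a "Packet needs to be fragmented" error, jumbo frames are working end to end on that iSCSI path.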
Step 26. To directly access a volume created on the Pure Storage FlashArray, run the following PowerShell commands to start the iSCSI service and connect to the FlashArray directly.
You need to connect to the Pure Storage FlashArray and create a host for this Windows guest VM using its IQN, then assign a volume to the newly created host.
## Open PowerShell with Administrator rights and execute the following steps.
Start-Service -Name MSiSCSI
(Get-InitiatorPort).NodeAddress
## Now log into the FlashArray and create a host for this guest using its IQN, create a volume, and assign the volume to the host.
## Back in the guest, discover and connect to the target (the portal IP below is a placeholder):
New-IscsiTargetPortal -TargetPortalAddress <FlashArray-iSCSI-target-IP>
$target = Get-IscsiTarget
Connect-IscsiTarget -NodeAddress $target.NodeAddress -IsPersistent $true
Live migration of VMs is a nondisruptive process of moving VMs from one worker node to another without downtime, and it has the following requirements:
● The OCP cluster must have shared storage with the ReadWriteMany (RWX) access mode. An OCP cluster backed by Pure Storage FlashArray and the Portworx Enterprise CSI driver already supports the RWX access mode for filesystems. Any VM created with a StorageClass provisioned by Portworx already uses RWX PVCs and can be live migrated without downtime.
● The default number of migrations that can run in parallel in the cluster is 5, with a maximum of 2 migrations per node. To change these settings, go to Virtualization > Overview > Settings > General Settings > Live Migration and change the settings according to your requirements.
● Using a dedicated network for Live migration is optional but recommended. Go to Virtualization >
Overview > Settings > General Settings > Live Migration > Live migration network and select the
required interface for live migration of VMs.
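As a sketch, the RWX requirement in the first bullet above corresponds to a PVC like the following (the PVC name is illustrative); any Portworx sharedv4 storage class, such as the px-sharedv4-kubevirt class created earlier, satisfies it:

```yaml
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: vm-disk-rwx                 # illustrative name
spec:
  storageClassName: px-sharedv4-kubevirt
  accessModes:
    - ReadWriteMany                 # RWX: both source and destination nodes can mount the disk
  resources:
    requests:
      storage: 30Gi
```

During a live migration, the destination node mounts the same RWX volume before the VM's state is handed over, which is why single-node RWO volumes cannot be used.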
Virtual machines can be live migrated from one worker to another by selecting the VM, clicking the ellipsis (three dots), and selecting the Migrate option.
A live migration can be triggered from the GUI, CLI, API, or automatically.
You can monitor the VM migration status using the command: oc describe vmi <vm_name> -n
<namespace>
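Triggering a live migration from the CLI or API, as noted above, amounts to creating a VirtualMachineInstanceMigration object; the VM name and namespace below are placeholders:

```yaml
apiVersion: kubevirt.io/v1
kind: VirtualMachineInstanceMigration
metadata:
  name: migrate-rhel9-vm1           # placeholder object name
  namespace: default                # placeholder namespace
spec:
  vmiName: rhel9-vm1                # placeholder name of the running VMI to migrate
```

Applying this manifest has the same effect as clicking Migrate in the console; the migration status can then be followed with the oc describe command shown above.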
Migrate Virtual Machines from the VMware vSphere Cluster to OpenShift Virtualization
The Migration Toolkit for Virtualization (MTV) is an operator-based tool that enables you to migrate virtual machines at scale to Red Hat OpenShift Virtualization. MTV supports migration of virtual machines from VMware vSphere, Red Hat Virtualization, OpenStack, OVA, and OpenShift Virtualization source providers to OpenShift Virtualization.
The following are some of the vSphere prerequisites to be met before planning for migration of virtual
machines from vSphere environments:
● https://access.redhat.com/articles/973163#ocpvirt
● The guest OS must be supported by virt-v2v utility to convert them into OpenShift virtualization
compatible images as listed here: https://access.redhat.com/articles/1351473
● You must have a user with at least the minimal set of VMware privileges. The required privileges are listed
here: https://docs.redhat.com/en/documentation/migration_toolkit_for_virtualization/2.6/html-
single/installing_and_using_the_migration_toolkit_for_virtualization/index#vmware-
prerequisites_mtv
● The Secure boot option must be disabled on the VMs.
● For a warm migration, changed block tracking (CBT) must be enabled on the VMs and on the VM
disks. Here are the steps for enabling CBT on the VMs running on vSphere cluster:
https://knowledge.broadcom.com/external/article/320557/changed-block-tracking-cbt-on-
virtual-ma.html
● Optionally, MTV can use the VMware Virtual Disk Development Kit (VDDK) to accelerate transferring virtual disks from VMware vSphere. Here are the steps to configure VDDK:
https://docs.redhat.com/en/documentation/migration_toolkit_for_virtualization/2.6/html-
single/installing_and_using_the_migration_toolkit_for_virtualization/index#creating-vddk-
image_mtv
Virtual machines with guest-initiated storage connections, such as Internet Small Computer Systems Interface (iSCSI) connections or Network File System (NFS) mounts, are not handled by MTV and could require additional planning before, or reconfiguration after, the migration.
This section describes the migration of the following two VMs running on vSphere 8.0 cluster:
The following screenshot shows the vSphere cluster with the two VMs (rhe8-vm1 and win2k19-vm2) to be migrated to the OpenShift cluster:
To migrate the VMs from the vSphere cluster to OpenShift Virtualization, follow these procedures:
Step 1. Install the Migration Toolkit for Virtualization (MTV) Operator and, using the default
option, complete the ForkliftController installation. Refresh the browser to see the Migration tab
on the console.
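The ForkliftController installation can also be completed from the CLI by creating the custom resource directly. A minimal sketch, assuming the MTV Operator is installed in its default openshift-mtv namespace:

```yaml
apiVersion: forklift.konveyor.io/v1beta1
kind: ForkliftController
metadata:
  name: forklift-controller
  namespace: openshift-mtv
spec:
  olm_managed: true
```

Apply the manifest with `oc apply -f`, then refresh the console to see the Migration tab.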
The actions to perform the migration of VMs from the vSphere environment to OpenShift Virtualization are as
follows:
● Identify the VMs in the vSphere environment and ensure the VMs are ready for migration by
verifying that all the prerequisites are met
● Create Provider
● Create Migration Plan
● Execute the migration plan
Step 2. Click VMware vSphere. Provide a name for the provider and the URL of the vCenter SDK endpoint in the format https://host-example.com/sdk. You can either skip the VDDK or provide the repository path of the VDDK image.
Step 3. Provide the vSphere user ID and password to use for the migration activity. Click Fetch Certificate from URL to retrieve the certificate, and click Confirm. Click Create Provider to create the provider.
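Steps 2 and 3 can equivalently be performed from the CLI by creating a Secret and a Provider custom resource, following the MTV 2.6 documentation. A minimal sketch with illustrative names (vcenter-credentials, vsphere-source, vcenter.example.com); the vddkInitImage setting is optional and only applies if you built a VDDK image:

```yaml
apiVersion: v1
kind: Secret
metadata:
  name: vcenter-credentials
  namespace: openshift-mtv
  labels:
    createdForProviderType: vsphere
type: Opaque
stringData:
  user: [email protected]       # illustrative vSphere user
  password: <password>
  insecureSkipVerify: "true"             # or supply cacert instead
---
apiVersion: forklift.konveyor.io/v1beta1
kind: Provider
metadata:
  name: vsphere-source
  namespace: openshift-mtv
spec:
  type: vsphere
  url: https://vcenter.example.com/sdk   # vCenter SDK endpoint
  secret:
    name: vcenter-credentials
    namespace: openshift-mtv
  settings:
    vddkInitImage: quay.io/example/vddk:latest   # optional, for faster disk transfer
```

After applying these resources, the provider appears in the console alongside the default host provider, just as when it is created through the UI.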
You will see the vSphere provider (masked as source) just created, along with the default host provider for OpenShift Virtualization. OpenShift connects to the vSphere environment and fetches the inventory of the vSphere cluster.
Step 4. To create a migration plan, go to Migration > Plans for Virtualization and click Create Plan. Select the vSphere provider created previously, and select the VMs you would like to migrate.
Step 5. Provide a name for the migration plan. Select host as the Target provider.
Step 6. Adjust and map the VMs' network and storage mappings from the vSphere environment to the OpenShift Virtualization environment. Click Create migration plan.
Now you will see the migration plan created under the Plans for Virtualization tab.
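The mappings and plan from Steps 4 through 6 correspond to NetworkMap, StorageMap, and Plan custom resources in the MTV API. A minimal sketch with illustrative names and values (the provider name, port group, datastore, storage class, and target namespace must match your environment); the VM names are the two examples from this guide:

```yaml
apiVersion: forklift.konveyor.io/v1beta1
kind: NetworkMap
metadata:
  name: vsphere-network-map
  namespace: openshift-mtv
spec:
  provider:
    source:
      name: vsphere-source          # illustrative provider name
      namespace: openshift-mtv
    destination:
      name: host                    # default OpenShift Virtualization provider
      namespace: openshift-mtv
  map:
    - source:
        name: VM Network            # vSphere port group (illustrative)
      destination:
        type: pod                   # map to the default pod network
---
apiVersion: forklift.konveyor.io/v1beta1
kind: StorageMap
metadata:
  name: vsphere-storage-map
  namespace: openshift-mtv
spec:
  provider:
    source:
      name: vsphere-source
      namespace: openshift-mtv
    destination:
      name: host
      namespace: openshift-mtv
  map:
    - source:
        name: datastore1            # vSphere datastore (illustrative)
      destination:
        storageClass: px-csi-db     # target storage class (illustrative)
---
apiVersion: forklift.konveyor.io/v1beta1
kind: Plan
metadata:
  name: vsphere-migration-plan
  namespace: openshift-mtv
spec:
  warm: false                       # cold migration; set true for warm (requires CBT)
  targetNamespace: vm-migration     # illustrative target project
  provider:
    source:
      name: vsphere-source
      namespace: openshift-mtv
    destination:
      name: host
      namespace: openshift-mtv
  map:
    network:
      name: vsphere-network-map
      namespace: openshift-mtv
    storage:
      name: vsphere-storage-map
      namespace: openshift-mtv
  vms:
    - name: rhe8-vm1
    - name: win2k19-vm2
```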
Step 8. When the VM migration starts, you can see the status of the migration by navigating to the
migration plan's Virtual Machines tab as shown below. Scroll down to the Pipeline section to
view the progress of each step of the migration.
When the VMs are migrated successfully, the migration plan will show as succeeded as shown below:
The migrated VMs will be powered on and shown in the project which you selected in the plan.
The in-guest iSCSI disks of the Windows VM (win2k19-vm2) are discovered and connected
automatically because the VM is mapped to the required iSCSI network interfaces.
Virtual machines with guest-initiated storage connections are not handled by MTV and may
require additional steps or reconfiguration after the VM migration to the OpenShift environment.
In particular, if the OpenShift environment uses a different storage array with different IP
addresses and VLANs than vSphere, you might need to perform additional steps within the guest
VMs to connect to the guest-initiated storage volumes.
● Uninstall VMware Tools from the VMs, since OpenShift MTV installs the QEMU guest agent
on both Linux- and Windows-based virtual machines.
● Additional configuration steps are required for guest-initiated storage disks, since MTV
does not handle such storage devices.
● For Linux VMs, network interface names are renamed to enp1sx, while the original names in the
vSphere environment would be ensxx. You can change the interface name by editing the network
settings and restarting the VM.
About the Authors
Gopu Narasimha Reddy is a Technical Marketing Engineer with the UCS Solutions team at Cisco. He is
currently focused on validating and developing solutions on various Cisco UCS platforms for enterprise
database workloads with different operating environments including Windows, VMware, Linux, and
Kubernetes. Gopu is also involved in publishing database benchmarks on Cisco UCS servers. His areas of
interest include building and validating reference architectures, development of sizing tools in addition to
assisting customers in database deployments.
Vijay Kulari is part of the Solutions team at Pure Storage. He specializes in designing, developing, and
optimizing solutions across storage, converged infrastructure, cloud, and container technologies. His
role includes establishing best practices, streamlining automation, and creating technical content. As an
experienced solution architect, Vijay has a strong background in VMware products, storage solutions,
converged and hyper-converged infrastructure, and container platforms.
Acknowledgements
For their support and contribution to the design, validation, and creation of this Cisco Validated Design,
the authors would like to thank:
● Venkat Pinisetti, Member of Technical Staff, Pure Storage, Inc.
Appendix
Compute
Cisco Intersight: https://www.intersight.com
Network
Cisco Nexus 9300-GX Series Switches: https://www.cisco.com/c/en/us/products/collateral/switches/nexus-9000-series-switches/nexus-9300-gx-series-switches-ds.html
Pure Storage
Pure Storage FlashArray//X: https://www.purestorage.com/products/nvme/flasharray-x.html
Portworx Enterprise
https://docs.portworx.com/
Interoperability Matrix
Cisco UCS Hardware Compatibility Matrix: https://ucshcltool.cloudapps.cisco.com/public/
Pure Storage FlashStack Compatibility Matrix (note: this interoperability list requires a Pure Storage support login):
https://support.purestorage.com/FlashStack/Product_Information/FlashStack_Compatibility_Matrix
Feedback
For comments and suggestions about this guide and related guides, join the discussion on Cisco
Community at https://cs.co/en-cvds.
CVD Program
ALL DESIGNS, SPECIFICATIONS, STATEMENTS, INFORMATION, AND RECOMMENDATIONS
(COLLECTIVELY, "DESIGNS") IN THIS MANUAL ARE PRESENTED "AS IS," WITH ALL FAULTS. CISCO AND
ITS SUPPLIERS DISCLAIM ALL WARRANTIES, INCLUDING, WITHOUT LIMITATION, THE WARRANTY OF
MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT OR ARISING
FROM A COURSE OF DEALING, USAGE, OR TRADE PRACTICE. IN NO EVENT SHALL CISCO OR ITS
SUPPLIERS BE LIABLE FOR ANY INDIRECT, SPECIAL, CONSEQUENTIAL, OR INCIDENTAL DAMAGES,
INCLUDING, WITHOUT LIMITATION, LOST PROFITS OR LOSS OR DAMAGE TO DATA ARISING OUT OF
THE USE OR INABILITY TO USE THE DESIGNS, EVEN IF CISCO OR ITS SUPPLIERS HAVE BEEN ADVISED
OF THE POSSIBILITY OF SUCH DAMAGES.
THE DESIGNS ARE SUBJECT TO CHANGE WITHOUT NOTICE. USERS ARE SOLELY RESPONSIBLE FOR
THEIR APPLICATION OF THE DESIGNS. THE DESIGNS DO NOT CONSTITUTE THE TECHNICAL OR OTHER
PROFESSIONAL ADVICE OF CISCO, ITS SUPPLIERS OR PARTNERS. USERS SHOULD CONSULT THEIR
OWN TECHNICAL ADVISORS BEFORE IMPLEMENTING THE DESIGNS. RESULTS MAY VARY DEPENDING
ON FACTORS NOT TESTED BY CISCO.
CCDE, CCENT, Cisco Eos, Cisco Lumin, Cisco Nexus, Cisco StadiumVision, Cisco TelePresence, Cisco
WebEx, the Cisco logo, DCE, and Welcome to the Human Network are trademarks; Changing the Way We
Work, Live, Play, and Learn and Cisco Store are service marks; and Access Registrar, Aironet, AsyncOS,
Bringing the Meeting To You, Catalyst, CCDA, CCDP, CCIE, CCIP, CCNA, CCNP, CCSP, CCVP, Cisco, the
Cisco Certified Internetwork Expert logo, Cisco IOS, Cisco Press, Cisco Systems, Cisco Systems Capital,
the Cisco Systems logo, Cisco Unified Computing System (Cisco UCS), Cisco UCS B-Series Blade
Servers, Cisco UCS C-Series Rack Servers, Cisco UCS S-Series Storage Servers, Cisco UCS X-Series,
Cisco UCS Manager, Cisco UCS Management Software, Cisco Unified Fabric, Cisco Application Centric
Infrastructure, Cisco Nexus 9000 Series, Cisco Nexus 7000 Series, Cisco Prime Data Center Network
Manager, Cisco NX-OS Software, Cisco MDS Series, Cisco Unity, Collaboration Without Limitation,
EtherFast, EtherSwitch, Event Center, Fast Step, Follow Me Browsing, FormShare, GigaDrive, HomeLink,
Internet Quotient, IOS, iPhone, iQuick Study, LightStream, Linksys, MediaTone, MeetingPlace,
MeetingPlace Chime Sound, MGX, Networkers, Networking Academy, Network Registrar, PCNow, PIX,
PowerPanels, ProConnect, ScriptShare, SenderBase, SMARTnet, Spectrum Expert, StackWise, The
Fastest Way to Increase Your Internet Quotient, TransPath, WebEx, and the WebEx logo are registered
trademarks of Cisco Systems, Inc. and/or its affiliates in the United States and certain other countries.
(LDW_P1)
All other trademarks mentioned in this document or website are the property of their respective owners.
The use of the word partner does not imply a partnership relationship between Cisco and any other
company. (0809R)