Oracle Exadata Database Machine Extending and Multi-Rack Cabling Guide, 25.1
F29251-29
March 2025
Contributors: Doug Archambault, Leo Agranonik, Nilesh Choudhury, Jaime Figueroa, Roger Hansen, Leslie Keller,
Frank Kobylanski, René Kundersma, Yang Liu, Juan Loaiza, Barb Lundhild, Philip Newlan, Dan Norris, Michael Nowak,
Gavin Parish, Hector Pujol, Darryl Presley, Ashish Ray, Richard Scales, Oliver Sharwood, Jia Shi, Kesavan Srinivasan,
Krishnadev Telikicherla, Cliff Thomas, Alex Tsukerman, Kothanda Umamageswaran, Doug Utzig, Zheren Zhang
This software and related documentation are provided under a license agreement containing restrictions on use and
disclosure and are protected by intellectual property laws. Except as expressly permitted in your license agreement or
allowed by law, you may not use, copy, reproduce, translate, broadcast, modify, license, transmit, distribute, exhibit,
perform, publish, or display any part, in any form, or by any means. Reverse engineering, disassembly, or decompilation
of this software, unless required by law for interoperability, is prohibited.
The information contained herein is subject to change without notice and is not warranted to be error-free. If you find
any errors, please report them to us in writing.
If this is software, software documentation, data (as defined in the Federal Acquisition Regulation), or related
documentation that is delivered to the U.S. Government or anyone licensing it on behalf of the U.S. Government, then
the following notice is applicable:
U.S. GOVERNMENT END USERS: Oracle programs (including any operating system, integrated software, any
programs embedded, installed, or activated on delivered hardware, and modifications of such programs) and Oracle
computer documentation or other Oracle data delivered to or accessed by U.S. Government end users are "commercial
computer software," "commercial computer software documentation," or "limited rights data" pursuant to the applicable
Federal Acquisition Regulation and agency-specific supplemental regulations. As such, the use, reproduction,
duplication, release, display, disclosure, modification, preparation of derivative works, and/or adaptation of i) Oracle
programs (including any operating system, integrated software, any programs embedded, installed, or activated on
delivered hardware, and modifications of such programs), ii) Oracle computer documentation and/or iii) other Oracle
data, is subject to the rights and limitations specified in the license contained in the applicable contract. The terms
governing the U.S. Government's use of Oracle cloud services are defined by the applicable contract for such services.
No other rights are granted to the U.S. Government.
This software or hardware is developed for general use in a variety of information management applications. It is not
developed or intended for use in any inherently dangerous applications, including applications that may create a risk of
personal injury. If you use this software or hardware in dangerous applications, then you shall be responsible to take all
appropriate fail-safe, backup, redundancy, and other measures to ensure its safe use. Oracle Corporation and its
affiliates disclaim any liability for any damages caused by use of this software or hardware in dangerous applications.
Oracle®, Java, MySQL, and NetSuite are registered trademarks of Oracle and/or its affiliates. Other names may be
trademarks of their respective owners.
Intel and Intel Inside are trademarks or registered trademarks of Intel Corporation. All SPARC trademarks are used
under license and are trademarks or registered trademarks of SPARC International, Inc. AMD, Epyc, and the AMD logo
are trademarks or registered trademarks of Advanced Micro Devices. UNIX is a registered trademark of The Open
Group.
This software or hardware and documentation may provide access to or information about content, products, and
services from third parties. Oracle Corporation and its affiliates are not responsible for and expressly disclaim all
warranties of any kind with respect to third-party content, products, and services unless otherwise set forth in an
applicable agreement between you and Oracle. Oracle Corporation and its affiliates will not be responsible for any loss,
costs, or damages incurred due to your access to or use of third-party content, products, or services, except as set forth
in an applicable agreement between you and Oracle.
Contents
Preface
Audience vii
Documentation Accessibility vii
Diversity and Inclusion vii
Related Documentation vii
Conventions ix
2.1.2.1 Reviewing and Validating Current Configuration of Eighth Rack Oracle
Exadata Database Machine 2-4
2.1.2.2 Activating Database Server Cores in Oracle Exadata Database Machine
Eighth Rack 2-5
2.1.2.3 Activating Storage Server Cores and Disks in Oracle Exadata Database
Machine Eighth Rack 2-6
2.1.2.4 Creating Additional Grid Disks in Oracle Exadata Database Machine
Eighth Rack 2-7
2.1.2.5 Adding Grid Disks to Oracle ASM Disk Groups in Oracle Exadata
Database Machine Eighth Rack 2-13
2.1.2.6 Validating Expansion of Oracle Exadata Database Machine 2-15
2.2 Extending Elastic Configurations 2-17
2.2.1 Removing the Doors 2-17
2.2.2 Adding New RDMA Network Fabric Switches 2-17
2.2.2.1 Adding a RoCE Network Fabric Switch (Cisco Nexus 9336C-FX2) 2-18
2.2.2.2 Adding an InfiniBand Network Fabric Switch (Sun Datacenter InfiniBand
Switch 36) 2-19
2.2.3 Adding New Servers 2-20
2.2.3.1 Preparing to Install New Servers 2-21
2.2.3.2 Installing the Rack Assembly 2-21
2.2.3.3 Installing the Server 2-22
2.2.4 Cabling Database Servers 2-23
2.2.5 Cabling Storage Servers 2-28
2.2.6 Closing the Rack 2-30
2.3 Extending a Rack by Adding Another Rack 2-30
2.3.1 Overview of Adding Another Rack to an Existing System 2-31
2.3.2 Cabling Two Racks Together 2-31
2.3.2.1 Cabling Two RoCE Network Fabric Racks Together with No Down Time 2-32
2.3.2.2 Cabling Two RoCE Network Fabric Racks Together with Down Time
Allowed 2-152
2.3.2.3 Cabling Two InfiniBand Network Fabric Racks Together 2-158
2.3.3 Cabling Several Racks Together 2-160
2.3.3.1 Cabling Several RoCE Network Fabric Racks Together 2-160
2.3.3.2 Cabling Several InfiniBand Network Fabric Racks Together 2-169
3.6 Adding Grid Disks to Oracle ASM Disk Groups 3-10
3.7 Adding Servers to a Cluster 3-13
3.8 Configuring Cell Alerts for New Oracle Exadata Storage Servers 3-19
3.9 Adding Oracle Database Software to the New Servers 3-20
3.10 Adding Database Instance to the New Servers 3-22
3.11 Returning the Rack to Service 3-23
4 Multi-Rack Cabling Tables for Oracle Exadata X9M and Later Models
4.1 Understanding Multi-Rack Cabling for X9M and Later Model Racks 4-1
4.2 Preparing for Multi-Rack Cabling with X9M and Later Model Racks 4-4
4.3 Two-Rack Cabling for X9M and Later Model Racks 4-5
4.4 Three-Rack Cabling for X9M and Later Model Racks 4-9
4.5 Four-Rack Cabling for X9M and Later Model Racks 4-15
4.6 Five-Rack Cabling for X9M and Later Model Racks 4-22
4.7 Six-Rack Cabling for X9M and Later Model Racks 4-30
4.8 Seven-Rack Cabling for X9M and Later Model Racks 4-39
4.9 Eight-Rack Cabling for X9M and Later Model Racks 4-51
4.10 Nine-Rack Cabling for X9M and Later Model Racks 4-63
4.11 Ten-Rack Cabling for X9M and Later Model Racks 4-77
4.12 Eleven-Rack Cabling for X9M and Later Model Racks 4-92
4.13 Twelve-Rack Cabling for X9M and Later Model Racks 4-109
4.14 Thirteen-Rack Cabling for X9M and Later Model Racks 4-127
4.15 Fourteen-Rack Cabling for X9M and Later Model Racks 4-147
6.1.1 Preparing for Multi-Rack Cabling with InfiniBand Network Fabric 6-4
6.1.2 Cabling Oracle Exadata Quarter Racks and Oracle Exadata Eighth Racks with
InfiniBand Network Fabric 6-7
6.2 Two-Rack Cabling with InfiniBand Network Fabric 6-9
6.3 Three-Rack Cabling with InfiniBand Network Fabric 6-11
6.4 Four-Rack Cabling with InfiniBand Network Fabric 6-13
6.5 Five-Rack Cabling with InfiniBand Network Fabric 6-16
6.6 Six-Rack Cabling with InfiniBand Network Fabric 6-19
6.7 Seven-Rack Cabling with InfiniBand Network Fabric 6-22
6.8 Eight-Rack Cabling with InfiniBand Network Fabric 6-26
Preface
This guide describes how to extend Oracle Exadata Database Machine and cable multiple
racks together. It includes information about the cables, new server installation, and cabling
tables.
• Audience
• Documentation Accessibility
• Diversity and Inclusion
• Related Documentation
• Conventions
Audience
This guide is intended for Oracle Exadata Database Machine customers and those responsible
for data center site planning, configuration, and maintenance of Oracle Exadata.
Documentation Accessibility
For information about Oracle's commitment to accessibility, visit the Oracle Accessibility
Program website at http://www.oracle.com/pls/topic/lookup?ctx=acc&id=docacc.
Related Documentation
The following guides contain additional information for Oracle Exadata:
Oracle Exadata System Guides
• Sun Server X2-8 (formerly Sun Fire X4800 M2) Service Manual at http://
docs.oracle.com/cd/E20815_01/html/E20819/index.html
• Sun Fire X4800 Server Service Manual at http://docs.oracle.com/cd/E19140-01/html/
821-0282/index.html
• Sun Fire X4270 M2 Server Service Manual at http://docs.oracle.com/cd/E19245-01/
E21671/index.html
• Sun Fire X4170 M2 Server Service Manual at http://docs.oracle.com/cd/E19762-01/
E22369-02/index.html
• Sun Fire X4170, X4270, and X4275 Servers Service Manual at http://
docs.oracle.com/cd/E19477-01/820-5830-13/index.html
• Sun Datacenter InfiniBand Switch 36 Firmware Version 2.1 Documentation at http://
docs.oracle.com/cd/E36265_01/index.html
• Sun Datacenter InfiniBand Switch 36 Firmware Version 2.2 Documentation at http://
docs.oracle.com/cd/E76424_01/index.html
• Sun Flash Accelerator F20 PCIe Card User's Guide at http://docs.oracle.com/cd/
E19682-01/E21358/index.html
• Sun Flash Accelerator F40 PCIe Card User's Guide at http://docs.oracle.com/cd/
E29748_01/html/E29741/index.html
• Sun Flash Accelerator F80 PCIe Card User's Guide at http://docs.oracle.com/cd/
E41278_01/html/E41251/index.html
• Oracle Flash Accelerator F160 PCIe Card User Guide at http://docs.oracle.com/cd/
E54943_01/html/E54947/index.html
• Oracle Flash Accelerator F320 PCIe Card User Guide at http://docs.oracle.com/cd/
E65386_01/html/E65387/index.html
• Oracle Flash Accelerator F640 PCIe Card User Guide at https://docs.oracle.com/cd/
E87231_01/html/E87233/index.html
• Sun Storage 6 Gb SAS PCIe RAID HBA Documentation at http://docs.oracle.com/cd/
E19221-01/
• Oracle Storage 12 Gb/s SAS PCIe RAID HBA, Internal Documentation Library at http://
docs.oracle.com/cd/E52363_01/index.html
• Oracle Integrated Lights Out Manager (ILOM) Documentation at http://www.oracle.com/
goto/ilom/docs
• "Cisco Catalyst 4948E and 4948E-F Ethernet Switches Data Sheet" at https://
www.cisco.com/c/en/us/products/collateral/switches/catalyst-4948e-ethernet-
switch/data_sheet_c78-598933.html
• "Cisco Nexus 9300-EX and 9300-FX Platform Switches Data Sheet at https://
www.cisco.com/c/en/us/products/collateral/switches/nexus-9000-series-switches/
datasheet-c78-736651.html"
Conventions
The following text conventions are used in this document:
Convention Meaning
boldface Boldface type indicates graphical user interface
elements associated with an action, or terms
defined in text or the glossary.
italic Italic type indicates book titles, emphasis, or
placeholder variables for which you supply
particular values.
monospace Monospace type indicates commands within a
paragraph, URLs, code in examples, text that
appears on the screen, or text that you enter.
$ prompt The dollar sign ($) prompt indicates a command
run as the oracle user.
# prompt The pound (#) prompt indicates a command that is
run as the root user.
1
Preparing to Extend Oracle Exadata Database
Machine
Before extending any rack hardware, review the safety precautions and cabling information,
and collect information about the current rack in this section.
• About Extending Oracle Exadata
You can extend Oracle Exadata either by adding servers to the current configuration or by
cabling together multiple racks.
• Reviewing the Safety Precautions
Before upgrading Oracle Exadata Database Machines, read Important Safety Information
for Sun Hardware Systems included with the rack.
• Reviewing the Cable Precautions
• Estimating Cable Path Lengths
• Bundling Cables
• Reviewing the Cable Management Arm Guidelines
Review the following cable management arm (CMA) guidelines before routing the cables.
• Obtaining Current Configuration Information
• Preparing the Network Configuration
When adding additional servers to your rack, you will need IP address and the current
network configuration settings.
• Moving Audit and Diagnostic Files
• Reviewing Release and Patch Levels
When adding new servers to a rack, you must match the installed operating system
version and software releases.
• Performing Preliminary Checks
• Preparing to Add Servers
– All racks that are cabled together in a multi-rack configuration must use the same
RDMA Network Fabric. That is, all racks must use RoCE Network Fabric, or all racks
must use InfiniBand Network Fabric.
You cannot have a mixture of racks using RoCE Network Fabric and InfiniBand
Network Fabric. For example, you cannot cable together an X8-2 rack and an X9M-2
rack.
– All racks that are cabled together in a multi-rack configuration have the same database
server hardware architecture. That is, all racks must use 2-socket database servers, or
all racks must use 8-socket database servers.
You cannot have a mixture of racks using 2-socket and 8-socket database servers. For
example, you cannot cable together an X9M-2 rack and an X9M-8 rack.
• Prior to extending a system across multiple racks, you must acquire the appropriate RDMA
Network Fabric switches and transceivers.
• When extending Oracle Exadata Eighth Rack with Oracle Exadata Storage Expansion
Rack, Oracle recommends using separate disk groups for the disks in each rack.
Multiple Oracle Exadata racks can run as separate configurations while sharing the RDMA
Network Fabric. If you are planning to utilize multiple Oracle Exadata racks in this manner, then
note the following:
• All servers on the RDMA Network Fabric must have a unique IP address. When Oracle
Exadata is deployed, the default network is 192.168.10.1. You must modify the IP
addresses before re-configuring the RDMA Network Fabric. Failure to do so causes
duplicate IP addresses.
• After modifying the network, run the appropriate verification tools:
– For X8M and later, with RoCE Network Fabric:
Run the infinicheck command to verify the network. You should supply a file that
contains a list of all the database server host names or RoCE Network Fabric IP
addresses, and another file that lists all of the RoCE Network Fabric IP addresses for
the storage servers. For example:
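For instance, assuming the two input files are named hosts and cells, the check can be run as follows (the sample output that follows is what such a run begins with):
# cd /opt/oracle.SupportTools/ibdiagtools
# ./infinicheck -g hosts -c cells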
INFINICHECK
[Network Connectivity, Configuration and Performance]
Checking for RoCE Policy Routing settings on all DBs and CELLs
Checking for RoCE DSCP ToS mapping on all DBs and CELLs
Checking for RoCE PFC settings and DSCP mapping on all DBs and
CELLs
If user equivalence for password-less SSH is not configured, then you must first run
infinicheck with the -s option. For example:
# cd /opt/oracle.SupportTools/ibdiagtools
# ./verify-topology -t fattree
# ./infinicheck -g hosts -c cells -s
• When Oracle Exadata racks run in separate clusters, do not modify the cellip.ora files.
The cellip.ora file on a database server should only include the IP addresses for the
storage servers used with that database server.
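For reference, each cellip.ora entry names the RDMA Network Fabric IP addresses of one storage server. A minimal sketch, using illustrative addresses for storage servers with two RDMA Network Fabric interfaces each:
cell="192.168.10.5;192.168.10.6"
cell="192.168.10.7;192.168.10.8"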
• Storage servers with different media types may be used in a multi-rack configuration, but
different media types cannot be mixed in the same storage container (Oracle ASM disk
group or Oracle Exadata Exascale storage pool). For example, one storage container
cannot contain a mixture of high capacity (HC) disks and extreme flash (EF) storage.
• Within each Oracle ASM disk group, ensure that all of the grid disks are the same size,
even if the underlying storage servers contain different sized disks. Any unused storage on
larger disks can be used to accommodate additional grid disks, which may be used by
another Oracle ASM disk group.
• When deploying multiple configurations on a multi-rack system, ensure that you use a
unique name for each storage server (cell).
Furthermore, you should use unique names for each storage container (Oracle ASM disk
group or Oracle Exadata Exascale storage pool).
• All equipment receives a Customer Support Identifier (CSI). Any new equipment for the
Oracle Exadata has a new CSI. Contact Oracle Support Services to reconcile the new CSI
with the existing Oracle Exadata CSI. Have the original instance numbers or serial
numbers available, as well as the new numbers when contacting Oracle Support Services.
• For X8M and later, with RoCE Network Fabric:
You can use the RDMA Network Fabric for limited external connectivity. The external
connectivity ports in the RoCE Network Fabric switches can connect to Oracle ZFS
Storage Appliance or Oracle Zero Data Loss Recovery Appliance to provide a backup
solution.
For details about the recommended connectivity options, see the following solution briefs:
– Exadata X8M and ZFS Storage 25Gb Backup Solution - Configuration of Oracle
Exadata X8M with Oracle ZFS Storage Appliance ZS7-2 using dedicated backup
switches
– 100Gb Backup Solution - Oracle Exadata Database Machine X8M with Oracle ZFS
Storage Appliance ZS7-2
• For X8 and earlier, with InfiniBand Network Fabric:
The RDMA Network Fabric can be used for external connectivity. The external connectivity
ports in the Sun Datacenter InfiniBand Switch 36 switches can connect to media servers
for tape backup, data loading, and client and application access. Use the available ports on
the leaf switches for external connectivity. There are 12 ports per rack. The available ports
are 5B, 6A, 6B, 7A, 7B, and 12A in each leaf switch. For high availability connections,
connect one port to one leaf switch and the other port to the second leaf switch. The
validated InfiniBand cable lengths are:
– Up to 5 meters for passive copper 4X QDR QSFP cables
– Up to 100 meters for fiber optic 4X QDR QSFP cables
Related Topics
• Elastic Configurations
Note:
Contact a service representative or Oracle Advanced Customer Support to confirm
that Oracle has qualified your equipment for installation and use in Oracle Exadata
Database Machine. Oracle is not liable for any issues when you install or use non-
qualified equipment.
See Also:
• Oracle Exadata Database Machine Installation and Configuration Guide for safety
guidelines
• Oracle Engineered System Safety and Compliance Guide for safety notices
radius of bends. In this situation, the difference in the required cable lengths can be
quite substantial.
• When calculating the cable path length for a route that runs under the floor, take into
consideration the height of the raised floor.
Note:
Overhead cabling details are not included in this guide. For details on overhead
cabling, contact a certified service engineer.
– AC power cables: 4 x diameter of the cable or 1 inch; 25.4 mm minimum bend radius
– For X8M and later, with RoCE Network Fabric:
* 30 AWG: Single cable diameter of 4.5 +/- 0.2 mm and lengths from 1 to 3 meters;
21 mm single bend minimum bend radius or 45 mm repeated bends.
* 26 AWG: Single cable diameter of 5.8 +0.3 mm/-1.0 mm and lengths from 2.5 to 5
meters; 29 mm single bend minimum bend radius or 58 mm repeated bends.
– For X8 and earlier, with InfiniBand Network Fabric:
* TwinAx: 5 x diameter of the cable or 1.175 inch; 33 mm minimum bend radius.
* Quad Small Form-factor Pluggable (QSFP) cable: 6 x diameter of the cable or 2
inch; 55 mm minimum bend radius.
* Fiber core cable: 10 x diameter of the cable or 1.22 inch; 31.75 mm minimum bend
radius for a 0.125-inch cable.
• Install the cables with the best longevity rate first.
• Current IP addresses defined for all Exadata Storage Servers and database servers using
the following command:
• Information about the configuration of the cells, cell disks, flash logs, and IORM plans
using the following commands:
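A sketch of commands commonly used to capture this information, assuming a dcli group file named cell_group that lists all storage servers:
# dcli -g cell_group -l celladmin "cellcli -e list cell detail"
# dcli -g cell_group -l celladmin "cellcli -e list celldisk detail"
# dcli -g cell_group -l celladmin "cellcli -e list flashlog detail"
# dcli -g cell_group -l celladmin "cellcli -e list iormplan detail"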
• HugePages memory configuration on the database servers using the following command:
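For example, assuming a dcli group file named dbs_group that lists all database servers, the following reports the HugePages settings on each server:
# dcli -g dbs_group -l root "grep Huge /proc/meminfo"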
• Firmware version of each Sun Datacenter InfiniBand Switch 36 switch. Use the
nm2version command on each switch to get its firmware version.
• The following network files from the first database server in the rack:
– /etc/resolv.conf
– /etc/ntp.conf
– /etc/network
– /etc/sysconfig/network-scripts/ifcfg-*
• Any users, user identifiers, groups, and group identifiers created for cluster-managed
services (such as Oracle GoldenGate) that must also be created on the new servers:
– /etc/passwd
– /etc/group
• Output of current cluster status using the following command:
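For example, run the following as the Oracle Grid Infrastructure software owner:
$ crsctl status resource -t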
• Patch information from the Grid Infrastructure and Oracle homes using the following
commands. The commands must be run as Grid Infrastructure home owner, and the
Oracle home owner.
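A sketch of such commands, using GRID_HOME and ORACLE_HOME as placeholders for the actual paths:
$ GRID_HOME/OPatch/opatch lspatches -oh GRID_HOME
$ ORACLE_HOME/OPatch/opatch lspatches -oh ORACLE_HOME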
In the preceding commands, GRID_HOME is the path for the Grid Infrastructure home
directory, and ORACLE_HOME is the path for the Oracle home directory.
Related Topics
• Oracle Autonomous Health Framework User's Guide
• Oracle Exadata Database Machine Exachk (My Oracle Support Doc ID 1070954.1)
Tip:
Check My Oracle Support note 888828.1 for the latest information on minimum releases.
Older servers in a rack may need to be patched to a later release to meet the
minimum required software release. In addition, older database servers might use
Oracle Linux release 5.3. Those servers need to be updated to a newer Oracle Linux
release.
Additional patching considerations include the Oracle Grid Infrastructure and Oracle Database
home releases and updates. If new patches will be applied, then Oracle recommends changing
the existing servers so that the new servers will inherit the releases as part of the extension
procedure. This way, the number of servers being patched is lower. Any patching of the
existing servers should be performed in advance so they are at the desired level when the
extension work is scheduled, thereby reducing the total amount of work required during the
extension.
Related Topics
• Exadata Database Machine and Exadata Storage Server Supported Versions (My Oracle
Support Doc ID 888828.1)
• Updating key software components on database hosts to match those on the cells (My
Oracle Support Doc ID 1284070.1)
Note:
If you are extending Oracle Exadata Database Machine X4-2, Oracle Exadata
Database Machine X3-8 Full Rack, or Oracle Exadata Database Machine X2-2
(with X4170 and X4275 servers) half rack, then order the expansion kit that
includes a Sun Datacenter InfiniBand Switch 36 switch.
See Also:
2
Extending the Hardware
You can extend Oracle Exadata Database Machine by adding database servers and storage
servers within a rack. You can also cable together multiple racks.
All new equipment receives a Customer Support Identifier (CSI). Any new equipment for your
Oracle Exadata Rack has a new CSI. Contact Oracle Support Services to reconcile the new
CSI with the existing Oracle Exadata Rack CSI. Have the original instance numbers or serial
numbers available, as well as the new numbers when contacting Oracle Support Services.
• Extending an Eighth Rack
• Extending Elastic Configurations
Oracle Exadata is available in Elastic Configurations that consist of a number of database
and storage servers up to the capacity of the rack, as defined within Oracle Exadata
Configuration Assistant (OECA).
• Extending a Rack by Adding Another Rack
You can extend your Oracle Exadata Rack by adding another rack and configuring the
racks together.
3. Review the current CPU core count on the database servers using the following command:
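A sketch of a suitable check, assuming a dcli group file named dbs_group that lists all database servers:
# dcli -g dbs_group -l root 'dbmcli -e list dbserver attributes coreCount'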
The following is an example of the expected output from an Oracle Exadata Database
Machine X9M-2 Eighth Rack with all CPU cores enabled:
dm01db01: 32
dm01db02: 32
Contact Oracle Support Services if the number of active database server CPU cores differs
from the expected value.
Related Topics
• Oracle Exadata Database Server Hardware Components
Note:
This procedure applies to:
• Original database server CPU cores that are disabled
• Additional CPU cores that are part of an approved CPU upgrade kit
The following is not required where database server hardware resources are
expanded by adding more database servers.
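Cores are activated by setting the pendingCoreCount attribute through DBMCLI; a sketch of the command, run as root on each database server:
# dbmcli -e ALTER DBSERVER pendingCoreCount = number_of_cores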
In the preceding command, number_of_cores is the total number of cores to activate. The
value includes the existing core count and the additional cores to be activated. The
following command shows how to activate all the cores in Oracle Exadata Database
Machine X5-2 Eighth Rack:
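A sketch of that command, assuming 36 is the total core count for an X5-2 database server:
# dbmcli -e ALTER DBSERVER pendingCoreCount = 36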
For a description of the supported core counts for each server model, see Restrictions for
Capacity-On-Demand on Oracle Exadata Database Machine.
3. Restart each database server.
Note:
If this procedure is done in a rolling fashion with the Oracle Database and Oracle
Grid Infrastructure active, then ensure the following before restarting the
database server:
• All Oracle ASM grid disks are online.
• There are no active Oracle ASM rebalance operations. You can query the
V$ASM_OPERATION view for the status of the rebalance operation.
• Shut down Oracle Database and Oracle Grid Infrastructure in a controlled
manner, failing over services as needed.
4. Verify the following items on the database server after the restart completes and before
proceeding to the next server:
• The Oracle Database and Oracle Grid Infrastructure services are active.
See Using SRVCTL to Verify That Instances are Running in Oracle Real Application
Clusters Administration and Deployment Guide and the crsctl status resource -w
"TARGET = ONLINE" -t command.
• The number of active cores is correct. Use the dbmcli -e list dbserver attributes
coreCount command to verify the number of cores.
See Also:
2.1.2.3 Activating Storage Server Cores and Disks in Oracle Exadata Database
Machine Eighth Rack
The following procedure describes how to activate the storage server cores and disks.
Note:
This procedure applies only to the original storage servers in the following Oracle
Exadata Database Machine Eighth Rack models: X4-2, X5-2, and X6-2 with Extreme
Flash (EF) storage servers.
This procedure does not apply where storage server hardware resources are
expanded by adding more storage servers.
2.1.2.4 Creating Additional Grid Disks in Oracle Exadata Database Machine Eighth
Rack
Additional grid disk creation must follow a specific order to ensure the proper offset.
Note:
This procedure applies only to the original storage servers in the following Oracle
Exadata Database Machine Eighth Rack models: X4-2, X5-2, and X6-2 with Extreme
Flash (EF) storage servers.
This procedure does not apply where storage server hardware resources are
expanded by adding more storage servers.
The order of grid disk creation must follow the same sequence that was used during the initial
grid disk creation process. For a standard deployment using Oracle Exadata Deployment
Assistant (OEDA), the order is DATA, RECO, and DBFS_DG (if present). Create all DATA grid
disks first, followed by the RECO grid disks, and then the DBFS_DG grid disks (if present).
The following procedure describes how to create the grid disks:
Note:
The commands shown in this procedure use the standard deployment grid disk prefix
names of DATA, RECO, and DBFS_DG. The sizes being checked are on cell disk 02.
Cell disk 02 is used because the disk layout for cell disks 00 and 01 are different from
the other cell disks in the server.
1. Check the size of the grid disks using the following commands. Each cell should return the
same size for the grid disks starting with the same grid disk prefix.
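A sketch of the check for the DATA grid disks, assuming a dcli group file named cell_group; repeat with the RECO and DBFS_DG prefixes as applicable:
# dcli -g cell_group -l celladmin "cellcli -e list griddisk attributes name,size where name like \'DATA.*_02_.*\'"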
Table 2-1 Commands to Create Disk Groups When Extending Oracle Exadata Database Machine Eighth Rack
Rack: Extreme Flash Oracle Exadata Database Machine
Commands:
dcli -g cell_group -l celladmin "cellcli -e create griddisk \
DATA_FD_04_\`hostname -s\` celldisk=FD_04_\`hostname -s\`,size=datasize"
Rack: High Capacity Oracle Exadata Database Machine
Commands:
dcli -g cell_group -l celladmin "cellcli -e create griddisk \
DATA_CD_06_\`hostname -s\` celldisk=CD_06_\`hostname -s\`,size=datasize"
2.1.2.5 Adding Grid Disks to Oracle ASM Disk Groups in Oracle Exadata Database
Machine Eighth Rack
The following procedure describes how to add the new grid disks to Oracle ASM disk groups.
Note:
This procedure applies only to the original storage servers in the following Oracle
Exadata Database Machine Eighth Rack models: X4-2, X5-2, and X6-2 with Extreme
Flash (EF) storage servers.
This procedure does not apply where storage server hardware resources are
expanded by adding more storage servers.
The grid disks created in Creating Additional Grid Disks in Oracle Exadata Database Machine
Eighth Rack must be added as Oracle ASM disks to their corresponding, existing Oracle ASM
disk groups.
1. Validate the following:
• No rebalance operation is currently running.
• All Oracle ASM disks are active.
2. Log in to the first database server as the owner who runs the Oracle Grid Infrastructure
software.
3. Set the environment to access the +ASM instance on the server.
4. Log in to the ASM instance as the sysasm user using the following command:
$ sqlplus / as sysasm
6. Disable the appliance.mode attribute for any disk group that shows TRUE using the
following commands:
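A sketch of the check and the disabling statement, using DATA as an illustrative disk group name; run them in the SQL*Plus session connected as SYSASM:
SQL> SELECT dg.name, a.value FROM v$asm_diskgroup dg, v$asm_attribute a
     WHERE dg.group_number = a.group_number AND a.name = 'appliance.mode';
SQL> ALTER DISKGROUP DATA SET ATTRIBUTE 'appliance.mode' = 'FALSE';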
Table 2-2 Commands to Add Disk Groups When Extending Eighth Rack Oracle Exadata Database Machine
9. Re-enable the appliance.mode attribute, if it was disabled in step 6 using the following
commands:
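A sketch of the re-enabling statement, again using DATA as an illustrative disk group name:
SQL> ALTER DISKGROUP DATA SET ATTRIBUTE 'appliance.mode' = 'TRUE';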
Note:
This procedure applies only to the original storage servers in the following Oracle
Exadata Database Machine Eighth Rack models: X4-2, X5-2, and X6-2 with Extreme
Flash (EF) storage servers.
This procedure does not apply where hardware resources are expanded by adding
more servers.
6. Validate the number of Oracle ASM disks using the following command:
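A sketch of a query that counts the Oracle ASM disks in each disk group, run from a SQL*Plus session on an ASM instance:
SQL> SELECT g.name, COUNT(*) FROM v$asm_diskgroup g, v$asm_disk d
     WHERE g.group_number = d.group_number GROUP BY g.name;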
Each High Capacity (HC) storage server (non-Eighth Rack) contains 12 disks.
Note:
It is possible to extend the hardware while the machine is online, and with no
downtime. However, extreme care should be taken. In addition, patch application to
existing switches and servers should be done before extending the hardware.
• Adding an InfiniBand Network Fabric Switch (Sun Datacenter InfiniBand Switch 36)
Note:
The steps in this procedure are specific to Oracle Exadata. They are not the same as
the steps in the Cisco Nexus manual.
1. Unpack the Cisco Nexus switch components from the packing cartons. The following items
should be in the packing cartons:
• Cisco Nexus 9336C-FX2 Switch
• Cable bracket and rack-mount kit
• Cable management bracket and cover
• Two rack rail assemblies
• Assortment of screws and captive nuts
• Cisco Nexus 9336C-FX2 Switch documentation
The service label procedure on top of the switch includes descriptions of the preceding
items.
2. Remove the trough from the rack in RU1. Put the cables aside while installing the RoCE
Network Fabric switch. The trough can be discarded.
3. Install cage nuts in each rack rail in the appropriate holes.
4. Attach the brackets with cutouts to the power supply side of the switch.
5. Attach the C-brackets to the switch on the side of the ports.
6. Slide the switch halfway into the rack from the front. Keep the switch to the left side of the
rack as far as possible while pulling the two power cords through the C-bracket on the right
side.
7. Slide the server in rack location U2 out to the locked service position. This improves
access to the rear of the switch during further assembly.
8. Install the slide rails from the rear of the rack into the C-brackets on the switch, pushing
them up to the rack rail.
9. Attach an assembled cable arm bracket to the slide rail and using a No. 3 Phillips
screwdriver, screw these together into the rack rail:
a. Install the lower screw loosely with the cable arm bracket rotated 90 degrees
downward. This allows better finger access to the screw.
b. Rotate the cable arm bracket to the correct position.
c. Install the upper screw.
d. Tighten both screws.
If available, a long-shaft screwdriver (16-inch / 400 mm) makes installation easier
because the handle remains outside the rack and clear of the cabling.
10. Push the switch completely into the rack from the front, routing the power cords through
the cutout on the rail bracket.
11. Secure the switch to the front rack rail with M6 16mm screws. Tighten the screws using the
No. 3 Phillips screwdriver.
12. Install the lower part of the cable management arm across the back of the switch.
15. Slide the server in rack location U2 back into the rack.
16. Install power cords to the switch power supply slots on the front.
17. Loosen the front screws to install the vented filler panel brackets. Tighten the screws, and
snap on the vented filler panel in front of the switch.
See Also:
• Oracle Exadata Database Machine System Overview to view the rack layout
• Oracle Exadata Database Machine System Overview for information about
networking cables
Note:
The steps in this procedure are specific to Oracle Exadata Database Machine. They
are not the same as the steps in the Sun Datacenter InfiniBand Switch 36 manual.
1. Unpack the Sun Datacenter InfiniBand Switch 36 switch components from the packing
cartons. The following items should be in the packing cartons:
• Sun Datacenter InfiniBand Switch 36 switch
• Cable bracket and rackmount kit
• Cable management bracket and cover
• Two rack rail assemblies
• Assortment of screws and captive nuts
• Sun Datacenter InfiniBand Switch 36 documentation
The service label procedure on top of the switch includes descriptions of the preceding
items.
2. X5 racks only: Remove the trough from the rack in RU1 and put the cables aside while
installing the Sun Datacenter InfiniBand Switch 36 switch. The trough can be discarded.
3. Install cage nuts in each rack rail in the appropriate holes.
4. Attach the brackets with cutouts to the power supply side of the switch.
5. Attach the C-brackets to the switch on the side of the Sun Datacenter InfiniBand Switch 36
ports.
6. Slide the switch halfway into the rack from the front. You need to keep it to the left side of
the rack as far as possible while pulling the two power cords through the C-bracket on the
right side.
7. Slide the server in rack location U2 out to the locked service position. This improves
access to the rear of the switch during further assembly.
8. Install the slide rails from the rear of the rack into the C-brackets on the switch, pushing
them up to the rack rail.
9. Attach an assembled cable arm bracket to the slide rail and using a No. 3 Phillips
screwdriver, screw these together into the rack rail:
a. Install the lower screw loosely with the cable arm bracket rotated 90 degrees
downward. This allows better finger access to the screw.
b. Rotate the cable arm bracket to the correct position.
c. Install the upper screw.
d. Tighten both screws.
If available, a long-shaft screwdriver (16-inch / 400 mm) makes installation easier
because the handle remains outside the rack and clear of the cabling.
10. Push the switch completely into the rack from the front, routing the power cords through
the cutout on the rail bracket.
11. Secure the switch to the front rack rail with M6 16mm screws. Tighten the screws using the
No. 3 Phillips screwdriver.
12. Install the lower part of the cable management arm across the back of the switch.
15. Slide the server in rack location U2 back into the rack.
16. Install power cords to the Sun Datacenter InfiniBand Switch 36 switch power supply slots
on the front.
17. Loosen the front screws to install the vented filler panel brackets. Tighten the screws, and
snap on the vented filler panel in front of the switch.
See Also:
• Oracle Exadata Database Machine System Overview to view the rack layout
• Oracle Exadata Database Machine System Overview for information about
InfiniBand networking cables
You can add individual database servers or storage servers to meet growing resource
requirements using the Elastic Configuration method. See Oracle Exadata Configuration
Assistant (OECA) for details. The upgrade process includes adding new servers and cables.
Additional hardware may be required.
Note:
• Always load equipment into the rack from the bottom up, so that the rack does
not become top-heavy and tip over. Extend the rack anti-tip bar to prevent the
rack from tipping during equipment installation.
• The new servers need to be configured manually.
Figure 2-1 Locking the Slide-Rail Assembly Against the Inside of the Rear Rack
Rail
4. Align the front of the slide-rail assembly against the outside of the front rack rail, and push
until the assembly locks into place and you hear the click.
5. Repeat steps 2 to 4 on the other side of the rack.
WARNING:
1. Read the service label on the top cover of the server before installing a server into the
rack.
2. Push the server into the slide rail assembly:
a. Push the slide rails into the slide rail assemblies as far as possible.
b. Position the server so the rear ends of the mounting brackets are aligned with the slide
rail assemblies mounted in the equipment rack.
Figure 2-2 Aligning the Rear Ends of the Mounting Brackets with the Slide Rail
Assemblies in the Rack
Note:
Oracle recommends that two people push the servers into the rack: one
person to move the server in and out of the rack, and another person to
watch the cables and cable management arm (CMA).
e. Continue pushing until the slide rail locks on the front of the mounting brackets engage
the slide rail assemblies, and you hear the click.
3. Cable the new server as described in Cabling Exadata Storage Servers.
Note:
1. Connect the CAT5e cables, AC power cables, and USB to their respective ports on the
rear of the server. Ensure the flat side of the dongle is flush against the CMA inner rail.
d. Connector B
e. Connector C
f. Connector D
g. Slide-rail latching bracket (used with connector D)
h. Rear slide bar
i. Cable covers
j. Cable covers
3. Attach the CMA to the server.
4. Route the CAT5e and power cables through the wire clip.
5. Bend the CAT5e and power cables to enter the CMA, while adhering to the bend radius
minimums.
6. Secure the CAT5e and power cables under the cable clasps.
7. Route the cables through the CMA, and secure them with hook and loop straps at equal
intervals.
Figure 2-7 Cables Secured with Hook and Loop Straps at Regular Intervals
8. Connect the RDMA Network Fabric or TwinAx cables with the initial bend resting on the
CMA. The TwinAx cables are for client access to the database servers.
Figure 2-8 RDMA Network Fabric or TwinAx Cables Positioned on the CMA
9. Secure the RDMA Network Fabric or TwinAx cables with hook and loop straps at equal
intervals.
Figure 2-9 RDMA Network Fabric or TwinAx Cables Secured with Hook and Loop
Straps at Regular Intervals
11. Rest the cables over the green clasp on the CMA.
12. Attach the red ILOM cables to the database server.
14. Attach the cables from Oracle Database server to the RDMA Network Fabric switches.
16. Connect the red and blue Ethernet cables to the Cisco switch.
17. Verify operation of the slide rails and CMA for each server, as follows:
Note:
Oracle recommends that two people do this step. One person to move the server
in and out of the rack, and another person to observe the cables and CMA.
a. Slowly pull the server out of the rack until the slide rails reach their stops.
b. Inspect the attached cables for any binding or kinks.
c. Verify the CMA extends fully from the slide rails.
18. Push the server back into the rack, as follows:
See Also:
Note:
Figure 2-10 Rear of the Server Showing Power and Network Cables
3. Route the cables through the CMA and secure them with hook and loop straps on both
sides of each bend in the CMA.
Figure 2-11 Cables Routed Through the CMA and Secured with Hook and Loop
Straps
Note:
Oracle recommends that two people do this step: one person to move the server
in and out of the rack, and another person to watch the cables and the CMA.
a. Slowly pull the server out of the rack until the slide rails reach their stops.
b. Inspect the attached cables for any binding or kinks.
c. Verify that the CMA extends fully from the slide rails.
6. Push the server back into the rack:
a. Release the two sets of slide rail stops.
b. Locate the levers on the inside of each slide rail, just behind the back panel of the
server. They are labeled PUSH.
c. Simultaneously push in both levers and slide the server into the rack, until it stops in
approximately 46 cm (18 inches).
d. Verify that the cables and CMA retract without binding.
e. Locate the slide rail release buttons near the front of each mounting bracket.
f. Simultaneously push in both slide rail release buttons and slide the server completely
into the rack, until both slide rails engage.
7. Dress the cables, and then tie off the cables with the straps. Oracle recommends that you
dress the RDMA Network Fabric cables in bundles of eight or fewer.
8. Slide each server out and back fully to ensure that the cables are not binding or catching.
9. Repeat the procedure for all servers.
10. Connect the power cables to the power distribution units (PDUs). Ensure the breaker
switches are in the OFF position before connecting the power cables. Do not plug the
power cables into the facility receptacles now.
See Also:
Multi-Rack Cabling Tables
Oracle Exadata Database Machine System Overview for the cabling tables for your
system
• Cabling Two RoCE Network Fabric Racks Together with No Down Time
If your operational requirements cannot tolerate any scheduled down time, then choose
from the following procedures to extend your existing RoCE Network Fabric rack by adding
another rack.
• Cabling Two RoCE Network Fabric Racks Together with Down Time Allowed
Use this simpler procedure to cable together two racks with RoCE Network Fabric where
some down-time can be tolerated.
• Cabling Two InfiniBand Network Fabric Racks Together
Use this procedure to cable together two racks with InfiniBand Network Fabric.
2.3.2.1 Cabling Two RoCE Network Fabric Racks Together with No Down Time
If your operational requirements cannot tolerate any scheduled down time, then choose from
the following procedures to extend your existing RoCE Network Fabric rack by adding another
rack.
• Extending an X9M or Later Model Rack with No Down Time by Adding Another X9M or
Later Model Rack
• Extending an X8M Rack with No Down Time by Adding an X9M or Later Model Rack
• Extending an X8M Rack with No Down Time by Adding Another X8M Rack
2.3.2.1.1 Extending an X9M or Later Model Rack with No Down Time by Adding Another X9M or
Later Model Rack
WARNING:
Take time to read and understand this procedure before implementation. Pay
careful attention to the instructions that surround the command examples. A
system outage may occur if the procedure is not applied correctly.
Note:
For additional background information, see Understanding Multi-Rack Cabling for
X9M and Later Model Racks.
Use this procedure to extend a typical X9M or later model rack by cabling it together with a
second X9M or later model rack. The primary rack (designated R1) and all of the systems it
supports remain online throughout the procedure. At the beginning of the procedure, the
additional rack (designated R2) is shut down.
The following is an outline of the procedure:
• Preparation (steps 1 and 2)
In this phase, you prepare the racks, switches, and cables. Also, you install and cable the
spine switches in both racks.
• Configuration and Physical Cabling
In this phase, you reconfigure the leaf switches and finalize the cabling to the spine
switches. These tasks are carefully orchestrated to avoid downtime on the primary system,
as follows:
– Partially configure the lower leaf switches (step 3)
In this step, you reconfigure the switch ports on the lower leaf switches. There is no
physical cabling performed in this step.
– Partially configure the upper leaf switches (step 4)
In this step, you reconfigure the switch ports on the upper leaf switches, remove the
inter-switch cables that connect the leaf switches in both racks and connect the cables
between the upper leaf switches and the spine switches.
– Finalize the lower leaf switches (step 5)
In this step, you finalize the switch port configuration on the lower leaf switches. You
also complete the physical cabling by connecting the cables between the lower leaf
switches and the spine switches.
– Finalize the upper leaf switches (step 6)
In this step, you finalize the switch port configuration on the upper leaf switches.
• Validation and Testing (steps 7 and 8)
In this phase, you validate and test the RoCE Network Fabric across both of the
interconnected racks.
After completing the procedure, both racks share the RoCE Network Fabric, and the combined
system is ready for further configuration. For example, you can extend existing disk groups and
Oracle RAC databases to consume resources across both racks.
Note:
• This procedure applies only to typical rack configurations that initially have leaf
switches with the following specifications:
– The inter-switch ports are ports 4 to 7, and ports 30 to 33.
– The storage server ports are ports 8 to 14, and ports 23 to 29.
– The database server ports are ports 15 to 22.
For other rack configurations (for example, X9M-8 systems with three database
servers and 11 storage servers) a different procedure and different RoCE
Network Fabric switch configuration files are required. Contact Oracle for further
guidance.
• The procedure uses the following naming abbreviations and conventions:
– The abbreviation for the existing rack is R1, and the new rack is R2.
– LL identifies a lower leaf switch and UL identifies an upper leaf switch.
– SS identifies a spine switch.
– A specific switch is identified by combining abbreviations. For example, R1LL
identifies the lower leaf switch (LL) on the existing rack (R1).
• Most operations must be performed in multiple locations. For example, step 1.h
instructs you to update the firmware on all the RoCE Network Fabric leaf
switches (R1LL, R1UL, R2LL, and R2UL). Pay attention to the instructions and
keep track of your actions.
Tip:
When a step must be performed on multiple switches, the instruction
contains a list of the applicable switches. For example, (R1LL, R1UL,
R2LL, and R2UL). You can use this list as a checklist to keep track of
your actions.
• Preparing for Multi-Rack Cabling with X9M and Later Model Racks
• Two-Rack Cabling for X9M and Later Model Racks.
Note that all specified cable lengths assume the racks are physically adjacent. If this is
not the case, you may need longer cables.
b. Position the new rack (R2) so that it is physically near the existing rack (R1).
Ensure that the RDMA Network Fabric cables can reach the switches in each rack.
c. Power on all of the servers and network switches in the new rack (R2).
This includes the database servers, storage servers, RoCE Network Fabric leaf
switches, and the Management Network Switch.
d. Connect the new rack (R2) to your existing management network.
Ensure that there are no IP address conflicts across the racks and that you can access
the management interfaces on the RoCE Network Fabric switches.
e. Ensure that you have a backup of the current switch configuration for each RoCE
Network Fabric switch (R1LL, R1UL, R2LL, and R2UL).
See Backing Up Settings on the RoCE Network Fabric Switch in Oracle Exadata
Database Machine Maintenance Guide.
f. Download the required RoCE Network Fabric switch configuration files.
This procedure requires specific RoCE Network Fabric switch configuration files, which
you must download from My Oracle Support document 2704997.1.
WARNING:
You must use different switch configuration files depending on whether your
system uses Exadata Secure RDMA Fabric Isolation. Ensure that you
download the correct archive that matches your system configuration.
For system configurations without Secure Fabric, download online_multi-
rack_14uplinks.zip. For system configurations with Secure Fabric,
download online_SF_enabled_multi-rack_14uplinks.zip.
Download and extract the archive containing the required RoCE Network Fabric switch
configuration files. Place the files on a server with access to the management
interfaces on the RoCE Network Fabric switches.
g. Copy the required RoCE Network Fabric switch configuration files to the leaf switches
on both racks.
You can use the following commands to copy the required configuration files to all of
the RoCE Network Fabric switches on a system without Secure Fabric enabled:
On a system with Secure Fabric enabled, you can use the following commands:
In the above commands, substitute the appropriate IP address or host name where
applicable. For example, in place of R1LL_IP, substitute the management IP address
or host name for the lower leaf switch (LL) on the existing rack (R1).
Note:
The command examples in the rest of this procedure use the configuration
files for a system configuration without Secure Fabric enabled. If required,
adjust the commands to use the Secure Fabric-enabled switch configuration
files.
h. Update the firmware to the latest available release on all of the RoCE Network Fabric
leaf switches (R1LL, R1UL, R2LL, and R2UL).
See Updating RoCE Network Fabric Switch Firmware in Oracle Exadata Database
Machine Maintenance Guide.
i. Examine the RoCE Network Fabric leaf switches (R1LL, R1UL, R2LL, and R2UL) and
confirm the port categories for the cabled ports.
Run the show interface status command on every RoCE Network Fabric leaf
switch:
• Confirm that the storage server ports are ports 8 to 14, and ports 23 to 29.
• Confirm that the database server ports are ports 15 to 22.
For example:
--------------------------------------------------------------------------------
Port      Name      Status     Vlan    Duplex  Speed   Type
--------------------------------------------------------------------------------
mgmt0     --        connected  routed  full    1000    --
--------------------------------------------------------------------------------
Port      Name      Status     Vlan    Duplex  Speed   Type
--------------------------------------------------------------------------------
Eth1/1    --        xcvrAbsen  1       auto    auto    --
Eth1/2    --        xcvrAbsen  1       auto    auto    --
Eth1/3    --        xcvrAbsen  1       auto    auto    --
Eth1/4    ISL1      connected  trunk   full    100G    QSFP-100G-CR4
Eth1/5    ISL2      connected  trunk   full    100G    QSFP-100G-CR4
Eth1/6    ISL3      connected  trunk   full    100G    QSFP-100G-CR4
Eth1/7    ISL4      connected  trunk   full    100G    QSFP-100G-CR4
Eth1/8    celadm14  connected  3888    full    100G    QSFP-100G-CR4
Eth1/9    celadm13  connected  3888    full    100G    QSFP-100G-CR4
Eth1/10   celadm12  connected  3888    full    100G    QSFP-100G-CR4
Eth1/11   celadm11  connected  3888    full    100G    QSFP-100G-CR4
Eth1/12   celadm10  connected  3888    full    100G    QSFP-100G-CR4
Eth1/13   celadm09  connected  3888    full    100G    QSFP-100G-CR4
Eth1/14   celadm08  connected  3888    full    100G    QSFP-100G-CR4
Eth1/15   adm08     connected  3888    full    100G    QSFP-100G-CR4
Eth1/16   adm07     connected  3888    full    100G    QSFP-100G-CR4
Eth1/17   adm06     connected  3888    full    100G    QSFP-100G-CR4
Eth1/18   adm05     connected  3888    full    100G    QSFP-100G-CR4
Eth1/19   adm04     connected  3888    full    100G    QSFP-100G-CR4
Eth1/20   adm03     connected  3888    full    100G    QSFP-100G-CR4
j. For each rack (R1 and R2), confirm the RoCE Network Fabric cabling by running the
verify_roce_cables.py script.
The verify_roce_cables.py script uses two input files; one for database servers and
storage servers (nodes.rackN), and another for switches (switches.rackN). In each
file, every server or switch must be listed on separate lines. Use fully qualified domain
names or IP addresses for each server and switch.
See My Oracle Support document 2587717.1 for download and detailed usage
instructions.
Run the verify_roce_cables.py script against both of the racks:
i. # cd /opt/oracle.SupportTools/ibdiagtools
# ./verify_roce_cables.py -n nodes.rack1 -s switches.rack1
ii. # cd /opt/oracle.SupportTools/ibdiagtools
# ./verify_roce_cables.py -n nodes.rack2 -s switches.rack2
Check that output in the CABLE OK? columns contains the OK status.
# cd /opt/oracle.SupportTools/ibdiagtools
# ./verify_roce_cables.py -n nodes.rack1 -s switches.rack1
SWITCH PORT (EXPECTED PEER)    LOWER LEAF (rack1sw-rocea0)    : CABLE OK?   UPPER LEAF (rack1sw-roceb0)    : CABLE OK?
-----------------------------  ------------------------------ : ---------   ------------------------------ : ---------
Eth1/4 (ISL peer switch)     : rack1sw-rocea0 Ethernet1/4     : OK          rack1sw-roceb0 Ethernet1/4     : OK
Eth1/5 (ISL peer switch)     : rack1sw-rocea0 Ethernet1/5     : OK          rack1sw-roceb0 Ethernet1/5     : OK
Eth1/6 (ISL peer switch)     : rack1sw-rocea0 Ethernet1/6     : OK          rack1sw-roceb0 Ethernet1/6     : OK
Eth1/7 (ISL peer switch)     : rack1sw-rocea0 Ethernet1/7     : OK          rack1sw-roceb0 Ethernet1/7     : OK
Eth1/8 (RU39)                : rack1celadm14 port-1           : OK          rack1celadm14 port-2           : OK
Eth1/9 (RU37)                : rack1celadm13 port-1           : OK          rack1celadm13 port-2           : OK
Eth1/10 (RU35)               : rack1celadm12 port-1           : OK          rack1celadm12 port-2           : OK
Eth1/11 (RU33)               : rack1celadm11 port-1           : OK          rack1celadm11 port-2           : OK
Eth1/12 (RU31)               : rack1celadm10 port-1           : OK          rack1celadm10 port-2           : OK
Eth1/13 (RU29)               : rack1celadm09 port-1           : OK          rack1celadm09 port-2           : OK
Eth1/14 (RU27)               : rack1celadm08 port-1           : OK          rack1celadm08 port-2           : OK
Eth1/15 (RU26)               : rack1adm08 port-1              : OK          rack1adm08 port-2              : OK
Eth1/16 (RU25)               : rack1adm07 port-1              : OK          rack1adm07 port-2              : OK
Eth1/17 (RU24)               : rack1adm06 port-1              : OK          rack1adm06 port-2              : OK
Eth1/18 (RU23)               : rack1adm05 port-1              : OK          rack1adm05 port-2              : OK
Eth1/19 (RU19)               : rack1adm04 port-1              : OK          rack1adm04 port-2              : OK
Eth1/20 (RU18)               : rack1adm03 port-1              : OK          rack1adm03 port-2              : OK
Eth1/21 (RU17)               : rack1adm02 port-1              : OK          rack1adm02 port-2              : OK
Eth1/22 (RU16)               : rack1adm01 port-1              : OK          rack1adm01 port-2              : OK
Eth1/23 (RU14)               : rack1celadm07 port-1           : OK          rack1celadm07 port-2           : OK
Eth1/24 (RU12)               : rack1celadm06 port-1           : OK          rack1celadm06 port-2           : OK
Eth1/25 (RU10)               : rack1celadm05 port-1           : OK          rack1celadm05 port-2           : OK
Eth1/26 (RU08)               : rack1celadm04 port-1           : OK          rack1celadm04 port-2           : OK
Eth1/27 (RU06)               : rack1celadm03 port-1           : OK          rack1celadm03 port-2           : OK
k. For each rack (R1 and R2), verify the RoCE Network Fabric operation by using the
infinicheck command.
• Use infinicheck with the -z option to clear the files that were created during the
last run of the infinicheck command.
• Use infinicheck with the -s option to set up user equivalence for password-less
SSH across the RoCE Network Fabric.
• Finally, verify the RoCE Network Fabric operation by using infinicheck with the -
b option, which is recommended on newly imaged machines where it is acceptable
to suppress the cellip.ora and cellinit.ora configuration checks.
In each command, the hosts input file (hosts.rack1 and hosts.rack2) contains a list
of database server RoCE Network Fabric IP addresses (2 RoCE Network Fabric IP
addresses for each database server), and the cells input file (cells.rack1 and
cells.rack2) contains a list of RoCE Network Fabric IP addresses for the storage
servers (2 RoCE Network Fabric IP addresses for each storage server).
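For example, the input files might contain entries similar to the following, with two
RoCE Network Fabric IP addresses for each server. The addresses shown here are
hypothetical placeholders; use the addresses from your own environment.
# cat hosts.rack1
192.168.10.1
192.168.10.2
192.168.10.3
192.168.10.4
# cat cells.rack1
192.168.10.51
192.168.10.52
192.168.10.53
192.168.10.54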
i. Use the following recommended command sequence on the existing rack (R1):
i. # cd /opt/oracle.SupportTools/ibdiagtools
# ./infinicheck -g hosts.rack1 -c cells.rack1 -z
ii. # cd /opt/oracle.SupportTools/ibdiagtools
# ./infinicheck -g hosts.rack1 -c cells.rack1 -s
iii. # cd /opt/oracle.SupportTools/ibdiagtools
# ./infinicheck -g hosts.rack1 -c cells.rack1 -b
ii. Use the following recommended command sequence on the new rack (R2):
i. # cd /opt/oracle.SupportTools/ibdiagtools
# ./infinicheck -g hosts.rack2 -c cells.rack2 -z
ii. # cd /opt/oracle.SupportTools/ibdiagtools
# ./infinicheck -g hosts.rack2 -c cells.rack2 -s
iii. # cd /opt/oracle.SupportTools/ibdiagtools
# ./infinicheck -g hosts.rack2 -c cells.rack2 -b
The following example shows the expected command results for the final command in
the sequence:
# cd /opt/oracle.SupportTools/ibdiagtools
# ./infinicheck -g hosts.rackN -c cells.rackN -b
INFINICHECK
[Network Connectivity, Configuration and Performance]
Use a switch list file (spines.lst) to apply the golden configuration settings to both
spine switches using one patchmgr command:
# cat spines.lst
R1SS_IP:mspine.201
R2SS_IP:mspine.202
Note:
In the switch list file, R1SS_IP is the management IP address or host name
for the spine switch on the existing rack (R1SS) and R2SS_IP is the
management IP address or host name for the spine switch on the new rack
(R2SS).
d. Connect the RoCE Network Fabric cables to the spine switches (R1SS and R2SS).
WARNING:
At this stage, only connect the cables to the spine switches.
To avoid later complications, ensure that each cable connects to the
correct switch and port.
DO NOT CONNECT ANY OF THE CABLES TO THE LEAF SWITCHES.
Use the cables that you prepared earlier (in step 1.a).
For the required cross-rack cabling information, see Two-Rack Cabling for X9M and
Later Model Racks.
3. Perform the first round of configuration on the lower leaf switches (R1LL and R2LL).
Perform this step on the lower leaf switches (R1LL and R2LL) only.
Note:
During this step, the lower leaf switch ports are shut down. While the R1LL ports
are down, R1UL exclusively supports the RoCE Network Fabric. During this time,
there is no redundancy in the RoCE Network Fabric, and availability cannot be
maintained if R1UL goes down.
a. Shut down the switch ports on the lower leaf switches (R1LL and R2LL).
i. On R1LL:
ii. On R2LL, the switch configuration file name must end with step3_R2_LL.cfg:
Note:
This step can take approximately 5 to 8 minutes on each switch.
c. Start the inter-switch ports on the lower leaf switches (R1LL and R2LL).
i. On R1LL:
R2LL(config)# <Ctrl-Z>
R2LL#
d. Wait for 5 minutes to ensure that the ports you just started are fully operational before
continuing.
e. Verify the status of the inter-switch ports on the lower leaf switches (R1LL and R2LL).
Run the show interface status command on each lower leaf switch:
Examine the output to ensure that the inter-switch ports are connected.
For example:
------------------------------------------------------------------------
--------
Port Name Status Vlan Duplex Speed Type
------------------------------------------------------------------------
--------
mgmt0 -- connected routed full 1000 --
------------------------------------------------------------------------
--------
Port Name Status Vlan Duplex Speed Type
------------------------------------------------------------------------
--------
Eth1/1 -- xcvrAbsen 1 auto auto --
Eth1/2 -- xcvrAbsen 1 auto auto --
Eth1/3 -- xcvrAbsen 1 auto auto --
Eth1/4 ISL1 connected trunk full 100G
QSFP-100G-CR4
Eth1/5 ISL2 connected trunk full 100G
QSFP-100G-CR4
Eth1/6 ISL3 connected trunk full 100G
QSFP-100G-CR4
Eth1/7 ISL4 connected trunk full 100G
QSFP-100G-CR4
Eth1/8 celadm14 disabled 3888 full 100G
QSFP-100G-CR4
Eth1/9 celadm13 disabled 3888 full 100G
QSFP-100G-CR4
Eth1/10 celadm12 disabled 3888 full 100G
QSFP-100G-CR4
Eth1/11 celadm11 disabled 3888 full 100G
QSFP-100G-CR4
Eth1/12 celadm10 disabled 3888 full 100G
QSFP-100G-CR4
Eth1/13 celadm09 disabled 3888 full 100G
QSFP-100G-CR4
Eth1/14 celadm08 disabled 3888 full 100G
QSFP-100G-CR4
f. Start the storage server ports on the lower leaf switches (R1LL and R2LL).
i. On R1LL:
g. Wait for 5 minutes to ensure that the ports you just started are fully operational before
continuing.
h. Verify the status of the storage server ports on the lower leaf switches (R1LL and
R2LL).
Run the show interface status command on each lower leaf switch:
Examine the output to ensure that the storage server ports are connected.
For example:
------------------------------------------------------------------------
--------
Port Name Status Vlan Duplex Speed Type
------------------------------------------------------------------------
--------
mgmt0 -- connected routed full 1000 --
------------------------------------------------------------------------
--------
Port Name Status Vlan Duplex Speed Type
------------------------------------------------------------------------
--------
Eth1/1 -- xcvrAbsen 1 auto auto --
Eth1/2 -- xcvrAbsen 1 auto auto --
Eth1/3 -- xcvrAbsen 1 auto auto --
Eth1/4 ISL1 connected trunk full 100G
QSFP-100G-CR4
Eth1/5 ISL2 connected trunk full 100G
QSFP-100G-CR4
Eth1/6 ISL3 connected trunk full 100G
QSFP-100G-CR4
Eth1/7 ISL4 connected trunk full 100G
QSFP-100G-CR4
Eth1/8 celadm14 connected 3888 full 100G
QSFP-100G-CR4
Eth1/9 celadm13 connected 3888 full 100G
QSFP-100G-CR4
Eth1/10 celadm12 connected 3888 full 100G
QSFP-100G-CR4
Eth1/11 celadm11 connected 3888 full 100G
QSFP-100G-CR4
Eth1/12 celadm10 connected 3888 full 100G
QSFP-100G-CR4
Eth1/13 celadm09 connected 3888 full 100G
QSFP-100G-CR4
Eth1/14 celadm08 connected 3888 full 100G
QSFP-100G-CR4
Eth1/15 adm08 disabled 3888 full 100G
QSFP-100G-CR4
Eth1/16 adm07 disabled 3888 full 100G
QSFP-100G-CR4
Eth1/17 adm06 disabled 3888 full 100G
QSFP-100G-CR4
Eth1/18 adm05 disabled 3888 full 100G
QSFP-100G-CR4
Eth1/19 adm04 disabled 3888 full 100G
QSFP-100G-CR4
Eth1/20 adm03 disabled 3888 full 100G
QSFP-100G-CR4
Eth1/21 adm02 disabled 3888 full 100G
QSFP-100G-CR4
Eth1/22 adm01 disabled 3888 full 100G
QSFP-100G-CR4
Eth1/23 celadm07 connected 3888 full 100G
QSFP-100G-CR4
Eth1/24 celadm06 connected 3888 full 100G
QSFP-100G-CR4
Eth1/25 celadm05 connected 3888 full 100G
QSFP-100G-CR4
Eth1/26 celadm04 connected 3888 full 100G
QSFP-100G-CR4
Eth1/27 celadm03 connected 3888 full 100G
QSFP-100G-CR4
Eth1/28 celadm02 connected 3888 full 100G
QSFP-100G-CR4
Eth1/29 celadm01 connected 3888 full 100G
QSFP-100G-CR4
Eth1/30 ISL5 connected trunk full 100G
QSFP-100G-CR4
Eth1/31 ISL6 connected trunk full 100G
QSFP-100G-CR4
Eth1/32 ISL7 connected trunk full 100G
QSFP-100G-CR4
Eth1/33 ISL8 connected trunk full 100G
QSFP-100G-CR4
Eth1/34 -- xcvrAbsen 1 auto auto --
Eth1/35 -- xcvrAbsen 1 auto auto --
Eth1/36 -- xcvrAbsen 1 auto auto --
Po100 -- connected trunk full 100G --
Lo0 Routing loopback i connected routed auto auto --
Lo1 VTEP loopback inte connected routed auto auto --
Vlan1 -- down routed auto auto --
nve1 -- connected -- auto auto --
i. Start the database server ports on the lower leaf switches (R1LL and R2LL).
i. On R1LL:
j. Wait for 5 minutes to ensure that the ports you just started are fully operational before
continuing.
k. Verify the status of the database server ports on the lower leaf switches (R1LL and
R2LL).
Run the show interface status command on each lower leaf switch:
Examine the output to ensure that the database server ports are connected.
For example:
------------------------------------------------------------------------
--------
Port Name Status Vlan Duplex Speed Type
------------------------------------------------------------------------
--------
mgmt0 -- connected routed full 1000 --
------------------------------------------------------------------------
--------
Port Name Status Vlan Duplex Speed Type
------------------------------------------------------------------------
--------
Eth1/1 -- xcvrAbsen 1 auto auto --
Eth1/2 -- xcvrAbsen 1 auto auto --
Eth1/3 -- xcvrAbsen 1 auto auto --
Eth1/4 ISL1 connected trunk full 100G
QSFP-100G-CR4
Eth1/5 ISL2 connected trunk full 100G
QSFP-100G-CR4
Eth1/6 ISL3 connected trunk full 100G
QSFP-100G-CR4
Eth1/7 ISL4 connected trunk full 100G
QSFP-100G-CR4
Eth1/8 celadm14 connected 3888 full 100G
QSFP-100G-CR4
Eth1/9 celadm13 connected 3888 full 100G
QSFP-100G-CR4
Eth1/10 celadm12 connected 3888 full 100G
QSFP-100G-CR4
Eth1/11 celadm11 connected 3888 full 100G
QSFP-100G-CR4
Eth1/12 celadm10 connected 3888 full 100G
QSFP-100G-CR4
Eth1/13 celadm09 connected 3888 full 100G
QSFP-100G-CR4
Eth1/14 celadm08 connected 3888 full 100G
QSFP-100G-CR4
Eth1/15 adm08 connected 3888 full 100G
QSFP-100G-CR4
Eth1/16 adm07 connected 3888 full 100G
QSFP-100G-CR4
Eth1/17 adm06 connected 3888 full 100G
QSFP-100G-CR4
Eth1/18 adm05 connected 3888 full 100G
QSFP-100G-CR4
Eth1/19 adm04 connected 3888 full 100G
QSFP-100G-CR4
Eth1/20 adm03 connected 3888 full 100G
QSFP-100G-CR4
Eth1/21 adm02 connected 3888 full 100G
QSFP-100G-CR4
Eth1/22 adm01 connected 3888 full 100G
QSFP-100G-CR4
Eth1/23 celadm07 connected 3888 full 100G
QSFP-100G-CR4
Eth1/24 celadm06 connected 3888 full 100G
QSFP-100G-CR4
Eth1/25 celadm05 connected 3888 full 100G
QSFP-100G-CR4
Eth1/26 celadm04 connected 3888 full 100G
QSFP-100G-CR4
Eth1/27 celadm03 connected 3888 full 100G
QSFP-100G-CR4
Eth1/28 celadm02 connected 3888 full 100G
QSFP-100G-CR4
Eth1/29 celadm01 connected 3888 full 100G
QSFP-100G-CR4
Eth1/30 ISL5 connected trunk full 100G
QSFP-100G-CR4
Eth1/31 ISL6 connected trunk full 100G
QSFP-100G-CR4
Eth1/32 ISL7 connected trunk full 100G
QSFP-100G-CR4
Eth1/33 ISL8 connected trunk full 100G
QSFP-100G-CR4
Eth1/34 -- xcvrAbsen 1 auto auto --
Eth1/35 -- xcvrAbsen 1 auto auto --
Eth1/36 -- xcvrAbsen 1 auto auto --
Po100 -- connected trunk full 100G --
Lo0 Routing loopback i connected routed auto auto --
Lo1 VTEP loopback inte connected routed auto auto --
Vlan1 -- down routed auto auto --
nve1 -- connected -- auto auto --
Note:
Before proceeding, ensure that you have completed all of the actions in step 3 on
both lower leaf switches (R1LL and R2LL). If not, go back and perform the missing
actions.
4. Perform the first round of configuration on the upper leaf switches (R1UL and R2UL).
Perform this step on the upper leaf switches (R1UL and R2UL) only.
Note:
At the start of this step, the upper leaf switch ports are shut down. While the
R1UL ports are down, R1LL exclusively supports the RoCE Network Fabric on
the existing rack. During this time, there is no redundancy in the RoCE Network
Fabric, and availability cannot be maintained if R1LL goes down.
a. Shut down the upper leaf switch ports (R1UL and R2UL).
i. On R1UL:
b. On both racks, remove the inter-switch links between the leaf switches (R1LL to R1UL,
and R2LL to R2UL).
On every leaf switch, remove the cables for the inter-switch links:
i. On R1LL, disconnect the inter-switch links from ports 04, 05, 06, 07, 30, 31, 32,
and 33.
ii. On R1UL, disconnect the inter-switch links from ports 04, 05, 06, 07, 30, 31, 32,
and 33.
iii. On R2LL, disconnect the inter-switch links from ports 04, 05, 06, 07, 30, 31, 32,
and 33.
iv. On R2UL, disconnect the inter-switch links from ports 04, 05, 06, 07, 30, 31, 32,
and 33.
c. On both racks, cable the upper leaf switch to both of the spine switches (R1UL and
R2UL to R1SS and R2SS).
Connect the cables from the spine switches that you prepared earlier (in step 2.d).
Cable the switches as described in Two-Rack Cabling for X9M and Later Model Racks:
i. On R1UL, cable ports 01, 02, 03, 04, 05, 06, 07, 30, 31, 32, 33, 34, 35, and 36 to
R1SS and R2SS.
ii. On R2UL, cable ports 01, 02, 03, 04, 05, 06, 07, 30, 31, 32, 33, 34, 35, and 36 to
R1SS and R2SS.
Note:
Ensure that each cable connects to the correct switch and port at both ends.
In addition to physically checking each connection, you can run the show
lldp neighbors command on each network switch and examine the output
to confirm correct connections. You can individually check each cable
connection to catch and correct errors quickly.
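For example, the following check on R1UL lists the LLDP neighbor information. This is
only an illustrative sketch; the exact output columns can vary by switch software
release.
R1UL# show lldp neighbors
In the output, compare the neighbor device name and port reported for each local
interface against the cross-rack cabling tables before continuing.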
ii. On R2UL, the switch configuration file name must end with step4_R2_UL.cfg:
Note:
This step can take approximately 5 to 8 minutes on each switch.
e. Check the status of the RoCE Network Fabric ports on the upper leaf switches (R1UL
and R2UL).
Run the show interface status command on each upper leaf switch:
Examine the output to ensure that all of the cabled ports are disabled.
For example:
------------------------------------------------------------------------
--------
Port Name Status Vlan Duplex Speed
Type
------------------------------------------------------------------------
--------
mgmt0 -- connected routed full 1000 --
------------------------------------------------------------------------
--------
Port Name Status Vlan Duplex Speed
Type
------------------------------------------------------------------------
--------
Eth1/1 RouterPort1 disabled routed full 100G
QSFP-100G-CR4
Eth1/2 RouterPort2 disabled routed full 100G
QSFP-100G-CR4
Eth1/3 RouterPort3 disabled routed full 100G
QSFP-100G-CR4
Eth1/4 RouterPort4 disabled routed full 100G
QSFP-100G-CR4
Eth1/5 RouterPort5 disabled routed full 100G
QSFP-100G-CR4
Eth1/6 RouterPort6 disabled routed full 100G
QSFP-100G-CR4
Eth1/7 RouterPort7 disabled routed full 100G
QSFP-100G-CR4
Eth1/8 celadm14 disabled 3888 full 100G
QSFP-100G-CR4
Eth1/9 celadm13 disabled 3888 full 100G
QSFP-100G-CR4
Eth1/10 celadm12 disabled 3888 full 100G
QSFP-100G-CR4
Eth1/11 celadm11 disabled 3888 full 100G
QSFP-100G-CR4
Eth1/12 celadm10 disabled 3888 full 100G
QSFP-100G-CR4
Eth1/13 celadm09 disabled 3888 full 100G
QSFP-100G-CR4
Eth1/14 celadm08 disabled 3888 full 100G
QSFP-100G-CR4
Eth1/15 adm08 disabled 3888 full 100G
QSFP-100G-CR4
Eth1/16 adm07 disabled 3888 full 100G
QSFP-100G-CR4
Eth1/17 adm06 disabled 3888 full 100G
QSFP-100G-CR4
Eth1/18 adm05 disabled 3888 full 100G
QSFP-100G-CR4
Note:
Before proceeding, ensure that you have completed all of the actions to this
point in step 4 on both upper leaf switches (R1UL and R2UL). If not, go back and
perform the missing actions.
Use a switch list file (ul.lst) to check both upper leaf switches using one patchmgr
command:
# cat ul.lst
R1UL_IP:mleaf_u14.102
R2UL_IP:mleaf_u14.104
On a system with Secure Fabric enabled, use the msfleaf_u14 tag in the switch list
file:
# cat ul.lst
R1UL_IP:msfleaf_u14.102
R2UL_IP:msfleaf_u14.104
The following shows the recommended command and an example of the expected
results:
In the command output, verify that the switch configuration is good for both upper leaf
switches. You can ignore messages about the ports that are down.
5. Finalize the configuration of the lower leaf switches (R1LL and R2LL).
Perform this step on the lower leaf switches (R1LL and R2LL) only.
a. Reconfigure the lower leaf switch ports (R1LL and R2LL).
Run the following command sequence on both of the lower leaf switches (R1LL and
R2LL).
You must use the correct switch configuration file, which you earlier copied to the
switch (in step 1.g). In this step, the configuration file name must end with step5.cfg.
i. On R1LL:
Note:
This step can take approximately 5 to 8 minutes on each switch.
b. On both racks, cable the lower leaf switch to both of the spine switches (R1LL and
R2LL to R1SS and R2SS).
Connect the cables from the spine switches that you prepared earlier (in step 2.d).
Cable the switches as described in Two-Rack Cabling for X9M and Later Model Racks:
i. On R1LL, cable ports 01, 02, 03, 04, 05, 06, 07, 30, 31, 32, 33, 34, 35, and 36 to
R1SS and R2SS.
ii. On R2LL, cable ports 01, 02, 03, 04, 05, 06, 07, 30, 31, 32, 33, 34, 35, and 36 to
R1SS and R2SS.
Note:
Ensure that each cable connects to the correct switch and port at both ends.
In addition to physically checking each connection, you can run the show
lldp neighbors command on each network switch and examine the output
to confirm correct connections. You can individually check each cable
connection to catch and correct errors quickly.
c. On the lower leaf switches, verify that all of the cabled RoCE Network Fabric ports are
connected (R1LL and R2LL).
Run the show interface status command on each lower leaf switch:
Examine the output to ensure that all of the cabled ports are connected.
For example:
------------------------------------------------------------------------
--------
Port Name Status Vlan Duplex Speed
Type
------------------------------------------------------------------------
--------
mgmt0 -- connected routed full 1000 --
------------------------------------------------------------------------
--------
Port Name Status Vlan Duplex Speed
Type
------------------------------------------------------------------------
--------
Eth1/1 RouterPort1 connected routed full 100G
QSFP-100G-CR4
Eth1/2 RouterPort2 connected routed full 100G
QSFP-100G-CR4
Eth1/3 RouterPort3 connected routed full 100G
QSFP-100G-CR4
Eth1/4 RouterPort4 connected routed full 100G
QSFP-100G-CR4
Eth1/5 RouterPort5 connected routed full 100G
QSFP-100G-CR4
Eth1/6 RouterPort6 connected routed full 100G
QSFP-100G-CR4
Eth1/7 RouterPort7 connected routed full 100G
QSFP-100G-CR4
Eth1/8 celadm14 connected 3888 full 100G
QSFP-100G-CR4
Eth1/9 celadm13 connected 3888 full 100G
QSFP-100G-CR4
Eth1/10 celadm12 connected 3888 full 100G
QSFP-100G-CR4
Eth1/11 celadm11 connected 3888 full 100G
QSFP-100G-CR4
Eth1/12 celadm10 connected 3888 full 100G
QSFP-100G-CR4
Eth1/13 celadm09 connected 3888 full 100G
QSFP-100G-CR4
Eth1/14 celadm08 connected 3888 full 100G
QSFP-100G-CR4
Eth1/15 adm08 connected 3888 full 100G
QSFP-100G-CR4
Eth1/16 adm07 connected 3888 full 100G
QSFP-100G-CR4
Eth1/17 adm06 connected 3888 full 100G
QSFP-100G-CR4
Eth1/18 adm05 connected 3888 full 100G
QSFP-100G-CR4
Eth1/19 adm04 connected 3888 full 100G
QSFP-100G-CR4
Eth1/20 adm03 connected 3888 full 100G
QSFP-100G-CR4
Eth1/21 adm02 connected 3888 full 100G
QSFP-100G-CR4
Eth1/22 adm01 connected 3888 full 100G
QSFP-100G-CR4
Eth1/23 celadm07 connected 3888 full 100G
QSFP-100G-CR4
Eth1/24 celadm06 connected 3888 full 100G
QSFP-100G-CR4
Eth1/25 celadm05 connected 3888 full 100G
QSFP-100G-CR4
Eth1/26 celadm04 connected 3888 full 100G
QSFP-100G-CR4
Eth1/27 celadm03 connected 3888 full 100G
QSFP-100G-CR4
Eth1/28 celadm02 connected 3888 full 100G
QSFP-100G-CR4
Eth1/29 celadm01 connected 3888 full 100G
QSFP-100G-CR4
Eth1/30 RouterPort8 connected routed full 100G
QSFP-100G-CR4
Eth1/31 RouterPort9 connected routed full 100G
QSFP-100G-CR4
Eth1/32 RouterPort10 connected routed full 100G
QSFP-100G-CR4
Eth1/33 RouterPort11 connected routed full 100G
QSFP-100G-CR4
Eth1/34 RouterPort12 connected routed full 100G
QSFP-100G-CR4
Eth1/35 RouterPort13 connected routed full 100G
QSFP-100G-CR4
Eth1/36 RouterPort14 connected routed full 100G
QSFP-100G-CR4
Lo0 Routing loopback i connected routed auto auto --
Lo1 VTEP loopback inte connected routed auto auto --
Vlan1 -- down routed auto auto --
nve1 -- connected -- auto auto --
Note:
Before proceeding, ensure that you have completed all of the actions to this
point in step 5 on both lower leaf switches (R1LL and R2LL). If not, go back and
perform the missing actions.
# cat ll.lst
R1LL_IP:mleaf_u14.101
R2LL_IP:mleaf_u14.103
On a system with Secure Fabric enabled, use the msfleaf_u14 tag in the switch list
file:
# cat ll.lst
R1LL_IP:msfleaf_u14.101
R2LL_IP:msfleaf_u14.103
The following shows the recommended command and an example of the expected
results:
In the command output, verify that the switch configuration is good for both lower leaf
switches.
e. Verify that nve is up on the lower leaf switches (R1LL and R2LL).
Run the following command on each lower leaf switch and examine the output:
At this point, you should see one nve peer with State=Up.
For example:
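The following is an illustrative sketch only. It assumes the standard NX-OS show nve
peers command; the peer IP address, uptime, and column layout shown here are
placeholders and may differ on your system.
R1LL# show nve peers
Interface Peer-IP          State LearnType Uptime   Router-Mac
--------- ---------------  ----- --------- -------- -----------
nve1      100.64.0.104     Up    CP        00:04:29 n/a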
f. Verify that BGP is up on the lower leaf switches (R1LL and R2LL).
Run the following command on each lower leaf switch and examine the output:
Look for two entries with Up in the rightmost column that are associated with different
IP addresses.
For example:
6. Finalize the configuration of the upper leaf switches (R1UL and R2UL).
Perform this step on the upper leaf switches (R1UL and R2UL) only.
a. Start the inter-switch ports on the upper leaf switches (R1UL and R2UL).
i. On R1UL:
Copy complete
R1UL(config)# <Ctrl-Z>
R1UL#
b. Wait for 5 minutes to ensure that the ports you just started are fully operational before
continuing.
c. Verify the status of the inter-switch ports on the upper leaf switches (R1UL and R2UL).
Run the show interface status command on each upper leaf switch:
Examine the output to ensure that the inter-switch ports are connected.
For example:
------------------------------------------------------------------------
--------
Port Name Status Vlan Duplex Speed
Type
------------------------------------------------------------------------
--------
mgmt0 -- connected routed full 1000 --
------------------------------------------------------------------------
--------
Port Name Status Vlan Duplex Speed
Type
------------------------------------------------------------------------
--------
Eth1/1 RouterPort1 connected routed full 100G
QSFP-100G-CR4
Eth1/2 RouterPort2 connected routed full 100G
QSFP-100G-CR4
Eth1/3 RouterPort3 connected routed full 100G
QSFP-100G-CR4
Eth1/4 RouterPort4 connected routed full 100G
QSFP-100G-CR4
d. Start the storage server ports on the upper leaf switches (R1UL and R2UL).
i. On R1UL:
e. Wait for 5 minutes to ensure that the ports you just started are fully operational before
continuing.
f. Verify the status of the storage server ports on the upper leaf switches (R1UL and
R2UL).
Run the show interface status command on each upper leaf switch:
Examine the output to ensure that the storage server ports are connected.
For example:
------------------------------------------------------------------------
--------
Port Name Status Vlan Duplex Speed
Type
------------------------------------------------------------------------
--------
mgmt0 -- connected routed full 1000 --
------------------------------------------------------------------------
--------
Port Name Status Vlan Duplex Speed
Type
------------------------------------------------------------------------
--------
Eth1/1 RouterPort1 connected routed full 100G
QSFP-100G-CR4
Eth1/2 RouterPort2 connected routed full 100G
QSFP-100G-CR4
Eth1/3 RouterPort3 connected routed full 100G
QSFP-100G-CR4
Eth1/4 RouterPort4 connected routed full 100G
QSFP-100G-CR4
Eth1/5 RouterPort5 connected routed full 100G
QSFP-100G-CR4
Eth1/6 RouterPort6 connected routed full 100G
QSFP-100G-CR4
Eth1/7 RouterPort7 connected routed full 100G
QSFP-100G-CR4
Eth1/8 celadm14 connected 3888 full 100G
QSFP-100G-CR4
Eth1/9 celadm13 connected 3888 full 100G
QSFP-100G-CR4
Eth1/10 celadm12 connected 3888 full 100G
QSFP-100G-CR4
Eth1/11 celadm11 connected 3888 full 100G
QSFP-100G-CR4
Eth1/12 celadm10 connected 3888 full 100G
QSFP-100G-CR4
Eth1/13 celadm09 connected 3888 full 100G
QSFP-100G-CR4
Eth1/14 celadm08 connected 3888 full 100G
QSFP-100G-CR4
Eth1/15 adm08 disabled 3888 full 100G
QSFP-100G-CR4
Eth1/16 adm07 disabled 3888 full 100G
QSFP-100G-CR4
Eth1/17 adm06 disabled 3888 full 100G
QSFP-100G-CR4
Eth1/18 adm05 disabled 3888 full 100G
QSFP-100G-CR4
Eth1/19 adm04 disabled 3888 full 100G
QSFP-100G-CR4
g. Start the database server ports on the upper leaf switches (R1UL and R2UL).
i. On R1UL:
h. Wait for 5 minutes to ensure that the ports you just started are fully operational before
continuing.
i. Verify the status of the database server ports on the upper leaf switches (R1UL and
R2UL).
Run the show interface status command on each upper leaf switch:
Examine the output to ensure that the database server ports are connected.
For example:
------------------------------------------------------------------------
--------
Port Name Status Vlan Duplex Speed
Type
------------------------------------------------------------------------
--------
mgmt0 -- connected routed full 1000 --
------------------------------------------------------------------------
--------
Port Name Status Vlan Duplex Speed
Type
------------------------------------------------------------------------
--------
Eth1/1 RouterPort1 connected routed full 100G
QSFP-100G-CR4
Eth1/2 RouterPort2 connected routed full 100G
QSFP-100G-CR4
Eth1/3 RouterPort3 connected routed full 100G
QSFP-100G-CR4
Eth1/4 RouterPort4 connected routed full 100G
QSFP-100G-CR4
Eth1/5 RouterPort5 connected routed full 100G
QSFP-100G-CR4
Eth1/6 RouterPort6 connected routed full 100G
QSFP-100G-CR4
Eth1/7 RouterPort7 connected routed full 100G
QSFP-100G-CR4
Eth1/8 celadm14 connected 3888 full 100G
QSFP-100G-CR4
Eth1/9 celadm13 connected 3888 full 100G
QSFP-100G-CR4
Eth1/10 celadm12 connected 3888 full 100G
QSFP-100G-CR4
Eth1/11 celadm11 connected 3888 full 100G
QSFP-100G-CR4
Eth1/12 celadm10 connected 3888 full 100G
QSFP-100G-CR4
Eth1/13 celadm09 connected 3888 full 100G
QSFP-100G-CR4
Eth1/14 celadm08 connected 3888 full 100G
QSFP-100G-CR4
Eth1/15 adm08 connected 3888 full 100G
QSFP-100G-CR4
Eth1/16 adm07 connected 3888 full 100G
QSFP-100G-CR4
Eth1/17 adm06 connected 3888 full 100G
QSFP-100G-CR4
Eth1/18 adm05 connected 3888 full 100G
QSFP-100G-CR4
Eth1/19 adm04 connected 3888 full 100G
QSFP-100G-CR4
Eth1/20 adm03 connected 3888 full 100G
QSFP-100G-CR4
Eth1/21 adm02 connected 3888 full 100G
QSFP-100G-CR4
Eth1/22 adm01 connected 3888 full 100G
QSFP-100G-CR4
Eth1/23 celadm07 connected 3888 full 100G
QSFP-100G-CR4
Eth1/24 celadm06 connected 3888 full 100G
QSFP-100G-CR4
Eth1/25 celadm05 connected 3888 full 100G
QSFP-100G-CR4
Eth1/26 celadm04 connected 3888 full 100G
QSFP-100G-CR4
Eth1/27 celadm03 connected 3888 full 100G
QSFP-100G-CR4
Eth1/28 celadm02 connected 3888 full 100G
QSFP-100G-CR4
Eth1/29 celadm01 connected 3888 full 100G
QSFP-100G-CR4
Eth1/30 RouterPort8 connected routed full 100G
QSFP-100G-CR4
Eth1/31 RouterPort9 connected routed full 100G
QSFP-100G-CR4
Eth1/32 RouterPort10 connected routed full 100G
QSFP-100G-CR4
Eth1/33 RouterPort11 connected routed full 100G
QSFP-100G-CR4
Eth1/34 RouterPort12 connected routed full 100G
QSFP-100G-CR4
Eth1/35 RouterPort13 connected routed full 100G
QSFP-100G-CR4
Eth1/36 RouterPort14 connected routed full 100G
QSFP-100G-CR4
Lo0 Routing loopback i connected routed auto auto --
Lo1 VTEP loopback inte connected routed auto auto --
Vlan1 -- down routed auto auto --
nve1 -- connected -- auto auto --
j. Verify that nve is up on the leaf switches (R1LL, R1UL, R2LL, and R2UL).
Run the following command on each leaf switch and examine the output:
In the output, you should see three nve peers with State=Up.
For example:
k. Verify that BGP is up on the upper leaf switches (R1UL and R2UL).
Run the following command on each upper leaf switch and examine the output:
In the output, look for two entries with Up in the rightmost column that are associated
with different IP addresses.
For example:
7. For each rack (R1 and R2), confirm the multi-rack cabling by running the
verify_roce_cables.py script.
The verify_roce_cables.py script uses two input files: one for database servers and
storage servers (nodes.rackN), and another for switches (switches.rackN). In each file,
list every server or switch on a separate line. Use fully qualified domain names
or IP addresses for each server and switch.
See My Oracle Support document 2587717.1 for download and detailed usage
instructions.
Run the verify_roce_cables.py script against both of the racks:
a. # cd /opt/oracle.SupportTools/ibdiagtools
# ./verify_roce_cables.py -n nodes.rack1 -s switches.rack1
b. # cd /opt/oracle.SupportTools/ibdiagtools
# ./verify_roce_cables.py -n nodes.rack2 -s switches.rack2
Check the output of the verify_roce_cables.py script against the tables in Two-Rack
Cabling for X9M and Later Model Racks. Also, check that output in the CABLE OK? columns
contains the OK status.
The following examples show extracts of the expected command results:
# cd /opt/oracle.SupportTools/ibdiagtools
# ./verify_roce_cables.py -n nodes.rack1 -s switches.rack1
SWITCH PORT (EXPECTED PEER) LOWER LEAF (rack1sw-rocea0) : CABLE OK? UPPER LEAF (rack1sw-roceb0) : CABLE OK?
----------- --------------- --------------------------- : --------- --------------------------- : ---------
...
# cd /opt/oracle.SupportTools/ibdiagtools
# ./verify_roce_cables.py -n nodes.rack2 -s switches.rack2
SWITCH PORT (EXPECTED PEER) LOWER LEAF (rack2sw-rocea0) : CABLE OK? UPPER LEAF (rack2sw-roceb0) : CABLE OK?
----------- --------------- --------------------------- : --------- --------------------------- : ---------
...
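To quickly scan the full output for problems, you can also filter for any data row that
does not report OK in both columns. The following is a hedged sketch; adjust the filter
if your output format differs:
# ./verify_roce_cables.py -n nodes.rack1 -s switches.rack1 | grep 'Eth' | grep -v ': OK.*: OK'
Any row returned by the filter identifies a connection that requires attention.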
8. Verify the RoCE Network Fabric operation across both interconnected racks by using the
infinicheck command.
Use the following recommended command sequence to verify the RoCE Network Fabric
operation across both racks.
In each command, hosts.all contains a list of database server RoCE Network Fabric IP
addresses from both racks (2 RoCE Network Fabric IP addresses for each database
server), and cells.all contains a list of RoCE Network Fabric IP addresses for the
storage servers from both racks (2 RoCE Network Fabric IP addresses for each storage
server).
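If you created the per-rack input files earlier (in step 1.k), one simple way to build
the combined files is to concatenate them. This is a hedged sketch that assumes the
per-rack files are in the current directory:
# cd /opt/oracle.SupportTools/ibdiagtools
# cat hosts.rack1 hosts.rack2 > hosts.all
# cat cells.rack1 cells.rack2 > cells.all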
a. # cd /opt/oracle.SupportTools/ibdiagtools
# ./infinicheck -g hosts.all -c cells.all -z
b. # cd /opt/oracle.SupportTools/ibdiagtools
# ./infinicheck -g hosts.all -c cells.all -s
c. # cd /opt/oracle.SupportTools/ibdiagtools
# ./infinicheck -g hosts.all -c cells.all -b
See step 1.k for more information about each infinicheck command.
The following example shows the expected command results for the final command in the
sequence:
# cd /opt/oracle.SupportTools/ibdiagtools
# ./infinicheck -g hosts.all -c cells.all -b
INFINICHECK
[Network Connectivity, Configuration and Performance]
At this point, both racks share the RoCE Network Fabric, and the combined system is ready for
further configuration.
See Configuring the New Hardware.
Related Topics
• Cabling Two Oracle Exadata Database Machine RoCE Network Fabric Racks With No
Downtime (My Oracle Support Doc ID 2704997.1)
• Verify RoCE Cabling on Oracle Exadata Database Machine X8M-2 and X8M-8 Servers
(My Oracle Support Doc ID 2587717.1)
2.3.2.1.2 Extending an X8M Rack with No Down Time by Adding an X9M or Later Model Rack
WARNING:
Take time to read and understand this procedure before implementation. Pay
careful attention to the instructions that surround the command examples. A
system outage may occur if the procedure is not applied correctly.
Note:
This procedure assumes that the RoCE Network Fabric switches on the existing X8M
rack contain the golden configuration settings from Oracle Exadata System Software
20.1.0 or later. Otherwise, before using this procedure, you must update the Oracle
Exadata System Software and update the golden configuration settings on the RoCE
Network Fabric switches. Downtime is required to update the golden configuration
settings on the RoCE Network Fabric switches.
Note:
For additional background information, see Understanding Multi-Rack Cabling for
X8M Racks and Understanding Multi-Rack Cabling for X9M and Later Model Racks.
Use this procedure to extend a typical X8M rack without downtime by cabling it together with
an X9M or later model rack. The primary rack (designated R1) and all of the systems it
supports remain online throughout the procedure. At the beginning of the procedure, the
additional rack (designated R2) is shut down.
The following is an outline of the procedure:
• Preparation (steps 1 and 2)
In this phase, you prepare the racks, switches, and cables. Also, you install and cable the
spine switches in both racks.
• Configuration and Physical Cabling
In this phase, you reconfigure the leaf switches and finalize the cabling to the spine
switches. These tasks are carefully orchestrated to avoid downtime on the primary system,
as follows:
– Partially configure the lower leaf switches (step 3)
In this step, you reconfigure the switch ports on the lower leaf switches. There is no
physical cabling performed in this step.
– Partially configure the upper leaf switches (step 4)
In this step, you reconfigure the switch ports on the upper leaf switches, remove the
inter-switch cables that connect the leaf switches in both racks, and connect the cables
between the upper leaf switches and the spine switches.
– Finalize the lower leaf switches (step 5)
In this step, you finalize the switch port configuration on the lower leaf switches. You
also complete the physical cabling by connecting the cables between the lower leaf
switches and the spine switches.
– Finalize the upper leaf switches (step 6)
In this step, you finalize the switch port configuration on the upper leaf switches.
• Validation and Testing (steps 7 and 8)
In this phase, you validate and test the RoCE Network Fabric across both of the
interconnected racks.
After completing the procedure, both racks share the RoCE Network Fabric, and the combined
system is ready for further configuration. For example, you can extend existing disk groups and
Oracle RAC databases to consume resources across both racks.
Note:
• This procedure applies only to typical rack configurations that initially have leaf
switches with the following specifications:
– The inter-switch ports are ports 4 to 7, and ports 30 to 33.
– The storage server ports are ports 8 to 14, and ports 23 to 29.
– The database server ports are ports 15 to 22.
For other rack configurations (for example, 8-socket systems with three database
servers and 11 storage servers), a different procedure and different RoCE
Network Fabric switch configuration files are required. Contact Oracle for further
guidance.
• The procedure uses the following naming abbreviations and conventions:
– The abbreviation for the existing X8M rack is R1, and the new X9M or later
model rack is R2.
– LL identifies a lower leaf switch and UL identifies an upper leaf switch.
– SS identifies a spine switch.
– A specific switch is identified by combining abbreviations. For example, R1LL
identifies the lower leaf switch (LL) on the existing rack (R1).
• Most operations must be performed in multiple locations. For example, step 1.h
instructs you to update the firmware on all the RoCE Network Fabric leaf
switches (R1LL, R1UL, R2LL, and R2UL). Pay attention to the instructions and
keep track of your actions.
Tip:
When a step must be performed on multiple switches, the instruction
contains a list of the applicable switches. For example, (R1LL, R1UL,
R2LL, and R2UL). You can use this list as a checklist to keep track of
your actions.
This includes the database servers, storage servers, RoCE Network Fabric leaf
switches, and the Management Network Switch.
c. Prepare the RoCE Network Fabric cables that you will use to interconnect the racks.
Label both ends of every cable.
For the required cross-rack cabling information, see Two-Rack Cabling for a System
Combining an X8M Rack and a Later Model Rack.
d. Connect the new rack (R2) to your existing management network.
Ensure that there are no IP address conflicts across the racks and that you can access
the management interfaces on the RoCE Network Fabric switches.
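For example, you can confirm basic reachability to the switch management interfaces
with a quick check similar to the following hedged sketch, substituting your own
management IP addresses or host names:
# for sw in R2LL_IP R2UL_IP R2SS_IP; do ping -c 2 $sw; done
You can also log in to each switch management interface using SSH to confirm access
before proceeding.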
e. Ensure that you have a backup of the current switch configuration for each RoCE
Network Fabric switch (R1LL, R1UL, R2LL, and R2UL).
See Backing Up Settings on the RoCE Network Fabric Switch in Oracle Exadata
Database Machine Maintenance Guide.
f. Download the required RoCE Network Fabric switch configuration files.
This procedure requires specific RoCE Network Fabric switch configuration files, which
you must download from My Oracle Support document 2704997.1.
WARNING:
You must use different switch configuration files depending on whether your
system uses Exadata Secure RDMA Fabric Isolation. Ensure that you
download the correct archive that matches your system configuration.
For system configurations without Secure Fabric, download online_multi-
rack_8and14uplinks.zip. For system configurations with Secure Fabric,
download online_SF_enabled_multi-rack_8and14uplinks.zip.
Download and extract the archive containing the required RoCE Network Fabric switch
configuration files. Place the files on a server with access to the management
interfaces on the RoCE Network Fabric switches.
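For example, on a system without Secure Fabric, extracting the archive might look like
the following. The target directory is an arbitrary choice for this illustration:
# mkdir /tmp/roce_config
# cd /tmp/roce_config
# unzip /path/to/online_multi-rack_8and14uplinks.zip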
g. Copy the required RoCE Network Fabric switch configuration files to the leaf switches
on both racks.
You can use the following commands to copy the required configuration files to all of
the RoCE Network Fabric switches on a system without Secure Fabric enabled:
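The following is only an illustrative sketch using a placeholder file name and
destination; the actual configuration file names come from the downloaded archive, and
the copy destination may differ on your switches:
# scp configuration_file.cfg admin@R1LL_IP:/
Repeat the copy for each required configuration file and for each leaf switch (R1LL_IP,
R1UL_IP, R2LL_IP, and R2UL_IP), substituting the appropriate management IP address or
host name.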
On a system with Secure Fabric enabled, you can use the following commands:
In the above commands, substitute the appropriate IP address or host name where
applicable. For example, in place of R1LL_IP, substitute the management IP address
or host name for the lower leaf switch (LL) on the existing rack (R1).
Note:
The command examples in the rest of this procedure use the configuration
files for a system configuration without Secure Fabric enabled. If required,
adjust the commands to use the Secure Fabric-enabled switch configuration
files.
h. Update the firmware to the latest available release on all of the RoCE Network Fabric
leaf switches (R1LL, R1UL, R2LL, and R2UL).
See Updating RoCE Network Fabric Switch Firmware in Oracle Exadata Database
Machine Maintenance Guide.
i. Examine the RoCE Network Fabric leaf switches (R1LL, R1UL, R2LL, and R2UL) and
confirm the port categories for the cabled ports.
Run the show interface status command on every RoCE Network Fabric leaf
switch:
------------------------------------------------------------------------
--------
Port Name Status Vlan Duplex Speed Type
------------------------------------------------------------------------
--------
mgmt0 -- connected routed full 1000 --
------------------------------------------------------------------------
--------
Port Name Status Vlan Duplex Speed Type
------------------------------------------------------------------------
--------
Eth1/1 -- xcvrAbsen 1 auto auto --
Eth1/2 -- xcvrAbsen 1 auto auto --
Eth1/3 -- xcvrAbsen 1 auto auto --
Eth1/4 ISL1 connected trunk full 100G
QSFP-100G-CR4
Eth1/5 ISL2 connected trunk full 100G
QSFP-100G-CR4
Eth1/6 ISL3 connected trunk full 100G
QSFP-100G-CR4
Eth1/7 ISL4 connected trunk full 100G
QSFP-100G-CR4
Eth1/8 celadm14 connected 3888 full 100G
QSFP-100G-CR4
Eth1/9 celadm13 connected 3888 full 100G
QSFP-100G-CR4
Eth1/10 celadm12 connected 3888 full 100G
QSFP-100G-CR4
Eth1/11 celadm11 connected 3888 full 100G
QSFP-100G-CR4
Eth1/12 celadm10 connected 3888 full 100G
QSFP-100G-CR4
Eth1/13 celadm09 connected 3888 full 100G
QSFP-100G-CR4
Eth1/14 celadm08 connected 3888 full 100G
QSFP-100G-CR4
Eth1/15 adm08 connected 3888 full 100G
QSFP-100G-CR4
Eth1/16 adm07 connected 3888 full 100G
QSFP-100G-CR4
Eth1/17 adm06 connected 3888 full 100G
QSFP-100G-CR4
Eth1/18 adm05 connected 3888 full 100G
QSFP-100G-CR4
Eth1/19 adm04 connected 3888 full 100G
QSFP-100G-CR4
Eth1/20 adm03 connected 3888 full 100G
QSFP-100G-CR4
Eth1/21 adm02 connected 3888 full 100G
QSFP-100G-CR4
Eth1/22 adm01 connected 3888 full 100G
QSFP-100G-CR4
Eth1/23 celadm07 connected 3888 full 100G
QSFP-100G-CR4
Eth1/24 celadm06 connected 3888 full 100G
QSFP-100G-CR4
Eth1/25 celadm05 connected 3888 full 100G
QSFP-100G-CR4
Eth1/26 celadm04 connected 3888 full 100G
QSFP-100G-CR4
Eth1/27 celadm03 connected 3888 full 100G
QSFP-100G-CR4
Eth1/28 celadm02 connected 3888 full 100G
QSFP-100G-CR4
Eth1/29 celadm01 connected 3888 full 100G
QSFP-100G-CR4
Eth1/30 ISL5 connected trunk full 100G
QSFP-100G-CR4
Eth1/31 ISL6 connected trunk full 100G
QSFP-100G-CR4
Eth1/32 ISL7 connected trunk full 100G
QSFP-100G-CR4
Eth1/33 ISL8 connected trunk full 100G
QSFP-100G-CR4
Eth1/34 -- xcvrAbsen 1 auto auto --
Eth1/35 -- xcvrAbsen 1 auto auto --
Eth1/36 -- xcvrAbsen 1 auto auto --
Po100 -- connected trunk full 100G --
Lo0 Routing loopback i connected routed auto auto --
Lo1 VTEP loopback inte connected routed auto auto --
Vlan1 -- down routed auto auto --
nve1 -- connected -- auto auto --
j. For each rack (R1 and R2), confirm the RoCE Network Fabric cabling by running the
verify_roce_cables.py script.
The verify_roce_cables.py script uses two input files: one for database servers and
storage servers (nodes.rackN), and another for switches (switches.rackN). In each
file, list every server or switch on a separate line. Use fully qualified domain
names or IP addresses for each server and switch.
See My Oracle Support document 2587717.1 for download and detailed usage
instructions.
Run the verify_roce_cables.py script against both of the racks:
i. # cd /opt/oracle.SupportTools/ibdiagtools
# ./verify_roce_cables.py -n nodes.rack1 -s switches.rack1
ii. # cd /opt/oracle.SupportTools/ibdiagtools
# ./verify_roce_cables.py -n nodes.rack2 -s switches.rack2
Check that output in the CABLE OK? columns contains the OK status.
# cd /opt/oracle.SupportTools/ibdiagtools
# ./verify_roce_cables.py -n nodes.rack1 -s switches.rack1
SWITCH PORT (EXPECTED PEER) LOWER LEAF (rack1sw-rocea0) : CABLE OK? UPPER LEAF (rack1sw-roceb0) : CABLE OK?
----------- --------------- -------------------------------- : -------- -------------------------------- : ---------
Eth1/4 (ISL peer switch) : rack1sw-rocea0 Ethernet1/4 : OK rack1sw-roceb0 Ethernet1/4 : OK
Eth1/5 (ISL peer switch) : rack1sw-rocea0 Ethernet1/5 : OK rack1sw-roceb0 Ethernet1/5 : OK
Eth1/6 (ISL peer switch) : rack1sw-rocea0 Ethernet1/6 : OK rack1sw-roceb0 Ethernet1/6 : OK
Eth1/7 (ISL peer switch) : rack1sw-rocea0 Ethernet1/7 : OK rack1sw-roceb0 Ethernet1/7 : OK
Eth1/8 (RU39) : rack1celadm14 port-1 : OK rack1celadm14 port-2 : OK
Eth1/9 (RU37) : rack1celadm13 port-1 : OK rack1celadm13 port-2 : OK
Eth1/10 (RU35) : rack1celadm12 port-1 : OK rack1celadm12 port-2 : OK
Eth1/11 (RU33) : rack1celadm11 port-1 : OK rack1celadm11 port-2 : OK
Eth1/12 (RU31) : rack1celadm10 port-1 : OK rack1celadm10 port-2 : OK
Eth1/13 (RU29) : rack1celadm09 port-1 : OK rack1celadm09 port-2 : OK
Eth1/14 (RU27) : rack1celadm08 port-1 : OK rack1celadm08 port-2 : OK
Eth1/15 (RU26) : rack1adm08 port-1 : OK rack1adm08 port-2 : OK
Eth1/16 (RU25) : rack1adm07 port-1 : OK rack1adm07 port-2 : OK
Eth1/17 (RU24) : rack1adm06 port-1 : OK rack1adm06 port-2 : OK
Eth1/18 (RU23) : rack1adm05 port-1 : OK rack1adm05 port-2 : OK
Eth1/19 (RU19) : rack1adm04 port-1 : OK rack1adm04 port-2 : OK
Eth1/20 (RU18) : rack1adm03 port-1 : OK rack1adm03 port-2 : OK
Eth1/21 (RU17) : rack1adm02 port-1 : OK rack1adm02 port-2 : OK
Eth1/22 (RU16) : rack1adm01 port-1 : OK rack1adm01 port-2 : OK
Eth1/23 (RU14) : rack1celadm07 port-1 : OK rack1celadm07 port-2 : OK
Eth1/24 (RU12) : rack1celadm06 port-1 : OK rack1celadm06 port-2 : OK
Eth1/25 (RU10) : rack1celadm05 port-1 : OK rack1celadm05 port-2 : OK
Eth1/26 (RU08) : rack1celadm04 port-1 : OK rack1celadm04 port-2 : OK
Eth1/27 (RU06) : rack1celadm03 port-1 : OK rack1celadm03 port-2 : OK
k. For each rack (R1 and R2), verify the RoCE Network Fabric operation by using the
infinicheck command.
• Use infinicheck with the -z option to clear the files that were created during the
last run of the infinicheck command.
• Use infinicheck with the -s option to set up user equivalence for password-less
SSH across the RoCE Network Fabric.
• Finally, verify the RoCE Network Fabric operation by using infinicheck with the -
b option, which is recommended on newly imaged machines where it is acceptable
to suppress the cellip.ora and cellinit.ora configuration checks.
In each command, the hosts input file (hosts.rack1 and hosts.rack2) contains a list
of database server RoCE Network Fabric IP addresses (2 RoCE Network Fabric IP
addresses for each database server), and the cells input file (cells.rack1 and
cells.rack2) contains a list of RoCE Network Fabric IP addresses for the storage
servers (2 RoCE Network Fabric IP addresses for each storage server).
i. Use the following recommended command sequence on the existing rack (R1):
i. # cd /opt/oracle.SupportTools/ibdiagtools
# ./infinicheck -g hosts.rack1 -c cells.rack1 -z
ii. # cd /opt/oracle.SupportTools/ibdiagtools
# ./infinicheck -g hosts.rack1 -c cells.rack1 -s
iii. # cd /opt/oracle.SupportTools/ibdiagtools
# ./infinicheck -g hosts.rack1 -c cells.rack1 -b
ii. Use the following recommended command sequence on the new rack (R2):
i. # cd /opt/oracle.SupportTools/ibdiagtools
# ./infinicheck -g hosts.rack2 -c cells.rack2 -z
ii. # cd /opt/oracle.SupportTools/ibdiagtools
# ./infinicheck -g hosts.rack2 -c cells.rack2 -s
iii. # cd /opt/oracle.SupportTools/ibdiagtools
# ./infinicheck -g hosts.rack2 -c cells.rack2 -b
The following example shows the expected command results for the final command in
the sequence:
# cd /opt/oracle.SupportTools/ibdiagtools
# ./infinicheck -g hosts.rackN -c cells.rackN -b
INFINICHECK
[Network Connectivity, Configuration and Performance]
Use a switch list file (spines.lst) to apply the golden configuration settings to both
spine switches using one patchmgr command:
# cat spines.lst
R1SS_IP:mspine.201
R2SS_IP:mspine.202
Note:
In the switch list file, R1SS_IP is the management IP address or host name
for the spine switch on the existing rack (R1SS) and R2SS_IP is the
management IP address or host name for the spine switch on the new rack
(R2SS).
d. Connect the RoCE Network Fabric cables to the spine switches (R1SS and R2SS).
WARNING:
At this stage, only connect the cables to the spine switches.
To avoid later complications, ensure that each cable connects to the
correct switch and port.
DO NOT CONNECT ANY OF THE CABLES TO THE LEAF SWITCHES.
Use the cables that you prepared earlier (in step 1.c).
For the required cross-rack cabling information, see Two-Rack Cabling for a System
Combining an X8M Rack and a Later Model Rack.
3. Perform the first round of configuration on the lower leaf switches (R1LL and R2LL).
Perform this step on the lower leaf switches (R1LL and R2LL) only.
Note:
During this step, the lower leaf switch ports are shut down. While the R1LL ports
are down, R1UL exclusively supports the RoCE Network Fabric. During this time,
there is no redundancy in the RoCE Network Fabric, and availability cannot be
maintained if R1UL goes down.
a. Shut down the switch ports on the lower leaf switches (R1LL and R2LL).
i. On R1LL:
ii. On R2LL:
ii. On R2LL, the switch configuration file name must end with step3_R2_LL.cfg:
Note:
This step can take approximately 5 to 8 minutes on each switch.
c. Start the inter-switch ports on the lower leaf switches (R1LL and R2LL).
i. On R1LL:
ii. On R2LL:
R2LL(config)# <Ctrl-Z>
R2LL#
d. Wait for 5 minutes to ensure that the ports you just started are fully operational before
continuing.
e. Verify the status of the inter-switch ports on the lower leaf switches (R1LL and R2LL).
Run the show interface status command on each lower leaf switch:
Examine the output to ensure that the inter-switch ports are connected.
For example:
------------------------------------------------------------------------
--------
Port Name Status Vlan Duplex Speed Type
------------------------------------------------------------------------
--------
mgmt0 -- connected routed full 1000 --
------------------------------------------------------------------------
--------
Port Name Status Vlan Duplex Speed Type
------------------------------------------------------------------------
--------
Eth1/1 -- xcvrAbsen 1 auto auto --
Eth1/2 -- xcvrAbsen 1 auto auto --
Eth1/3 -- xcvrAbsen 1 auto auto --
Eth1/4 ISL1 connected trunk full 100G
QSFP-100G-CR4
Eth1/5 ISL2 connected trunk full 100G
QSFP-100G-CR4
Eth1/6 ISL3 connected trunk full 100G
QSFP-100G-CR4
Eth1/7 ISL4 connected trunk full 100G
QSFP-100G-CR4
Eth1/8 celadm14 disabled 3888 full 100G
QSFP-100G-CR4
Eth1/9 celadm13 disabled 3888 full 100G
QSFP-100G-CR4
Eth1/10 celadm12 disabled 3888 full 100G
QSFP-100G-CR4
Eth1/11 celadm11 disabled 3888 full 100G
QSFP-100G-CR4
Eth1/12 celadm10 disabled 3888 full 100G
QSFP-100G-CR4
Eth1/13 celadm09 disabled 3888 full 100G
QSFP-100G-CR4
Eth1/14 celadm08 disabled 3888 full 100G
QSFP-100G-CR4
f. Start the storage server ports on the lower leaf switches (R1LL and R2LL).
i. On R1LL:
g. Wait for 5 minutes to ensure that the ports you just started are fully operational before
continuing.
h. Verify the status of the storage server ports on the lower leaf switches (R1LL and
R2LL).
Run the show interface status command on each lower leaf switch:
Examine the output to ensure that the storage server ports are connected.
For example:
------------------------------------------------------------------------
--------
Port Name Status Vlan Duplex Speed Type
------------------------------------------------------------------------
--------
mgmt0 -- connected routed full 1000 --
------------------------------------------------------------------------
--------
Port Name Status Vlan Duplex Speed Type
------------------------------------------------------------------------
--------
Eth1/1 -- xcvrAbsen 1 auto auto --
Eth1/2 -- xcvrAbsen 1 auto auto --
Eth1/3 -- xcvrAbsen 1 auto auto --
Eth1/4 ISL1 connected trunk full 100G
QSFP-100G-CR4
Eth1/5 ISL2 connected trunk full 100G
QSFP-100G-CR4
Eth1/6 ISL3 connected trunk full 100G
QSFP-100G-CR4
Eth1/7 ISL4 connected trunk full 100G
QSFP-100G-CR4
Eth1/8 celadm14 connected 3888 full 100G
QSFP-100G-CR4
Eth1/9 celadm13 connected 3888 full 100G
QSFP-100G-CR4
Eth1/10 celadm12 connected 3888 full 100G
QSFP-100G-CR4
Eth1/11 celadm11 connected 3888 full 100G
QSFP-100G-CR4
Eth1/12 celadm10 connected 3888 full 100G
QSFP-100G-CR4
Eth1/13 celadm09 connected 3888 full 100G
QSFP-100G-CR4
Eth1/14 celadm08 connected 3888 full 100G
QSFP-100G-CR4
Eth1/15 adm08 disabled 3888 full 100G
QSFP-100G-CR4
Eth1/16 adm07 disabled 3888 full 100G
QSFP-100G-CR4
Eth1/17 adm06 disabled 3888 full 100G
QSFP-100G-CR4
Eth1/18 adm05 disabled 3888 full 100G
QSFP-100G-CR4
Eth1/19 adm04 disabled 3888 full 100G
QSFP-100G-CR4
Eth1/20 adm03 disabled 3888 full 100G
QSFP-100G-CR4
Eth1/21 adm02 disabled 3888 full 100G
QSFP-100G-CR4
Eth1/22 adm01 disabled 3888 full 100G
QSFP-100G-CR4
Eth1/23 celadm07 connected 3888 full 100G
QSFP-100G-CR4
Eth1/24 celadm06 connected 3888 full 100G
QSFP-100G-CR4
Eth1/25 celadm05 connected 3888 full 100G
QSFP-100G-CR4
Eth1/26 celadm04 connected 3888 full 100G
QSFP-100G-CR4
Eth1/27 celadm03 connected 3888 full 100G
QSFP-100G-CR4
Eth1/28 celadm02 connected 3888 full 100G
QSFP-100G-CR4
Eth1/29 celadm01 connected 3888 full 100G
QSFP-100G-CR4
Eth1/30 ISL5 connected trunk full 100G
QSFP-100G-CR4
Eth1/31 ISL6 connected trunk full 100G
QSFP-100G-CR4
Eth1/32 ISL7 connected trunk full 100G
QSFP-100G-CR4
Eth1/33 ISL8 connected trunk full 100G
QSFP-100G-CR4
Eth1/34 -- xcvrAbsen 1 auto auto --
Eth1/35 -- xcvrAbsen 1 auto auto --
Eth1/36 -- xcvrAbsen 1 auto auto --
Po100 -- connected trunk full 100G --
Lo0 Routing loopback i connected routed auto auto --
Lo1 VTEP loopback inte connected routed auto auto --
Vlan1 -- down routed auto auto --
nve1 -- connected -- auto auto --
i. Start the database server ports on the lower leaf switches (R1LL and R2LL).
i. On R1LL:
j. Wait for 5 minutes to ensure that the ports you just started are fully operational before
continuing.
k. Verify the status of the database server ports on the lower leaf switches (R1LL and
R2LL).
Run the show interface status command on each lower leaf switch:
Examine the output to ensure that the database server ports are connected.
For example:
------------------------------------------------------------------------
--------
Port Name Status Vlan Duplex Speed Type
------------------------------------------------------------------------
--------
mgmt0 -- connected routed full 1000 --
------------------------------------------------------------------------
--------
Port Name Status Vlan Duplex Speed Type
------------------------------------------------------------------------
--------
Eth1/1 -- xcvrAbsen 1 auto auto --
Eth1/2 -- xcvrAbsen 1 auto auto --
Eth1/3 -- xcvrAbsen 1 auto auto --
Eth1/4 ISL1 connected trunk full 100G
QSFP-100G-CR4
Eth1/5 ISL2 connected trunk full 100G
QSFP-100G-CR4
Eth1/6 ISL3 connected trunk full 100G
QSFP-100G-CR4
Eth1/7 ISL4 connected trunk full 100G
QSFP-100G-CR4
Eth1/8 celadm14 connected 3888 full 100G
QSFP-100G-CR4
Eth1/9 celadm13 connected 3888 full 100G
QSFP-100G-CR4
Eth1/10 celadm12 connected 3888 full 100G
QSFP-100G-CR4
Eth1/11 celadm11 connected 3888 full 100G
QSFP-100G-CR4
Eth1/12 celadm10 connected 3888 full 100G
QSFP-100G-CR4
Eth1/13 celadm09 connected 3888 full 100G
QSFP-100G-CR4
Eth1/14 celadm08 connected 3888 full 100G
QSFP-100G-CR4
Eth1/15 adm08 connected 3888 full 100G
QSFP-100G-CR4
Eth1/16 adm07 connected 3888 full 100G
QSFP-100G-CR4
Eth1/17 adm06 connected 3888 full 100G
QSFP-100G-CR4
Eth1/18 adm05 connected 3888 full 100G
QSFP-100G-CR4
Eth1/19 adm04 connected 3888 full 100G
QSFP-100G-CR4
Eth1/20 adm03 connected 3888 full 100G
QSFP-100G-CR4
Eth1/21 adm02 connected 3888 full 100G
QSFP-100G-CR4
Eth1/22 adm01 connected 3888 full 100G
QSFP-100G-CR4
Eth1/23 celadm07 connected 3888 full 100G
QSFP-100G-CR4
Eth1/24 celadm06 connected 3888 full 100G
QSFP-100G-CR4
Eth1/25 celadm05 connected 3888 full 100G
QSFP-100G-CR4
Eth1/26 celadm04 connected 3888 full 100G
QSFP-100G-CR4
Eth1/27 celadm03 connected 3888 full 100G
QSFP-100G-CR4
Eth1/28 celadm02 connected 3888 full 100G
QSFP-100G-CR4
Eth1/29 celadm01 connected 3888 full 100G
QSFP-100G-CR4
Eth1/30 ISL5 connected trunk full 100G
QSFP-100G-CR4
Eth1/31 ISL6 connected trunk full 100G
QSFP-100G-CR4
Eth1/32 ISL7 connected trunk full 100G
QSFP-100G-CR4
Eth1/33 ISL8 connected trunk full 100G
QSFP-100G-CR4
Eth1/34 -- xcvrAbsen 1 auto auto --
Eth1/35 -- xcvrAbsen 1 auto auto --
Eth1/36 -- xcvrAbsen 1 auto auto --
Po100 -- connected trunk full 100G --
Lo0 Routing loopback i connected routed auto auto --
Lo1 VTEP loopback inte connected routed auto auto --
Vlan1 -- down routed auto auto --
nve1 -- connected -- auto auto --
Note:
Before proceeding, ensure that you have completed all of the actions in step 3 on
both lower leaf switches (R1LL and R2LL). If not, then ensure that you go back
and perform the missing actions.
4. Perform the first round of configuration on the upper leaf switches (R1UL and R2UL).
Perform this step on the upper leaf switches (R1UL and R2UL) only.
Note:
At the start of this step, the upper leaf switch ports are shut down. While the
R1UL ports are down, R1LL exclusively supports the RoCE Network Fabric on
the existing rack. During this time, there is no redundancy in the RoCE Network
Fabric, and availability cannot be maintained if R1LL goes down.
a. Shut down the upper leaf switch ports (R1UL and R2UL).
i. On R1UL:
ii. On R2UL:
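In each case, the required commands are supplied with this procedure; as an illustrative sketch only, the shutdown typically uses an NX-OS interface-range sequence similar to the following (the range 1/4-33 assumes the typical cabled port layout on these leaf switches):

R2UL# configure terminal
R2UL(config)# interface ethernet 1/4-33
R2UL(config-if-range)# shutdown
R2UL(config-if-range)# <Ctrl-Z>
R2UL# copy running-config startup-config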
b. On both racks, remove the inter-switch links between the leaf switches (R1LL to R1UL,
and R2LL to R2UL).
On every leaf switch, remove the cables for the inter-switch links:
i. On R1LL, disconnect the inter-switch links from ports 04, 05, 06, 07, 30, 31, 32,
and 33.
ii. On R1UL, disconnect the inter-switch links from ports 04, 05, 06, 07, 30, 31, 32,
and 33.
iii. On R2LL, disconnect the inter-switch links from ports 04, 05, 06, 07, 30, 31, 32,
and 33.
iv. On R2UL, disconnect the inter-switch links from ports 04, 05, 06, 07, 30, 31, 32,
and 33.
c. On both racks, cable the upper leaf switch to both of the spine switches (R1UL and
R2UL to R1SS and R2SS).
Connect the cables from the spine switches that you prepared earlier (in step 2.d).
Cable the switches as described in Two-Rack Cabling for a System Combining an
X8M Rack and a Later Model Rack:
i. On R1UL, cable ports 04, 05, 06, 07, 30, 31, 32, and 33 to R1SS and R2SS.
ii. On R2UL, cable ports 01, 02, 03, 04, 05, 06, 07, 30, 31, 32, 33, 34, 35, and 36 to
R1SS and R2SS.
Note:
Ensure that each cable connects to the correct switch and port at both ends.
In addition to physically checking each connection, you can run the show
lldp neighbors command on each network switch and examine the output
to confirm correct connections. You can individually check each cable
connection to catch and correct errors quickly.
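For example, to spot-check a single link, you can filter the LLDP neighbor table for one port (the port shown here is only an example) and confirm that the reported peer switch and peer port match the cabling tables:

R1UL# show lldp neighbors | grep "Eth1/4"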
ii. On R2UL, the switch configuration file name must end with step4_R2_UL.cfg:
Note:
This step can take approximately 5 to 8 minutes on each switch.
e. Check the status of the RoCE Network Fabric ports on the upper leaf switches (R1UL
and R2UL).
Run the show interface status command on each upper leaf switch:
Examine the output to ensure that all of the cabled ports are disabled.
The following example shows the expected output on the X9M or later model rack
(R2UL). On the X8M rack (R1UL), ports 01, 02, 03, 34, 35, and 36 are not physically
connected.
--------------------------------------------------------------------------------
Port Name Status Vlan Duplex Speed Type
--------------------------------------------------------------------------------
mgmt0 -- connected routed full 1000 --
--------------------------------------------------------------------------------
Port Name Status Vlan Duplex Speed Type
--------------------------------------------------------------------------------
Eth1/1 RouterPort1 disabled routed full 100G
QSFP-100G-CR4
Eth1/2 RouterPort2 disabled routed full 100G
QSFP-100G-CR4
Eth1/3 RouterPort3 disabled routed full 100G
QSFP-100G-CR4
Eth1/4 RouterPort4 disabled routed full 100G
QSFP-100G-CR4
Eth1/5 RouterPort5 disabled routed full 100G
QSFP-100G-CR4
Eth1/6 RouterPort6 disabled routed full 100G
QSFP-100G-CR4
Eth1/7 RouterPort7 disabled routed full 100G
QSFP-100G-CR4
Eth1/8 celadm14 disabled 3888 full 100G
QSFP-100G-CR4
Eth1/9 celadm13 disabled 3888 full 100G
QSFP-100G-CR4
Eth1/10 celadm12 disabled 3888 full 100G
QSFP-100G-CR4
Eth1/11 celadm11 disabled 3888 full 100G
QSFP-100G-CR4
Eth1/12 celadm10 disabled 3888 full 100G
QSFP-100G-CR4
Eth1/13 celadm09 disabled 3888 full 100G
QSFP-100G-CR4
Eth1/14 celadm08 disabled 3888 full 100G
QSFP-100G-CR4
Eth1/15 adm08 disabled 3888 full 100G
QSFP-100G-CR4
Eth1/16 adm07 disabled 3888 full 100G
QSFP-100G-CR4
Eth1/17 adm06 disabled 3888 full 100G
QSFP-100G-CR4
Note:
Before proceeding, ensure that you have completed all of the actions to this
point in step 4 on both upper leaf switches (R1UL and R2UL). If not, then
ensure that you go back and perform the missing actions.
Use a switch list file (ul.lst) to check both upper leaf switches using one patchmgr
command:
# cat ul.lst
R1UL_IP:mleaf.102
R2UL_IP:mleaf_u14.104
On a system with Secure Fabric enabled, use the msfleaf and msfleaf_u14 tags in
the switch list file:
# cat ul.lst
R1UL_IP:msfleaf.102
R2UL_IP:msfleaf_u14.104
The following shows the recommended command and an example of the expected
results:
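As an illustrative sketch only (run patchmgr from its staging area on a server with management access to the switches; option names are assumptions based on typical patchmgr usage for RoCE Network Fabric switches):

# ./patchmgr --roceswitches ul.lst --verify-config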
In the command output, verify that the switch configuration is good for both upper leaf
switches. You can ignore messages about the ports that are down.
5. Finalize the configuration of the lower leaf switches (R1LL and R2LL).
Perform this step on the lower leaf switches (R1LL and R2LL) only.
a. Reconfigure the lower leaf switch ports (R1LL and R2LL).
Run the following command sequence on both of the lower leaf switches (R1LL and
R2LL).
You must use the correct switch configuration file, which you earlier copied to the
switch (in step 1.g). In this step, the configuration file name must end with step5.cfg.
i. On R1LL:
ii. On R2LL:
Note:
This step can take approximately 5 to 8 minutes on each switch.
b. On both racks, cable the lower leaf switch to both of the spine switches (R1LL and
R2LL to R1SS and R2SS).
Connect the cables from the spine switches that you prepared earlier (in step 2.d).
Cable the switches as described in Two-Rack Cabling for a System Combining an
X8M Rack and a Later Model Rack:
i. On R1LL, cable ports 04, 05, 06, 07, 30, 31, 32, and 33 to R1SS and R2SS.
ii. On R2LL, cable ports 01, 02, 03, 04, 05, 06, 07, 30, 31, 32, 33, 34, 35, and 36 to
R1SS and R2SS.
Note:
Ensure that each cable connects to the correct switch and port at both ends.
In addition to physically checking each connection, you can run the show
lldp neighbors command on each network switch and examine the output
to confirm correct connections. You can individually check each cable
connection to catch and correct errors quickly.
c. On the lower leaf switches, verify that all of the cabled RoCE Network Fabric ports are
connected (R1LL and R2LL).
Run the show interface status command on each lower leaf switch:
Examine the output to ensure that all of the cabled ports are connected.
The following example shows the expected output on the X9M or later model rack
(R2LL). On the X8M rack (R1LL), ports 01, 02, 03, 34, 35, and 36 are not physically
connected.
--------------------------------------------------------------------------------
Port Name Status Vlan Duplex Speed Type
--------------------------------------------------------------------------------
mgmt0 -- connected routed full 1000 --
--------------------------------------------------------------------------------
Port Name Status Vlan Duplex Speed Type
--------------------------------------------------------------------------------
Eth1/1 RouterPort1 connected routed full 100G
QSFP-100G-CR4
Eth1/2 RouterPort2 connected routed full 100G
QSFP-100G-CR4
Eth1/3 RouterPort3 connected routed full 100G
QSFP-100G-CR4
Eth1/4 RouterPort4 connected routed full 100G
QSFP-100G-CR4
Eth1/5 RouterPort5 connected routed full 100G
QSFP-100G-CR4
Eth1/6 RouterPort6 connected routed full 100G
QSFP-100G-CR4
Eth1/7 RouterPort7 connected routed full 100G
QSFP-100G-CR4
Eth1/8 celadm14 connected 3888 full 100G
QSFP-100G-CR4
Eth1/9 celadm13 connected 3888 full 100G
QSFP-100G-CR4
Eth1/10 celadm12 connected 3888 full 100G
QSFP-100G-CR4
Eth1/11 celadm11 connected 3888 full 100G
QSFP-100G-CR4
Eth1/12 celadm10 connected 3888 full 100G
QSFP-100G-CR4
Eth1/13 celadm09 connected 3888 full 100G
QSFP-100G-CR4
Eth1/14 celadm08 connected 3888 full 100G
QSFP-100G-CR4
Eth1/15 adm08 connected 3888 full 100G
QSFP-100G-CR4
Eth1/16 adm07 connected 3888 full 100G
QSFP-100G-CR4
Eth1/17 adm06 connected 3888 full 100G
QSFP-100G-CR4
Eth1/18 adm05 connected 3888 full 100G
QSFP-100G-CR4
Eth1/19 adm04 connected 3888 full 100G
QSFP-100G-CR4
Eth1/20 adm03 connected 3888 full 100G
QSFP-100G-CR4
Eth1/21 adm02 connected 3888 full 100G
QSFP-100G-CR4
Eth1/22 adm01 connected 3888 full 100G
QSFP-100G-CR4
Eth1/23 celadm07 connected 3888 full 100G
QSFP-100G-CR4
Eth1/24 celadm06 connected 3888 full 100G
QSFP-100G-CR4
Eth1/25 celadm05 connected 3888 full 100G
QSFP-100G-CR4
Eth1/26 celadm04 connected 3888 full 100G
QSFP-100G-CR4
Eth1/27 celadm03 connected 3888 full 100G
QSFP-100G-CR4
Eth1/28 celadm02 connected 3888 full 100G
QSFP-100G-CR4
Eth1/29 celadm01 connected 3888 full 100G
QSFP-100G-CR4
Eth1/30 RouterPort8 connected routed full 100G
QSFP-100G-CR4
Eth1/31 RouterPort9 connected routed full 100G
QSFP-100G-CR4
Eth1/32 RouterPort10 connected routed full 100G
QSFP-100G-CR4
Eth1/33 RouterPort11 connected routed full 100G
QSFP-100G-CR4
Eth1/34 RouterPort12 connected routed full 100G
QSFP-100G-CR4
Eth1/35 RouterPort13 connected routed full 100G
QSFP-100G-CR4
Eth1/36 RouterPort14 connected routed full 100G
QSFP-100G-CR4
Lo0 Routing loopback i connected routed auto auto --
Lo1 VTEP loopback inte connected routed auto auto --
Vlan1 -- down routed auto auto --
nve1 -- connected -- auto auto --
Note:
Before proceeding, ensure that you have completed all of the actions to this
point in step 5 on both lower leaf switches (R1LL and R2LL). If not, then
ensure that you go back and perform the missing actions.
# cat ll.lst
R1LL_IP:mleaf.101
R2LL_IP:mleaf_u14.103
On a system with Secure Fabric enabled, use the msfleaf and msfleaf_u14 tags in
the switch list file:
# cat ll.lst
R1LL_IP:msfleaf.101
R2LL_IP:msfleaf_u14.103
The following shows the recommended command and an example of the expected
results:
In the command output, verify that the switch configuration is good for both lower leaf
switches.
e. Verify that nve is up on the lower leaf switches (R1LL and R2LL).
Run the following command on each lower leaf switch and examine the output:
At this point, you should see one nve peer with State=Up.
For example:
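As an illustrative sketch, the peer state can be checked with the NX-OS show nve peers command (the peer IP address shown is a placeholder):

R1LL# show nve peers
Interface Peer-IP          State LearnType Uptime   Router-Mac
--------- ---------------  ----- --------- -------- -----------------
nve1      100.64.1.103     Up    CP        00:04:29 n/a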
f. Verify that BGP is up on the lower leaf switches (R1LL and R2LL).
Run the following command on each lower leaf switch and examine the output:
Look for two entries with Up in the rightmost column that are associated with different
IP addresses.
For example:
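As an illustrative sketch, one way to confirm the adjacencies is to scan the switch log for BGP adjacency-change messages (the neighbor IP addresses shown are placeholders):

R1LL# show logging logfile | grep BGP
... %BGP-5-ADJCHANGE: bgp- [12345] (default) neighbor 100.64.0.201 Up
... %BGP-5-ADJCHANGE: bgp- [12345] (default) neighbor 100.64.0.202 Up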
6. Finalize the configuration of the upper leaf switches (R1UL and R2UL).
Perform this step on the upper leaf switches (R1UL and R2UL) only.
a. Start the inter-switch ports on the upper leaf switches (R1UL and R2UL).
i. On R1UL:
Copy complete
R1UL(config)# <Ctrl-Z>
R1UL#
ii. On R2UL:
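In each case, the required commands are supplied with this procedure; as an illustrative sketch only, starting the inter-switch ports typically uses a sequence similar to the following (ports 4 to 7 and 30 to 33 are the inter-switch ports in a typical configuration):

R2UL# configure terminal
R2UL(config)# interface ethernet 1/4-7, ethernet 1/30-33
R2UL(config-if-range)# no shutdown
R2UL(config-if-range)# <Ctrl-Z>
R2UL# copy running-config startup-config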
b. Wait for 5 minutes to ensure that the ports you just started are fully operational before
continuing.
c. Verify the status of the inter-switch ports on the upper leaf switches (R1UL and R2UL).
Run the show interface status command on each upper leaf switch:
Examine the output to ensure that the inter-switch ports are connected.
The following example shows the expected output on the X9M or later model rack
(R2UL). On the X8M rack (R1UL), ports 01, 02, 03, 34, 35, and 36 are not physically
connected.
--------------------------------------------------------------------------------
Port Name Status Vlan Duplex Speed Type
--------------------------------------------------------------------------------
mgmt0 -- connected routed full 1000 --
--------------------------------------------------------------------------------
Port Name Status Vlan Duplex Speed Type
--------------------------------------------------------------------------------
Eth1/1 RouterPort1 connected routed full 100G
QSFP-100G-CR4
Eth1/2 RouterPort2 connected routed full 100G
QSFP-100G-CR4
Eth1/3 RouterPort3 connected routed full 100G
QSFP-100G-CR4
d. Start the storage server ports on the upper leaf switches (R1UL and R2UL).
i. On R1UL:
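As an illustrative sketch only, starting the storage server ports typically uses a sequence similar to the following (ports 8 to 14 and 23 to 29 are the storage server ports in a typical configuration; repeat on R2UL):

R1UL# configure terminal
R1UL(config)# interface ethernet 1/8-14, ethernet 1/23-29
R1UL(config-if-range)# no shutdown
R1UL(config-if-range)# <Ctrl-Z>
R1UL# copy running-config startup-config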
e. Wait for 5 minutes to ensure that the ports you just started are fully operational before
continuing.
f. Verify the status of the storage server ports on the upper leaf switches (R1UL and
R2UL).
Run the show interface status command on each upper leaf switch:
Examine the output to ensure that the storage server ports are connected.
The following example shows the expected output on the X9M or later model rack
(R2UL). On the X8M rack (R1UL), ports 01, 02, 03, 34, 35, and 36 are not physically
connected.
--------------------------------------------------------------------------------
Port Name Status Vlan Duplex Speed Type
--------------------------------------------------------------------------------
mgmt0 -- connected routed full 1000 --
--------------------------------------------------------------------------------
Port Name Status Vlan Duplex Speed Type
--------------------------------------------------------------------------------
Eth1/1 RouterPort1 connected routed full 100G
QSFP-100G-CR4
Eth1/2 RouterPort2 connected routed full 100G
QSFP-100G-CR4
Eth1/3 RouterPort3 connected routed full 100G
QSFP-100G-CR4
Eth1/4 RouterPort4 connected routed full 100G
QSFP-100G-CR4
Eth1/5 RouterPort5 connected routed full 100G
QSFP-100G-CR4
Eth1/6 RouterPort6 connected routed full 100G
QSFP-100G-CR4
Eth1/7 RouterPort7 connected routed full 100G
QSFP-100G-CR4
Eth1/8 celadm14 connected 3888 full 100G
QSFP-100G-CR4
Eth1/9 celadm13 connected 3888 full 100G
QSFP-100G-CR4
Eth1/10 celadm12 connected 3888 full 100G
QSFP-100G-CR4
Eth1/11 celadm11 connected 3888 full 100G
QSFP-100G-CR4
Eth1/12 celadm10 connected 3888 full 100G
QSFP-100G-CR4
Eth1/13 celadm09 connected 3888 full 100G
QSFP-100G-CR4
Eth1/14 celadm08 connected 3888 full 100G
QSFP-100G-CR4
Eth1/15 adm08 disabled 3888 full 100G
QSFP-100G-CR4
Eth1/16 adm07 disabled 3888 full 100G
QSFP-100G-CR4
Eth1/17 adm06 disabled 3888 full 100G
QSFP-100G-CR4
g. Start the database server ports on the upper leaf switches (R1UL and R2UL).
i. On R1UL:
R1UL(config)# <Ctrl-Z>
R1UL#
h. Wait for 5 minutes to ensure that the ports you just started are fully operational before
continuing.
i. Verify the status of the database server ports on the upper leaf switches (R1UL and
R2UL).
Run the show interface status command on each upper leaf switch:
Examine the output to ensure that the database server ports are connected.
The following example shows the expected output on the X9M or later model rack
(R2UL). On the X8M rack (R1UL), ports 01, 02, 03, 34, 35, and 36 are not physically
connected.
--------------------------------------------------------------------------------
Port Name Status Vlan Duplex Speed Type
--------------------------------------------------------------------------------
mgmt0 -- connected routed full 1000 --
--------------------------------------------------------------------------------
Port Name Status Vlan Duplex Speed Type
--------------------------------------------------------------------------------
Eth1/1 RouterPort1 connected routed full 100G
QSFP-100G-CR4
Eth1/2 RouterPort2 connected routed full 100G
QSFP-100G-CR4
Eth1/3 RouterPort3 connected routed full 100G
QSFP-100G-CR4
j. Verify that nve is up on the leaf switches (R1LL, R1UL, R2LL, and R2UL).
Run the following command on each leaf switch and examine the output:
In the output, you should see three nve peers with State=Up.
For example:
k. Verify that BGP is up on the upper leaf switches (R1UL and R2UL).
Run the following command on each upper leaf switch and examine the output:
In the output, look for two entries with Up in the rightmost column that are associated
with different IP addresses.
For example:
7. For each rack (R1 and R2), confirm the multi-rack cabling by running the
verify_roce_cables.py script.
The verify_roce_cables.py script uses two input files: one for database servers and
storage servers (nodes.rackN), and another for switches (switches.rackN). In each file,
every server or switch must be listed on a separate line. Use fully qualified domain names
or IP addresses for each server and switch.
See My Oracle Support document 2587717.1 for download and detailed usage
instructions.
Run the verify_roce_cables.py script against both of the racks:
a. # cd /opt/oracle.SupportTools/ibdiagtools
# ./verify_roce_cables.py -n nodes.rack1 -s switches.rack1
b. # cd /opt/oracle.SupportTools/ibdiagtools
# ./verify_roce_cables.py -n nodes.rack2 -s switches.rack2
Check the output of the verify_roce_cables.py script against the tables in Two-Rack
Cabling for a System Combining an X8M Rack and a Later Model Rack. Also, check that
the output in the CABLE OK? columns contains the OK status.
The following examples show extracts of the expected command results:
# cd /opt/oracle.SupportTools/ibdiagtools
# ./verify_roce_cables.py -n nodes.rack1 -s switches.rack1
SWITCH PORT (EXPECTED PEER) LOWER LEAF (rack1sw-rocea0) : CABLE OK? UPPER LEAF (rack1sw-roceb0) : CABLE OK?
----------- --------------- --------------------------- : --------- --------------------------- : ---------
...
# cd /opt/oracle.SupportTools/ibdiagtools
# ./verify_roce_cables.py -n nodes.rack2 -s switches.rack2
SWITCH PORT (EXPECTED PEER) LOWER LEAF (rack2sw-rocea0) : CABLE OK? UPPER LEAF (rack2sw-roceb0) : CABLE OK?
----------- --------------- --------------------------- : --------- --------------------------- : ---------
...
8. Verify the RoCE Network Fabric operation across both interconnected racks by using the
infinicheck command.
Use the following recommended command sequence to verify the RoCE Network Fabric
operation across both racks.
In each command, hosts.all contains a list of database server RoCE Network Fabric IP
addresses from both racks (2 RoCE Network Fabric IP addresses for each database
server), and cells.all contains a list of RoCE Network Fabric IP addresses for the
storage servers from both racks (2 RoCE Network Fabric IP addresses for each storage
server).
a. # cd /opt/oracle.SupportTools/ibdiagtools
# ./infinicheck -g hosts.all -c cells.all -z
b. # cd /opt/oracle.SupportTools/ibdiagtools
# ./infinicheck -g hosts.all -c cells.all -s
c. # cd /opt/oracle.SupportTools/ibdiagtools
# ./infinicheck -g hosts.all -c cells.all -b
See step 1.k for more information about each infinicheck command.
The following example shows the expected command results for the final command in the
sequence:
# cd /opt/oracle.SupportTools/ibdiagtools
# ./infinicheck -g hosts.all -c cells.all -b
INFINICHECK
[Network Connectivity, Configuration and Performance]
At this point, both racks share the RoCE Network Fabric, and the combined system is ready for
further configuration.
See Configuring the New Hardware.
• Two-Rack Cabling for a System Combining an X8M Rack and a Later Model Rack
This section provides the cabling details to connect an X8M rack with an X9M or later
model rack, both of which use RoCE Network Fabric.
Related Topics
• Cabling Two Oracle Exadata Database Machine RoCE Network Fabric Racks With No
Downtime (My Oracle Support Doc ID 2704997.1)
• Verify RoCE Cabling on Oracle Exadata Database Machine X8M-2 and X8M-8 Servers
(My Oracle Support Doc ID 2587717.1)
2.3.2.1.2.1 Two-Rack Cabling for a System Combining an X8M Rack and a Later Model Rack
This section provides the cabling details to connect an X8M rack with an X9M or later model
rack, both of which use RoCE Network Fabric.
Note:
• The following conventions are used in the cabling notation for connecting multiple
racks together:
– The abbreviation for the first (X8M) rack is R1, and the second (X9M or later)
rack is R2.
– LL identifies a lower leaf switch and UL identifies an upper leaf switch.
– SS identifies the spine switch, which is located in U1 on all racks.
– A specific switch is identified by combining abbreviations. For example, R1LL
identifies the lower leaf switch (LL) on the first rack (R1).
• The leaf switches are located as follows:
– At rack unit 20 (U20) and 22 (U22) in 2-socket systems (Oracle Exadata
Rack X8M-2 and later models).
– At rack unit 21 (U21) and rack unit 23 (U23) in 8-socket systems (Oracle
Exadata X8M-8 or X9M-8).
• The cable lengths shown in the following lists assume that the racks are adjacent
to each other, the cables are routed through a raised floor, and there are no
obstacles in the routing between the racks. If the racks are not adjacent, or use
overhead cabling trays, then they may require longer cable lengths. Cable
lengths up to 100 meters are supported.
• Only optical cables (with additional transceivers) are supported for lengths
greater than 5 meters.
The following illustration shows the cable connections for the spine switches when cabling a
two-rack hybrid system with an X8M rack and an X9M or later model rack:
(Illustration: spine switch port connections to the upper (UL) and lower (LL) leaf switches in racks R1 and R2.)
The following tables contain details for all of the RoCE Network Fabric cabling connections in a
two-rack hybrid system with an X8M rack and a later model rack.
Table 2-3 Leaf Switch Connections for the X8M Rack (R1)
Table 2-4 Leaf Switch Connections for the X9M or Later Model Rack (R2)
2.3.2.1.3 Extending an X8M Rack with No Down Time by Adding Another X8M Rack
WARNING:
Take time to read and understand this procedure before implementation. Pay
careful attention to the instructions that surround the command examples. A
system outage may occur if the procedure is not applied correctly.
Note:
This procedure assumes that the RoCE Network Fabric switches on the X8M racks
contain the golden configuration settings from Oracle Exadata System Software
20.1.0 or later. Otherwise, before using this procedure, you must update the Oracle
Exadata System Software and update the golden configuration settings on the RoCE
Network Fabric switches. Downtime is required to update the golden configuration
settings on the RoCE Network Fabric switches.
Note:
For additional background information, see Understanding Multi-Rack Cabling for
X8M Racks.
Use this procedure to extend a typical X8M rack without downtime by cabling it together with a
second X8M rack. The primary rack (designated R1) and all of the systems it supports remain
online throughout the procedure. At the beginning of the procedure, the additional rack
(designated R2) is shut down.
The following is an outline of the procedure:
• Preparation (steps 1 and 2)
In this phase, you prepare the racks, switches, and cables. Also, you install and cable the
spine switches in both racks.
• Configuration and Physical Cabling
In this phase, you reconfigure the leaf switches and finalize the cabling to the spine
switches. These tasks are carefully orchestrated to avoid downtime on the primary system,
as follows:
– Partially configure the lower leaf switches (step 3)
In this step, you reconfigure the switch ports on the lower leaf switches. There is no
physical cabling performed in this step.
– Partially configure the upper leaf switches (step 4)
In this step, you reconfigure the switch ports on the upper leaf switches, remove the
inter-switch cables that connect the leaf switches in both racks, and connect the cables
between the upper leaf switches and the spine switches.
– Finalize the lower leaf switches (step 5)
In this step, you finalize the switch port configuration on the lower leaf switches. You
also complete the physical cabling by connecting the cables between the lower leaf
switches and the spine switches.
– Finalize the upper leaf switches (step 6)
In this step, you finalize the switch port configuration on the upper leaf switches.
• Validation and Testing (steps 7 and 8)
In this phase, you validate and test the RoCE Network Fabric across both of the
interconnected racks.
After completing the procedure, both racks share the RoCE Network Fabric, and the combined
system is ready for further configuration. For example, you can extend existing disk groups and
Oracle RAC databases to consume resources across both racks.
Note:
• This procedure applies only to typical rack configurations that initially have leaf
switches with the following specifications:
– The inter-switch ports are ports 4 to 7, and ports 30 to 33.
– The storage server ports are ports 8 to 14, and ports 23 to 29.
– The database server ports are ports 15 to 22.
For other rack configurations (for example, X8M-8 systems with three database
servers and 11 storage servers), a different procedure and different RoCE
Network Fabric switch configuration files are required. Contact Oracle for further
guidance.
• The procedure uses the following naming abbreviations and conventions:
– The abbreviation for the existing rack is R1, and the new rack is R2.
– LL identifies a lower leaf switch and UL identifies an upper leaf switch.
– SS identifies a spine switch.
– A specific switch is identified by combining abbreviations. For example, R1LL
identifies the lower leaf switch (LL) on the existing rack (R1).
• Most operations must be performed in multiple locations. For example, step 1.h
instructs you to update the firmware on all the RoCE Network Fabric leaf
switches (R1LL, R1UL, R2LL, and R2UL). Pay attention to the instructions and
keep track of your actions.
Tip:
When a step must be performed on multiple switches, the instruction
contains a list of the applicable switches. For example, (R1LL, R1UL,
R2LL, and R2UL). You can use this list as a checklist to keep track of
your actions.
This includes the database servers, storage servers, RoCE Network Fabric leaf
switches, and the Management Network Switch.
c. Prepare the RoCE Network Fabric cables that you will use to interconnect the racks.
Label both ends of every cable.
For the required cross-rack cabling information, see Two-Rack Cabling for X8M Racks.
d. Connect the new rack (R2) to your existing management network.
Ensure that there are no IP address conflicts across the racks and that you can access
the management interfaces on the RoCE Network Fabric switches.
e. Ensure that you have a backup of the current switch configuration for each RoCE
Network Fabric switch (R1LL, R1UL, R2LL, and R2UL).
See Backing Up Settings on the RoCE Network Fabric Switch in Oracle Exadata
Database Machine Maintenance Guide.
f. Download the required RoCE Network Fabric switch configuration files.
This procedure requires specific RoCE Network Fabric switch configuration files, which
you must download from My Oracle Support document 2704997.1.
WARNING:
You must use different switch configuration files depending on whether your
system uses Exadata Secure RDMA Fabric Isolation. Ensure that you
download the correct archive that matches your system configuration.
For system configurations without Secure Fabric, download online_multi-
rack.zip. For system configurations with Secure Fabric, download
online_SF_enabled_multi-rack.zip.
Download and extract the archive containing the required RoCE Network Fabric switch
configuration files. Place the files on a server with access to the management
interfaces on the RoCE Network Fabric switches.
g. Copy the required RoCE Network Fabric switch configuration files to the leaf switches
on both racks.
You can use the following commands to copy the required configuration files to all of
the RoCE Network Fabric switches on a system without Secure Fabric enabled:
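For example, the files can be copied with scp using the switch admin account (the source directory shown below is a placeholder; copy the files that you extracted in step 1.f):

# scp /staging/roce_config/*.cfg admin@R1LL_IP:/
# scp /staging/roce_config/*.cfg admin@R1UL_IP:/
# scp /staging/roce_config/*.cfg admin@R2LL_IP:/
# scp /staging/roce_config/*.cfg admin@R2UL_IP:/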
On a system with Secure Fabric enabled, you can use the following commands:
In the above commands, substitute the appropriate IP address or host name where
applicable. For example, in place of R1LL_IP, substitute the management IP address
or host name for the lower leaf switch (LL) on the existing rack (R1).
Note:
The command examples in the rest of this procedure use the configuration
files for a system configuration without Secure Fabric enabled. If required,
adjust the commands to use the Secure Fabric-enabled switch configuration
files.
h. Update the firmware to the latest available release on all of the RoCE Network Fabric
leaf switches (R1LL, R1UL, R2LL, and R2UL).
See Updating RoCE Network Fabric Switch Firmware in Oracle Exadata Database
Machine Maintenance Guide.
i. Examine the RoCE Network Fabric leaf switches (R1LL, R1UL, R2LL, and R2UL) and
confirm the port categories for the cabled ports.
Run the show interface status command on every RoCE Network Fabric leaf
switch:
For example:
--------------------------------------------------------------------------------
Port Name Status Vlan Duplex Speed Type
--------------------------------------------------------------------------------
mgmt0 -- connected routed full 1000 --
--------------------------------------------------------------------------------
Port Name Status Vlan Duplex Speed Type
--------------------------------------------------------------------------------
Eth1/1 -- xcvrAbsen 1 auto auto --
Eth1/2 -- xcvrAbsen 1 auto auto --
Eth1/3 -- xcvrAbsen 1 auto auto --
Eth1/4 ISL1 connected trunk full 100G
QSFP-100G-CR4
Eth1/5 ISL2 connected trunk full 100G
QSFP-100G-CR4
Eth1/6 ISL3 connected trunk full 100G
QSFP-100G-CR4
Eth1/7 ISL4 connected trunk full 100G
QSFP-100G-CR4
Eth1/8 celadm14 connected 3888 full 100G
QSFP-100G-CR4
Eth1/9 celadm13 connected 3888 full 100G
QSFP-100G-CR4
Eth1/10 celadm12 connected 3888 full 100G
QSFP-100G-CR4
Eth1/11 celadm11 connected 3888 full 100G
QSFP-100G-CR4
Eth1/12 celadm10 connected 3888 full 100G
QSFP-100G-CR4
Eth1/13 celadm09 connected 3888 full 100G
QSFP-100G-CR4
Eth1/14 celadm08 connected 3888 full 100G
QSFP-100G-CR4
Eth1/15 adm08 connected 3888 full 100G
QSFP-100G-CR4
Eth1/16 adm07 connected 3888 full 100G
QSFP-100G-CR4
Eth1/17 adm06 connected 3888 full 100G
QSFP-100G-CR4
Eth1/18 adm05 connected 3888 full 100G
QSFP-100G-CR4
Eth1/19 adm04 connected 3888 full 100G
QSFP-100G-CR4
Eth1/20 adm03 connected 3888 full 100G
QSFP-100G-CR4
Eth1/21 adm02 connected 3888 full 100G
QSFP-100G-CR4
Eth1/22 adm01 connected 3888 full 100G
QSFP-100G-CR4
Eth1/23 celadm07 connected 3888 full 100G
QSFP-100G-CR4
Eth1/24 celadm06 connected 3888 full 100G
QSFP-100G-CR4
Eth1/25 celadm05 connected 3888 full 100G
QSFP-100G-CR4
Eth1/26 celadm04 connected 3888 full 100G
QSFP-100G-CR4
Eth1/27 celadm03 connected 3888 full 100G
QSFP-100G-CR4
Eth1/28 celadm02 connected 3888 full 100G
QSFP-100G-CR4
Eth1/29 celadm01 connected 3888 full 100G
QSFP-100G-CR4
Eth1/30 ISL5 connected trunk full 100G
QSFP-100G-CR4
Eth1/31 ISL6 connected trunk full 100G
QSFP-100G-CR4
Eth1/32 ISL7 connected trunk full 100G
QSFP-100G-CR4
Eth1/33 ISL8 connected trunk full 100G
QSFP-100G-CR4
Eth1/34 -- xcvrAbsen 1 auto auto --
Eth1/35 -- xcvrAbsen 1 auto auto --
Eth1/36 -- xcvrAbsen 1 auto auto --
Po100 -- connected trunk full 100G --
Lo0 Routing loopback i connected routed auto auto --
Lo1 VTEP loopback inte connected routed auto auto --
Vlan1 -- down routed auto auto --
nve1 -- connected -- auto auto --
j. For each rack (R1 and R2), confirm the RoCE Network Fabric cabling by running the
verify_roce_cables.py script.
The verify_roce_cables.py script uses two input files: one for database servers and
storage servers (nodes.rackN), and another for switches (switches.rackN). In each
file, every server or switch must be listed on a separate line. Use fully qualified domain
names or IP addresses for each server and switch.
See My Oracle Support document 2587717.1 for download and detailed usage
instructions.
Run the verify_roce_cables.py script against both of the racks:
i. # cd /opt/oracle.SupportTools/ibdiagtools
# ./verify_roce_cables.py -n nodes.rack1 -s switches.rack1
ii. # cd /opt/oracle.SupportTools/ibdiagtools
# ./verify_roce_cables.py -n nodes.rack2 -s switches.rack2
Check that the output in the CABLE OK? columns contains the OK status.
The following example shows the expected command results:
# cd /opt/oracle.SupportTools/ibdiagtools
# ./verify_roce_cables.py -n nodes.rack1 -s switches.rack1
k. For each rack (R1 and R2), verify the RoCE Network Fabric operation by using the
infinicheck command.
• Use infinicheck with the -z option to clear the files that were created during the
last run of the infinicheck command.
• Use infinicheck with the -s option to set up user equivalence for password-less
SSH across the RoCE Network Fabric.
• Finally, verify the RoCE Network Fabric operation by using infinicheck with the -
b option, which is recommended on newly imaged machines where it is acceptable
to suppress the cellip.ora and cellinit.ora configuration checks.
In each command, the hosts input file (hosts.rack1 and hosts.rack2) contains a list
of database server RoCE Network Fabric IP addresses (2 RoCE Network Fabric IP
addresses for each database server), and the cells input file (cells.rack1 and
cells.rack2) contains a list of RoCE Network Fabric IP addresses for the storage
servers (2 RoCE Network Fabric IP addresses for each storage server).
i. Use the following recommended command sequence on the existing rack (R1):
i. # cd /opt/oracle.SupportTools/ibdiagtools
# ./infinicheck -g hosts.rack1 -c cells.rack1 -z
ii. # cd /opt/oracle.SupportTools/ibdiagtools
# ./infinicheck -g hosts.rack1 -c cells.rack1 -s
iii. # cd /opt/oracle.SupportTools/ibdiagtools
# ./infinicheck -g hosts.rack1 -c cells.rack1 -b
ii. Use the following recommended command sequence on the new rack (R2):
i. # cd /opt/oracle.SupportTools/ibdiagtools
# ./infinicheck -g hosts.rack2 -c cells.rack2 -z
ii. # cd /opt/oracle.SupportTools/ibdiagtools
# ./infinicheck -g hosts.rack2 -c cells.rack2 -s
iii. # cd /opt/oracle.SupportTools/ibdiagtools
# ./infinicheck -g hosts.rack2 -c cells.rack2 -b
The following example shows the expected command results for the final command in
the sequence:
# cd /opt/oracle.SupportTools/ibdiagtools
# ./infinicheck -g hosts.rackN -c cells.rackN -b
INFINICHECK
[Network Connectivity, Configuration and Performance]
# cat spines.lst
R1SS_IP:mspine.201
R2SS_IP:mspine.202
Note:
In the switch list file, R1SS_IP is the management IP address or host name
for the spine switch on the existing rack (R1SS) and R2SS_IP is the
management IP address or host name for the spine switch on the new rack
(R2SS).
# cat spines.lst
R1SS_IP:mspine.201
R2SS_IP:mspine.202
Note:
In the switch list file, R1SS_IP is the management IP address or host name
for the spine switch on the existing rack (R1SS) and R2SS_IP is the
management IP address or host name for the spine switch on the new rack
(R2SS).
d. Connect the RoCE Network Fabric cables to the spine switches (R1SS and R2SS).
WARNING:
At this stage, only connect the cables to the spine switches.
To avoid later complications, ensure that each cable connects to the
correct switch and port.
DO NOT CONNECT ANY OF THE CABLES TO THE LEAF SWITCHES.
Use the cables that you prepared earlier (in step 1.c).
For the required cross-rack cabling information, see Two-Rack Cabling for X8M Racks.
3. Perform the first round of configuration on the lower leaf switches (R1LL and R2LL).
Perform this step on the lower leaf switches (R1LL and R2LL) only.
Note:
During this step, the lower leaf switch ports are shut down. While the R1LL ports
are down, R1UL exclusively supports the RoCE Network Fabric. During this time,
there is no redundancy in the RoCE Network Fabric, and availability cannot be
maintained if R1UL goes down.
a. Shut down the switch ports on the lower leaf switches (R1LL and R2LL).
i. On R1LL:
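As an illustrative sketch only (the required commands are supplied with this procedure), the shutdown typically uses an NX-OS interface-range sequence similar to the following, assuming the typical cabled port range of 1/4-33; repeat on R2LL:

R1LL# configure terminal
R1LL(config)# interface ethernet 1/4-33
R1LL(config-if-range)# shutdown
R1LL(config-if-range)# <Ctrl-Z>
R1LL# copy running-config startup-config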
ii. On R2LL, the switch configuration file name must end with step3_R2_LL.cfg:
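Assuming the configuration file was copied to the switch in step 1.g and is available in bootflash, it is typically applied with the NX-OS run-script command, as in the following illustrative sketch (the file name is a placeholder):

R2LL# run-script bootflash:<config_file>_step3_R2_LL.cfg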
Note:
This step can take approximately 5 to 8 minutes on each switch.
c. Start the inter-switch ports on the lower leaf switches (R1LL and R2LL).
i. On R1LL:
d. Wait for 5 minutes to ensure that the ports you just started are fully operational before
continuing.
e. Verify the status of the inter-switch ports on the lower leaf switches (R1LL and R2LL).
Run the show interface status command on each lower leaf switch:
Examine the output to ensure that the inter-switch ports are connected.
For example:
--------------------------------------------------------------------------------
Port Name Status Vlan Duplex Speed Type
--------------------------------------------------------------------------------
mgmt0 -- connected routed full 1000 --
--------------------------------------------------------------------------------
Port Name Status Vlan Duplex Speed Type
--------------------------------------------------------------------------------
Eth1/1 -- xcvrAbsen 1 auto auto --
Eth1/2 -- xcvrAbsen 1 auto auto --
Eth1/3 -- xcvrAbsen 1 auto auto --
Eth1/4 ISL1 connected trunk full 100G
QSFP-100G-CR4
Eth1/5 ISL2 connected trunk full 100G
QSFP-100G-CR4
Eth1/6 ISL3 connected trunk full 100G
QSFP-100G-CR4
Eth1/7 ISL4 connected trunk full 100G
QSFP-100G-CR4
Eth1/8 celadm14 disabled 3888 full 100G
QSFP-100G-CR4
Eth1/9 celadm13 disabled 3888 full 100G
QSFP-100G-CR4
Eth1/10 celadm12 disabled 3888 full 100G
QSFP-100G-CR4
Eth1/11 celadm11 disabled 3888 full 100G
QSFP-100G-CR4
Eth1/12 celadm10 disabled 3888 full 100G
QSFP-100G-CR4
Eth1/13 celadm09 disabled 3888 full 100G
QSFP-100G-CR4
Eth1/14 celadm08 disabled 3888 full 100G
QSFP-100G-CR4
Eth1/15 adm08 disabled 3888 full 100G
QSFP-100G-CR4
Eth1/16 adm07 disabled 3888 full 100G
QSFP-100G-CR4
Eth1/17 adm06 disabled 3888 full 100G
QSFP-100G-CR4
Eth1/18 adm05 disabled 3888 full 100G
QSFP-100G-CR4
f. Start the storage server ports on the lower leaf switches (R1LL and R2LL).
i. On R1LL:
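As an illustrative sketch only, these ports are typically started with a sequence similar to the following, assuming storage server ports 8 to 14 and 23 to 29; repeat on R2LL:

R1LL# configure terminal
R1LL(config)# interface ethernet 1/8-14, ethernet 1/23-29
R1LL(config-if-range)# no shutdown
R1LL(config-if-range)# <Ctrl-Z>
R1LL# copy running-config startup-config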
g. Wait for 5 minutes to ensure that the ports you just started are fully operational before
continuing.
h. Verify the status of the storage server ports on the lower leaf switches (R1LL and
R2LL).
Run the show interface status command on each lower leaf switch:
Examine the output to ensure that the storage server ports are connected.
For example:
--------------------------------------------------------------------------------
Port Name Status Vlan Duplex Speed Type
--------------------------------------------------------------------------------
mgmt0 -- connected routed full 1000 --
--------------------------------------------------------------------------------
Port Name Status Vlan Duplex Speed Type
--------------------------------------------------------------------------------
Eth1/1 -- xcvrAbsen 1 auto auto --
Eth1/2 -- xcvrAbsen 1 auto auto --
Eth1/3 -- xcvrAbsen 1 auto auto --
Eth1/4 ISL1 connected trunk full 100G
QSFP-100G-CR4
Eth1/5 ISL2 connected trunk full 100G
QSFP-100G-CR4
Eth1/6 ISL3 connected trunk full 100G
QSFP-100G-CR4
Eth1/7 ISL4 connected trunk full 100G
QSFP-100G-CR4
Eth1/8 celadm14 connected 3888 full 100G
QSFP-100G-CR4
i. Start the database server ports on the lower leaf switches (R1LL and R2LL).
i. On R1LL:
j. Wait for 5 minutes to ensure that the ports you just started are fully operational before
continuing.
k. Verify the status of the database server ports on the lower leaf switches (R1LL and
R2LL).
Run the show interface status command on each lower leaf switch:
Examine the output to ensure that the database server ports are connected.
For example:
--------------------------------------------------------------------------------
Port Name Status Vlan Duplex Speed Type
--------------------------------------------------------------------------------
mgmt0 -- connected routed full 1000 --
--------------------------------------------------------------------------------
Port Name Status Vlan Duplex Speed Type
--------------------------------------------------------------------------------
Eth1/1 -- xcvrAbsen 1 auto auto --
Eth1/2 -- xcvrAbsen 1 auto auto --
Eth1/3 -- xcvrAbsen 1 auto auto --
Eth1/4 ISL1 connected trunk full 100G
QSFP-100G-CR4
Eth1/5 ISL2 connected trunk full 100G
QSFP-100G-CR4
Eth1/6 ISL3 connected trunk full 100G
QSFP-100G-CR4
Eth1/7 ISL4 connected trunk full 100G
QSFP-100G-CR4
Eth1/8 celadm14 connected 3888 full 100G
QSFP-100G-CR4
Eth1/9 celadm13 connected 3888 full 100G
QSFP-100G-CR4
Eth1/10 celadm12 connected 3888 full 100G
QSFP-100G-CR4
Eth1/11 celadm11 connected 3888 full 100G
QSFP-100G-CR4
Eth1/12 celadm10 connected 3888 full 100G
QSFP-100G-CR4
Eth1/13 celadm09 connected 3888 full 100G
QSFP-100G-CR4
Eth1/14 celadm08 connected 3888 full 100G
QSFP-100G-CR4
Eth1/15 adm08 connected 3888 full 100G
QSFP-100G-CR4
Eth1/16 adm07 connected 3888 full 100G
QSFP-100G-CR4
Eth1/17 adm06 connected 3888 full 100G
QSFP-100G-CR4
Eth1/18 adm05 connected 3888 full 100G
QSFP-100G-CR4
Eth1/19 adm04 connected 3888 full 100G
QSFP-100G-CR4
Eth1/20 adm03 connected 3888 full 100G
QSFP-100G-CR4
Eth1/21 adm02 connected 3888 full 100G
QSFP-100G-CR4
Eth1/22 adm01 connected 3888 full 100G
QSFP-100G-CR4
Eth1/23 celadm07 connected 3888 full 100G
QSFP-100G-CR4
Eth1/24 celadm06 connected 3888 full 100G
QSFP-100G-CR4
Eth1/25 celadm05 connected 3888 full 100G
QSFP-100G-CR4
Eth1/26 celadm04 connected 3888 full 100G
QSFP-100G-CR4
Eth1/27 celadm03 connected 3888 full 100G
QSFP-100G-CR4
Eth1/28 celadm02 connected 3888 full 100G
QSFP-100G-CR4
Eth1/29 celadm01 connected 3888 full 100G
QSFP-100G-CR4
Eth1/30 ISL5 connected trunk full 100G
QSFP-100G-CR4
Eth1/31 ISL6 connected trunk full 100G
QSFP-100G-CR4
Eth1/32 ISL7 connected trunk full 100G
QSFP-100G-CR4
Eth1/33 ISL8 connected trunk full 100G
QSFP-100G-CR4
Eth1/34 -- xcvrAbsen 1 auto auto --
Eth1/35 -- xcvrAbsen 1 auto auto --
Eth1/36 -- xcvrAbsen 1 auto auto --
Po100 -- connected trunk full 100G --
Lo0 Routing loopback i connected routed auto auto --
Lo1 VTEP loopback inte connected routed auto auto --
Vlan1 -- down routed auto auto --
nve1 -- connected -- auto auto --
Note:
Before proceeding, ensure that you have completed all of the actions in step 3 on
both lower leaf switches (R1LL and R2LL). If not, then ensure that you go back
and perform the missing actions.
4. Perform the first round of configuration on the upper leaf switches (R1UL and R2UL).
Perform this step on the upper leaf switches (R1UL and R2UL) only.
Note:
At the start of this step, the upper leaf switch ports are shut down. While the
R1UL ports are down, R1LL exclusively supports the RoCE Network Fabric on
the existing rack. During this time, there is no redundancy in the RoCE Network
Fabric, and availability cannot be maintained if R1LL goes down.
a. Shut down the upper leaf switch ports (R1UL and R2UL).
i. On R1UL:
R1UL(config)# <Ctrl-Z>
R1UL#
b. On both racks, remove the inter-switch links between the leaf switches (R1LL to R1UL,
and R2LL to R2UL).
On every leaf switch, remove the cables for the inter-switch links:
i. On R1LL, disconnect the inter-switch links from ports 04, 05, 06, 07, 30, 31, 32,
and 33.
ii. On R1UL, disconnect the inter-switch links from ports 04, 05, 06, 07, 30, 31, 32,
and 33.
iii. On R2LL, disconnect the inter-switch links from ports 04, 05, 06, 07, 30, 31, 32,
and 33.
iv. On R2UL, disconnect the inter-switch links from ports 04, 05, 06, 07, 30, 31, 32,
and 33.
c. On both racks, cable the upper leaf switch to both of the spine switches (R1UL and
R2UL to R1SS and R2SS).
Connect the cables from the spine switches that you prepared earlier (in step 2.d).
Cable the switches as described in Two-Rack Cabling for X8M Racks:
i. On R1UL, cable ports 04, 05, 06, 07, 30, 31, 32, and 33 to R1SS and R2SS.
ii. On R2UL, cable ports 04, 05, 06, 07, 30, 31, 32, and 33 to R1SS and R2SS.
Note:
Ensure that each cable connects to the correct switch and port at both ends.
In addition to physically checking each connection, you can run the show
lldp neighbors command on each network switch and examine the output
to confirm correct connections. You can individually check each cable
connection to catch and correct errors quickly.
i. On R1UL, the switch configuration file name must end with step4_R1_UL.cfg:
ii. On R2UL, the switch configuration file name must end with step4_R2_UL.cfg:
Note:
This step can take approximately 5 to 8 minutes on each switch.
e. Check the status of the RoCE Network Fabric ports on the upper leaf switches (R1UL
and R2UL).
Run the show interface status command on each upper leaf switch:
Examine the output to ensure that all of the cabled ports are disabled.
For example:
--------------------------------------------------------------------------------
Port Name Status Vlan Duplex Speed Type
--------------------------------------------------------------------------------
mgmt0 -- connected routed full 1000 --
--------------------------------------------------------------------------------
Port Name Status Vlan Duplex Speed Type
--------------------------------------------------------------------------------
...
Eth1/4 RouterPort1 disabled routed full 100G
QSFP-100G-CR4
Eth1/5 RouterPort2 disabled routed full 100G
QSFP-100G-CR4
Eth1/6 RouterPort3 disabled routed full 100G
QSFP-100G-CR4
Eth1/7 RouterPort4 disabled routed full 100G
QSFP-100G-CR4
Eth1/8 celadm14 disabled 3888 full 100G
QSFP-100G-CR4
Eth1/9 celadm13 disabled 3888 full 100G
QSFP-100G-CR4
Eth1/10 celadm12 disabled 3888 full 100G
QSFP-100G-CR4
Eth1/11 celadm11 disabled 3888 full 100G
QSFP-100G-CR4
Eth1/12 celadm10 disabled 3888 full 100G
QSFP-100G-CR4
Eth1/13 celadm09 disabled 3888 full 100G
QSFP-100G-CR4
Eth1/14 celadm08 disabled 3888 full 100G
QSFP-100G-CR4
Eth1/15 adm08 disabled 3888 full 100G
QSFP-100G-CR4
Eth1/16 adm07 disabled 3888 full 100G
QSFP-100G-CR4
Eth1/17 adm06 disabled 3888 full 100G
QSFP-100G-CR4
Eth1/18 adm05 disabled 3888 full 100G
QSFP-100G-CR4
...
Note:
Before proceeding, ensure that you have completed all of the actions to this
point in step 4 on both upper leaf switches (R1UL and R2UL). If not, then
ensure that you go back and perform the missing actions.
Use a switch list file (ul.lst) to check both upper leaf switches using one patchmgr
command:
# cat ul.lst
R1UL_IP:mleaf.102
R2UL_IP:mleaf.104
On a system with Secure Fabric enabled, use the msfleaf tag in the switch list file:
# cat ul.lst
R1UL_IP:msfleaf.102
R2UL_IP:msfleaf.104
The following shows the recommended command and an example of the expected
results:
In the command output, verify that the switch configuration is good for both upper leaf
switches. You can ignore messages about the ports that are down.
5. Finalize the configuration of the lower leaf switches (R1LL and R2LL).
Perform this step on the lower leaf switches (R1LL and R2LL) only.
a. Reconfigure the lower leaf switch ports (R1LL and R2LL).
Run the following command sequence on both of the lower leaf switches (R1LL and
R2LL).
You must use the correct switch configuration file, which you earlier copied to the
switch (in step 1.g). In this step, the configuration file name must end with step5.cfg.
i. On R1LL:
Note:
This step can take approximately 5 to 8 minutes on each switch.
b. On both racks, cable the lower leaf switch to both of the spine switches (R1LL and
R2LL to R1SS and R2SS).
Connect the cables from the spine switches that you prepared earlier (in step 2.d).
Cable the switches as described in Two-Rack Cabling for X8M Racks:
i. On R1LL, cable ports 04, 05, 06, 07, 30, 31, 32, and 33 to R1SS and R2SS.
ii. On R2LL, cable ports 04, 05, 06, 07, 30, 31, 32, and 33 to R1SS and R2SS.
Note:
Ensure that each cable connects to the correct switch and port at both ends.
In addition to physically checking each connection, you can run the show
lldp neighbors command on each network switch and examine the output
to confirm correct connections. You can individually check each cable
connection to catch and correct errors quickly.
c. On the lower leaf switches, verify that all of the cabled RoCE Network Fabric ports are
connected (R1LL and R2LL).
Run the show interface status command on each lower leaf switch:
Examine the output to ensure that all of the cabled ports are connected.
For example:
--------------------------------------------------------------------------------
Port Name Status Vlan Duplex Speed Type
--------------------------------------------------------------------------------
mgmt0 -- connected routed full 1000 --
--------------------------------------------------------------------------------
Port Name Status Vlan Duplex Speed Type
--------------------------------------------------------------------------------
...
Eth1/4 RouterPort1 connected routed full 100G
QSFP-100G-CR4
Eth1/5 RouterPort2 connected routed full 100G
QSFP-100G-CR4
Eth1/6 RouterPort3 connected routed full 100G
QSFP-100G-CR4
Eth1/7 RouterPort4 connected routed full 100G
QSFP-100G-CR4
Eth1/8 celadm14 connected 3888 full 100G
QSFP-100G-CR4
Eth1/9 celadm13 connected 3888 full 100G
QSFP-100G-CR4
Eth1/10 celadm12 connected 3888 full 100G
QSFP-100G-CR4
Eth1/11 celadm11 connected 3888 full 100G
QSFP-100G-CR4
Eth1/12 celadm10 connected 3888 full 100G
QSFP-100G-CR4
Eth1/13 celadm09 connected 3888 full 100G
QSFP-100G-CR4
Eth1/14 celadm08 connected 3888 full 100G
QSFP-100G-CR4
Eth1/15 adm08 connected 3888 full 100G
QSFP-100G-CR4
Eth1/16 adm07 connected 3888 full 100G
QSFP-100G-CR4
Eth1/17 adm06 connected 3888 full 100G
QSFP-100G-CR4
Eth1/18 adm05 connected 3888 full 100G
QSFP-100G-CR4
...
Note:
Before proceeding, ensure that you have completed all of the actions to this
point in step 5 on both lower leaf switches (R1LL and R2LL). If not, then
ensure that you go back and perform the missing actions.
# cat ll.lst
R1LL_IP:mleaf.101
R2LL_IP:mleaf.103
On a system with Secure Fabric enabled, use the msfleaf tag in the switch list file:
# cat ll.lst
R1LL_IP:msfleaf.101
R2LL_IP:msfleaf.103
The following shows the recommended command and an example of the expected
results:
In the command output, verify that the switch configuration is good for both lower leaf
switches.
e. Verify that nve is up on the lower leaf switches (R1LL and R2LL).
Run the following command on each lower leaf switch and examine the output:
At this point, you should see one nve peer with State=Up.
For example:
f. Verify that BGP is up on the lower leaf switches (R1LL and R2LL).
Run the following command on each lower leaf switch and examine the output:
Look for two entries with Up in the rightmost column that are associated with different
IP addresses.
For example:
6. Finalize the configuration of the upper leaf switches (R1UL and R2UL).
Perform this step on the upper leaf switches (R1UL and R2UL) only.
a. Start the inter-switch ports on the upper leaf switches (R1UL and R2UL).
i. On R1UL:
b. Wait for 5 minutes to ensure that the ports you just started are fully operational before
continuing.
c. Verify the status of the inter-switch ports on the upper leaf switches (R1UL and R2UL).
Run the show interface status command on each upper leaf switch:
Examine the output to ensure that the inter-switch ports are connected.
For example:
--------------------------------------------------------------------------------
Port Name Status Vlan Duplex Speed Type
--------------------------------------------------------------------------------
mgmt0 -- connected routed full 1000 --
--------------------------------------------------------------------------------
Port Name Status Vlan Duplex Speed Type
--------------------------------------------------------------------------------
Eth1/1 -- xcvrAbsen 1 auto auto --
Eth1/2 -- xcvrAbsen 1 auto auto --
Eth1/3 -- xcvrAbsen 1 auto auto --
Eth1/4 ISL1 connected routed full 100G
QSFP-100G-CR4
Eth1/5 ISL2 connected routed full 100G
QSFP-100G-CR4
Eth1/6 ISL3 connected routed full 100G
QSFP-100G-CR4
d. Start the storage server ports on the upper leaf switches (R1UL and R2UL).
i. On R1UL:
e. Wait for 5 minutes to ensure that the ports you just started are fully operational before
continuing.
f. Verify the status of the storage server ports on the upper leaf switches (R1UL and
R2UL).
Run the show interface status command on each upper leaf switch:
Examine the output to ensure that the storage server ports are connected.
For example:
--------------------------------------------------------------------------------
Port        Name              Status     Vlan     Duplex  Speed  Type
--------------------------------------------------------------------------------
Eth1/1      --                xcvrAbsen  1        auto    auto   --
Eth1/2      --                xcvrAbsen  1        auto    auto   --
Eth1/3      --                xcvrAbsen  1        auto    auto   --
Eth1/4      ISL1              connected  routed   full    100G   QSFP-100G-CR4
Eth1/5      ISL2              connected  routed   full    100G   QSFP-100G-CR4
Eth1/6      ISL3              connected  routed   full    100G   QSFP-100G-CR4
Eth1/7      ISL4              connected  routed   full    100G   QSFP-100G-CR4
Eth1/8      celadm14          connected  3888     full    100G   QSFP-100G-CR4
Eth1/9      celadm13          connected  3888     full    100G   QSFP-100G-CR4
Eth1/10     celadm12          connected  3888     full    100G   QSFP-100G-CR4
Eth1/11     celadm11          connected  3888     full    100G   QSFP-100G-CR4
Eth1/12     celadm10          connected  3888     full    100G   QSFP-100G-CR4
Eth1/13     celadm09          connected  3888     full    100G   QSFP-100G-CR4
Eth1/14     celadm08          connected  3888     full    100G   QSFP-100G-CR4
Eth1/15     adm08             disabled   3888     full    100G   QSFP-100G-CR4
Eth1/16     adm07             disabled   3888     full    100G   QSFP-100G-CR4
Eth1/17     adm06             disabled   3888     full    100G   QSFP-100G-CR4
Eth1/18     adm05             disabled   3888     full    100G   QSFP-100G-CR4
Eth1/19     adm04             disabled   3888     full    100G   QSFP-100G-CR4
Eth1/20     adm03             disabled   3888     full    100G   QSFP-100G-CR4
Eth1/21     adm02             disabled   3888     full    100G   QSFP-100G-CR4
Eth1/22     adm01             disabled   3888     full    100G   QSFP-100G-CR4
Eth1/23     celadm07          connected  3888     full    100G   QSFP-100G-CR4
Eth1/24     celadm06          connected  3888     full    100G   QSFP-100G-CR4
Eth1/25     celadm05          connected  3888     full    100G   QSFP-100G-CR4
Eth1/26     celadm04          connected  3888     full    100G   QSFP-100G-CR4
Eth1/27     celadm03          connected  3888     full    100G   QSFP-100G-CR4
Eth1/28     celadm02          connected  3888     full    100G   QSFP-100G-CR4
Eth1/29     celadm01          connected  3888     full    100G   QSFP-100G-CR4
Eth1/30     ISL5              connected  routed   full    100G   QSFP-100G-CR4
Eth1/31     ISL6              connected  routed   full    100G   QSFP-100G-CR4
Eth1/32     ISL7              connected  routed   full    100G   QSFP-100G-CR4
Eth1/33     ISL8              connected  routed   full    100G   QSFP-100G-CR4
Eth1/34     --                xcvrAbsen  1        auto    auto   --
Eth1/35     --                xcvrAbsen  1        auto    auto   --
Eth1/36     --                xcvrAbsen  1        auto    auto   --
Po100       --                connected  trunk    full    100G   --
Lo0         Routing loopback i connected routed   auto    auto   --
Lo1         VTEP loopback inte connected routed   auto    auto   --
Vlan1       --                down       routed   auto    auto   --
nve1        --                connected  --       auto    auto   --
g. Start the database server ports on the upper leaf switches (R1UL and R2UL).
i. On R1UL:
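A minimal sketch of the commands, assuming the database server ports occupy Eth1/15-22 as in the port listings in this section, is:

rack1sw-roceb0# configure terminal
rack1sw-roceb0(config)# interface ethernet 1/15-22
rack1sw-roceb0(config-if-range)# no shutdown
rack1sw-roceb0(config-if-range)# end
rack1sw-roceb0# copy running-config startup-config

Run the equivalent commands on R2UL, substituting that switch's prompt.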
h. Wait for 5 minutes to ensure that the ports you just started are fully operational before
continuing.
i. Verify the status of the database server ports on the upper leaf switches (R1UL and
R2UL).
Run the show interface status command on each upper leaf switch:
Examine the output to ensure that the database server ports are connected.
For example:
--------------------------------------------------------------------------------
Port        Name              Status     Vlan     Duplex  Speed  Type
--------------------------------------------------------------------------------
mgmt0       --                connected  routed   full    1000   --
--------------------------------------------------------------------------------
Port        Name              Status     Vlan     Duplex  Speed  Type
--------------------------------------------------------------------------------
Eth1/1      --                xcvrAbsen  1        auto    auto   --
Eth1/2      --                xcvrAbsen  1        auto    auto   --
Eth1/3      --                xcvrAbsen  1        auto    auto   --
Eth1/4      ISL1              connected  routed   full    100G   QSFP-100G-CR4
Eth1/5      ISL2              connected  routed   full    100G   QSFP-100G-CR4
Eth1/6      ISL3              connected  routed   full    100G   QSFP-100G-CR4
Eth1/7      ISL4              connected  routed   full    100G   QSFP-100G-CR4
Eth1/8      celadm14          connected  3888     full    100G   QSFP-100G-CR4
Eth1/9      celadm13          connected  3888     full    100G   QSFP-100G-CR4
Eth1/10     celadm12          connected  3888     full    100G   QSFP-100G-CR4
Eth1/11     celadm11          connected  3888     full    100G   QSFP-100G-CR4
Eth1/12     celadm10          connected  3888     full    100G   QSFP-100G-CR4
Eth1/13     celadm09          connected  3888     full    100G   QSFP-100G-CR4
Eth1/14     celadm08          connected  3888     full    100G   QSFP-100G-CR4
Eth1/15     adm08             connected  3888     full    100G   QSFP-100G-CR4
j. Verify that nve is up on the leaf switches (R1LL, R1UL, R2LL, and R2UL).
Run the following command on each leaf switch and examine the output:
In the output, you should see three nve peers with State=Up.
For example:
k. Verify that BGP is up on the upper leaf switches (R1UL and R2UL).
Run the following command on each upper leaf switch and examine the output:
In the output, look for two entries with Up in the rightmost column that are associated
with different IP addresses.
For example:
7. For each rack (R1 and R2), confirm the multi-rack cabling by running the
verify_roce_cables.py script.
The verify_roce_cables.py script uses two input files: one for database servers and
storage servers (nodes.rackN), and another for switches (switches.rackN). In each file,
each server or switch must be listed on a separate line. Use fully qualified domain names
or IP addresses for each server and switch.
See My Oracle Support document 2587717.1 for download and detailed usage
instructions.
Run the verify_roce_cables.py script against both of the racks:
a. # cd /opt/oracle.SupportTools/ibdiagtools
# ./verify_roce_cables.py -n nodes.rack1 -s switches.rack1
b. # cd /opt/oracle.SupportTools/ibdiagtools
# ./verify_roce_cables.py -n nodes.rack2 -s switches.rack2
Check the output of the verify_roce_cables.py script against the tables in Two-Rack
Cabling for X8M Racks. Also, check that output in the CABLE OK? columns contains the OK
status.
# cd /opt/oracle.SupportTools/ibdiagtools
# ./verify_roce_cables.py -n nodes.rack1 -s switches.rack1
SWITCH PORT (EXPECTED PEER)  LOWER LEAF (rack1sw-rocea0) : CABLE OK?  UPPER LEAF (rack1sw-roceb0) : CABLE OK?
-----------  ---------------  --------------------------- : ---------  --------------------------- : ---------
Eth1/4 (ISL peer switch)   : rack1sw-roces0 Ethernet1/17 : OK         rack1sw-roces0 Ethernet1/9  : OK
Eth1/5 (ISL peer switch)   : rack1sw-roces0 Ethernet1/13 : OK         rack1sw-roces0 Ethernet1/5  : OK
Eth1/6 (ISL peer switch)   : rack1sw-roces0 Ethernet1/19 : OK         rack1sw-roces0 Ethernet1/11 : OK
Eth1/7 (ISL peer switch)   : rack1sw-roces0 Ethernet1/15 : OK         rack1sw-roces0 Ethernet1/7  : OK
Eth1/12 (celadm10)         : rack1celadm10 port-1        : OK         rack1celadm10 port-2        : OK
Eth1/13 (celadm09)         : rack1celadm09 port-1        : OK         rack1celadm09 port-2        : OK
Eth1/14 (celadm08)         : rack1celadm08 port-1        : OK         rack1celadm08 port-2        : OK
...
Eth1/15 (adm08)            : rack1dbadm08 port-1         : OK         rack1dbadm08 port-2         : OK
Eth1/16 (adm07)            : rack1dbadm07 port-1         : OK         rack1dbadm07 port-2         : OK
Eth1/17 (adm06)            : rack1dbadm06 port-1         : OK         rack1dbadm06 port-2         : OK
...
Eth1/30 (ISL peer switch)  : rack2sw-roces0 Ethernet1/17 : OK         rack2sw-roces0 Ethernet1/9  : OK
Eth1/31 (ISL peer switch)  : rack2sw-roces0 Ethernet1/13 : OK         rack2sw-roces0 Ethernet1/5  : OK
Eth1/32 (ISL peer switch)  : rack2sw-roces0 Ethernet1/19 : OK         rack2sw-roces0 Ethernet1/11 : OK
Eth1/33 (ISL peer switch)  : rack2sw-roces0 Ethernet1/15 : OK         rack2sw-roces0 Ethernet1/7  : OK
8. Verify the RoCE Network Fabric operation across both interconnected racks by using the
infinicheck command.
Use the following recommended command sequence to verify the RoCE Network Fabric
operation across both racks.
In each command, hosts.all contains a list of database server RoCE Network Fabric IP
addresses from both racks (2 RoCE Network Fabric IP addresses for each database
server), and cells.all contains a list of RoCE Network Fabric IP addresses for the
storage servers from both racks (2 RoCE Network Fabric IP addresses for each storage
server).
a. # cd /opt/oracle.SupportTools/ibdiagtools
# ./infinicheck -g hosts.all -c cells.all -z
b. # cd /opt/oracle.SupportTools/ibdiagtools
# ./infinicheck -g hosts.all -c cells.all -s
c. # cd /opt/oracle.SupportTools/ibdiagtools
# ./infinicheck -g hosts.all -c cells.all -b
See step 1.k for more information about each infinicheck command.
The following example shows the expected command results for the final command in the
sequence:
# cd /opt/oracle.SupportTools/ibdiagtools
# ./infinicheck -g hosts.all -c cells.all -b
INFINICHECK
[Network Connectivity, Configuration and Performance]
At this point, both racks share the RoCE Network Fabric, and the combined system is ready for
further configuration.
See Configuring the New Hardware.
Related Topics
• Cabling Two Oracle Exadata Database Machine RoCE Network Fabric Racks With No
Downtime (My Oracle Support Doc ID 2704997.1)
• Verify RoCE Cabling on Oracle Exadata Database Machine X8M-2 and X8M-8 Servers
(My Oracle Support Doc ID 2587717.1)
2.3.2.2 Cabling Two RoCE Network Fabric Racks Together with Down Time Allowed
Use this simpler procedure to cable together two racks with RoCE Network Fabric where some
down-time can be tolerated.
This procedure is for systems with RoCE Network Fabric (X8M or later) using Oracle Exadata
System Software Release 20.1.0 or later.
In this procedure, the existing rack is R1, and the new rack is R2.
Use the applicable cabling tables depending on your system configuration:
• Two-Rack Cabling for X9M and Later Model Racks
• Two-Rack Cabling for X8M Racks
• Two-Rack Cabling for a System Combining an X8M Rack and a Later Model Rack
1. Procure and prepare the RoCE Network Fabric cables that you will use to interconnect the
racks.
Ensure you have all the RDMA Network Fabric cables required to interconnect the racks.
Label both ends of every cable.
Do not disconnect any existing cables or connect any new cables until instructed later in
the procedure.
For the required cross-rack cabling information, refer to the applicable cabling tables for
your system configuration. See also:
• Preparing for Multi-Rack Cabling with X9M and Later Model Racks
• Preparing for Multi-Rack Cabling with X8M Racks.
Note that all specified cable lengths assume the racks are physically adjacent. If this is not
the case, you may need longer cables.
2. Ensure the new rack is near the existing rack.
Ensure that the RDMA Network Fabric cables can reach the switches in each rack.
3. Ensure you have a backup of the current switch configuration for each switch in the
existing and new rack.
See Backing Up Settings on the RoCE Network Fabric Switch in Oracle Exadata Database
Machine Maintenance Guide.
4. Shut down all servers on both the new rack (R2) and the existing rack (R1).
The switches should remain available.
5. Update the firmware to the latest available release on all of the RoCE Network Fabric
switches.
For this step, treat all of the switches as if they belong to a single rack system.
See Updating RoCE Network Fabric Switch Firmware in Oracle Exadata Database
Machine Maintenance Guide.
6. Apply the multi-rack golden configuration settings on the RoCE Network Fabric switches.
Use the procedure described in Applying Golden Configuration Settings on RoCE Network
Fabric Switches, in Oracle Exadata Database Machine Maintenance Guide.
7. Enable the leaf switch server ports.
The leaf switch server ports may be disabled as a consequence of applying the multi-rack
golden configuration settings in the previous step.
To ensure that the leaf switch server ports are enabled, log in to each of the four leaf
switches and run the following commands on each leaf switch:
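A minimal sketch of these commands, assuming the server-facing ports occupy Eth1/8 through Eth1/29 as in the port listings elsewhere in this chapter, is:

switch# configure terminal
switch(config)# interface ethernet 1/8-29
switch(config-if-range)# no shutdown
switch(config-if-range)# end
switch# copy running-config startup-config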
QSFP-100G-CR4
Eth1/8      RouterPort8       connected  routed   full    100G   QSFP-100G-SR4
Eth1/9      RouterPort9       connected  routed   full    100G   QSFP-100G-CR4
Eth1/10     RouterPort10      connected  routed   full    100G   QSFP-100G-SR4
Eth1/11     RouterPort11      connected  routed   full    100G   QSFP-100G-CR4
Eth1/12     RouterPort12      connected  routed   full    100G   QSFP-100G-SR4
Eth1/13     RouterPort13      connected  routed   full    100G   QSFP-100G-CR4
Eth1/14     RouterPort14      connected  routed   full    100G   QSFP-100G-SR4
Eth1/15     RouterPort15      connected  routed   full    100G   QSFP-100G-CR4
Eth1/16     RouterPort16      connected  routed   full    100G   QSFP-100G-SR4
Eth1/17     RouterPort17      connected  routed   full    100G   QSFP-100G-CR4
Eth1/18     RouterPort18      connected  routed   full    100G   QSFP-100G-SR4
Eth1/19     RouterPort19      connected  routed   full    100G   QSFP-100G-CR4
Eth1/20     RouterPort20      connected  routed   full    100G   QSFP-100G-SR4
Eth1/21     RouterPort21      xcvrAbsen  routed   full    100G   --
...
When run from a leaf switch, the output should be similar to the following:
QSFP-100G-CR4
...
Eth1/29     celadm01          connected  3888     full    100G   QSFP-100G-CR4
Eth1/30     RouterPort5       connected  routed   full    100G   QSFP-100G-SR4
Eth1/31     RouterPort6       connected  routed   full    100G   QSFP-100G-SR4
Eth1/32     RouterPort7       connected  routed   full    100G   QSFP-100G-SR4
Eth1/33     RouterPort8       connected  routed   full    100G   QSFP-100G-SR4
...
10. Check the neighbor discovery for every switch in racks R1 and R2.
Log in to each switch and use the show lldp neighbors command. Make sure that all
switches are visible and check the switch ports assignment against the applicable cabling
tables.
A spine switch should see the two leaf switches in each rack, but not the other spine
switch. The output for a spine switch should be similar to the following:
Note:
The interfaces shown in the Port ID column differ for each switch, based on the
applicable cabling tables.
Each leaf switch should see the two spine switches, but not the other leaf switches. The
output for a leaf switch should be similar to the following:
Note:
The interfaces shown in the Port ID column differ for each switch, based on the
applicable cabling tables.
12. For each rack, confirm the multi-rack cabling by running the verify_roce_cables.py
script.
Refer to My Oracle Support Doc ID 2587717.1 for download and usage instructions.
Check the output of the verify_roce_cables.py script against the applicable cabling
tables. Also, check that output in the CABLE OK? columns contains the OK status.
When running the script, two input files are used, one for nodes and one for switches.
Each file should contain the servers or switches on separate lines. Use fully qualified
domain names or IP addresses for each server and switch.
The following output is a partial example of the command results:
13. Verify the RoCE Network Fabric operation by using the infinicheck command.
# /opt/oracle.SupportTools/ibdiagtools/infinicheck -g hosts.lst -c
cells.lst -z
• Use infinicheck with the -s option to set up user equivalence for password-less SSH
across the RoCE Network Fabric. For example:
# /opt/oracle.SupportTools/ibdiagtools/infinicheck -g hosts.lst -c
cells.lst -s
• Finally, verify the RoCE Network Fabric operation by using infinicheck with the -b
option, which is recommended on newly imaged machines where it is acceptable to
suppress the cellip.ora and cellinit.ora configuration checks. For example:
# /opt/oracle.SupportTools/ibdiagtools/infinicheck -g hosts.lst -c
cells.lst -b
INFINICHECK
[Network Connectivity, Configuration and Performance]
14. After cabling the racks together, proceed to Configuring the New Hardware to finish the
configuration of the new rack.
Related Topics
• Verify RoCE Cabling on Oracle Exadata Database Machine X8M-2 and X8M-8 Servers
(My Oracle Support Doc ID 2587717.1)
2. Ensure the new rack is near the existing rack. The RDMA Network Fabric cables must be
able to reach the servers in each rack.
3. Completely shut down the new rack (R2).
4. Cable the two leaf switches R2 IB2 and R2 IB3 in the new rack according to Two-Rack
Cabling with InfiniBand Network Fabric. Note that you must first remove the seven
existing inter-switch connections between the two leaf switches, as well as the two
connections between the leaf switches and the spine switch, in the new rack R2 only (not
in the existing rack R1).
5. Verify both RDMA Network Fabric interfaces are up on all database nodes and storage
cells. You can do this by running the ibstat command on each node and verifying both
interfaces are up.
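For example, one way to run this check on all nodes at once is to wrap ibstat in the dcli utility (the all_group file name is only an illustrative choice) and confirm that both ports report an Active state:

# dcli -l root -g all_group "ibstat | grep -E 'State|Physical state'"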
6. Power off leaf switch R1 IB2. This causes all the database servers and Exadata Storage
Servers to fail over their RDMA Network Fabric traffic to R1 IB3.
7. Disconnect all seven inter-switch links between R1 IB2 and R1 IB3, as well as the one
connection between R1 IB2 and the spine switch R1 IB1.
8. Cable leaf switch R1 IB2 according to Two-Rack Cabling with InfiniBand Network Fabric.
9. Power on leaf switch R1 IB2.
10. Wait for three minutes for R1 IB2 to become completely operational.
To check the switch, log in to the switch and run the ibswitches command. The output
should show three switches, R1 IB1, R1 IB2, and R1 IB3.
11. Verify both RDMA Network Fabric interfaces are up on all database nodes and storage
cells. You can do this by running the ibstat command on each node and verifying both
interfaces are up.
12. Power off leaf switch R1 IB3. This causes all the database servers and storage servers to
fail over their RDMA Network Fabric traffic to R1 IB2.
13. Disconnect the one connection between R1 IB3 and the spine switch R1 IB1.
14. Cable leaf switch R1 IB3 according to Two-Rack Cabling with InfiniBand Network Fabric.
15. Power on leaf switch R1 IB3.
16. Wait for three minutes for R1 IB3 to become completely operational.
To check the switch, log in to the switch and run the ibswitches command. The output
should show three switches, R1 IB1, R1 IB2, and R1 IB3.
17. Power on all the InfiniBand switches in R2.
18. Wait for three minutes for the switches to become completely operational.
To check the switch, log in to the switch and run the ibswitches command. The output
should show six switches, R1 IB1, R1 IB2, R1 IB3, R2 IB1, R2 IB2, and R2 IB3.
19. Ensure the Subnet Manager Master is running on R1 IB1 by running the getmaster
command from any switch.
20. Power on all servers in R2.
21. Log in to spine switch R1 IB1, and lower its priority to 8 as follows:
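On the Sun Datacenter InfiniBand switches, this is typically done by disabling the Subnet Manager, setting the priority, and re-enabling it, as sketched below:

# disablesm
# setsmpriority 8
# enablesm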
22. Ensure Subnet Manager Master is running on one of the spine switches.
After cabling the racks together, proceed to Configuring the New Hardware to configure the
racks.
WARNING:
Take time to read and understand this procedure before implementation. Pay
careful attention to all the instructions, not just the command examples. A
system outage may occur if the instructions are not applied correctly.
In this procedure, the existing racks are R1, R2, …, Rn, and the new rack is Rn+1.
Note:
Cabling three or more racks together requires no downtime for the existing racks R1,
R2, …, Rn. Only the new rack, Rn+1, is powered down initially.
In the example, the output shows seven other leaf switches having loopback octet
values from 102 to 108. This output is consistent with an existing system containing
four racks.
c. Determine the loopback octet for every spine switch.
Use the command shown in the following example.
In the example, the output shows four spine switches having loopback octet values
from 201 to 204. This output is also consistent with an existing system containing four
racks.
d. Validate the configuration of the existing RoCE Network Fabric switches.
Check the information gathered from the existing RoCE Network Fabric switches to
ensure that every switch uses a unique loopback octet value and that all the values are
as expected.
Verify that the information gathered from the existing RoCE Network Fabric switches
conforms to the following conventions:
• On the leaf switches, the overall range of loopback octet values should start with
101 and increase incrementally (by 1) for each leaf switch.
According to the best-practice convention, the loopback octet value for each leaf
switch should be configured as follows:
Caution:
If the switches in the existing racks (R1, R2, …, Rn) don't conform to the
above conventions, then you must take special care to assign unique
loopback octet values to the switches in the new rack (Rn+1) as part of
applying their golden configuration settings (in the next step).
If multiple switches use the same loopback octet, the RoCE Network Fabric
cannot function correctly, resulting in a system outage.
6. Apply the golden configuration settings on the RoCE Network Fabric switches in the new
rack (Rn+1).
Combine the information about the existing RoCE Network Fabric switches you gathered in
the previous step and the procedure described in Applying Golden Configuration Settings
on RoCE Network Fabric Switches (in Oracle Exadata Database Machine Maintenance
Guide).
Caution:
Take care when performing this step, as misconfiguration of the RoCE Network
Fabric will likely cause a system outage.
For example, every switch in a multi-rack configuration must have a unique
loopback octet. If multiple switches use the same loopback octet, the RoCE
Network Fabric cannot function correctly, resulting in a system outage.
7. Enable the leaf switch server ports on the RoCE Network Fabric leaf switches in the new
rack (Rn+1).
The leaf switch server ports may be disabled as a consequence of applying the multi-rack
golden configuration settings in the previous step.
To ensure that the leaf switch server ports are enabled, log in to each of the leaf switches
in the new rack and run the following commands on each switch:
8. Perform the physical cabling of the switches in the new rack (Rn+1).
Caution:
Cabling within a live network must be done carefully to avoid potentially serious
disruptions.
a. Remove the eight existing inter-switch connections (ports 4, 5, 6, 7 and 30, 31, 32, 33)
between the two leaf switches in the new rack (Rn+1).
b. Cable the leaf switches in the new rack according to the applicable cabling table.
For example, if you are adding a 5th rack to a system using Exadata X9M (or later
model) racks, then use "Table 4-17 Leaf Switch Connections for the Fifth Rack in a
Five-Rack System".
9. Add the new rack to the switches in the existing racks (R1 to Rn).
a. For an existing rack (Rx), cable the lower leaf switch RxLL according to the applicable
cabling table.
b. For the same rack, cable the upper leaf switch RxUL according to the applicable
cabling table.
c. Repeat these steps for each existing rack, R1 to Rn.
10. Confirm each switch is available and connected.
For each switch in racks R1, R2, …, Rn, Rn+1, confirm the output for the switch show
interface status command shows connected and 100G.
When run from a spine switch, the output should be similar to the following:
When run from a leaf switch, the output should be similar to the following:
11. Check the neighbor discovery for every switch in racks R1, R2, …, Rn, Rn+1.
Log in to each switch and use the show lldp neighbors command. Make sure that all
switches are visible and check the switch ports assignment (leaf switches: ports Eth1/4 -
Eth1/7, Eth1/30 - Eth1/33; spine switches: ports Eth1/5 - Eth1/20) against the applicable
cabling tables.
Each spine switch should see all the leaf switches in each rack, but not the other spine
switches. The output for a spine switch should be similar to the following:
Note:
The interfaces in the rightmost output column (for example, Ethernet1/5) are
different for each switch based on the applicable cabling tables.
Each leaf switch should see the spine switch in every rack, but not the other leaf switches.
The output for a leaf switch should be similar to the following:
Note:
The interfaces in the rightmost output column (for example, Ethernet1/13) are
different for each switch based on the applicable cabling tables.
13. For each rack, confirm the multi-rack cabling by running the verify_roce_cables.py
script.
Refer to My Oracle Support Doc ID 2587717.1 for download and usage instructions.
Check the output of the verify_roce_cables.py script against the applicable cabling
tables. Also, check that output in the CABLE OK? columns contains the OK status.
When running the script, two input files are used, one for nodes and one for switches.
Each file should contain the servers or switches on separate lines. Use fully qualified
domain names or IP addresses for each server and switch.
The following output is a partial example of the command results:
14. Verify the RoCE Network Fabric operation by using the infinicheck command.
# /opt/oracle.SupportTools/ibdiagtools/infinicheck -g hosts.lst -c
cells.lst -z
• Use infinicheck with the -s option to set up user equivalence for password-less SSH
across the RoCE Network Fabric. For example:
# /opt/oracle.SupportTools/ibdiagtools/infinicheck -g hosts.lst -c
cells.lst -s
• Finally, verify the RoCE Network Fabric operation by using infinicheck with the -b
option, which is recommended on newly imaged machines where it is acceptable to
suppress the cellip.ora and cellinit.ora configuration checks. For example:
# /opt/oracle.SupportTools/ibdiagtools/infinicheck -g hosts.lst -c
cells.lst -b
INFINICHECK
[Network Connectivity, Configuration and Performance]
15. After cabling the racks together, proceed to Configuring the New Hardware to finish the
configuration of the new rack.
Related Topics
• Exadata Database Machine and Exadata Storage Server Supported Versions (My Oracle
Support Doc ID 888828.1)
# getmaster
20100701 11:46:38 OpenSM Master on Switch : 0x0021283a8516a0a0 ports 36 Sun DCS 36 QDR switch dm01sw-ib1.example.com enhanced port 0 lid 1 lmc 0
If the Subnet Manager Master is not running on a spine switch, then perform the
following steps:
i. Use the getmaster command to identify the current location of the Subnet
Manager Master.
ii. Log in as the root user on the leaf switch that is the Subnet Manager Master.
iii. Disable Subnet Manager on the switch. The Subnet Manager Master relocates to
another switch.
iv. Use the getmaster command to identify the current location of the Subnet
Manager Master. If a spine switch is not the Subnet Manager Master, then repeat
steps 1.b.ii and 1.b.iii until a spine switch is the Subnet Manager Master.
v. Enable Subnet Manager on the leaf switches that were disabled during this
procedure.
c. Log in to the Subnet Manager Master spine switch.
Caution:
Cabling within a live network must be done carefully in order to avoid potentially
serious disruptions.
The cabling table that you use for your new InfiniBand topology tells you how to
connect ports on the leaf switches to ports on spine switches in order to connect
the racks. Some of these ports on the spine switches might be already in use to
support the existing InfiniBand topology. In these cases, connect only the cable
on the leaf switch in the new rack and stop there for now. Make note of which
cables you were not able to terminate.
Do not unplug any cables on the spine switch in the existing rack at this point.
Step 5 describes how to re-cable the leaf switches on the existing racks (one leaf
switch after the other - while the leaf switch being re-cabled will be powered off),
which will free up these currently in-use ports. At that point, you can connect the
other end of the cable from the leaf switch in the new rack to the spine switch in
the existing rack as indicated in the table.
ibdiagnet -r
Each spine switch should show as running in the Summary Fabric SM-state-priority
section of the output. If a spine switch is not running, then log in to the switch and enable
the Subnet Manager using the enablesm command.
13. If there are now four or more racks, then log in to the leaf switches in each rack and
disable Subnet Manager using the disablesm command.
3
Configuring the New Hardware
This section contains the following tasks needed to configure the new hardware:
Note:
The new and existing racks must be at the same patch level for Oracle Exadata
Database Servers and Oracle Exadata Storage Servers, including the operating
system. Refer to Reviewing Release and Patch Levels for additional information.
match the existing server interface names, or change the existing server interface names and
Oracle Cluster Registry (OCR) configuration to match the new servers.
Do the following after changing the interface names:
1. Edit the entries in the /etc/sysctl.conf file on the database servers so that the entries for
the RDMA Network Fabric match. The following is an example of the file entries before
editing. One set of entries must be changed to match the other set.
Found in X2 node
net.ipv4.neigh.bondib0.locktime = 0
net.ipv4.conf.bondib0.arp_ignore = 1
net.ipv4.conf.bondib0.arp_accept = 1
net.ipv4.neigh.bondib0.base_reachable_time_ms = 10000
net.ipv4.neigh.bondib0.delay_first_probe_time = 1
Found in V2 node
net.ipv4.conf.bond0.arp_accept=1
net.ipv4.neigh.bond0.base_reachable_time_ms=10000
net.ipv4.neigh.bond0.delay_first_probe_time=1
See Also:
Oracle Exadata Database Machine Maintenance Guide for information about
changing the RDMA Network Fabric information
Note:
In order to configure the servers with Oracle Exadata Deployment Assistant (OEDA),
the new server information must be entered in OEDA, and configuration files
generated.
1. Download the latest release of OEDA listed in My Oracle Support note 888828.1.
2. Enter the new server information in OEDA.
Do not include information for the existing rack.
Note:
• When extending an existing rack that has database servers earlier than
Oracle Exadata X4-2, be sure to deselect the active bonding option for the
InfiniBand Network Fabric so the new database servers are configured with
active-passive bonded interfaces.
• When extending an existing Oracle Exadata X4-2 or later system with active-
active bonding, select the active bonding option to configure the new
database servers for active-active bonding.
Note:
OEDA checks the performance level of Oracle Exadata Storage Servers so it
is not necessary to check them using the CellCLI CALIBRATE command at this
time.
b. Create the cell disks and grid disks as described in Configuring Cells, Cell Disks, and
Grid Disks with CellCLI.
c. Create the flash cache and flash log as described in Creating Flash Cache and Flash
Grid Disks.
Note:
When creating the flash cache, enable write-back flash cache.
5. Ensure the RDMA Network Fabric and bonded client Ethernet interface names are the
same on the new database servers as on the existing database servers.
6. If you are using the same, earlier-style bonding names, such as BOND0, for the new database
servers, then update the /opt/oracle.cellos/cell.conf file to reflect the correct bond
names.
Note:
If the existing servers use the latest bonded interface names, such as BONDIB0,
then this step can be skipped.
See Also:
My Oracle Support note 888828.1 for information about OEDA
8. Copy the configuration files to the /opt/oracle.SupportTools/onecommand directory on the
first of the new database servers.
These are the configuration files generated from the information entered in step 2.
9. Run OEDA up to, but not including, the Create Grid Disks step, and then run the
Configure Alerting and Setup ASR Alerting configuration steps.
Note:
10. Configure the storage servers, cell disks and grid disks as described in Configuring Cells,
Cell Disks, and Grid Disks with CellCLI.
Note:
Use the data collected from the existing system, as described in Obtaining
Current Configuration Information to determine the grid disk names and sizes.
12. Verify the time is the same on the new servers as on the existing servers.
This check is performed for the storage servers and database servers.
13. Ensure the NTP settings are the same on the new servers as on the existing servers.
This check is performed for the storage servers and database servers.
14. Configure HugePages on the new servers to match the existing servers.
15. Ensure the values in the /etc/security/limits.conf file for the new database
servers match the existing database servers.
16. Go to Setting User Equivalence to continue the hardware configuration.
Related Topics
• Preparing the Network Configuration
When adding additional servers to your rack, you will need IP addresses and the current
network configuration settings.
• Exadata Database Machine and Exadata Storage Server Supported Versions (My Oracle
Support Doc ID 888828.1)
Related Topics
• Preparing the Network Configuration
When adding additional servers to your rack, you will need IP addresses and the current
network configuration settings.
• Exadata Database Machine and Exadata Storage Server Supported Versions (My Oracle
Support Doc ID 888828.1)
Note:
• Only run OEDA up to the Create Grid Disks step, then configure storage
servers as described in Configuring Cells, Cell Disks, and Grid Disks with
CellCLI in Oracle Exadata System Software User's Guide.
• When adding servers with 3 TB High Capacity (HC) disks to existing servers
with 2TB disks, it is recommended to follow the procedure in My Oracle
Support Doc ID 1476336.1 to properly define the grid disks and disk groups.
At this point of setting up the rack, it is only necessary to define the grid
disks. The disk groups are created after the cluster has been extended on to
the new nodes.
• If the existing storage servers are Extreme Flash (EF) and you are adding
High Capacity (HC) storage servers, or if the existing storage servers are HC
and you are adding EF storage servers, then you must place the new disks in
new disk groups. You cannot mix EF and HC disks within the same disk
group.
Related Topics
• How to Add Exadata Storage Servers Using 3 TB (or larger) Disks to an Existing Database
Machine (My Oracle Support Doc ID 1476336.1)
# mkdir /root/new_group_files
# mkdir /root/old_group_files
# mkdir /root/group_files
b. Copy the group files for the new servers to the /root/new_group_files directory.
c. Copy the group files for the existing servers to the /root/old_group_files
directory.
d. Copy the group files for the existing servers to the /root/group_files directory.
e. Update the group files to include the existing and new servers.
f. Make the updated group files the default group files. The updated group files contain
the existing and new servers.
cp /root/group_files/* /root
cp /root/group_files/* /opt/oracle.SupportTools/onecommand
g. Put a copy of the updated group files in the root user, oracle user, and Oracle Grid
Infrastructure user home directories, and ensure that the files are owned by the
respective users.
3. Modify the /etc/hosts file on the existing and new database servers to include the
existing RDMA Network Fabric IP addresses for the database servers and storage servers.
The existing and new all_priv_group files can be used for this step.
Note:
Do not copy the /etc/hosts file from one server to the other servers. Edit the
file on each server.
4. Run the setssh-Linux.sh script as the root user on one of the existing database servers
to configure user equivalence for all servers using the following command. Oracle
recommends using the first database server.
# /opt/oracle.SupportTools/onecommand/setssh-Linux.sh -s -c N -h \
/path_to_file/all_group -n N
In the preceding command, path_to_file is the directory path for the all_group file
containing the names for the existing and new servers.
Note:
For Oracle Exadata Database Machine X2-2 (with X4170 and X4275 servers)
systems, use the setssh.sh command to configure user equivalence.
The command line options for the setssh.sh command differ from the setssh-
Linux.sh command. Run setssh.sh without parameters to see the proper
syntax.
5. Add the known hosts using RDMA Network Fabric. This step requires that all database
servers are accessible by way of their InfiniBand interfaces.
# /opt/oracle.SupportTools/onecommand/setssh-Linux.sh -s -c N -h \
/path_to_file/all_priv_group -n N -p password
7. Run the setssh-Linux.sh script as the oracle user on one of the existing database
servers to configure user equivalence for all servers using the following command. Oracle
recommends using the first database server. If there are separate owners for the Oracle
Grid Infrastructure software, then run a similar command for each owner.
$ /opt/oracle.SupportTools/onecommand/setssh-Linux.sh -s -c N -h \
/path_to_file/dbs_group -n N
In the preceding command, path_to_file is the directory path for the dbs_group file. The file
contains the names for the existing and new servers.
Note:
• For Oracle Exadata Database Machine X2-2 (with X4170 and X4275
servers) systems, use the setssh.sh command to configure user
equivalence.
• It may be necessary to temporarily change the permissions on the setssh-
Linux.sh file to 755 for this step. Change the permissions back to the
original settings after completing this step.
8. Add the known hosts using RDMA Network Fabric. This step requires that all database
servers are accessible by way of their InfiniBand interfaces.
$ /opt/oracle.SupportTools/onecommand/setssh-Linux.sh -s -c N -h \
/root/group_files/dbs_priv_group -n N
If there is a separate Oracle Grid Infrastructure user, then also run the preceding
commands for that user, substituting the grid user name for the oracle user.
Note:
• Oracle recommends you start one server, and let it come up fully before starting
Oracle Clusterware on the rest of the servers.
• It is not necessary to stop a cluster when extending Oracle Exadata Database
Machine Half Rack to a Full Rack, or a Quarter Rack to a Half Rack or Full Rack.
Run the preceding command until it shows that the first server has started.
It may take several minutes for all servers to start and join the cluster.
Note:
1. Ensure the new storage servers are running the same version of software as storage
servers already in use. Run the following command on the first database server:
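A sketch of this check, using the dcli utility with imageinfo (the cell_group file listing the storage servers is an assumed name), is:

# dcli -g cell_group -l root "imageinfo -ver"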
Note:
If the Oracle Exadata System Software on the storage servers does not match,
then upgrade or patch the software to be at the same level. This could be
patching the existing servers or new servers. Refer to Reviewing Release and
Patch Levels for additional information.
When adding Oracle Exadata Storage Server X4-2L servers, the cellip.ora file contains
two IP addresses listed for each cell. Copy each line completely to include the two IP
addresses, and merge the addresses in the cellip.ora file of the existing cluster.
a. From any database server, make a backup copy of the cellip.ora file.
cd /etc/oracle/cell/network-config
cp cellip.ora cellip.ora.orig
cp cellip.ora cellip.ora-bak
b. Edit the cellip.ora-bak file and add the IP addresses for the new storage servers.
c. Copy the edited file to the cellip.ora file on all database nodes using dcli. Use a
file named dbnodes that contains the names of all the database servers in the cluster,
with each server name on a separate line. Run the following command from the
directory that contains the cellip.ora-bak file.
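A sketch of the copy, assuming the dbnodes file described above and the standard cellip.ora location, is:

# /usr/local/bin/dcli -g dbnodes -l root -f cellip.ora-bak -d /etc/oracle/cell/network-config/cellip.ora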
The following is an example of the cellip.ora file after expanding Oracle Exadata
Database Machine X3-2 Half Rack to Full Rack using Oracle Exadata Storage Server
X4-2L servers:
cell="192.168.10.9"
cell="192.168.10.10"
cell="192.168.10.11"
cell="192.168.10.12"
cell="192.168.10.13"
cell="192.168.10.14"
cell="192.168.10.15"
cell="192.168.10.17;192.168.10.18"
cell="192.168.10.19;192.168.10.20"
cell="192.168.10.21;192.168.10.22"
cell="192.168.10.23;192.168.10.24"
cell="192.168.10.25;192.168.10.26"
cell="192.168.10.27;192.168.10.28"
cell="192.168.10.29;192.168.10.30"
In the preceding example, lines 1 through 7 are for the original servers, and lines 8 through
14 are for the new servers. Oracle Exadata Storage Server X4-2L servers have two IP
addresses each.
3. Ensure the updated cellip.ora file is on all database servers. The updated file must
include a complete list of all storage servers.
4. Verify accessibility of all grid disks from one of the original database servers. The following
command can be run as the root user or the oracle user.
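One way to perform this check, as a sketch using the kfod utility from the Oracle Grid Infrastructure home environment, is:

$ kfod disks=all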
The output from the command shows grid disks from the original and new storage servers.
5. Add the grid disks from the new storage servers to the existing disk groups using
commands similar to the following. You cannot have both high performance disks and high
capacity disks in the same disk group.
$ . oraenv
ORACLE_SID = [oracle] ? +ASM1
The Oracle base for ORACLE_HOME=/u01/app/11.2.0/grid is /u01/app/oracle
$ sqlplus / as sysasm
SQL> ALTER DISKGROUP data ADD DISK
2> 'o/*/DATA*dm02*'
3> rebalance power 11;
In the preceding commands, a Full Rack was added to an existing Oracle Exadata Rack.
The prefix for the new rack is dm02, and the grid disk prefix is DATA.
The following is an example in which an Oracle Exadata Database Machine Half Rack was
upgraded to a Full Rack. The cell host names in the original system were named
dm01cel01 through dm01cel07. The new cell host names are dm01cel08 through
dm01cel14.
$ . oraenv
ORACLE_SID = [oracle] ? +ASM1
The Oracle base for ORACLE_HOME=/u01/app/11.2.0/grid is /u01/app/oracle
$ sqlplus / AS sysasm
SQL> ALTER DISKGROUP data ADD DISK
2> 'o/*/DATA*dm01cel08*',
3> 'o/*/DATA*dm01cel09*',
4> 'o/*/DATA*dm01cel10*',
5> 'o/*/DATA*dm01cel11*',
6> 'o/*/DATA*dm01cel12*',
7> 'o/*/DATA*dm01cel13*',
8> 'o/*/DATA*dm01cel14*'
9> rebalance power 11;
Note:
6. Monitor the status of the rebalance operation using a query similar to the following from
any Oracle ASM instance:
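A typical query for this purpose, run from SQL*Plus while connected as SYSASM, is shown below; the rebalance is complete when the query returns no rows:

SQL> SELECT * FROM GV$ASM_OPERATION WHERE STATE = 'RUN';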
See Also:
Caution:
If Oracle Clusterware manages additional services that are not yet installed on the
new nodes, such as Oracle GoldenGate, then note the following:
• It may be necessary to stop those services on the existing node before running
the addNode.sh script.
• It is necessary to create any users and groups on the new database servers that
run these additional services.
• It may be necessary to disable those services from auto-start so that Oracle
Clusterware does not try to start the services on the new nodes.
Note:
To prevent problems with transferring files between existing and new nodes, you
need to set up SSH equivalence. See Step 4 in Expanding an Oracle VM Oracle
RAC Cluster on Exadata for details.
# cd /
# rm -rf /u01/app/*
# mkdir -p /u01/app/12.1.0.2/grid
# mkdir -p /u01/app/oracle/product/12.1.0.2/dbhome_1
# chown -R oracle:oinstall /u01
3. Ensure the inventory directory and Grid home directories have been created and have
the proper permissions. The directories should be owned by the Grid user and the
OINSTALL group. The inventory directory should have 770 permission, and the Oracle
Grid Infrastructure home directories should have 755.
Note:
If Oracle Exadata Deployment Assistant (OEDA) was used earlier, then these
users and groups should have been created. Check that they do exist, and have
the correct UID and GID values.
ocrconfig -showbackup
7. Verify that the additional database servers are ready to be added to the cluster using
commands similar to the following:
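A sketch using the Cluster Verification Utility (the node names are illustrative) is:

$ cluvfy stage -post hwos -n dm02db01,dm02db02 -verbose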
Note:
• The second and third commands do not display output if the commands
complete correctly.
• An error about a voting disk, similar to the following, may be displayed:
ERROR:
PRVF-5449 : Check of Voting Disk location "o/192.168.73.102/ \
DATA_CD_00_dm01cel07(o/192.168.73.102/DATA_CD_00_dm01cel07)" \
failed on the following nodes:
Check failed on nodes:
dm01db01
dm01db01:No such file or directory
…
PRVF-5431 : Oracle Cluster Voting Disk configuration check
- If you are running Oracle Grid Infrastructure 11g, then set the following environment
variable before running the addNode.sh script:
$ export IGNORE_PREADDNODE_CHECKS=Y
Setting the environment variable does not prevent the error when running the
cluvfy command, but it does allow the addNode.sh script to complete
successfully.
- If you are running Oracle Grid Infrastructure 12c or later, use the following
addnode parameters: -ignoreSysPrereqs -ignorePrereq
In Oracle Grid Infrastructure 12c and later, addnode does not use the
IGNORE_PREADDNODE_CHECKS environment variable.
• If a database server was installed with a certain image and subsequently
patched to a later image, then some operating system libraries may be older
than the version expected by the cluvfy command. This causes the cluvfy
command and possibly the addNode.sh script to fail.
It is permissible to have an earlier version as long as the difference in
versions is minor. For example, glibc-common-2.5-81.el5_8.2 versus
glibc-common-2.5-49. The versions are different, but both are at version
2.5, so the difference is minor, and it is permissible for them to differ.
Set the environment variable IGNORE_PREADDNODE_CHECKS=Y before running
the addNode.sh script or use the addnode parameters -ignoreSysPrereqs -
ignorePrereq with the addNode.sh script to workaround this problem.
8. Ensure that all directories inside the Oracle Grid Infrastructure home on the existing server
have their executable bits set. Run the following commands as the root user.
perm /u+x ! \
-perm /g+x ! -perm o+x
9. Run the following command. It is assumed that the Oracle Grid Infrastructure home is
owned by the Grid user.
10. This step is needed only if you are running Oracle Grid Infrastructure 11g. In Oracle Grid
Infrastructure 12c, no response file is needed because the values are specified on the
command line.
Create a response file, add-cluster-nodes.rsp, as the Grid user to add the new servers
similar to the following:
RESPONSEFILE_VERSION=2.2.1.0.0
CLUSTER_NEW_NODES={dm02db01,dm02db02, \
dm02db03,dm02db04,dm02db05,dm02db06,dm02db07,dm02db08}
CLUSTER_NEW_VIRTUAL_HOSTNAMES={dm0201-vip,dm0202-vip,dm0203-vip,dm0204-vip, \
dm0205-vip,dm0206-vip,dm0207-vip,dm0208-vip}
In the preceding file, the host names dm02db01 through dm02db08 are the new nodes being
added to the cluster.
Note:
The lines listing the server names should appear on one continuous line. They
are wrapped in the documentation due to page limitations.
$ cd Grid_home/oui/bin
$ ./addNode.sh -silent -responseFile /path/to/add-cluster-nodes.rsp
• If you are running Oracle Grid Infrastructure 12c or later, run the addnode.sh command
with the CLUSTER_NEW_NODES and CLUSTER_NEW_VIRTUAL_HOSTNAMES parameters. The
syntax is:
For example:
$ cd Grid_home/addnode/
$ ./addnode.sh -silent \
  "CLUSTER_NEW_NODES={dm02db01,dm02db02,dm02db03,dm02db04,dm02db05,dm02db06,dm02db07,dm02db08}" \
  "CLUSTER_NEW_VIRTUAL_HOSTNAMES={dm02db01-vip,dm02db02-vip,dm02db03-vip,dm02db04-vip,dm02db05-vip,dm02db06-vip,dm02db07-vip,dm02db08-vip}" \
  -ignoreSysPrereqs -ignorePrereq
14. Verify the grid disks are visible from each of the new database servers.
15. Run the orainstRoot.sh script as the root user when prompted using the dcli utility.
Before running the root.sh script, on each new server, set the HAIP_UNSUPPORTED
environment variable to TRUE.
$ export HAIP_UNSUPPORTED=true
17. Run the Grid_home/root.sh script on each server sequentially. This simplifies the process,
and ensures that any issues can be clearly identified and addressed.
Note:
The node identifier is set in order of the nodes where the root.sh script is run.
Typically, the script is run from the lowest numbered node name to the highest.
18. Check the log file from the root.sh script and verify there are no problems on the server
before proceeding to the next server. If there are problems, then resolve them before
continuing.
19. Check the status of the cluster after adding the servers.
20. Check that all servers have been added and have basic services running.
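For example, a quick way to confirm cluster and resource status from any node is:

# crsctl check cluster -all
# crsctl stat res -t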
Note:
It may be necessary to mount disk groups on the new servers. The following
commands must be run as the oracle user.
21. If you are running Oracle Grid Infrastructure releases 11.2.0.2 and later, then perform the
following steps:
a. Manually add the CLUSTER_INTERCONNECTS parameter to the SPFILE for each Oracle
ASM instance.
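A sketch of the change, where the interconnect IP addresses and instance names are placeholders for your environment, is:

SQL> ALTER SYSTEM SET cluster_interconnects='192.168.10.1' SCOPE=spfile SID='+ASM1';
SQL> ALTER SYSTEM SET cluster_interconnects='192.168.10.2' SCOPE=spfile SID='+ASM2';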
Manually configure cell alerts on the new storage servers. Use the settings on the original
storage servers as a guide. To view the settings on the original storage servers, use a
command similar to the following:
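As a sketch, the current settings can be listed with CellCLI through dcli (the cell_group file name is illustrative):

# dcli -g cell_group -l root "cellcli -e list cell attributes name,smtpServer,smtpFromAddr,smtpToAddr,notificationMethod,notificationPolicy"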
To configure alert notification on the new storage servers, use a command similar to the
following:
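A sketch of the configuration command, where the new_cell_group file name and the SMTP values are placeholders, is shown below; it uses the backslash escaping described in the note that follows:

# dcli -g new_cell_group -l root "cellcli -e ALTER CELL smtpServer=\'mail.example.com\', \
smtpFromAddr=\'exadata.alerts@example.com\', smtpToAddr=\'dba@example.com\', \
notificationMethod=\'mail\', notificationPolicy=\'critical,warning,clear\'"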
Note:
The backslash character (\) is used as an escape character for the dcli utility,
and as a line continuation character in the preceding command.
If you are running Oracle Database release 12c or later, you also have to change
permissions for files ending in uppercase O, in addition to files ending in zero.
2. This step is required for Oracle Database 11g only. If you are running Oracle Database
12c, you can skip this step because the directory has already been created.
Create the ORACLE_BASE directory for the database owner, if it is different from the Oracle
Grid Infrastructure software owner (Grid user) using the following commands:
3. Run the following command to set ownership of the emocmrsp file in the Oracle
Database $ORACLE_HOME directory:
4. This step is required for Oracle Database 11g only. If you are running Oracle Database
12c, then you can skip this step because the values are entered on the command line.
Create a response file, add-db-nodes.rsp, as the oracle owner to add the new servers
similar to the following:
RESPONSEFILE_VERSION=2.2.1.0.0
CLUSTER_NEW_NODES={dm02db01,dm02db02,dm02db03,dm02db04,dm02db05, \
dm02db06,dm02db07,dm02db08}
Note:
The lines listing the server names should appear on one continuous line. They are
wrapped in the documentation due to page limitations.
5. Add the Oracle Database ORACLE_HOME directory to the new servers by running the
addNode.sh script from an existing server as the database owner user.
• If you are running Oracle Grid Infrastructure 11g:
$ cd $ORACLE_HOME/oui/bin
$ ./addNode.sh -silent -responseFile /path/to/add-db-nodes.rsp
• If you are running Oracle Grid Infrastructure 12c, then you specify the nodes on the
command line. The syntax is:
For example:
$ cd $Grid_home/addnode
$ ./addnode.sh -silent \
  "CLUSTER_NEW_NODES={dm02db01,dm02db02,dm02db03,dm02db04,dm02db05,dm02db06,dm02db07,dm02db08}" \
  -ignoreSysPrereqs -ignorePrereq
6. Ensure the $ORACLE_HOME/oui/oraparam.ini file has the memory settings that match
the parameters set in the Oracle Grid Infrastructure home.
7. Run the root.sh script on each server when prompted as the root user using the dcli
utility.
In the preceding command, new_db_nodes is the file with the list of new database servers.
8. Verify the ORACLE_HOME directories have been added to the new servers.
The command must be run for all servers and instances, substituting the server name and
instance name, as appropriate.
Note:
If the command fails, then ensure any files that were created, such as redo log
files, are cleaned up. The deleteInstance command does not clean log files or
data files that were created by the addInstance command.
5. Ensure the original configuration summary report from the original cluster deployment is
updated to include all servers. This document should include the calibrate and network
verifications for the new rack, and the RDMA Network Fabric cable checks.
6. Conduct a power-off test, if possible. If the new Exadata Storage Servers cannot be
powered off, then verify that the new database servers with the new instances can be
powered off and powered on, and that all processes start automatically.
Note:
Ensure the Oracle ASM disk rebalance process has completed for all disk
groups. Connect to an Oracle ASM instance and issue the following command:
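A sketch of such a check from SQL*Plus, connected as SYSASM, is shown below; the rebalance has completed for all disk groups when no rows are returned:

SQL> SELECT * FROM GV$ASM_OPERATION;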
• HugePages settings
8. Incorporate the new cell and database servers into Oracle Auto Service Request (ASR).
9. Update Oracle Enterprise Manager Cloud Control to include the new nodes.
Related Topics
• Verifying InfiniBand Network Configuration
• About Oracle Auto Service Request
• Verify RoCE Cabling on Oracle Exadata Database Machine X8M-2 and X8M-8 Servers
(My Oracle Support Doc ID 2587717.1)
• Oracle Exadata Database Machine Exachk (My Oracle Support Doc ID 1070954.1)
4
Multi-Rack Cabling Tables for Oracle Exadata
X9M and Later Models
This section contains multi-rack cabling tables for Oracle Exadata X9M and later models,
which use RoCE Network Fabric.
• Understanding Multi-Rack Cabling for X9M and Later Model Racks
Up to 14 racks (X9M and later models) can be cabled together without external RDMA
Network Fabric switches.
• Preparing for Multi-Rack Cabling with X9M and Later Model Racks
• Two-Rack Cabling for X9M and Later Model Racks
• Three-Rack Cabling for X9M and Later Model Racks
• Four-Rack Cabling for X9M and Later Model Racks
• Five-Rack Cabling for X9M and Later Model Racks
• Six-Rack Cabling for X9M and Later Model Racks
• Seven-Rack Cabling for X9M and Later Model Racks
• Eight-Rack Cabling for X9M and Later Model Racks
• Nine-Rack Cabling for X9M and Later Model Racks
• Ten-Rack Cabling for X9M and Later Model Racks
• Eleven-Rack Cabling for X9M and Later Model Racks
• Twelve-Rack Cabling for X9M and Later Model Racks
• Thirteen-Rack Cabling for X9M and Later Model Racks
• Fourteen-Rack Cabling for X9M and Later Model Racks
4.1 Understanding Multi-Rack Cabling for X9M and Later Model Racks
The procedures in this section assume the racks are adjacent to each other, standard routing
in raised floor is used, and there are no obstacles in the raised floor. If these assumptions are
not correct for your environment, then longer cables may be required for the connections.
Note:
By default, Oracle Exadata Database Machine racks do not include spare cables or a
third RoCE Network Fabric switch. To extend these racks, you must order the
required cables and RoCE Network Fabric switch.
The following diagram shows the default RDMA Network Fabric architecture for a single-rack
system. Each rack has two leaf switches, with eight connections between the leaf switches.
The database servers and storage servers are each connected to both leaf switches. Each
server contains a dual-port RDMA Network Fabric card, with port 1 connected to the lower leaf
switch and port 2 connected to the upper leaf switch.
Figure: Single rack. The lower and upper leaf switches are joined by 8 inter-switch links, and each database server (1 to n) and storage server (1 to m) connects to both leaf switches.
To connect up to 14 racks (X9M and later models) together, use the following general
approach:
1. Remove the eight existing inter-switch connections between the leaf switches on each
rack.
2. From each leaf switch, evenly distribute 14 connections to the spine switches in all of the
interconnected racks.
The 14 connections use the 8 ports that were previously used for the inter-switch
connections and 6 additional free ports on each leaf switch.
Note:
For X9M-8 systems with three database servers and 11 storage servers only, the
database servers and storage servers require 23 leaf switch ports, which leaves only
13 inter-switch links on each leaf switch. Consequently, these systems are limited to
a maximum of 13 interconnected racks.
The resulting RoCE Network Fabric for a typical 2 rack system is illustrated in the following
diagram:
Figure: Two interconnected racks (Rack 1 and Rack 2). Each leaf switch has 7 links to the spine switch in each rack, and each database server (1 to n) and storage server (1 to m) connects to both leaf switches in its own rack.
As shown in the preceding diagram, every leaf switch has 7 connections to every spine switch.
The leaf switches are not directly interconnected with other leaf switches, and the spine
switches are not directly interconnected with each other.
As the number of racks increases, the inter-switch connections from every leaf switch are
evenly distributed across all of the spine switches.
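To make the even-distribution rule concrete, the following Python sketch (an illustrative aid only, not a replacement for the cabling tables later in this chapter) computes how the 14 spine-bound links from a single leaf switch are spread across the spine switches of an N-rack system. The function name and output format are arbitrary; the 14-link figure comes from the 8 former leaf-to-leaf ports plus the 6 additional free ports described above, and drops to 13 for the X9M-8 configuration noted earlier.

def uplinks_per_spine(num_racks, uplinks_per_leaf=14):
    # Spread the spine-bound links from one leaf switch as evenly as
    # possible across the spine switches (one spine switch per rack).
    base, extra = divmod(uplinks_per_leaf, num_racks)
    return [base + 1 if spine < extra else base for spine in range(num_racks)]

for racks in (2, 3, 4, 14):
    print(racks, "racks:", uplinks_per_spine(racks))
# 2 racks:  [7, 7]          (matches the two-rack diagram above)
# 3 racks:  [5, 5, 4]
# 4 racks:  [4, 4, 3, 3]
# 14 racks: [1, 1, ..., 1]  (one link from each leaf switch to every spine switch)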
4.2 Preparing for Multi-Rack Cabling with X9M and Later Model Racks
Racks can be added together to increase system capacity and performance. When cabling
racks together, note the following:
• The cable lengths shown in this document assume the racks are adjacent to each other. If
the racks are not adjacent, or there are obstacles in the raised floor, or if you use overhead
cabling, then longer cables may be required. For optical cables, the maximum supported
cable length is 100 meters. For copper cables, the maximum supported cable length is 5
meters.
• Oracle recommends that the names for the servers include the rack unit number. This
helps identify the server during diagnostics.
• When completing Oracle Exadata Deployment Assistant (OEDA) for the additional rack,
you are prompted for SCAN addresses. However, these SCAN addresses are not used
because the SCAN addresses from the original rack are used. Manually remove the new
SCAN addresses from the generated installation files.
• The software owner account names and group names, as well as their identifiers, must
match the names and identifiers of the original rack.
• If the additional grid disks are used with existing disk groups, then ensure that the grid disk
sizes for the new rack are the same as the grid disk sizes on the original rack.
• For multi-rack configurations containing up to 14 racks, a spine switch must exist in
each rack in order to interconnect the RoCE Network Fabric.
Perform the following tasks before cabling racks together:
1. Determine the number of racks that will be cabled together.
2. Order the parts needed to connect the racks.
To extend Oracle Exadata racks with RoCE Network Fabric, for each rack being added you
must order extra cables, transceivers for longer cables, and a RoCE Network Fabric spine
switch, if one is required.
When connecting four or more racks, or if you need longer cables for your environment,
you must purchase additional 10 meter or 20 meter fiber cables with two QSFP28 SR
transceivers to connect each end. The QSFP28 SR transceivers are needed for fiber
cables over 5 meters in length.
For multi-rack configurations containing up to 14 racks, the following table outlines the
cables needed to interconnect the racks:
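As a rough planning aid to accompany that parts table (and not a substitute for it), the following Python sketch estimates the number of leaf-to-spine cables in a fully interconnected system. It assumes two leaf switches per rack and 14 spine-bound links per leaf switch, as described earlier in this chapter; cable lengths and the choice between copper cables and optical cables with QSFP28 transceivers still depend on the physical placement of the racks.

def total_interswitch_cables(num_racks, leaves_per_rack=2, uplinks_per_leaf=14):
    # Every leaf switch in every rack carries 14 links to the spine
    # switches, so the cable count grows linearly with the rack count.
    return num_racks * leaves_per_rack * uplinks_per_leaf

for racks in (2, 3, 8, 14):
    print(f"{racks} racks: {total_interswitch_cables(racks)} leaf-to-spine cables")
# 2 racks:  56 cables (4 leaf switches x 14 links each)
# 14 racks: 392 cables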
4.3 Two-Rack Cabling for X9M and Later Model Racks
Note:
• The following conventions are used in the cabling notation for connecting multiple
racks together:
– The abbreviation for the first rack is R1, the second rack is R2, and so on.
– LL identifies a lower leaf switch and UL identifies an upper leaf switch.
– SS identifies the spine switch, which is located in U1 on all racks.
– A specific switch is identified by combining abbreviations. For example, R1LL
identifies the lower leaf switch (LL) on the first rack (R1).
• The leaf switches are located as follows:
– At rack unit 20 (U20) and 22 (U22) in 2-socket systems (Oracle Exadata
Rack X9M-2 and later models).
– At rack unit 21 (U21) and rack unit 23 (U23) in 8-socket systems (Oracle
Exadata X9M-8).
• The cable lengths shown in the following lists assume that the racks are adjacent
to each other, the cables are routed through a raised floor, and there are no
obstacles in the routing between the racks. If the racks are not adjacent, or use
overhead cabling trays, then they may require longer cable lengths. Cable
lengths up to 100 meters are supported.
• Only optical cables (with additional transceivers) are supported for lengths
greater than 5 meters.
• For X9M-8 systems with three database servers and 11 storage servers only, port
30 on the leaf switches is connected to a database server and is not used as an
inter-switch link. Consequently, for these systems only, ignore the connections to
port number 30 on every leaf switch in the following tables. This adjustment
leaves only 13 inter-switch links on each leaf switch and only applies to X9M-8
systems with three database servers and 11 storage servers.
The following tables contain details for all of the RoCE Network Fabric cabling connections in a
two-rack system.
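For readers working through these tables programmatically, the following short Python sketch is provided only as a reading aid for the switch-naming convention described in the preceding note; the function and dictionary names are arbitrary. It expands identifiers such as R1LL or R2SS into plain descriptions, with rack-unit locations shown for both 2-socket and X9M-8 racks.

import re

SWITCH_KINDS = {
    "LL": "lower leaf switch (U20 on 2-socket racks, U21 on X9M-8)",
    "UL": "upper leaf switch (U22 on 2-socket racks, U23 on X9M-8)",
    "SS": "spine switch (U1)",
}

def describe_switch(switch_id):
    # R<n> identifies the rack; LL, UL, or SS identifies the switch type.
    match = re.fullmatch(r"R(\d+)(LL|UL|SS)", switch_id)
    if match is None:
        raise ValueError(f"unrecognized switch identifier: {switch_id}")
    rack, kind = match.groups()
    return f"rack {rack}, {SWITCH_KINDS[kind]}"

print(describe_switch("R1LL"))  # rack 1, lower leaf switch (U20 ...)
print(describe_switch("R2SS"))  # rack 2, spine switch (U1)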
Table 4-1 Leaf Switch Connections for the First Rack in a Two-Rack System
Table 4-2 Leaf Switch Connections for the Second Rack in a Two-Rack System
The following table contains all of the RoCE Network Fabric cabling connections from the
previous tables. In this table, the connections are sorted by the spine switch port location.
Table 4-3 Two-Rack System Connections Sorted By The Spine Switch Port Location
4.4 Three-Rack Cabling for X9M and Later Model Racks
Note:
• The following conventions are used in the cabling notation for connecting multiple
racks together:
– The abbreviation for the first rack is R1, the second rack is R2, and so on.
– LL identifies a lower leaf switch and UL identifies an upper leaf switch.
– SS identifies the spine switch, which is located in U1 on all racks.
– A specific switch is identified by combining abbreviations. For example, R1LL
identifies the lower leaf switch (LL) on the first rack (R1).
• The leaf switches are located as follows:
– At rack unit 20 (U20) and 22 (U22) in 2-socket systems (Oracle Exadata
Rack X9M-2 and later models).
– At rack unit 21 (U21) and rack unit 23 (U23) in 8-socket systems (Oracle
Exadata X9M-8).
• The cable lengths shown in the following lists assume that the racks are adjacent
to each other, the cables are routed through a raised floor, and there are no
obstacles in the routing between the racks. If the racks are not adjacent, or use
overhead cabling trays, then they may require longer cable lengths. Cable
lengths up to 100 meters are supported.
• Only optical cables (with additional transceivers) are supported for lengths
greater than 5 meters.
• For X9M-8 systems with three database servers and 11 storage servers only, port
30 on the leaf switches is connected to a database server and is not used as an
inter-switch link. Consequently, for these systems only, ignore the connections to
port number 30 on every leaf switch in the following tables. This adjustment
leaves only 13 inter-switch links on each leaf switch and only applies to X9M-8
systems with three database servers and 11 storage servers.
The following tables contain details for all of the RoCE Network Fabric cabling connections in a
three-rack system.
Table 4-4 Leaf Switch Connections for the First Rack in a Three-Rack System
Table 4-5 Leaf Switch Connections for the Second Rack in a Three-Rack System
Table 4-6 Leaf Switch Connections for the Third Rack in a Three-Rack System
The following table contains all of the RoCE Network Fabric cabling connections from the
previous tables. In this table, the connections are sorted by the spine switch port location.
Table 4-7 Three-Rack System Connections Sorted By The Spine Switch Port Location
4.5 Four-Rack Cabling for X9M and Later Model Racks
Note:
• The following conventions are used in the cabling notation for connecting multiple
racks together:
– The abbreviation for the first rack is R1, the second rack is R2, and so on.
– LL identifies a lower leaf switch and UL identifies an upper leaf switch.
– SS identifies the spine switch, which is located in U1 on all racks.
– A specific switch is identified by combining abbreviations. For example, R1LL
identifies the lower leaf switch (LL) on the first rack (R1).
• The leaf switches are located as follows:
– At rack unit 20 (U20) and 22 (U22) in 2-socket systems (Oracle Exadata
Rack X9M-2 and later models).
– At rack unit 21 (U21) and rack unit 23 (U23) in 8-socket systems (Oracle
Exadata X9M-8).
• The cable lengths shown in the following lists assume that the racks are adjacent
to each other, the cables are routed through a raised floor, and there are no
obstacles in the routing between the racks. If the racks are not adjacent, or use
overhead cabling trays, then they may require longer cable lengths. Cable
lengths up to 100 meters are supported.
• Only optical cables (with additional transceivers) are supported for lengths
greater than 5 meters.
• For X9M-8 systems with three database servers and 11 storage servers only, port
30 on the leaf switches is connected to a database server and is not used as an
inter-switch link. Consequently, for these systems only, ignore the connections to
port number 30 on every leaf switch in the following tables. This adjustment
leaves only 13 inter-switch links on each leaf switch and only applies to X9M-8
systems with three database servers and 11 storage servers.
The following tables contain details for all of the RoCE Network Fabric cabling connections in a
four-rack system.
Table 4-8 Leaf Switch Connections for the First Rack in a Four-Rack System
Table 4-9 Leaf Switch Connections for the Second Rack in a Four-Rack System
Table 4-10 Leaf Switch Connections for the Third Rack in a Four-Rack System
Table 4-11 Leaf Switch Connections for the Fourth Rack in a Four-Rack System
The following table contains all of the RoCE Network Fabric cabling connections from the
previous tables. In this table, the connections are sorted by the spine switch port location.
Table 4-12 Four-Rack System Connections Sorted By The Spine Switch Port Location
4.6 Five-Rack Cabling for X9M and Later Model Racks
Note:
• The following conventions are used in the cabling notation for connecting multiple
racks together:
– The abbreviation for the first rack is R1, the second rack is R2, and so on.
– LL identifies a lower leaf switch and UL identifies an upper leaf switch.
– SS identifies the spine switch, which is located in U1 on all racks.
– A specific switch is identified by combining abbreviations. For example, R1LL
identifies the lower leaf switch (LL) on the first rack (R1).
• The leaf switches are located as follows:
– At rack unit 20 (U20) and 22 (U22) in 2-socket systems (Oracle Exadata
Rack X9M-2 and later models).
– At rack unit 21 (U21) and rack unit 23 (U23) in 8-socket systems (Oracle
Exadata X9M-8).
• The cable lengths shown in the following lists assume that the racks are adjacent
to each other, the cables are routed through a raised floor, and there are no
obstacles in the routing between the racks. If the racks are not adjacent, or use
overhead cabling trays, then they may require longer cable lengths. Cable
lengths up to 100 meters are supported.
• Only optical cables (with additional transceivers) are supported for lengths
greater than 5 meters.
• For X9M-8 systems with three database servers and 11 storage servers only, port
30 on the leaf switches is connected to a database server and is not used as an
inter-switch link. Consequently, for these systems only, ignore the connections to
port number 30 on every leaf switch in the following tables. This adjustment
leaves only 13 inter-switch links on each leaf switch and only applies to X9M-8
systems with three database servers and 11 storage servers.
The following tables contain details for all of the RoCE Network Fabric cabling connections in a
five-rack system.
Table 4-13 Leaf Switch Connections for the First Rack in a Five-Rack System
Table 4-14 Leaf Switch Connections for the Second Rack in a Five-Rack System
Table 4-15 Leaf Switch Connections for the Third Rack in a Five-Rack System
Table 4-16 Leaf Switch Connections for the Fourth Rack in a Five-Rack System
Table 4-17 Leaf Switch Connections for the Fifth Rack in a Five-Rack System
The following table contains all of the RoCE Network Fabric cabling connections from the
previous tables. In this table, the connections are sorted by the spine switch port location.
Table 4-18 Five-Rack System Connections Sorted By The Spine Switch Port Location
4.7 Six-Rack Cabling for X9M and Later Model Racks
Note:
• The following conventions are used in the cabling notation for connecting multiple
racks together:
– The abbreviation for the first rack is R1, the second rack is R2, and so on.
– LL identifies a lower leaf switch and UL identifies an upper leaf switch.
– SS identifies the spine switch, which is located in U1 on all racks.
– A specific switch is identified by combining abbreviations. For example, R1LL
identifies the lower leaf switch (LL) on the first rack (R1).
• The leaf switches are located as follows:
– At rack unit 20 (U20) and 22 (U22) in 2-socket systems (Oracle Exadata
Rack X9M-2 and later models).
– At rack unit 21 (U21) and rack unit 23 (U23) in 8-socket systems (Oracle
Exadata X9M-8).
• The cable lengths shown in the following lists assume that the racks are adjacent
to each other, the cables are routed through a raised floor, and there are no
obstacles in the routing between the racks. If the racks are not adjacent, or use
overhead cabling trays, then they may require longer cable lengths. Cable
lengths up to 100 meters are supported.
• Only optical cables (with additional transceivers) are supported for lengths
greater than 5 meters.
• For X9M-8 systems with three database servers and 11 storage servers only, port
30 on the leaf switches is connected to a database server and is not used as an
inter-switch link. Consequently, for these systems only, ignore the connections to
port number 30 on every leaf switch in the following tables. This adjustment
leaves only 13 inter-switch links on each leaf switch and only applies to X9M-8
systems with three database servers and 11 storage servers.
The following tables contain details for all of the RoCE Network Fabric cabling connections in a
six-rack system.
Table 4-19 Leaf Switch Connections for the First Rack in a Six-Rack System
Table 4-20 Leaf Switch Connections for the Second Rack in a Six-Rack System
Table 4-21 Leaf Switch Connections for the Third Rack in a Six-Rack System
Table 4-22 Leaf Switch Connections for the Fourth Rack in a Six-Rack System
Table 4-23 Leaf Switch Connections for the Fifth Rack in a Six-Rack System
Table 4-24 Leaf Switch Connections for the Sixth Rack in a Six-Rack System
The following table contains all of the RoCE Network Fabric cabling connections from the
previous tables. In this table, the connections are sorted by the spine switch port location.
Table 4-25 Six-Rack System Connections Sorted By The Spine Switch Port Location
4.8 Seven-Rack Cabling for X9M and Later Model Racks
Note:
• The following conventions are used in the cabling notation for connecting multiple
racks together:
– The abbreviation for the first rack is R1, the second rack is R2, and so on.
– LL identifies a lower leaf switch and UL identifies an upper leaf switch.
– SS identifies the spine switch, which is located in U1 on all racks.
– A specific switch is identified by combining abbreviations. For example, R1LL
identifies the lower leaf switch (LL) on the first rack (R1).
• The leaf switches are located as follows:
– At rack unit 20 (U20) and 22 (U22) in 2-socket systems (Oracle Exadata
Rack X9M-2 and later models).
– At rack unit 21 (U21) and rack unit 23 (U23) in 8-socket systems (Oracle
Exadata X9M-8).
• The cable lengths shown in the following lists assume that the racks are adjacent
to each other, the cables are routed through a raised floor, and there are no
obstacles in the routing between the racks. If the racks are not adjacent, or use
overhead cabling trays, then they may require longer cable lengths. Cable
lengths up to 100 meters are supported.
• Only optical cables (with additional transceivers) are supported for lengths
greater than 5 meters.
• For X9M-8 systems with three database servers and 11 storage servers only, port
30 on the leaf switches is connected to a database server and is not used as an
inter-switch link. Consequently, for these systems only, ignore the connections to
port number 30 on every leaf switch in the following tables. This adjustment
leaves only 13 inter-switch links on each leaf switch and only applies to X9M-8
systems with three database servers and 11 storage servers.
The following tables contain details for all of the RoCE Network Fabric cabling connections in a
seven-rack system.
Table 4-26 Leaf Switch Connections for the First Rack in a Seven-Rack System
Table 4-27 Leaf Switch Connections for the Second Rack in a Seven-Rack System
Table 4-28 Leaf Switch Connections for the Third Rack in a Seven-Rack System
Table 4-29 Leaf Switch Connections for the Fourth Rack in a Seven-Rack System
Table 4-30 Leaf Switch Connections for the Fifth Rack in a Seven-Rack System
Table 4-31 Leaf Switch Connections for the Sixth Rack in a Seven-Rack System
Table 4-32 Leaf Switch Connections for the Seventh Rack in a Seven-Rack System
The following table contains all of the RoCE Network Fabric cabling connections from the
previous tables. In this table, the connections are sorted by the spine switch port location.
Table 4-33 Seven-Rack System Connections Sorted By The Spine Switch Port Location
4.9 Eight-Rack Cabling for X9M and Later Model Racks
Note:
• The following conventions are used in the cabling notation for connecting multiple
racks together:
– The abbreviation for the first rack is R1, the second rack is R2, and so on.
– LL identifies a lower leaf switch and UL identifies an upper leaf switch.
– SS identifies the spine switch, which is located in U1 on all racks.
– A specific switch is identified by combining abbreviations. For example, R1LL
identifies the lower leaf switch (LL) on the first rack (R1).
• The leaf switches are located as follows:
– At rack unit 20 (U20) and 22 (U22) in 2-socket systems (Oracle Exadata
Rack X9M-2 and later models).
– At rack unit 21 (U21) and rack unit 23 (U23) in 8-socket systems (Oracle
Exadata X9M-8).
• The cable lengths shown in the following lists assume that the racks are adjacent
to each other, the cables are routed through a raised floor, and there are no
obstacles in the routing between the racks. If the racks are not adjacent, or use
overhead cabling trays, then they may require longer cable lengths. Cable
lengths up to 100 meters are supported.
• Only optical cables (with additional transceivers) are supported for lengths
greater than 5 meters.
• For X9M-8 systems with three database servers and 11 storage servers only, port
30 on the leaf switches is connected to a database server and is not used as an
inter-switch link. Consequently, for these systems only, ignore the connections to
port number 30 on every leaf switch in the following tables. This adjustment
leaves only 13 inter-switch links on each leaf switch and only applies to X9M-8
systems with three database servers and 11 storage servers.
The following tables contain details for all of the RoCE Network Fabric cabling connections in
an eight-rack system.
Table 4-34 Leaf Switch Connections for the First Rack in an Eight-Rack System
Table 4-35 Leaf Switch Connections for the Second Rack in an Eight-Rack System
Table 4-36 Leaf Switch Connections for the Third Rack in an Eight-Rack System
Table 4-37 Leaf Switch Connections for the Fourth Rack in an Eight-Rack System
Table 4-38 Leaf Switch Connections for the Fifth Rack in an Eight-Rack System
Table 4-39 Leaf Switch Connections for the Sixth Rack in an Eight-Rack System
Table 4-40 Leaf Switch Connections for the Seventh Rack in an Eight-Rack System
Table 4-41 Leaf Switch Connections for the Eighth Rack in an Eight-Rack System
The following table contains all of the RoCE Network Fabric cabling connections from the
previous tables. In this table, the connections are sorted by the spine switch port location.
Table 4-42 Eight-Rack System Connections Sorted By The Spine Switch Port Location
4.10 Nine-Rack Cabling for X9M and Later Model Racks
Note:
• The following conventions are used in the cabling notation for connecting multiple
racks together:
– The abbreviation for the first rack is R1, the second rack is R2, and so on.
– LL identifies a lower leaf switch and UL identifies an upper leaf switch.
– SS identifies the spine switch, which is located in U1 on all racks.
– A specific switch is identified by combining abbreviations. For example, R1LL
identifies the lower leaf switch (LL) on the first rack (R1).
• The leaf switches are located as follows:
– At rack unit 20 (U20) and 22 (U22) in 2-socket systems (Oracle Exadata
Rack X9M-2 and later models).
– At rack unit 21 (U21) and rack unit 23 (U23) in 8-socket systems (Oracle
Exadata X9M-8).
• The cable lengths shown in the following lists assume that the racks are adjacent
to each other, the cables are routed through a raised floor, and there are no
obstacles in the routing between the racks. If the racks are not adjacent, or use
overhead cabling trays, then they may require longer cable lengths. Cable
lengths up to 100 meters are supported.
• Only optical cables (with additional transceivers) are supported for lengths
greater than 5 meters.
• For X9M-8 systems with three database servers and 11 storage servers only, port
30 on the leaf switches is connected to a database server and is not used as an
inter-switch link. Consequently, for these systems only, ignore the connections to
port number 30 on every leaf switch in the following tables. This adjustment
leaves only 13 inter-switch links on each leaf switch and only applies to X9M-8
systems with three database servers and 11 storage servers.
The following tables contain details for all of the RoCE Network Fabric cabling connections in a
nine-rack system.
Table 4-43 Leaf Switch Connections for the First Rack in a 9 Rack System
Table 4-44 Leaf Switch Connections for the Second Rack in a 9 Rack System
Table 4-45 Leaf Switch Connections for the Third Rack in a 9 Rack System
Table 4-46 Leaf Switch Connections for the Fourth Rack in a 9 Rack System
Table 4-47 Leaf Switch Connections for the Fifth Rack in a 9 Rack System
Table 4-48 Leaf Switch Connections for the Sixth Rack in a 9 Rack System
Table 4-49 Leaf Switch Connections for the Seventh Rack in a 9 Rack System
Table 4-50 Leaf Switch Connections for the Eighth Rack in a 9 Rack System
Table 4-51 Leaf Switch Connections for the Ninth Rack in a 9 Rack System
The following table contains all of the RoCE Network Fabric cabling connections from the
previous tables. In this table, the connections are sorted by the spine switch port location.
Table 4-52 Nine-Rack System Connections Sorted By The Spine Switch Port Location
4.11 Ten-Rack Cabling for X9M and Later Model Racks
Note:
• The following conventions are used in the cabling notation for connecting multiple
racks together:
– The abbreviation for the first rack is R1, the second rack is R2, and so on.
– LL identifies a lower leaf switch and UL identifies an upper leaf switch.
– SS identifies the spine switch, which is located in U1 on all racks.
– A specific switch is identified by combining abbreviations. For example, R1LL
identifies the lower leaf switch (LL) on the first rack (R1).
• The leaf switches are located as follows:
– At rack unit 20 (U20) and 22 (U22) in 2-socket systems (Oracle Exadata
Rack X9M-2 and later models).
– At rack unit 21 (U21) and rack unit 23 (U23) in 8-socket systems (Oracle
Exadata X9M-8).
• The cable lengths shown in the following lists assume that the racks are adjacent
to each other, the cables are routed through a raised floor, and there are no
obstacles in the routing between the racks. If the racks are not adjacent, or use
overhead cabling trays, then they may require longer cable lengths. Cable
lengths up to 100 meters are supported.
• Only optical cables (with additional transceivers) are supported for lengths
greater than 5 meters.
• For X9M-8 systems with three database servers and 11 storage servers only, port
30 on the leaf switches is connected to a database server and is not used as an
inter-switch link. Consequently, interconnecting 10 or more of these racks
requires modification to the following cabling tables. Contact Oracle for further
details.
The following tables contain details for all of the RoCE Network Fabric cabling connections in a
ten-rack system.
Table 4-53 Leaf Switch Connections for the First Rack in a 10 Rack System
Table 4-54 Leaf Switch Connections for the Second Rack in a 10 Rack System
Table 4-55 Leaf Switch Connections for the Third Rack in a 10 Rack System
Table 4-56 Leaf Switch Connections for the Fourth Rack in a 10 Rack System
Table 4-57 Leaf Switch Connections for the Fifth Rack in a 10 Rack System
Table 4-58 Leaf Switch Connections for the Sixth Rack in a 10 Rack System
Table 4-59 Leaf Switch Connections for the Seventh Rack in a 10 Rack System
Table 4-60 Leaf Switch Connections for the Eighth Rack in a 10 Rack System
Table 4-61 Leaf Switch Connections for the Ninth Rack in a 10 Rack System
Table 4-62 Leaf Switch Connections for the Tenth Rack in a 10 Rack System
The following table contains all of the RoCE Network Fabric cabling connections from the
previous tables. In this table, the connections are sorted by the spine switch port location.
Table 4-63 Ten-Rack System Connections Sorted By The Spine Switch Port Location
4.12 Eleven-Rack Cabling for X9M and Later Model Racks
Note:
• The following conventions are used in the cabling notation for connecting multiple
racks together:
– The abbreviation for the first rack is R1, the second rack is R2, and so on.
– LL identifies a lower leaf switch and UL identifies an upper leaf switch.
– SS identifies the spine switch, which is located in U1 on all racks.
– A specific switch is identified by combining abbreviations. For example, R1LL
identifies the lower leaf switch (LL) on the first rack (R1).
• The leaf switches are located as follows:
– At rack unit 20 (U20) and 22 (U22) in 2-socket systems (Oracle Exadata
Rack X9M-2 and later models).
– At rack unit 21 (U21) and rack unit 23 (U23) in 8-socket systems (Oracle
Exadata X9M-8).
• The cable lengths shown in the following lists assume that the racks are adjacent
to each other, the cables are routed through a raised floor, and there are no
obstacles in the routing between the racks. If the racks are not adjacent, or use
overhead cabling trays, then they may require longer cable lengths. Cable
lengths up to 100 meters are supported.
• Only optical cables (with additional transceivers) are supported for lengths
greater than 5 meters.
• For X9M-8 systems with three database servers and 11 storage servers only, port
30 on the leaf switches is connected to a database server and is not used as an
inter-switch link. Consequently, interconnecting 10 or more of these racks
requires modification to the following cabling tables. Contact Oracle for further
details.
The following tables contain details for all of the RoCE Network Fabric cabling connections in
an eleven-rack system.
Table 4-64 Leaf Switch Connections for the First Rack in an 11 Rack System
Table 4-65 Leaf Switch Connections for the Second Rack in an 11 Rack System
Table 4-66 Leaf Switch Connections for the Third Rack in an 11 Rack System
Table 4-67 Leaf Switch Connections for the Fourth Rack in an 11 Rack System
Table 4-68 Leaf Switch Connections for the Fifth Rack in an 11 Rack System
Table 4-69 Leaf Switch Connections for the Sixth Rack in an 11 Rack System
Table 4-70 Leaf Switch Connections for the Seventh Rack in an 11 Rack System
Table 4-71 Leaf Switch Connections for the Eighth Rack in an 11 Rack System
Table 4-72 Leaf Switch Connections for the Ninth Rack in an 11 Rack System
Table 4-73 Leaf Switch Connections for the Tenth Rack in an 11 Rack System
Table 4-74 Leaf Switch Connections for the Eleventh Rack in an 11 Rack System
The following table contains all of the RoCE Network Fabric cabling connections from the
previous tables. In this table, the connections are sorted by the spine switch port location.
Table 4-75 Eleven-Rack System Connections Sorted By The Spine Switch Port
Location
4.13 Twelve-Rack Cabling for X9M and Later Model Racks
Note:
• The following conventions are used in the cabling notation for connecting multiple
racks together:
– The abbreviation for the first rack is R1, the second rack is R2, and so on.
– LL identifies a lower leaf switch and UL identifies an upper leaf switch.
– SS identifies the spine switch, which is located in U1 on all racks.
– A specific switch is identified by combining abbreviations. For example, R1LL
identifies the lower leaf switch (LL) on the first rack (R1).
• The leaf switches are located as follows:
– At rack unit 20 (U20) and 22 (U22) in 2-socket systems (Oracle Exadata
Rack X9M-2 and later models).
– At rack unit 21 (U21) and rack unit 23 (U23) in 8-socket systems (Oracle
Exadata X9M-8).
• The cable lengths shown in the following lists assume that the racks are adjacent
to each other, the cables are routed through a raised floor, and there are no
obstacles in the routing between the racks. If the racks are not adjacent, or use
overhead cabling trays, then they may require longer cable lengths. Cable
lengths up to 100 meters are supported.
• Only optical cables (with additional transceivers) are supported for lengths
greater than 5 meters.
• For X9M-8 systems with three database servers and 11 storage servers only, port
30 on the leaf switches is connected to a database server and is not used as an
inter-switch link. Consequently, interconnecting 10 or more of these racks
requires modification to the following cabling tables. Contact Oracle for further
details.
The following tables contain details for all of the RoCE Network Fabric cabling connections in a
twelve-rack system.
Table 4-76 Leaf Switch Connections for the First Rack in a 12 Rack System
Table 4-77 Leaf Switch Connections for the Second Rack in a 12 Rack System
Table 4-78 Leaf Switch Connections for the Third Rack in a 12 Rack System
Table 4-79 Leaf Switch Connections for the Fourth Rack in a 12 Rack System
Table 4-80 Leaf Switch Connections for the Fifth Rack in a 12 Rack System
Table 4-81 Leaf Switch Connections for the Sixth Rack in a 12 Rack System
Table 4-82 Leaf Switch Connections for the Seventh Rack in a 12 Rack System
Table 4-83 Leaf Switch Connections for the Eighth Rack in a 12 Rack System
Table 4-84 Leaf Switch Connections for the Ninth Rack in a 12 Rack System
Table 4-85 Leaf Switch Connections for the Tenth Rack in a 12 Rack System
Table 4-86 Leaf Switch Connections for the Eleventh Rack in a 12 Rack System
Table 4-87 Leaf Switch Connections for the Twelfth Rack in a 12 Rack System
The following table contains all of the RoCE Network Fabric cabling connections from the
previous tables. In this table, the connections are sorted by the spine switch port location.
Table 4-88 Twelve-Rack System Connections Sorted By The Spine Switch Port
Location
4.14 Thirteen-Rack Cabling for X9M and Later Model Racks
Note:
• The following conventions are used in the cabling notation for connecting multiple
racks together:
– The abbreviation for the first rack is R1, the second rack is R2, and so on.
– LL identifies a lower leaf switch and UL identifies an upper leaf switch.
– SS identifies the spine switch, which is located in U1 on all racks.
– A specific switch is identified by combining abbreviations. For example, R1LL
identifies the lower leaf switch (LL) on the first rack (R1).
• The leaf switches are located as follows:
– At rack unit 20 (U20) and 22 (U22) in 2-socket systems (Oracle Exadata
Rack X9M-2 and later models).
– At rack unit 21 (U21) and rack unit 23 (U23) in 8-socket systems (Oracle
Exadata X9M-8).
• The cable lengths shown in the following lists assume that the racks are adjacent
to each other, the cables are routed through a raised floor, and there are no
obstacles in the routing between the racks. If the racks are not adjacent, or use
overhead cabling trays, then they may require longer cable lengths. Cable
lengths up to 100 meters are supported.
• Only optical cables (with additional transceivers) are supported for lengths
greater than 5 meters.
• For X9M-8 systems with three database servers and 11 storage servers only, port
30 on the leaf switches is connected to a database server and is not used as an
inter-switch link. Consequently, interconnecting 10 or more of these racks
requires modification to the following cabling tables. Contact Oracle for further
details.
The following tables contain details for all of the RoCE Network Fabric cabling connections in a
thirteen-rack system.
Table 4-89 Leaf Switch Connections for the First Rack in a 13 Rack System
Table 4-90 Leaf Switch Connections for the Second Rack in a 13 Rack System
Table 4-91 Leaf Switch Connections for the Third Rack in a 13 Rack System
Table 4-92 Leaf Switch Connections for the Fourth Rack in a 13 Rack System
Table 4-93 Leaf Switch Connections for the Fifth Rack in a 13 Rack System
Table 4-94 Leaf Switch Connections for the Sixth Rack in a 13 Rack System
Table 4-95 Leaf Switch Connections for the Seventh Rack in a 13 Rack System
Table 4-96 Leaf Switch Connections for the Eighth Rack in a 13 Rack System
Table 4-97 Leaf Switch Connections for the Ninth Rack in a 13 Rack System
Table 4-98 Leaf Switch Connections for the Tenth Rack in a 13 Rack System
Table 4-99 Leaf Switch Connections for the Eleventh Rack in a 13 Rack System
Table 4-100 Leaf Switch Connections for the Twelfth Rack in a 13 Rack System
Table 4-101 Leaf Switch Connections for the Thirteenth Rack in a 13 Rack System
The following table contains all of the RoCE Network Fabric cabling connections from the
previous tables. In this table, the connections are sorted by the spine switch port location.
Table 4-102 Thirteen-Rack System Connections Sorted By The Spine Switch Port
Location
4.15 Fourteen-Rack Cabling for X9M and Later Model Racks
Note:
• The following conventions are used in the cabling notation for connecting multiple
racks together:
– The abbreviation for the first rack is R1, the second rack is R2, and so on.
– LL identifies a lower leaf switch and UL identifies an upper leaf switch.
– SS identifies the spine switch, which is located in U1 on all racks.
– A specific switch is identified by combining abbreviations. For example, R1LL
identifies the lower leaf switch (LL) on the first rack (R1).
• The leaf switches are located as follows:
– At rack unit 20 (U20) and 22 (U22) in 2-socket systems (Oracle Exadata
Rack X9M-2 and later models).
– At rack unit 21 (U21) and rack unit 23 (U23) in 8-socket systems (Oracle
Exadata X9M-8).
• The cable lengths shown in the following lists assume that the racks are adjacent
to each other, the cables are routed through a raised floor, and there are no
obstacles in the routing between the racks. If the racks are not adjacent, or use
overhead cabling trays, then they may require longer cable lengths. Cable
lengths up to 100 meters are supported.
• Only optical cables (with additional transceivers) are supported for lengths
greater than 5 meters.
• For X9M-8 systems with three database servers and 11 storage servers only, the
database servers and storage server require 23 leaf switch ports, which leaves
only 13 inter-switch links on each leaf switch. Consequently, these systems
cannot support 14 interconnected racks.
The following tables contain details for all of the RoCE Network Fabric cabling connections in a
fourteen-rack system.
Table 4-103 Leaf Switch Connections for the First Rack in a 14 Rack System
Table 4-104 Leaf Switch Connections for the Second Rack in a 14 Rack System
Table 4-105 Leaf Switch Connections for the Third Rack in a 14 Rack System
Table 4-106 Leaf Switch Connections for the Fourth Rack in a 14 Rack System
Table 4-107 Leaf Switch Connections for the Fifth Rack in a 14 Rack System
Table 4-108 Leaf Switch Connections for the Sixth Rack in a 14 Rack System
Table 4-109 Leaf Switch Connections for the Seventh Rack in a 14 Rack System
Table 4-110 Leaf Switch Connections for the Eighth Rack in a 14 Rack System
Table 4-111 Leaf Switch Connections for the Ninth Rack in a 14 Rack System
Table 4-112 Leaf Switch Connections for the Tenth Rack in a 14 Rack System
Table 4-113 Leaf Switch Connections for the Eleventh Rack in a 14 Rack System
Table 4-114 Leaf Switch Connections for the Twelfth Rack in a 14 Rack System
Table 4-115 Leaf Switch Connections for the Thirteenth Rack in a 14 Rack System
Table 4-116 Leaf Switch Connections for the Fourteenth Rack in a 14 Rack System
The following table contains all of the RoCE Network Fabric cabling connections from the
previous tables. In this table, the connections are sorted by the spine switch port location.
Table 4-117 Fourteen-Rack System Connections Sorted By The Spine Switch Port
Location
5 Multi-Rack Cabling Tables for Oracle Exadata X8M Models
This section contains multi-rack cabling tables for Oracle Exadata X8M models, which use
RoCE Network Fabric.
• Understanding Multi-Rack Cabling for X8M Racks
Up to eight racks can be cabled together without external RDMA Network Fabric switches.
• Two-Rack Cabling for X8M Racks
This section provides the cabling tables to connect two X8M racks together, both of which
use RoCE Network Fabric.
• Three-Rack Cabling for X8M Racks
This section provides the cabling tables to connect three X8M racks together using RoCE
Network Fabric.
• Four-Rack Cabling for X8M Racks
This section provides the cabling tables to connect four X8M racks together, all of which
use RoCE Network Fabric.
• Five-Rack Cabling for X8M Racks
This section provides the cabling tables to connect five (5) X8M racks together, all of which
use RoCE Network Fabric.
• Six-Rack Cabling for X8M Racks
This section provides the cabling tables to connect six (6) X8M racks together, all of which
use RoCE Network Fabric.
• Seven-Rack Cabling for X8M Racks
This section provides the cabling tables to connect seven (7) X8M racks together, all of
which use RoCE Network Fabric.
• Eight-Rack Cabling for X8M Racks
This section provides the cabling tables to connect eight (8) X8M racks together, all of
which use RoCE Network Fabric.
Understanding Multi-Rack Cabling for X8M Racks
The procedures in this section assume that the racks are adjacent to each other, standard
routing through a raised floor is used, and there are no obstacles in the raised floor. If these
assumptions are not correct for your environment, then longer cables may be required for the
connections.
Note:
By default, Oracle Exadata Database Machine X8M racks do not include spare
cables or a third RoCE Network Fabric switch. To extend these racks, you must order
the required cables and RoCE Network Fabric switch.
The following diagram shows the default RDMA Network Fabric architecture for a single-rack
system. Each rack has two leaf switches, with eight connections between the leaf switches.
The database servers and storage servers are each connected to both leaf switches. Each
server contains a dual-port RDMA Network Fabric card, with port 1 connected to the lower leaf
switch and port 2 connected to the upper leaf switch.
(Illustration: a single rack with eight inter-switch links between the Lower Leaf Switch and the
Upper Leaf Switch; Database Servers 1 through n and Storage Servers 1 through m each
connect to both leaf switches.)
When connecting up to eight racks together, remove the eight existing inter-switch connections
between the two leaf switches in each rack. Then, from each leaf switch, distribute eight
connections over the spine switches in all racks. In multi-rack environments, the leaf switches
inside a rack are no longer directly interconnected, as shown in the following graphic:
(Illustration: Rack 1 and Rack 2, where each leaf switch has four links to the spine switch in its
own rack and four links to the spine switch in the other rack, and the database servers and
storage servers in each rack connect to both of that rack's leaf switches.)
As shown in the preceding graphic, each leaf switch in rack 1 has the following connections:
• Four connections to its internal spine switch
• Four connections to the spine switch in rack 2
The spine switch in rack 1 has the following connections:
• Four connections to each leaf switch in rack 1
• Four connections to each leaf switch in rack 2
As the number of racks increases from two to eight, the pattern continues as shown in the
following graphic:
Figure 5-1 Connections Between Spine Switches and Leaf Switches for up to 8 Racks
As shown in the preceding graphic, each leaf switch has eight inter-switch connections
distributed over all spine switches. Each spine switch has 16 inter-switch connections
distributed over all leaf switches. The leaf switches are not directly interconnected with other
leaf switches, and the spine switches are not directly interconnected with the other spine
switches.
• Preparing for Multi-Rack Cabling with X8M Racks
• Cabling Multiple Oracle Exadata X8M Racks
Note:
Oracle Exadata Eighth Racks, Quarter Racks, and Elastic Configurations are
connected to other racks in the same fashion as larger racks are connected to each
other. In other words, a spine switch must exist in each rack in order to
interconnect with other racks.
• New Oracle Exadata Racks are added to an existing cluster: The new rack
configuration requires unique host names and IP addresses for the new Oracle
Exadata racks. The IP addresses on the same subnet cannot conflict with the existing
systems.
• Two existing Oracle Exadata Racks are clustered together: You can assign host
names and IP addresses only if Oracle Exadata racks are already assigned unique
host names and IP addresses, or the entire cluster must be reconfigured. The
machines must be on the same subnet and not have conflicting IP addresses.
6. Ensure the IP addresses for the new servers are in the same subnet, and do not overlap
with the currently-installed servers (see the sketch after this list).
7. Check that the firmware on the original switches is at the same level as the new switches
by using the sh ver command.
It is highly recommended, though not mandatory, to use the same firmware version on all
of the switches. If the firmware is not at the same level, you can apply a firmware patch to
bring the switches up to the same firmware level.
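The subnet check in step 6 can be scripted. The following is a minimal sketch, not part of the
Oracle procedure; the function name and addresses are illustrative. It flags new server
addresses that fall outside the existing subnet or collide with addresses that are already in use.

import ipaddress

def check_new_addresses(subnet, existing, new):
    # Report new addresses that are outside the subnet or already in use.
    net = ipaddress.ip_network(subnet)
    in_use = {ipaddress.ip_address(a) for a in existing}
    problems = []
    for addr in map(ipaddress.ip_address, new):
        if addr not in net:
            problems.append(f"{addr} is not in subnet {net}")
        if addr in in_use:
            problems.append(f"{addr} conflicts with an existing server")
    return problems

# Example with documentation-range addresses:
# check_new_addresses("203.0.113.0/24",
#                     existing=["203.0.113.10", "203.0.113.11"],
#                     new=["203.0.113.20", "203.0.113.11"])
# returns ["203.0.113.11 conflicts with an existing server"]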
Note:
To extend Oracle Exadata racks with RoCE Network Fabric, you must order cables,
transceivers if needed, and a RoCE Network Fabric switch, if they are not already
available.
In the following steps, the number in parentheses also indicates the number of cables required.
1. Split each leaf switch uplink bundle by the number of spine switches (or racks) in such a
way that the total count adds up to 8. This split is represented in parentheses for each
example.
Example 1: For two racks, take the 8 uplinks from each leaf switch and split them evenly by 2.
Four uplinks from each leaf switch go to the rack1-spine switch, and four uplinks from each leaf
switch go to the rack2-spine switch (4 + 4 for each leaf switch).
Example 2: For three racks, take the 8 uplinks from each leaf switch and split them as evenly as
possible by 3. Three uplinks go to the rack1-spine switch, three uplinks go to the rack2-spine
switch, and two uplinks go to the rack3-spine switch (3 + 3 + 2 for each leaf switch).
2. Starting from the first available port on a different spine switch, round-robin the above split
scheme for each leaf switch and spine switch.
For example, for three racks:
• rack1-leaf1 switch starts from rack1-spine switch for first split, rack2-spine switch
for second split, rack3-spine switch for third split
• rack2-leaf1 switch starts from rack2-spine switch for first split, rack3-spine switch
for second split, rack1-spine for third split
• rack3-leaf1 switch starts from rack3-spine switch for first split, rack1-spine switch
for second split, rack2-spine switch for third split
• and so on...
3. After walking through all leaf switch uplinks in each case, you will have used all spine
switch ports between port 5 and 20 inclusive.
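The split-and-round-robin scheme in the preceding steps can be expressed compactly. The
following is a minimal sketch, not part of the Oracle procedure, and it assumes that both leaf
switches in a rack start their round-robin at their own rack's spine switch, which is a
simplification of the published tables.

def split_uplinks(num_racks, uplinks_per_leaf=8):
    # Split 8 uplinks as evenly as possible across the spine switches,
    # for example [4, 4] for two racks and [3, 3, 2] for three racks.
    base, extra = divmod(uplinks_per_leaf, num_racks)
    return [base + 1 if i < extra else base for i in range(num_racks)]

def cabling_plan(num_racks):
    # Map each leaf switch to the number of uplinks it sends to each spine switch.
    split = split_uplinks(num_racks)
    plan = {}
    for rack in range(1, num_racks + 1):
        for leaf in ("LL", "UL"):
            # Round-robin starting at this rack's own spine switch.
            spines = [((rack - 1 + i) % num_racks) + 1 for i in range(num_racks)]
            plan[f"R{rack}-{leaf}"] = {f"R{s}-SS": n for s, n in zip(spines, split)}
    return plan

# cabling_plan(3)["R1-LL"] returns {"R1-SS": 3, "R2-SS": 3, "R3-SS": 2},
# and every spine switch ends up with 16 inter-switch links in total.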
The remaining topics in this section provide detailed cabling information for cabling up to 8
racks together.
Two-Rack Cabling for X8M Racks
Note:
The following conventions were used in the cabling notation for connecting multiple
racks together.
• The spine switch (also referred to as SS) is in U1 for all racks.
• The leaf switches are referred to as Lower Leaf (LL) and Upper Leaf (UL).
• The leaf switches are located as follows:
– At rack unit 20 (U20) and 22 (U22) in Oracle Exadata X8M-2 or Storage
Expansion Rack X8M-2
– At rack unit 21 (U21) and rack unit 23 (U23) in Oracle Exadata X8M-8
• The cable lengths shown in the following lists assume that the racks are adjacent
to each other, the cables are routed through a raised floor, and there are no
obstacles in the routing between the racks. If the racks are not adjacent, or use
overhead cabling trays, then they may require longer cable lengths. Cable
lengths up to 100 meters are supported.
• Only optical cables (with additional transceivers) are supported for lengths
greater than 5 meters.
• For X8M-8 systems with three database servers and 11 storage servers only, you
must adjust the following multi-rack cabling information. On such systems only,
port 30 on the leaf switches is connected to a database server and is not used as
an inter-switch link. Consequently, in the following tables, any connection to port
number 30 on any leaf switch must instead connect to port number 34 on the
same leaf switch. For example, R1-UL-P30 must change to R1-UL-P34, R1-LL-
P30 must change to R1-LL-P34, and so on. These changes only apply to X8M-8
systems with three database servers and 11 storage servers.
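The X8M-8 adjustment described in the preceding note is a mechanical substitution. The
following is a minimal sketch, not part of the Oracle procedure; the function name is illustrative.

import re

def remap_x8m8_port(endpoint):
    # On X8M-8 systems with three database servers and 11 storage servers,
    # leaf switch port 30 is used by a database server, so any P30 endpoint
    # in the cabling tables moves to P34 on the same leaf switch.
    # Spine switch (SS) endpoints are left unchanged.
    return re.sub(r"^(R\d+-(?:UL|LL))-P30$", r"\1-P34", endpoint)

# remap_x8m8_port("R1-LL-P30") returns "R1-LL-P34";
# remap_x8m8_port("R1-SS-P30") is returned unchanged.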
The following illustration shows the cable connections for the two spine switches (R1 SS and
R2 SS) when cabling two racks together:
(Illustration: port-by-port connections on the two spine switches; the same connections are
listed in Table 5-1 and Table 5-2.)
The following table describes the cable connections for the first spine switch (R1 SS) when
cabling two racks together:
Table 5-1 Leaf Switch Connections for the First Rack in a Two-Rack System
The following table describes the cable connections for the second spine switch (R2 SS) when
cabling two racks together:
Table 5-2 Leaf Switch Connections for the Second Rack in a Two-Rack System
Three-Rack Cabling for X8M Racks
Note:
The following conventions were used in the cabling notation for connecting multiple
racks together.
• The spine switch (also referred to as SS) is in U1 for all racks.
• The leaf switches are referred to as Lower Leaf (LL) and Upper Leaf (UL).
• The leaf switches are located as follows:
– At rack unit 20 (U20) and 22 (U22) in Oracle Exadata X8M-2 or Storage
Expansion Rack X8M-2
– At rack unit 21 (U21) and rack unit 23 (U23) in Oracle Exadata X8M-8
• The cable lengths shown in the following lists assume that the racks are adjacent
to each other, the cables are routed through a raised floor, and there are no
obstacles in the routing between the racks. If the racks are not adjacent, or use
overhead cabling trays, then they may require longer cable lengths. Cable
lengths up to 100 meters are supported.
• Only optical cables (with additional transceivers) are supported for lengths
greater than 5 meters.
• For X8M-8 systems with three database servers and 11 storage servers only, you
must adjust the following multi-rack cabling information. On such systems only,
port 30 on the leaf switches is connected to a database server and is not used as
an inter-switch link. Consequently, in the following tables, any connection to port
number 30 on any leaf switch must instead connect to port number 34 on the
same leaf switch. For example, R1-UL-P30 must change to R1-UL-P34, R1-LL-
P30 must change to R1-LL-P34, and so on. These changes only apply to X8M-8
systems with three database servers and 11 storage servers.
The following illustration shows the cable connections for the three spine switches (Rack1-
spine, Rack2-spine, and Rack3-spine) when cabling three racks together:
(Illustration: port-by-port connections on the three spine switches; the same connections are
listed in Table 5-3 through Table 5-5.)
The following table describes the cable connections for the first spine switch (R1-SS) when
cabling three racks together:
Table 5-3 Leaf Switch Connections for the First Rack in a Three-Rack System
The following table describes the cable connections for the second spine switch (R2-SS) when
cabling three racks together:
Table 5-4 Leaf Switch Connections for the Second Rack in a Three-Rack System
The following table describes the cable connections for the third spine switch (R3-SS) when
cabling three racks together:
Table 5-5 Leaf Switch Connections for the Third Rack in a Three-Rack System
Four-Rack Cabling for X8M Racks
Note:
The following conventions were used in the cabling notation for connecting multiple
racks together.
• The spine switch (also referred to as SS) is in U1 for all racks.
• The leaf switches are referred to as Lower Leaf (LL) and Upper Leaf (UL).
• The leaf switches are located as follows:
– At rack unit 20 (U20) and 22 (U22) in Oracle Exadata X8M-2 or Storage
Expansion Rack X8M-2
– At rack unit 21 (U21) and rack unit 23 (U23) in Oracle Exadata X8M-8
• The cable lengths shown in the following lists assume that the racks are adjacent
to each other, the cables are routed through a raised floor, and there are no
obstacles in the routing between the racks. If the racks are not adjacent, or use
overhead cabling trays, then they may require longer cable lengths. Cable
lengths up to 100 meters are supported.
• Only optical cables (with additional transceivers) are supported for lengths
greater than 5 meters.
• For X8M-8 systems with three database servers and 11 storage servers only, you
must adjust the following multi-rack cabling information. On such systems only,
port 30 on the leaf switches is connected to a database server and is not used as
an inter-switch link. Consequently, in the following tables, any connection to port
number 30 on any leaf switch must instead connect to port number 34 on the
same leaf switch. For example, R1-UL-P30 must change to R1-UL-P34, R1-LL-
P30 must change to R1-LL-P34, and so on. These changes only apply to X8M-8
systems with three database servers and 11 storage servers.
The following illustration shows the cable connections for the four spine switches (Rack1-
spine, Rack2-spine, Rack3-spine, and Rack4-spine) when cabling four racks together:
(Illustration: port-by-port connections on the four spine switches; the same connections are
listed in Table 5-6 through Table 5-9.)
The following table describes the cable connections for the first spine switch (R1-SS) when
cabling four racks together:
Table 5-6 Leaf Switch Connections for the First Rack in a Four-Rack System
The following table describes the cable connections for the second spine switch (R2-SS) when
cabling four full racks together:
Table 5-7 Leaf Switch Connections for the Second Rack in a Four-Rack System
The following table describes the cable connections for the third spine switch (R3-SS) when
cabling four full racks together:
Table 5-8 Leaf Switch Connections for the Third Rack in a Four-Rack System
The following table describes the cable connections for the fourth spine switch (R4-SS) when
cabling four full racks together:
Table 5-9 Leaf Switch Connections for the Fourth Rack in a Four-Rack System
Five-Rack Cabling for X8M Racks
Note:
The following conventions were used in the cabling notation for connecting multiple
racks together.
• The spine switch (also referred to as SS) is in U1 for all racks.
• The leaf switches are referred to as Lower Leaf (LL) and Upper Leaf (UL).
• The leaf switches are located as follows:
– At rack unit 20 (U20) and 22 (U22) in Oracle Exadata X8M-2 or Storage
Expansion Rack X8M-2
– At rack unit 21 (U21) and rack unit 23 (U23) in Oracle Exadata X8M-8
• The cable lengths shown in the following lists assume that the racks are adjacent
to each other, the cables are routed through a raised floor, and there are no
obstacles in the routing between the racks. If the racks are not adjacent, or use
overhead cabling trays, then they may require longer cable lengths. Cable
lengths up to 100 meters are supported.
• Only optical cables (with additional transceivers) are supported for lengths
greater than 5 meters.
• For X8M-8 systems with three database servers and 11 storage servers only, you
must adjust the following multi-rack cabling information. On such systems only,
port 30 on the leaf switches is connected to a database server and is not used as
an inter-switch link. Consequently, in the following tables, any connection to port
number 30 on any leaf switch must instead connect to port number 34 on the
same leaf switch. For example, R1-UL-P30 must change to R1-UL-P34, R1-LL-
P30 must change to R1-LL-P34, and so on. These changes only apply to X8M-8
systems with three database servers and 11 storage servers.
The following illustration shows the cable connections for the five spine switches when cabling
five racks together:
(Illustration: port-by-port connections on the five spine switches; the same connections are
listed in Table 5-10 through Table 5-14.)
The following table describes the cable connections for the first spine switch (R1-SS) when
cabling five racks together:
Table 5-10 Leaf Switch Connections for the First Rack in a Five-Rack System
The following table describes the cable connections for the second spine switch (R2-SS) when
cabling five full racks together:
Table 5-11 Leaf Switch Connections for the Second Rack in a Five-Rack System
The following table describes the cable connections for the third spine switch (R3-SS) when
cabling five full racks together:
Table 5-12 Leaf Switch Connections for the Third Rack in a Five-Rack System
The following table describes the cable connections for the fourth spine switch (R4-SS) when
cabling five full racks together:
Table 5-13 Leaf Switch Connections for the Fourth Rack in a Five-Rack System
The following table describes the cable connections for the fifth spine switch (R5-SS) when
cabling five full racks together:
Table 5-14 Leaf Switch Connections for the Fifth Rack in a Five-Rack System
Six-Rack Cabling for X8M Racks
Note:
The following conventions were used in the cabling notation for connecting multiple
racks together.
• The spine switch (also referred to as SS) is in U1 for all racks.
• The leaf switches are referred to as Lower Leaf (LL) and Upper Leaf (UL).
• The leaf switches are located as follows:
– At rack unit 20 (U20) and 22 (U22) in Oracle Exadata X8M-2 or Storage
Expansion Rack X8M-2
– At rack unit 21 (U21) and rack unit 23 (U23) in Oracle Exadata X8M-8
• The cable lengths shown in the following lists assume that the racks are adjacent
to each other, the cables are routed through a raised floor, and there are no
obstacles in the routing between the racks. If the racks are not adjacent, or use
overhead cabling trays, then they may require longer cable lengths. Cable
lengths up to 100 meters are supported.
• Only optical cables (with additional transceivers) are supported for lengths
greater than 5 meters.
• For X8M-8 systems with three database servers and 11 storage servers only, you
must adjust the following multi-rack cabling information. On such systems only,
port 30 on the leaf switches is connected to a database server and is not used as
an inter-switch link. Consequently, in the following tables, any connection to port
number 30 on any leaf switch must instead connect to port number 34 on the
same leaf switch. For example, R1-UL-P30 must change to R1-UL-P34, R1-LL-
P30 must change to R1-LL-P34, and so on. These changes only apply to X8M-8
systems with three database servers and 11 storage servers.
The following illustration shows the cable connections for the 6 spine switches when cabling
six racks together:
(Illustration: port-by-port connections on the six spine switches; the same connections are
listed in Table 5-15 through Table 5-20.)
The following table describes the cable connections for the first spine switch (R1-SS) when
cabling six racks together:
Table 5-15 Leaf Switch Connections for the First Rack in a Six-Rack System
The following table describes the cable connections for the second spine switch (R2-SS) when
cabling six full racks together:
Table 5-16 Leaf Switch Connections for the Second Rack in a Six-Rack System
The following table describes the cable connections for the third spine switch (R3-SS) when
cabling six full racks together:
Table 5-17 Leaf Switch Connections for the Third Rack in a Six-Rack System
The following table describes the cable connections for the fourth spine switch (R4-SS) when
cabling six full racks together:
Table 5-18 Leaf Switch Connections for the Fourth Rack in a Six-Rack System
The following table describes the cable connections for the fifth spine switch (R5-SS) when
cabling six full racks together:
Table 5-19 Leaf Switch Connections for the Fifth Rack in a Six-Rack System
The following table describes the cable connections for the sixth spine switch (R6-SS) when
cabling six full racks together:
Table 5-20 Leaf Switch Connections for the Sixth Rack in a Six-Rack System
Seven-Rack Cabling for X8M Racks
Note:
The following conventions were used in the cabling notation for connecting multiple
racks together.
• The spine switch (also referred to as SS) is in U1 for all racks.
• The leaf switches are referred to as Lower Leaf (LL) and Upper Leaf (UL).
• The leaf switches are located as follows:
– At rack unit 20 (U20) and 22 (U22) in Oracle Exadata X8M-2 or Storage
Expansion Rack X8M-2
– At rack unit 21 (U21) and rack unit 23 (U23) in Oracle Exadata X8M-8
• The cable lengths shown in the following lists assume that the racks are adjacent
to each other, the cables are routed through a raised floor, and there are no
obstacles in the routing between the racks. If the racks are not adjacent, or use
overhead cabling trays, then they may require longer cable lengths. Cable
lengths up to 100 meters are supported.
• Only optical cables (with additional transceivers) are supported for lengths
greater than 5 meters.
• For X8M-8 systems with three database servers and 11 storage servers only, you
must adjust the following multi-rack cabling information. On such systems only,
port 30 on the leaf switches is connected to a database server and is not used as
an inter-switch link. Consequently, in the following tables, any connection to port
number 30 on any leaf switch must instead connect to port number 34 on the
same leaf switch. For example, R1-UL-P30 must change to R1-UL-P34, R1-LL-
P30 must change to R1-LL-P34, and so on. These changes only apply to X8M-8
systems with three database servers and 11 storage servers.
The following diagrams show the cable connections for the 7 spine switches when cabling
seven racks together:
(Illustration: port-by-port connections on the seven spine switches; the same connections are
listed in Table 5-21 through Table 5-27.)
The following table describes the cable connections for the first spine switch (R1-SS) when
cabling seven racks together:
Table 5-21 Leaf Switch Connections for the First Rack in a Seven-Rack System
The following table describes the cable connections for the second spine switch (R2-SS) when
cabling seven full racks together:
Table 5-22 Leaf Switch Connections for the Second Rack in a Seven-Rack System
The following table describes the cable connections for the third spine switch (R3-SS) when
cabling seven full racks together:
Table 5-23 Leaf Switch Connections for the Third Rack in a Seven-Rack System
The following table describes the cable connections for the fourth spine switch (R4-SS) when
cabling seven full racks together:
Table 5-24 Leaf Switch Connections for the Fourth Rack in a Seven-Rack System
The following table describes the cable connections for the fifth spine switch (R5-SS) when
cabling seven full racks together:
Table 5-25 Leaf Switch Connections for the Fifth Rack in a Seven-Rack System
The following table describes the cable connections for the sixth spine switch (R6-SS) when
cabling seven full racks together:
Table 5-26 Leaf Switch Connections for the Sixth Rack in a Seven-Rack System
The following table describes the cable connections for the seventh spine switch (R7-SS) when
cabling seven full racks together:
Table 5-27 Leaf Switch Connections for the Seventh Rack in a Seven-Rack System
Eight-Rack Cabling for X8M Racks
Note:
The following conventions were used in the cabling notation for connecting multiple
racks together.
• The spine switch (also referred to as SS) is in U1 for all racks.
• The leaf switches are referred to as Lower Leaf (LL) and Upper Leaf (UL).
• The leaf switches are located as follows:
– At rack unit 20 (U20) and 22 (U22) in Oracle Exadata X8M-2 or Storage
Expansion Rack X8M-2
– At rack unit 21 (U21) and rack unit 23 (U23) in Oracle Exadata X8M-8
• The cable lengths shown in the following lists assume that the racks are adjacent
to each other, the cables are routed through a raised floor, and there are no
obstacles in the routing between the racks. If the racks are not adjacent, or use
overhead cabling trays, then they may require longer cable lengths. Cable
lengths up to 100 meters are supported.
• Only optical cables (with additional transceivers) are supported for lengths
greater than 5 meters.
• For X8M-8 systems with three database servers and 11 storage servers only, you
must adjust the following multi-rack cabling information. On such systems only,
port 30 on the leaf switches is connected to a database server and is not used as
an inter-switch link. Consequently, in the following tables, any connection to port
number 30 on any leaf switch must instead connect to port number 34 on the
same leaf switch. For example, R1-UL-P30 must change to R1-UL-P34, R1-LL-
P30 must change to R1-LL-P34, and so on. These changes only apply to X8M-8
systems with three database servers and 11 storage servers.
The following diagrams show the cable connections for the 8 spine switches when cabling
eight racks together:
(Illustration: port-by-port connections on the eight spine switches; the same connections are
listed in Table 5-28 through Table 5-35.)
The following table describes the cable connections for the first spine switch (R1-SS) when
cabling eight racks together:
Table 5-28 Leaf Switch Connections for the First Rack in an Eight-Rack System
The following table describes the cable connections for the second spine switch (R2-SS) when
cabling eight full racks together:
Table 5-29 Leaf Switch Connections for the Second Rack in an Eight-Rack System
The following table describes the cable connections for the third spine switch (R3-SS) when
cabling eight full racks together:
Table 5-30 Leaf Switch Connections for the Third Rack in an Eight-Rack System
The following table describes the cable connections for the fourth spine switch (R4-SS) when
cabling eight full racks together:
Table 5-31 Leaf Switch Connections for the Fourth Rack in an Eight-Rack System
The following table describes the cable connections for the fifth spine switch (R5-SS) when
cabling eight full racks together:
Table 5-32 Leaf Switch Connections for the Fifth Rack in an Eight-Rack System
The following table describes the cable connections for the sixth spine switch (R6-SS) when
cabling eight full racks together:
Table 5-33 Leaf Switch Connections for the Sixth Rack in an Eight-Rack System
The following table describes the cable connections for the seventh spine switch (R7-SS) when
cabling eight full racks together:
Table 5-34 Leaf Switch Connections for the Seventh Rack in an Eight-Rack System
The following table describes the cable connections for the eighth spine switch (R8-SS) when
cabling eight full racks together:
Table 5-35 Leaf Switch Connections for the Eighth Rack in an Eight-Rack System
6 Multi-Rack Cabling Tables for Oracle Exadata Rack Models with InfiniBand Network Fabric (X2 to X8)
This section contains multi-rack cabling tables for Oracle Exadata Rack models that use
InfiniBand Network Fabric. This includes Oracle Exadata Rack models from X2 to X8.
• Understanding Multi-Rack Cabling for Racks with InfiniBand Network Fabric
Up to eight racks can be cabled together without external RDMA Network Fabric switches.
• Two-Rack Cabling with InfiniBand Network Fabric
Review this information before cabling two racks together with InfiniBand Network Fabric.
• Three-Rack Cabling with InfiniBand Network Fabric
• Four-Rack Cabling with InfiniBand Network Fabric
• Five-Rack Cabling with InfiniBand Network Fabric
• Six-Rack Cabling with InfiniBand Network Fabric
• Seven-Rack Cabling with InfiniBand Network Fabric
• Eight-Rack Cabling with InfiniBand Network Fabric
Note:
Only for InfiniBand Network Fabric (X8 and earlier).
• For Eighth or Quarter Racks, which are the smallest Elastic Configurations,
follow the instructions in "Cabling Oracle Exadata Quarter Racks and Oracle
Exadata Eighth Racks with InfiniBand Network Fabric" for direct connection
without spine switch.
• For other racks (Half Rack, Full Rack, Elastic Configurations larger than Eighth or
Quarter Rack) install a spine switch and follow the standard N-rack cabling for
two or more interconnected racks, for example "Two-Rack Cabling with
InfiniBand Network Fabric."
Understanding Multi-Rack Cabling for Racks with InfiniBand Network Fabric
Note:
Oracle Exadata Database Machine X4-2 and later racks or Oracle Exadata
Database Machine X3-8 Full Racks with Exadata Storage Server X4-2L Servers
do not include spare cables or a third Sun Datacenter InfiniBand Switch 36
switch. To extend Oracle Exadata Database Machine X4-2 and later racks or
Oracle Exadata Database Machine X3-8 Full Racks with Exadata Storage Server
X4-2L Servers, you must order cables and a Sun Datacenter InfiniBand Switch
36 switch.
In a single rack, the two leaf switches are interconnected using seven connections. In addition,
each leaf switch has one connection to the spine switch. The leaf switches connect to the
spine switch as shown in the following graphic:
Figure 6-1 Connections Between Spine Switch and Leaf Switches in a Single Rack
The Oracle Database servers and Exadata Storage Servers connect to the leaf switches as
shown in the following graphic:
Figure 6-2 Connections Between Database Servers and Storage Servers and Leaf
Switches
(Illustration: Port A of Oracle Database Servers 1 through 4 and Exadata Storage Servers 1
through 7, together with Port B of Oracle Database Servers 5 through 8 and Exadata Storage
Servers 8 through 14, connects to one leaf switch (11 connections per group); the remaining
ports connect to the other leaf switch.)
When connecting up to eight racks together, remove the seven existing inter-switch
connections between the two leaf switches in each rack, as well as the two connections
between the leaf switches and the spine switch. Then, from each leaf switch, distribute eight
connections over the spine switches in all racks. In multi-rack environments, the leaf switches
inside a rack are no longer directly interconnected, as shown in the following graphic:
Figure 6-3 Connections Between Spine Switches and Leaf Switches Across Two Racks
As shown in the preceding graphic, each leaf switch in rack 1 connects to the following
switches:
• Four connections to its internal spine switch
• Four connections to the spine switch in rack 2
The spine switch in rack 1 connects to the following switches:
• Four connections to each leaf switch in rack 1
• Four connections to each leaf switch in rack 2
Figure 6-4 Connections Between Spine Switches and Leaf Switches for up to 8 Racks
As shown in the preceding graphic, each leaf switch has eight inter-switch connections
distributed over all spine switches. Each spine switch has 16 inter-switch connections
distributed over all leaf switches. The leaf switches are not directly interconnected with other
leaf switches, and the spine switches are not directly interconnected with the other spine
switches.
Note:
Cable lengths are specified for racks 1 through 8.
• When completing Oracle Exadata Deployment Assistant for the additional rack, you are
prompted for SCAN addresses. However, these SCAN addresses are not used because
the SCAN addresses from the original rack are used. Manually remove the new SCAN
addresses from the generated installation files.
• The software owner account names and group names, as well as their identifiers, must
match the names and identifiers of the original rack.
• If the additional grid disks are used with existing disk groups, then ensure the grid disk
sizes for the new rack are the same as the original rack.
• If the InfiniBand network consists of four or more racks cabled together, then disable the
Subnet Manager on the leaf switches.
• Verify the Master Subnet Manager is located on the spine switch.
• Oracle Exadata Database Machine Quarter Racks can be extended as follows:
– Connect two Oracle Exadata Database Machine Quarter Racks together. At least four
of the six ports reserved for external connectivity are open on each leaf switch. The six
ports are 5B, 6A, 6B, 7A, 7B, and 12A in each leaf switch. Maintain the existing seven
inter-switch links between the leaf switches within each rack. Connect the leaf
switches between the racks with two links each, using the ports reserved for external
connectivity.
– Connect one Oracle Exadata Database Machine Quarter Rack with one Oracle
Exadata Database Machine Half Rack or one Oracle Exadata Database Machine Full
Rack. At least four ports reserved for external connectivity are open on each leaf
switch. The spine switch in the Oracle Exadata Database Machine Half Rack or Oracle
Exadata Database Machine Full Rack remains as the spine switch. Maintain the
existing seven inter-switch links between the leaf switches within each rack. Connect
the leaf switches between the racks with two links each, using the ports reserved for
external connectivity.
– Connect one Oracle Exadata Database Machine Quarter Rack with two or more
Oracle Exadata Database Machine Half Racks or Oracle Exadata Database Machine
Full Racks. The racks are interconnected using a fat-tree topology. Connect each leaf
switch in the quarter rack to the spine switch of each half rack or full rack using two
links each. If there are more than four racks, then use one link instead of two. The
seven inter-switch links between the leaf switches in the quarter rack are removed.
Note:
To connect more than one quarter rack to additional racks, it is necessary to
purchase Sun Datacenter InfiniBand Switch 36 switches for the quarter
racks.
• If you are extending Oracle Exadata Database Machine X4-2 or later, or Oracle Exadata
Database Machine X3-8 Full Rack, or Oracle Exadata Database Machine X2-2 (with
X4170 and X4275 servers) half rack, then order the expansion kit that includes a Sun
Datacenter InfiniBand Switch 36 switch.
Perform the following tasks before cabling racks together:
1. Determine the number of racks that will be cabled together.
2. Count the spare cables from the kit, and existing inter-switch cables.
Note:
Oracle Exadata Database Machine X4-2 and later racks or Oracle Exadata
Database Machine X3-8 Full Racks with Exadata Storage Server X4-2L Servers
do not include spare cables or a third Sun Datacenter InfiniBand Switch 36
switch. To extend Oracle Exadata Database Machine X4-2 and later racks or
Oracle Exadata Database Machine X3-8 Full Racks with Exadata Storage Server
X4-2L Servers, you must order cables and a Sun Datacenter InfiniBand Switch
36 switch.
For Oracle Exadata Racks earlier than Oracle Exadata Database Machine X4-2, no
additional InfiniBand cables need to be purchased when connecting up to three Oracle
Exadata Database Machine Full Racks. The following table lists the spare cables for the
switch:
4. Determine a naming method for the rack prefixes. For example, if the original rack has the
prefix dbm01, then use the prefix dbm02 for the second rack, the prefix dbm03 for the third
rack, and so on.
5. Verify the racks have unique host names and IP addresses. All servers interconnected in
the racks must have unique names and IP addresses.
Server name and IP address conventions may differ in the following cases:
• Initial installation of all Oracle Exadata Database Machine Full Racks: System address
assignments and host names should be complete.
• New Oracle Exadata Database Machine Full Racks are added to an existing cluster:
The new rack configuration should require unique host names and IP addresses for
the new Oracle Exadata Database Machines. The IP addresses on the same subnet
cannot conflict with the existing systems.
• Two existing Oracle Exadata Database Machine Full Racks are clustered together:
You can assign host names and IP addresses only if Oracle Exadata Database
Machines are already assigned unique host names and IP addresses, or the entire
cluster must be reconfigured. The machines must be on the same subnet and not have
conflicting IP addresses.
6. Ensure the IP addresses for the new servers are in the same subnet, and do not overlap
with the currently-installed servers.
7. Ensure the firmware on the original switches is at the same level as the new switches
by using the nm2version command. If the firmware is not at the same level, then apply a
firmware patch.
6.1.2 Cabling Oracle Exadata Quarter Racks and Oracle Exadata Eighth
Racks with InfiniBand Network Fabric
Oracle Exadata Quarter Racks and Oracle Exadata Eighth Racks with InfiniBand Network
Fabric can be cabled as follows:
• Oracle Exadata Quarter Rack to Oracle Exadata Quarter Rack
• Oracle Exadata Quarter Rack to Oracle Exadata Half Rack, or multiple Oracle Exadata
Half Racks or Oracle Exadata Full Racks
• Oracle Exadata Quarter Rack to Oracle Exadata Full Rack, or multiple Oracle Exadata Full
Racks or Oracle Exadata Half Racks
• Oracle Exadata Eighth Rack to Oracle Exadata Eighth Rack
• Oracle Exadata Eighth Rack to Oracle Exadata Half Rack, or multiple Oracle Exadata Half
Racks or Oracle Exadata Full Racks
• Oracle Exadata Eighth Rack to Oracle Exadata Full Rack, or multiple Oracle Exadata Full
Racks or Oracle Exadata Half Racks
Note:
The following graphic shows the cable connections for two Oracle Exadata Quarter Racks. The
leaf switches within each rack maintain their existing seven connections. The leaf switches
interconnect between the racks with two links each using the ports reserved for external
connectivity.
(Illustration: two Oracle Exadata Quarter Racks, each retaining the seven inter-switch links
between its own leaf switches, with two links each between the leaf switches across the racks.)
The following graphic shows the cable connections from Oracle Exadata Quarter Rack to
Oracle Exadata Half Rack or Oracle Exadata Full Rack. The leaf switches within each rack
maintain their existing seven connections. The leaf switches interconnect between the racks
with two links each using the ports reserved for external connectivity.
Figure 6-6 Leaf and Spine Switch Connections Between a Quarter Rack and a Half or
Full Rack
The following graphic shows the cable connections from Oracle Exadata Quarter Rack to two
or more racks. The racks that connect to Oracle Exadata Quarter Rack must be all Oracle
Exadata Half Racks or Oracle Exadata Full Racks, interconnected using a fat-tree topology.
Each leaf switch in Oracle Exadata Quarter Rack connects to the spine switches in the other
half racks or full racks with two links each. If there are more than four racks, then use one link
instead of two. The seven inter-switch links between the leaf switches in the quarter rack are
removed.
Figure 6-7 Leaf and Spine Switch Connections for a Quarter Rack Connected to One or More Half or
Full Racks
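As a compact statement of the link-count rule just described, the following is a minimal sketch,
not from the Oracle documentation; it assumes that "more than four racks" counts all
interconnected racks, including the quarter rack.

def links_per_spine(total_racks):
    # Two links from each quarter-rack leaf switch to each half rack or full
    # rack spine switch, reduced to one link when more than four racks are
    # interconnected.
    return 2 if total_racks <= 4 else 1

# links_per_spine(3) returns 2; links_per_spine(5) returns 1.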
Two-Rack Cabling with InfiniBand Network Fabric
• In the following lists, the leaf switches are referred to as IB2 and IB3. Use the rack unit that
is appropriate for your system.
• In Oracle Exadata Database Machine X2-2 racks and Oracle Exadata Storage Expansion
Racks with Exadata Storage Server with Sun Fire X4270 M2 Servers, the leaf switches are
in U20 and U24, also referred to as IB2 and IB3, respectively.
• In Oracle Exadata Database Machine 8-socket (X8-8, X7-8, X6-8, X5-8, X4-8, X3-8 and
X2-8) Full Rack systems, the leaf switches are in U21 and U23, also referred to as IB2 and
IB3, respectively.
• In Oracle Exadata Database Machine X4-2 and later racks, Oracle Exadata Database
Machine X3-2 Full Racks and Oracle Exadata Storage Expansion Rack X3-2 racks, the
leaf switches are in U20 and U22, also referred to as IB2 and IB3, respectively.
• The cable lengths shown in the following lists assume that the racks are adjacent to each
other. If the racks are not adjacent or use overhead cabling trays, then they may require
longer cable lengths. Cable lengths up to 100 meters are supported.
The following table shows the cable connections for the first spine switch (R1-U1) when
cabling two full racks together.
Table 6-1 Leaf Switch Connections for the First Rack in a Two-Rack System
The following table shows the cable connections for the second spine switch (R2-U1) when
cabling two full racks together.
Table 6-2 Leaf Switch Connections for the Second Rack in a Two-Rack System
Related Topics
• Cabling Two Racks Together
Choose from the available methods based on your system specifications and operational
requirements.
Three-Rack Cabling with InfiniBand Network Fabric
Table 6-3 shows the cable connections for the first spine switch (R1-U1) when cabling three
racks together:
Table 6-3 Leaf Switch Connections for the First Rack in a Three-Rack System
Table 6-4 shows the cable connections for the second spine switch (R2-U1) when cabling three
racks together:
Table 6-4 Leaf Switch Connections for the Second Rack in a Three-Rack System
Table 6-5 shows the cable connections for the third spine switch (R3-U1) when cabling three
full racks together:
Table 6-5 Leaf Switch Connections for the Third Rack in a Three-Rack System
Four-Rack Cabling with InfiniBand Network Fabric
Table 6-6 Leaf Switch Connections for the First Rack in a Four-Rack System
Table 6-7 shows the cable connections for the second spine switch (R2-U1) when cabling four
full racks together:
Table 6-7 Leaf Switch Connections for the Second Rack in a Four-Rack System
Table 6-8 shows the cable connections for the third spine switch (R3-U1) when cabling four full
racks together:
Table 6-8 Leaf Switch Connections for the Third Rack in a Four-Rack System
Table 6-9 shows the cable connections for the fourth spine switch (R4-U1) when cabling four
full racks together:
Table 6-9 Leaf Switch Connections for the Fourth Rack in a Four-Rack System
Five-Rack Cabling with InfiniBand Network Fabric
Table 6-10 Leaf Switch Connections for the First Rack in a Five-Rack System
Table 6-11 shows the cable connections for the second spine switch (R2-U1) when cabling five
full racks together:
Table 6-11 Leaf Switch Connections for the Second Rack in a Five-Rack System
Table 6-12 shows the cable connections for the third spine switch (R3-U1) when cabling five
full racks together:
Table 6-12 Leaf Switch Connections for the Third Rack in a Five-Rack System
Table 6-13 shows the cable connections for the fourth spine switch (R4-U1) when cabling five
full racks together:
Table 6-13 Leaf Switch Connections for the Fourth Rack in a Five-Rack System
Table 6-14 shows the cable connections for the fifth spine switch (R5-U1) when cabling five full
racks together:
Table 6-14 Leaf Switch Connections for the Fifth Rack in a Five-Rack System
Six-Rack Cabling with InfiniBand Network Fabric
Table 6-15 shows the cable connections for the first spine switch (R1-U1) when cabling six full racks together:
Table 6-15 Leaf Switch Connections for the First Rack in a Six-Rack System
Table 6-16 shows the cable connections for the second spine switch (R2-U1) when cabling six
full racks together:
Table 6-16 Leaf Switch Connections for the Second Rack in a Six-Rack System
Table 6-17 shows the cable connections for the third spine switch (R3-U1) when cabling six full
racks together:
Table 6-17 Leaf Switch Connections for the Third Rack in a Six-Rack System
Table 6-18 shows the cable connections for the fourth spine switch (R4-U1) when cabling six
full racks together:
Table 6-18 Leaf Switch Connections for the Fourth Rack in a Six-Rack System
Table 6-19 shows the cable connections for the fifth spine switch (R5-U1) when cabling six full
racks together:
Table 6-19 Leaf Switch Connections for the Fifth Rack in a Six-Rack System
Table 6-20 shows the cable connections for the sixth spine switch (R6-U1) when cabling six full
racks together:
Table 6-20 Leaf Switch Connections for the Sixth Rack in a Six-Rack System
Seven-Rack Cabling with InfiniBand Network Fabric
Table 6-21 shows the cable connections for the first spine switch (R1-U1) when cabling seven full racks together:
Table 6-21 Leaf Switch Connections for the First Rack in a Seven-Rack System
Table 6-22 shows the cable connections for the second spine switch (R2-U1) when cabling
seven full racks together:
Table 6-22 Leaf Switch Connections for the Second Rack in a Seven-Rack System
Table 6-23 shows the cable connections for the third spine switch (R3-U1) when cabling seven
full racks together:
Table 6-23 Leaf Switch Connections for the Third Rack in a Seven-Rack System
Table 6-24 shows the cable connections for the fourth spine switch (R4-U1) when cabling
seven full racks together:
Table 6-24 Leaf Switch Connections for the Fourth Rack in a Seven-Rack System
Table 6-25 shows the cable connections for the fifth spine switch (R5-U1) when cabling seven
full racks together:
Table 6-25 Leaf Switch Connections for the Fifth Rack in a Seven-Rack System
Table 6-26 shows the cable connections for the sixth spine switch (R6-U1) when cabling seven
full racks together:
Table 6-26 Leaf Switch Connections for the Sixth Rack in a Seven-Rack System
Table 6-27 shows the cable connections for the seventh spine switch (R7-U1) when cabling
seven full racks together:
Table 6-27 Leaf Switch Connections for the Seventh Rack in a Seven-Rack System
Eight-Rack Cabling with InfiniBand Network Fabric
• In Oracle Exadata Database Machine X2-2 racks and Oracle Exadata Storage Expansion
Racks with Exadata Storage Server with Sun Fire X4270 M2 Servers, the leaf switches are
in U20 and U24, also referred to as IB2 and IB3, respectively.
• In Oracle Exadata Database Machine 8-socket (X2-8 and later) Full Rack systems, the leaf switches are in U21 and U23, also referred to as IB2 and IB3, respectively.
• In Oracle Exadata Database Machine X4-2 and later racks, or Oracle Exadata Database
Machine X3-2 Full Racks and Oracle Exadata Storage Expansion Rack X3-2 racks, the
leaf switches are in U20 and U22, also referred to as IB2 and IB3, respectively.
• The cable lengths shown in the tables assume that the racks are adjacent to each other. If the racks are not adjacent or use overhead cabling trays, then they may require longer cable lengths. Cable lengths of up to 100 meters are supported.
• Only optical cables are supported for lengths greater than 5 meters. (A brief illustrative check of these rules follows this list.)
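The two cabling rules in the notes above, that runs longer than 5 meters must use optical cables and that no run may exceed 100 meters, can be expressed as a simple check. The sketch below is illustrative only; the function name is hypothetical and is not part of any Oracle tooling.

# Illustrative helper applying the cable rules from the notes above:
# copper is usable only up to 5 meters; optical is required beyond that,
# up to the 100 meter maximum.

def select_cable_type(length_m):
    """Return the cable type required for a run of the given length."""
    if length_m > 100:
        raise ValueError("cable runs longer than 100 meters are not supported")
    if length_m > 5:
        return "optical"           # only optical cables beyond 5 meters
    return "copper or optical"     # short runs may use either


print(select_cable_type(3))    # -> copper or optical
print(select_cable_type(20))   # -> optical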
Table 6-28 shows the cable connections for the first spine switch (R1-U1) when cabling eight
racks together:
Table 6-28 Leaf Switch Connections for the First Rack in an Eight-Rack System
Table 6-29 shows the cable connections for the second spine switch (R2-U1) when cabling
eight full racks together:
Table 6-29 Leaf Switch Connections for the Second Rack in an Eight-Rack System
Table 6-30 shows the cable connections for the third spine switch (R3-U1) when cabling eight
full racks together:
Table 6-30 Leaf Switch Connections for the Third Rack in an Eight-Rack System
Table 6-31 shows the cable connections for the fourth spine switch (R4-U1) when cabling eight
full racks together:
Table 6-31 Leaf Switch Connections for the Fourth Rack in an Eight-Rack System
Table 6-32 shows the cable connections for the fifth spine switch (R5-U1) when cabling eight
full racks together:
Table 6-32 Leaf Switch Connections for the Fifth Rack in an Eight-Rack System
Table 6-33 shows the cable connections for the sixth spine switch (R6-U1) when cabling eight
full racks together:
Table 6-33 Leaf Switch Connections for the Sixth Rack in an Eight-Rack System
Table 6-34 shows the cable connections for the seventh spine switch (R7-U1) when cabling
eight full racks together:
Table 6-34 Leaf Switch Connections for the Seventh Rack in an Eight-Rack System
Table 6-35 shows the cable connections for the eighth spine switch (R8-U1) when cabling eight
full racks together:
Table 6-35 Leaf Switch Connections for the Eighth Rack in an Eight-Rack System