

What is the Service Tag for the server? 1234567


What is the current iDRAC MAC Address? 84:7b:eb:d5:04:90
What type license is the server using? iDRAC9 Enterprise License
How many physical disks are available in the system? 116
What is the average fan speed? 100% PWM
How do I turn on the system lockdown mode? Go to Dashboard > More Actions > Turn on
the System Lockdown Mode
Are there any jobs in the Job Queue? (Choose only ONE answer) yes

A processor is the logic circuitry that responds to and processes the basic
instructions that drive a computer. The term processor has generally replaced the
term central processing unit (CPU). The processor in a personal computer or
embedded in small devices is often called a microprocessor.
The Dell™ PowerEdge™ 13G servers feature the Intel® Xeon® processor v3 (Haswell) or
v4 (Broadwell) product family, while 14G servers feature the Intel® Xeon® Processor
Scalable Family, offering an ideal combination of performance, power efficiency,
and cost.

NOTE: The processor for the PowerEdge C6420 offers an Intel® Xeon® Processor
Scalable Family with Omni-path fabric solution.
To learn more about different Intel processors, you can visit http://ark.intel.com
and browse for the processor that you want. Information such as scalability,
supported memory, speeds, and bandwidth availability can be found on this site.

The Dell PowerEdge 14G servers support DDR4 registered dual in-line memory modules
(RDIMMs), load-reduced DIMMs (LRDIMMs), and non-volatile DIMMs (NVDIMM-Ns).

System memory holds the instructions that the processor executes and the data with
which instructions work. System memory is an important part of the main processing
subsystem of the PC, along with the processor, cache, system board, and chipset.

Please see http://www.dell.com/support/article/us/en/04/SLN294489 for more


information about the general memory module installation guidelines (for 13G
servers and earlier only).

Types of Memory

DDR4
DDR4 is the next evolution in dynamic RAM (DRAM), bringing even higher performance
and more robust control features while improving energy economy for enterprise,
micro-server, tablet, and ultrathin client applications.
-Reduced Memory Power Demand at only 1.2V
-Larger DIMM capacities with 4Gb to 16Gb
-Higher data rates running at 667MHz–1.6GHz

DDR3L
At only 1.35V, DDR3L is the low-voltage version of DDR3.
-Reduces power and cost
-Low voltage does not mean lower performance

RDIMM
RDIMMs provide better signal integrity, population of more DIMM channels, and
better performance for heavier workloads.
-Higher latency than UDIMM
-Lower electrical load on the controller
-For customers who want large amounts of memory and high reliability, availability,
and serviceability (RAS)
-Better signal integrity
-Population of more DIMMs
-Better performance

UDIMM
Unregistered DIMMs (UDIMMs) are unbuffered, low-density, and low-latency DIMMs that
do not include a register or a buffer chip.
-Their lower price and slightly lower power make them suited to customers who want
cost and power savings.
-Smaller capacity limits the system memory to 24GB.

LRDIMM
-Allow for greater bandwidth density
-Use a buffer chip in addition to a register

NVDIMM
Non-volatile dual in-line memory module (NVDIMM)
-Random-access memory.
-Allow the data to remain in place when power is removed from the system.

iDRAC
An integrated Dell Remote Access Controller (iDRAC) with Lifecycle Controller is
embedded in every Dell PowerEdge server. It provides functionality that helps you
deploy, update, monitor, and maintain Dell PowerEdge servers with or without a
systems management software agent. Because it is embedded within each server from
the factory, the Dell iDRAC with Lifecycle Controller requires no operating system
or hypervisor to work.
It enables streamlined local and remote server management, and it reduces or
eliminates the need for administrators to physically visit the server—even if the
server is not operational.
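
As a concrete illustration of this agentless, out-of-band access, the short Python
sketch below pulls basic system details from an iDRAC over its Redfish REST
interface (supported on iDRAC9). The IP address and credentials are placeholders,
and the sketch assumes the standard /redfish/v1/Systems/System.Embedded.1 resource
is reachable; treat it as a starting point rather than a definitive tool.

import requests
from requests.auth import HTTPBasicAuth

# Placeholders -- substitute the iDRAC IP and an account with login privileges.
IDRAC_HOST = "192.168.0.120"
IDRAC_USER = "root"
IDRAC_PASS = "calvin"

# System.Embedded.1 is the standard Redfish system resource exposed by iDRAC.
url = f"https://{IDRAC_HOST}/redfish/v1/Systems/System.Embedded.1"

# verify=False skips certificate checks; acceptable for a lab, not for production.
resp = requests.get(url, auth=HTTPBasicAuth(IDRAC_USER, IDRAC_PASS),
                    verify=False, timeout=15)
resp.raise_for_status()
system = resp.json()

print("Model:      ", system.get("Model"))
print("Service Tag:", system.get("SKU"))   # iDRAC commonly reports the Service Tag as SKU
print("Power state:", system.get("PowerState"))
print("Health:     ", system.get("Status", {}).get("Health"))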

iDRAC9 with Lifecycle Controller

Dell Lifecycle Controller provides advanced embedded systems management to perform


systems management tasks such as deploying, configuring, updating, maintaining, and
diagnosing using a graphical user interface (GUI). It is delivered as part of the
iDRAC out-of-band solution and embedded Unified Extensible Firmware Interface
(UEFI) applications in the latest Dell servers.
The key features of Lifecycle Controller are:
-Provisioning
-Deploying
-Downloading drivers for OS installation
-Patching or updating
-Servicing
-System erases
-Security
-Restoring the server
-Creating logs for troubleshooting
-Hardware inventory checking

A proper license for Lifecycle Controller is required for full functionality.

Please refer to iDRAC9 with Lifecycle Controller reference page for more
information

PERC

The PowerEdge Expandable RAID Controller (PERC) is Dell’s line of RAID controllers.
RAID controllers manage RAID arrays, which are physical disk drives presented to
the computer as logical units. This allows data to be divided and replicated among
multiple drives for increased performance and redundancy.
A PERC can be integrated on the system board or installed as a stand-alone PCIe
card. The RAID controllers can also be managed within the BIOS and through Dell
OpenManage™ software within the operating system.
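
To make the idea of dividing and replicating data across drives more concrete, here
is a toy Python sketch of the two basic techniques: striping (as in RAID 0) and
mirroring (as in RAID 1). It is a conceptual illustration only, not how PERC
firmware is implemented; real controllers stripe fixed-size elements at the block
level in hardware.

# Conceptual illustration of striping (RAID 0) and mirroring (RAID 1).

def stripe_raid0(data: bytes, num_drives: int, chunk: int = 4):
    """Split data into chunks and deal them out to drives round-robin."""
    drives = [bytearray() for _ in range(num_drives)]
    for i in range(0, len(data), chunk):
        drives[(i // chunk) % num_drives] += data[i:i + chunk]
    return drives

def mirror_raid1(data: bytes, num_drives: int = 2):
    """Write an identical copy of the data to every drive."""
    return [bytearray(data) for _ in range(num_drives)]

payload = b"EXAMPLEDATABLOCK"
print(stripe_raid0(payload, 2))  # each drive holds half the chunks -> performance
print(mirror_raid1(payload))     # each drive holds a full copy     -> redundancy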

PERC 9
The PERC 9 is a refresh of the Dell PERC portfolio in support of Dell’s 13G
PowerEdge servers. It encompasses both hardware changes and firmware updates, while
continuing to support SAS, SATA, and Solid State Drive (SSD) devices internal to
Dell PowerEdge servers. The PERC 9 series of cards includes:
-PERC H330
-PERC H730
-PERC H730P
-PERC H830
-PERC FD33xD/FD33xS
See the User Guide to view the PERC 9 series and their specifications.
You can also check out PERC 9.2 new features here (Updates > PERC 9.2).

PERC 10
The PERC 10 is a refresh of the Dell PERC portfolio in support of Dell’s 14G
PowerEdge servers, encompassing both hardware changes and firmware updates, while
continuing to support SAS, SATA, and Solid State Drive (SSD) devices internal to
Dell PowerEdge servers.

The PowerEdge RAID Controller (PERC) 10 Series of cards consist of:


H740P
-The PERC H740P is the performance RAID solution card, featuring 8 GB of
non-volatile cache, and is available in the Adapter (low profile and full height)
and Mini Monolithic form factors for internal storage.
H840
-The PERC H840 is similar to the H740P solution, except that it supports external
storage. The PERC H840 is only available in the Adapter (low profile and full
height) form factor.

SAS Backplane

The SAS backplane is the interface between the hard-disk drives and the controller.
It consists of a group of connectors into which the drives connect. The actual
location of a hard drive is determined by the physical address on the backplane.
All SAS backplanes are designed and validated to meet 12 Gbps SAS and 6 Gbps SATA
requirements.

Power Supply Unit


The Dell 14G server Energy Smart power supply units (PSUs) have intelligent
features, such as the ability to dynamically optimize efficiency while maintaining
availability and redundancy. Also featured are enhanced power-consumption reduction
technologies (such as high-efficiency power conversion and advanced thermal-
management techniques) and embedded power-management features, including high-
accuracy power monitoring.

14G PSUs include:


-New, higher-wattage power supply options are available in 14G (1600 W, 2000 W,
and 2400 W).
-14G also reuses existing 13G power supplies with updated firmware and part numbers
(495 W, 750 W, and 1100 W).

PSUs cannot be mixed in a single 14G system. If a customer attempts to plug a 12G
or 13G power supply into a 14G server, an error is logged in the SEL and the power
supply will not function.

Hot Spare Feature

The system supports the Hot Spare feature, which significantly reduces the power
overhead associated with power supply redundancy.
When the Hot Spare feature is enabled, one of the redundant power supplies is
switched to a sleep state. The active power supply supports 100% of the load, thus
operating at higher efficiency.
The power supply in the sleep state monitors output voltage of the active power
supply. If the output voltage of the active power supply drops, the power supply in
the sleep state returns to an active output state.
If having both power supplies active is more efficient than having one power supply
in a sleep state, the active power supply can also activate a sleeping power
supply.
You can configure the Hot Spare feature using the iDRAC settings. For more
information on iDRAC settings, see the iDRAC User’s Guide at
dell.com/support/manuals.
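
The behaviour described above can be summarised in a few lines of Python. This is
only a conceptual sketch of the decision logic (which supply sleeps, and when the
sleeping supply wakes), not Dell PSU firmware; the threshold and capacity values
are illustrative placeholders.

# Conceptual sketch of the Hot Spare behaviour described above (not Dell firmware).

def hot_spare_state(load_watts: float,
                    active_output_ok: bool,
                    single_psu_efficiency_limit: float = 0.5,
                    psu_capacity_watts: float = 750.0):
    """Return which PSUs should be active in a two-PSU redundant system."""
    # Wake the sleeping PSU immediately if the active supply's output drops.
    if not active_output_ok:
        return ("PSU1", "PSU2")

    # If one supply can carry the load efficiently, keep the other asleep.
    if load_watts <= single_psu_efficiency_limit * psu_capacity_watts:
        return ("PSU1",)          # PSU2 stays in sleep state, monitoring PSU1

    # At higher loads, running both supplies is more efficient.
    return ("PSU1", "PSU2")

print(hot_spare_state(300, True))    # light load  -> one active supply
print(hot_spare_state(600, True))    # heavy load  -> both supplies active
print(hot_spare_state(300, False))   # output drop -> sleeping supply wakes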

PDB
The Power Distribution Board (PDB) provides hot-plug logic and power distribution
for the system. Each hot-pluggable power supply has its own AC power input to
provide AC power redundancy. A PDB does not spread the power load between multiple
active power supplies. The PDB provides redundancy and hot swapping only, allowing
a backup power supply to take over if the active supply fails.

PCIe
Peripheral Component Interconnect Express (PCI Express), officially abbreviated as
PCIe, is a high-speed serial computer expansion bus standard, designed to replace
the older PCI, PCI-X, and AGP bus standards.
PCIe has numerous improvements over the older standards, including higher maximum
system bus throughput, lower I/O pin count, smaller physical footprint, better
performance scaling for bus devices, and a more detailed error detection and
reporting mechanism.

PRC
Peripheral riser cards (PRCs) are considered extensions of the system board and
allow attachment of additional circuit boards, such as host bus adapters (HBAs) or
network interface cards (NICs) to the system board. The riser board plugs into the
riser connector on the system board.

NDC

Beginning with 12G servers, Dell uses a custom Network Daughter Card (NDC) to house
the complete LAN on Motherboard (LOM) subsystem. In these systems, the LOM is
provided on the NDC as part of Dell PowerEdge Select Network Adapters family.
There are two form factors of Select Network Adapters, one for blade servers and
one for rack servers. The Blade Select Network Adapter provides dual-port 10GbE
from various suppliers. The Rack Select Network Adapter provides a selection of
1GbE and 10GbE port options, such as 1000BASE-T, 10GBASE-T, and 10Gb SFP+.
The Select Network Adapter form factor continues to deliver the value of LOM
integration with the system, including BIOS integration and shared ports for
manageability while providing flexibility.

GPGPU
GPGPU stands for General-Purpose Graphics Processing Units, also known as GPU
Computing. Graphics Processing Units (GPUs) are high-performance many-core
processors capable of very high computation and data throughput.
GPU vs. CPU
A simple way to understand the difference between a GPU and a CPU is to compare how
they process tasks.
A CPU consists of a few cores optimized for sequential serial processing. A GPU has
a massively parallel architecture consisting of thousands of smaller, more
efficient cores designed for handling multiple tasks simultaneously.
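
As a rough, CPU-only analogy for this difference, the sketch below runs the same
set of independent work items first serially and then in parallel across worker
processes. It only illustrates the "many tasks at once" idea; real GPU computing
uses frameworks such as CUDA or OpenCL and thousands of lightweight cores rather
than a handful of CPU processes.

# Toy serial-vs-parallel comparison (CPU worker processes standing in for GPU cores).
import time
from multiprocessing import Pool

def work(n: int) -> int:
    # An independent, compute-bound task.
    return sum(i * i for i in range(n))

if __name__ == "__main__":
    jobs = [200_000] * 16

    t0 = time.time()
    serial = [work(n) for n in jobs]     # one core, one task at a time
    t1 = time.time()

    with Pool() as pool:                 # many workers, tasks in parallel
        parallel = pool.map(work, jobs)
    t2 = time.time()

    assert serial == parallel
    print(f"serial:   {t1 - t0:.2f} s")
    print(f"parallel: {t2 - t1:.2f} s")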

Flash BIOS Memory


An embedded 8MB ROM holds the system BIOS code, system configuration data, and the
Intel® Node Manager firmware. The BIOS and Node Manager firmware are upgradeable as
a single entity in the field via methods like Unified Server Configurator (USC) or
Dell Update Package (DUP). The Flash BIOS is write-protected by BIOS and not
accessible to host software.

IDSDM

The Internal Dual SD Module (IDSDM) provides the same feature set as the previous
embedded hypervisor SD card solution, with the added feature of a mirrored SD card.
Starting from 13G, the IDSDM is now optional rather than default.
In 14G, the Internal Dual microSD Module (IDSDM) and vFlash card are combined into
a single card module.
The IDSDM contains two SD ports for fail-safe purposes. When one fails, the other
will serve as a backup to reduce system downtime. There is also an internal USB
port on the system board for a USB key.
The IDSDM card provides the following major functions:
-Dual SD interface, maintained in a mirrored configuration (primary and secondary
SD)
-Provides full RAID-1 functionality
-Dual SD cards are not required; the module can operate with only one card but will
provide no redundancy
-Enables support for Secure Digital eXtended Capacity (SDXC) cards
-USB interface to host system
-I2C interface to host system and onboard Electrically Erasable Programmable ROM
(EEPROM) for out-of-band status reporting
-Onboard LEDs show status of each SD card

TPM
The Trusted Platform Module (TPM) is used to generate/store keys,
protect/authenticate passwords, and create/store digital certificates, specifically
to shield unencrypted keys and platform authentication information (secrets) from
software-based attacks. TPM can also be used to enable the BitLocker™ hard drive
encryption feature in Windows Server 2008. TPM is also used for Intel® Trusted
Execution Technology (Intel® TXT).

Field engineers need to take note of the following:


-The TPM chip on the Plug-In Module (PIM) may ship separately from the system
board.
-Once installed on a system board, it cannot be installed on another system board.
-Field engineers need to install a new TPM chip after system board replacement.
-To install a new TPM chip, insert the connector, and press down on the security
rivet until it clicks.
-It is not required to remove the TPM before returning the system board.

TPM Notes

Caution: Do not attempt to remove the TPM plug-in module from the old motherboard.
Once the TPM plug-in module is installed, it is cryptographically bound to that
specific motherboard. Any attempt to remove an installed TPM plug-in module breaks
the cryptographic binding, and it cannot be re-installed or installed on another
motherboard.

Customers may upgrade a system board's TPM module from one version to another. In
this case, the old TPM module is removed and discarded.

If the system is configured with a TPM chip, a new TPM chip has to be dispatched. A
field engineer needs to install a new TPM chip after system board replacement.

By default, TPM is turned off in the BIOS.

If you are using the TPM with an encryption key, you may be prompted to create a
recovery key during program or System Setup. Be sure to create and safely store
this recovery key. If you replace this system board, you must supply the recovery
key when you restart your system or program before you can access the encrypted
data on your hard drives.

How to Install the TPM Chip


Locate the TPM slot on the system board.
Align the edge connectors on the TPM with the slot on the TPM connector.
Insert the TPM into the TPM connector so that the plastic bolt aligns with the slot
on the system board.
Press the plastic bolt until the bolt snaps into place.

Re-Enabling the TPM


a.BitLocker users must initialize the TPM and change the status to Enabled,
Activated. For more information on initializing the TPM, please refer to Microsoft
TechNet.
b.Intel TXT users must perform the following steps:
-While booting your system, press <F2> to enter System Setup.
-In the System Setup Main Menu, click System BIOS > System Security Settings.
-In the TPM Security option, select On with Pre-boot Measurements.
-In the TPM Command option, select Activate.
-Save the settings.
-Reboot your system.
-Enter System Setup again.
-In the System Setup Main Menu, click System BIOS > System Security Settings.
-In the Intel TXT option, select On.

iDRAC with Lifecycle Controller

What Is iDRAC?

The integrated Dell Remote Access Controller (iDRAC), which is embedded within each
server from the factory, is designed to make server administrators more productive
and improve the overall availability of Dell servers. iDRAC alerts administrators
to server issues, helps them perform remote server management, and reduces the need
for physical access to the server.

iDRAC with Lifecycle Controller technology is part of a larger data-center solution


that helps keep business-critical applications and workloads available at all times. The
technology allows administrators to deploy, monitor, manage, configure, update,
troubleshoot, and remediate Dell servers from any location, and without the use of
agents. It accomplishes this regardless of operating system, hypervisor presence,
or state.
Several products work with the iDRAC and Lifecycle Controller to simplify and
streamline IT operations, such as:
-Dell Management Plug-In for VMware® vCenter™
-Dell Repository Manager
-Dell Management Packs for Microsoft® System Center Operations Manager (SCOM) and
Microsoft® System Center Configuration Manager (SCCM)
-BMC BladeLogic
-Dell OpenManage™ Essentials
-Dell OpenManage Power Center

iDRAC License Options


iDRAC features are available based on iDRAC Express (default) or iDRAC Enterprise
(can be purchased).
Only licensed features are available in the interfaces that allow you to configure
or use iDRAC.
Please refer to iDRAC9 with Lifecycle Controller reference page (Manuals > User
guide > Licensed features in iDRAC8 and iDRAC9) for more information.

iDRAC Key Features


The key features in iDRAC include:
-Inventory and monitoring
-Deployment
-Update
-Maintenance and troubleshooting
-Secure connectivity
Please refer to iDRAC9 with Lifecycle Controller reference page
(Manuals > iDRAC9 User Guide > Overview > Key features) to view the complete list
of iDRAC features
To view what is new in iDRAC9, please refer to iDRAC9 with Lifecycle Controller
reference page
(Manuals > iDRAC9 User Guide > Overview > New in this release)

Lifecycle Controller
The Dell Lifecycle Controller provides advanced embedded systems management to
perform systems management tasks such as deployment, configuration, updating,
maintenance, and diagnosis using a GUI.

It is delivered as part of the iDRAC out-of-band solution and embedded UEFI


applications in the latest Dell servers. iDRAC works with the UEFI firmware to
access and manage every aspect of the hardware, including component and subsystem
management that is beyond the traditional Baseboard Management Controller (BMC)
capabilities.
Lifecycle Controller has the following components:
-The GUI is an embedded configuration utility that resides on an embedded flash
memory card. It is similar to the BIOS utility that is started during the boot
sequence, and it can function in a pre-operating system environment. It enables
server and storage management tasks from an embedded environment throughout the
lifecycle of the server.
-Remote Services (WS-Man) simplifies end-to-end server lifecycle management using
the one-to-many method. It provides interfaces for remote deployment, integrated
with Dell OpenManage Essentials and partner consoles.
Please refer to iDRAC9 with Lifecycle Controller reference page (Manuals >
Lifecycle Controller User Guide) to learn more about the Lifecycle Controller.

Benefits of Using iDRAC with Lifecycle Controller

The benefits of using iDRAC with Lifecycle Controller include:


-Increased availability — Early notification of potential or actual failures helps
prevent a server failure or reduce recovery time after failure.
-This reduces the potential for error and helps maximize server uptime and
performance.
-This offers agent-free configuration management.
-Improved productivity and lower Total Cost of Ownership (TCO) — Extending the
reach of administrators to larger numbers of distant servers can make IT staff more
productive while driving down operational costs such as travel.
-Secure environment — By providing secure access to remote servers, administrators
can perform critical management functions while maintaining server and network
security.
-Enhanced embedded management through Lifecycle Controller – Lifecycle Controller
provides deployment and simplified serviceability through the following:
-Lifecycle Controller GUI for local deployment.
-Remote Services (WS-Management) interfaces for remote deployment integrated
with Dell OpenManage Essentials and partner consoles.

iDRAC – Quick Sync Feature


Requirements to Set up Quick Sync:
-Near Field Communication (NFC)-enabled mobile device
-OpenManage Mobile 1.1 or greater
-Available on R630, R730, and R730xD
-New NFC-enabled server and bezel
-Close proximity (< 5 cm)
This feature is currently supported on mobile devices with the Android™ operating
system.

The Quick Sync-enabled bezel must be ordered at time of purchase. After point of
sale, upgrade is not available.

iDRAC – Quick Sync Feature (Bezel Back View)


-Bezel connects to the special Quick Sync right ear of the rack server.
-OpenManage Mobile App can be downloaded using the QR code on the back of the
bezel.

Quick Sync 2 module


The Quick Sync 2 module offers improved performance and usability over the 13G NFC
technology by adding a Wi-Fi connection.
The Quick Sync 2 Wi-Fi module offers support for:
-SupportAssist Collection and crash video/screen download/transfer.
-Remote RACADM.
-VNC remote console connectivity.
-Access to the iDRAC GUI via Wi-Fi connection.

OpenManage Mobile (OMM) 2.0 application is required to interact with the Quick Sync
2 module.
The wireless capability is enabled by an external button referred to as the
Activation Button and is deactivated by pressing the button again (or upon
disconnect/timeout). The Activation Button is located on the front of the
mechanical assembly and, when pressed, starts transmitting and receiving.

Quick Sync vs Quick Sync 2


Quick Sync (13G)
-Located in Server Bezel.
-Secure, physical access required.
-Works wirelessly; supports NFC-enabled phones.
-Only available on PowerEdge R730 and R630.
-Low bandwidth.
-“Tap and hold” to connect.
-Android supported. No iOS support.

Quick Sync 2 (14G)


-Embedded in the left rack ear.
-Highly secure; physical access required.
-Works wirelessly; supports Bluetooth 4.0+ and Wi-Fi-enabled mobile devices.
-Available on all 14G PowerEdge platforms.
-Higher bandwidth.
-Supports one-to-many connections.
-iOS (iOS 9.0+) and Android (Android 5.0+) supported.

Quick Sync 2 Features


Read of Information via Mobile Device (Over Bluetooth)
-Inventory (CPU, Memory, OS, system Model, Service tag, Asset Tag).
-Firmware Information.
-Networking (IPv4, IPv6, MAC Addresses).
-Health Information (Global, Sub-System Health).
-Monitoring (Inlet & Exhaust temperatures, Power, CPU Temperature).
-Logs (SEL, LC Log).
-Asset Location.
Feature constraints:
-Requires Authentication when “Read Authentication” configuration is enabled

Quick Sync 2 Features


Write of Information via Mobile Device (Over Bluetooth)
-iDRAC networking.
-Root Credentials.
-First Boot Device.
-Power Control.
-BIOS Attributes (System Profiles, Boot Modes, USB & COM ports config).
-Asset Location.
Feature constraints:
-“Quick Sync Access” must be in Read-Write mode.
-Requires Authentication with iDRAC configuration privilege.

Quick Sync 2 Features


Operations available via Mobile Device (Over Wi-Fi)
-SupportAssist (TSR).
-VNC.
-RACADM (Limited).
Feature constraints:
-Quick Sync Wi-Fi configuration must be enabled.
-Requires Authentication with iDRAC configuration privilege.
Dependencies
-Quick Sync-enabled Left Control Panel on 14G platforms.
-OMM 2.0 Android or iOS mobile application for read/write of information.

Quick Sync Upgrade Options


-Quick Sync 1.0 (Hardware) cannot be upgraded to Quick Sync 2.0.
-Older OpenManage Mobile (OMM) is upgradable to OMM2.0 to support Quick Sync 2.0.
-14G Status LED Control panel is upgradable to Quick Sync Control Panel.

iDRAC and Lifecycle Controller Videos


Please refer to iDRAC and Lifecycle controller reference page for more information.

You can also view the list of videos below that will show you how to configure
using iDRAC8 and Lifecycle Controller.
iDRAC Simulator (Desktop version)
To run the simulator:
1.Ensure you have administrator rights on your computer
2.Download the file: iDRAC9_sim.zip
3.Unzip iDRAC9_sim.zip to root (C:\) of your OS installation and run the node-
v6.11.1-x64.msi installer
4.Move/copy the provided shortcuts to your desired working location
5.Go to the Start menu and open the node.js command prompt. The first line in the
command window should be:
Your environment has been set up for using Node.js 6.11.1 (x64) and npm
6.From the c:\ prompt, type cd c:\iDRAC_Simulator, then run start.bat. In the
command window you will see:
c:\iDRAC_Simulator>node simulate.js
Simulator running @port: 3000
Do NOT close/exit the command prompt
7.Launch iDRAC9 Simulator using the provided shortcut and login with the following
credentials:
Username: root
Password: calvin
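
If you want to confirm that the simulator is actually listening before launching
the shortcut, a quick check such as the one below can help. It assumes the
simulator answers plain HTTP on port 3000, as shown in the start.bat output.

# Quick check that the iDRAC9 simulator is listening on localhost:3000.
import urllib.request

try:
    with urllib.request.urlopen("http://localhost:3000/", timeout=5) as resp:
        print("Simulator is up, HTTP status:", resp.getcode())
except OSError as exc:
    print("Simulator not reachable -- is start.bat still running?", exc)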

Dell Baseboard Management Controller

What Is Dell BMC?


The Dell system's Baseboard Management Controller (BMC) monitors the system for
critical events by communicating with various sensors on the system board and sends
alerts and logs events when certain parameters exceed their preset thresholds. A
BMC is comparable to an iDRAC as another out-of-band management tool but with fewer
features.

The BMC supports the industry-standard Intelligent Platform Management Interface


(IPMI) specification, enabling the customer to configure, monitor, and recover
systems remotely.

The BMC can be accessed by standard, off-the-shelf terminal or terminal emulator


utilities that allow access to sensor status information and power control.
BMC Key Features:
Access through the system’s serial port and integrated NIC
Fault logging and SNMP alerting
Access to the system event log (SEL) and sensor status
Control of system functions including power on and power off
Support that is independent of the system’s power or operating state
Text console redirection for system setup, text-based utilities, and operating
system consoles
Access to Linux Enterprise server serial console interfaces by using Serial Over
LAN (SOL)
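
Because the BMC speaks standard IPMI, the features listed above can be exercised
with common off-the-shelf tools. The sketch below wraps the open-source ipmitool
utility from Python; the host and credentials are placeholders, and ipmitool must
already be installed on the workstation.

# Querying a BMC over IPMI with the open-source ipmitool utility.
import subprocess

BMC_HOST = "192.168.0.130"   # placeholder BMC IP
BMC_USER = "root"            # placeholder credentials
BMC_PASS = "calvin"

def ipmi(*args: str) -> str:
    cmd = ["ipmitool", "-I", "lanplus",
           "-H", BMC_HOST, "-U", BMC_USER, "-P", BMC_PASS, *args]
    return subprocess.run(cmd, capture_output=True, text=True, check=True).stdout

print(ipmi("chassis", "power", "status"))   # power control / status
print(ipmi("sel", "list"))                  # System Event Log entries
print(ipmi("sdr", "list"))                  # sensor status readings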

BMC Interfaces
The following BMC interfaces allow you to configure and manage the system through
the BMC:
The BMC Management Utility allows remote, out-of-band LAN, and/or serial port power
control, event log access, and console redirection.
The Remote Access Configuration Utility in x9xx systems enables configuring BMC in
a pre-operating system environment.
The Dell OpenManage Deployment Toolkit SYSCFG utility provides a powerful command-
line configuration tool.
Dell OpenManage Server Administrator allows remote, in-band access to event logs,
power control, and sensor status information and provides the ability to configure
the BMC.
Command-Line Interface (CLI) tools provide a command-line tool for sensor status
information, System Event Log (SEL) access, and power control.

Supported Systems and Operating Systems


The BMC Management Utility supports new Dell systems running on supported Windows
and Linux systems by implementing the new IPMItool commands to monitor your
system’s power and view and set the LCD status.

For the complete list of supported systems and operating systems, see the
readme.txt file in the root installation folder.

Dell BMC vs. iDRAC


Dell BMC is based on iDRAC and supports a subset of iDRAC features that are
optimized for hyperscale environments.

Specifically, Dell BMC supports Virtual KVM & Media, while the Lifecycle Controller
Pre-Boot environment has been removed.

Can Dell BMC Be Upgraded to iDRAC?


There is no upgrade path from Dell BMC to iDRAC Basic/Express/Enterprise.

Installing the BMC Management Utility


You can only install and uninstall the BMC Management Utility on supported Windows
and Linux operating systems.

Visit the Dell Support website to select the BMC version that you are working on.
Then, download the User Guide > Using the BMC Management Utility > Installation
Procedures section to learn how to install and uninstall the BMC Management
Utility.

Installation Prerequisites
Before using or installing the BMC Management Utility, there are a few procedures
the user needs to perform:
Configuring the BIOS
Configuring the Baseboard Management Controller
Configuring the BMC with the Dell OpenManage Deployment ToolKit (DTK) SYSCFG
utility
Configuring the BMC with Dell OpenManage Server Administrator

User Guide
1.Access the User's Guide from the Dell Support website to learn how to perform
the above configurations.
2.Go to the Dell Support website.
Click View Product > Software & Security > Remote Enterprise Systems Management >
Baseboard Management Controller Management Utilities > BMC Version.
3.Click the Manuals drop-down button.
4.Click the available hyperlink to download or view the user manual in a web
browser.
5.Look under the Configuring Your Managed System section.

Dell PowerEdge RAID Controller (PERC)


What Is PERC?

A RAID controller is used for controlling a RAID array. It manages the physical
disk drives as logical units.
A PowerEdge RAID Controller (PERC) is Dell's proprietary RAID controller.

PERCs exist as either:


1.Interface cards (PCIe)
2.Integrated on-board controllers

PERC Key Features


Hot spares: Automatically replace failed drives in a redundant array
Hot swapping: Allows physical exchange or addition of disk drives without bringing
down the server
Drive roaming: Identifies drives that have been moved to a different slot
Ability to expand an existing array to include new drives
RAID-level migration of an existing array, with some limitations

PERC 9
The PERC 9 is a refresh of the Dell PERC portfolio in support of Dell’s 13G
PowerEdge servers. It encompasses both hardware changes and firmware updates, while
continuing to support SAS, SATA, and Solid State Drive (SSD) devices internal to
Dell PowerEdge servers. The PERC 9 series of cards includes:
PERC H330
PERC H730
PERC H730P
PERC H830
PERC FD33xD/FD33xS
See the User Guide to view the PERC 9 series and their specifications.

You can also check out PERC 9.2 new features (Updates > PERC 9.2).

PERC 10
The PERC 10 is a refresh of the Dell PERC portfolio in support of Dell’s 14G
PowerEdge servers, encompassing both hardware changes and firmware updates, while
continuing to support SAS, SATA, and Solid State Drive (SSD) devices internal to
Dell PowerEdge servers.

The PowerEdge RAID Controller (PERC) 10 Series of cards consist of:


H740P
The PERC H740P is the performance RAID solution card, featuring 8 GB of
non-volatile cache, and is available in the Adapter (low profile and full height)
and Mini Monolithic form factors for internal storage.
H840
The PERC H840 is similar to the H740P solution, except that it supports external
storage. The PERC H840 is only available in the Adapter (low profile and full
height) form factor.

Please refer to PowerEdge RAID Controller Card (PERC) 10 reference page (Manuals >
PERC 10 User Guide) for more information.

Installing and Removing PERC 9 Card


See the User's Guide to review a set of high-level installation and removal
instructions for the following PERC 9 series:
PERC H330 (Adapter/Mini-Monolithic/Slim Card/Mini-Blade)
PERC H730 (Adapter/Mini-Monolithic/Slim Card/Mini-Blade)
PERC H730P (Adapter/Mini-Monolithic/Slim Card/Mini-Blade)
PERC H830 (Adapter)
PERC FD33xS Card
PERC FD33xD Card

Installing and Removing PERC 10 Card


Please refer to PowerEdge RAID Controller Card (PERC) 10 User guide (under
Deploying the PERC card) to review a set of high-level installation and removal
instructions for the following PERC 10 series:
PERC H740P (Adapter/Mini-Monolithic)
PERC H840 (Adapter)

What to Do after PERC Card Replacement

When you replace a failed PERC card, you need to ensure successful recovery of
existing storage arrays and validate with the customer that the configuration is
being rebuilt.

Follow the procedure below to import the configuration of an existing HDD RAID
array to a replacement PERC card through the PERC BIOS utility.
-Power the system on and press <Ctrl> + <R> to enter the PERC BIOS.
-Press <F2> with the new card selected.
-Use the Up and Down Arrow keys to highlight the Foreign Configuration option and
press <Enter>.
-Use the Up and Down Arrow keys to navigate to the Import option and press <Enter>
again.
-Choose OK when asked to confirm. The BIOS will state that the import has completed
successfully.
-Confirm that the imported virtual disk has been imported under the VD Management
section.
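
On systems with an iDRAC, you can also double-check the result out-of-band after
the import. The Python sketch below walks the standard Redfish storage collection
and lists the volumes (virtual disks) it finds; the IP address and credentials are
placeholders, and the exact controller and volume names vary by system.

# Verify via the iDRAC Redfish interface that virtual disks are present
# after a foreign-configuration import. Host and credentials are placeholders.
import requests

HOST, AUTH = "192.168.0.120", ("root", "calvin")
BASE = f"https://{HOST}"

def get(path):
    r = requests.get(BASE + path, auth=AUTH, verify=False, timeout=15)
    r.raise_for_status()
    return r.json()

storage = get("/redfish/v1/Systems/System.Embedded.1/Storage")
for ctrl_ref in storage["Members"]:
    ctrl = get(ctrl_ref["@odata.id"])
    print("Controller:", ctrl.get("Name"))
    for vol_ref in get(ctrl["Volumes"]["@odata.id"])["Members"]:
        vol = get(vol_ref["@odata.id"])
        print("  Virtual disk:", vol.get("Name"),
              "-", vol.get("Status", {}).get("Health"))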

PowerEdge System BIOS and Platform Personality Flash Update

Introduction
Dell leverages PowerEdge server hardware and rebrands it, changing its identity to
a Storage product. Leveraging allows the same hardware and replacement parts to be
used across several platforms.

Rebranded systems use a Personality Module file to update the system with the
appropriate brand identity (or personality).

Examples of systems that use a Personality Module file include:


Dell Storage systems that use PowerEdge servers
PowerEdge leveraged Storage systems that use third-party software (for example:
Nutanix or Nexenta)
OEM systems that use a Personality or Identity Module file to remove all Dell
branding and/or replace it with customer-specified branding
This lesson covers when, why, and how to execute the Personality Module file to
restore a leveraged system's branding.

Why This Update Is Important

If a leveraged, rebranded system is not properly updated:


Windows will boot, but the storage software may not recognize the system.
The branding will be incorrect, and the customer's data may not be accessible.
The customer will escalate.
You may need to return to the customer's site on a repeat dispatch.
When replacing a system board in any system, always follow the After Service
instructions on the System Board page in the Educate Dell reference materials. Any
critical updates, such as the Personality Module, are included.

Identify System Model


Every time you replace a system board, confirm that the model name and Service Tag
on your labor ticket match the model name and Service Tag on the storage hardware
and on the BIOS/POST screen.

The labor ticket contains the system model name, Service Tag, and, in most cases, a
URL to the system's product reference page in Educate Dell.

Because some parts are used in multiple products, do not rely on a part number
description to confirm your model name and type.

Product Reference Materials


The best way to identify that a system needs to be rebranded with a Personality
Module is in the product reference materials portal.
Educate Dell has evolved over the years. Here are the most common reference
material areas where Personality Module information can be found:
Field Service Information > Disassembly and Reassembly > System Board > Update the
Personality Module
Field Service Information > Need to Know > Critical Callouts
Field Service Information > Rebrand the System Board
Installation and Service > Rebrand a ...

Manual Restore or Automated Easy Restore


This section will help you determine if you need to update a rebranded storage
appliance using a manual or automatic method.
After you replace the system board on a leveraged system, you need to rebrand the
system. Depending on the product you are servicing, this update may require a
manual download and update, or the product may have the automated Easy Restore
option.
12th Generation and Earlier
If your product is based on an 11th, 12th, or earlier generation server, manually
download and install the Personality Module file to rebrand the system. Visit the
reference materials for the Personality Module file download and rebranding
procedures.
13th Generation and Later
Starting with 13th generation servers (ex: R730, T630), Dell introduced Easy
Restore for some systems. During the first boot after replacing a system board, the
system prompts you to restore the previous system content. Easy Restore can
automatically restore the Personality Module file, Service Tag, license, and UEFI
diagnostics.

Verify that Easy Restore properly restored the backup information. If not, manually
restore the Personality Module file using product reference material instructions.

Automated Easy Restore Process

PowerEdge 14th generation servers with iDRAC use Easy Restore to automate the
system board replacement process.
After replacement, the boot screen displays. This allows the restoration of the:
Service Tag.
Licenses.
Enterprise UEFI diagnostics.
System configuration settings—BIOS, iDRAC, and NIC.

Always verify that Easy Restore properly rebranded the system. If Easy Restore
fails, manually update using the product's reference material.

Easy Restore does not back up other data such as firmware images, vFlash data, or
add-in cards data.

Verify Updates
After replacing the system board, updating the Personality Module, and rebooting,
verify the branding changes on the BIOS/POST screen. You can also verify the
branding changes in the iDRAC.
If you do not see the correct system name, take the following corrective actions:
Ensure you completed all after-service processes described in the product reference
materials.
Manually install the Personality Module file (if you used Easy Restore).
Call Dell Technical Support for assistance.
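
One quick way to confirm the restored identity without standing at the console is
remote RACADM (or the iDRAC GUI). The sketch below shells out to the racadm client
from Python; the IP address and credentials are placeholders, and it assumes the
racadm client is installed on the workstation.

# Check the restored Service Tag and system identity through remote RACADM.
import subprocess

IDRAC_HOST = "192.168.0.120"   # placeholder iDRAC IP
IDRAC_USER = "root"            # placeholder credentials
IDRAC_PASS = "calvin"

def racadm(*args: str) -> str:
    cmd = ["racadm", "-r", IDRAC_HOST, "-u", IDRAC_USER, "-p", IDRAC_PASS, *args]
    return subprocess.run(cmd, capture_output=True, text=True, check=True).stdout

print(racadm("getsvctag"))    # Service Tag should match the labor ticket
print(racadm("getsysinfo"))   # includes system model and firmware details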

Enhanced Pre-Boot System Assessment (ePSA) (system utilities)


The Dell Embedded System Diagnostics is also known as Enhanced Pre-boot System
Assessment (ePSA) diagnostics. It allows you to:
Run tests automatically or in an interactive mode.
Repeat tests.
Display or save test results.
Run thorough tests to provide extra information about the failed device(s).
View status messages that inform you if tests are completed successfully.
View error messages that inform you of problems encountered during testing.

What Is Support Live Image?


Support Live Image (SLI) is a CentOS 7.0 image that packages a collection of
utilities and diagnostic tools for Dell PowerEdge servers, Dell PowerEdge C
servers, and Dell PowerVault storage systems. It provides an environment to run the
tools, troubleshoot hardware-related issues, and gather system configuration
information. The results of the diagnostic tests and configuration information are
sent manually to the technical support team to identify and resolve an issue.

Common scenarios where the DSP needs to use SLI to troubleshoot the customer's
system include:
Running an unsupported or corrupt operating system
Having misconfigured RAID
Experiencing an inability to install the tools into your existing operating system
Operating system not booting and a backup is needed
Flashing BIOS outside operating system
Resetting Service Tag information
The need to run additional Dell tools outside of the operating system

Tools Access via Support Live Image


Additionally, Support Live Image allows us to access many Dell tools including the
following:
Asset Tag Utility: The Asset Tag Utility provides the ability to read and display
the Asset Tag, Service Tag, and PPID. It also provides the capability to update the
Asset and Service Tag field.
OpenManage Server Administrator (OMSA): OMSA allows system administrators to manage
individual servers from an integrated, web-browser-based GUI and from a CLI through
the operating system. Server Administrator is designed for system administrators to
manage systems locally and remotely on a network.
Dell System E-support Tool (DSET): DSET provides the ability to collect hardware,
storage, and operating system information from a Dell PowerEdge server.
32-bit Diagnostics: 32-bit Diagnostics is designed to verify proper operation of
the hardware in your Dell PowerEdge or Dell PowerEdge C system outside of a high-
level operating system environment.
Server Update Utility (SUU): SUU is an application for identifying and applying
updates to your Dell PowerEdge system or to view the updates available for any
system supported by SUU.
iDRAC Evaluation License Utility: This utility is for customers who have recently
replaced the motherboard and need to reinstall the iDRAC7 Enterprise license
locally (with no network connectivity) and activate the dedicated NIC. This utility
installs a 30-day trial iDRAC7 Enterprise license and allows the user to reset the
iDRAC from shared NIC to dedicated NIC.

Where to Obtain Support Live Image


It is highly recommended to use the latest version of Support Live Image (SLI) when
creating a bootable SLI USB Key. The user can obtain the latest image from the Dell
Support website by looking at a 13G server's Downloads and Drivers section (for
instance, PowerEdge R730 Downloads and Drivers).

Before Creating the Bootable USB

Before creating a bootable USB key for SLI, please make sure:
- The USB key has a minimum of 4 GB of free space.
- You have administrative rights.

How to Create the SLI USB Key


Use your preferred and trusted USB creator program to create a bootable USB or DVD.
Note: Creating bootable USB keys is a common IT industry practice. An example of a
free program is Rufus; however, Dell EMC cannot be responsible for USB creators,
nor can Dell EMC support issues encountered with USB/DVD ISO creators.
Specify the ISO: the Support Live Image downloaded from the Dell Support website.
Once it is done, your bootable USB key with the SLI file is ready to use.

Scenario:
DSP Tony is assigned to replace a failed HDD on a server. Once the HDD is replaced,
Tony logs in to the operating system but notices the replacement HDD is not online.
Since an unsupported operating system is used, Tony needs to shut down the server
and boot into the SLI USB key in order to run DSET and retrieve the log to
troubleshoot the issue with the replacement HDD.

Play the video and learn how to boot into the USB Key and retrieve the log.

Retrieving Logs Using Lasso
Dell Lasso is a Windows-based client and server utility that automates the
collection of hardware, software, and storage logs and configuration from servers,
storage arrays, Fibre Channel switches, Ethernet and FCoE switches, attached hosts,
enclosures, management and monitoring software, and wireless controllers.

After Lasso collects the data, it parses the data into Extensible Markup Language
(XML) and HyperText Markup Language (HTML) formats.
The parsed reports are then packaged together with the raw collected data and
encrypted. The collection is saved as a .zip file on the local system.

Optionally, you can enable Lasso to automatically upload the report to Dell
Technical Support. Dell uses this data as part of the Systems Maintenance Service
(SMS) to determine hardware, software, and firmware versions for compatibility,
troubleshoot problems in storage devices, and upgrade the existing equipment. Lasso
tracks and waits for completion of each process and notifies the user of any
failures during collection.

Scenario:
In this scenario, DSP Tony will approach a Dell EqualLogic PS6000 storage unit and
use the diagnostic tool Lasso to verify the status of the RAID array and identify
the faulty hard drive.

Dell Lasso is available to download at the Dell Support website.

Play the video and learn how.


Module 4: Dell EMC Racks and Rack Peripherals Product Service


Racks Overview and Reference Material

Dell EMC PowerEdge Racks Enclosures Overview

The Dell EMC PowerEdge™ rack enclosures are designed to hold and protect server,
network, and data storage equipment.
Dell EMC racks have installation procedures that are listed in the reference
material or user manual. For field technicians, it is important to access or
download rack reference material prior to an installation.

2420, 4220, and 4820 Rack Enclosures

The Dell EMC PowerEdge rack enclosures are offered in three height options: 24U
(2420), 42U (4220), and 48U (4820). Each of these racks is available in the
standard 600 mm x 1070 mm dimensions to fit within a 2-tile floor plan layout.
Wide versions of each rack will be marketed with a W (4820W, 4220W), and deep
versions with a D (4820D, 4220D).
The W and D features include:
-Deeper chassis (1070 mm deep)
-Increased cabling space
-Easier access to IT equipment
-Improved thermal attributes
-Higher door perforation to 80%
-Improved hot/cold air separation
-More power distribution schemes
Accessories:
-Rackmount 17" consoles
-32-port digital KVM console switch

Energy Smart Rack Enclosures

These racks were designed to hold and protect server, network, and data storage
equipment and provide efficient cooling. The Energy Smart rack is a design that is
targeted for raised-floor data centers.
Key features of Dell EMC Energy Smart rack enclosures include:
-600 mm wide x 1200 mm deep-form factor
-Solid polycarbonate panel in the front door with metal extension forms and
vertical plenum to direct air evenly to installed equipment
-Exceptional design facilitates more cabling and power distribution options
-Static load rating of 2,500 lb. for 46U/40U
-Large open base for airflow and cable entry and exit
-Rack-top cable exits with adjustable sliding door and removable tail-bar
-Reinforced frame for stability

Rack Cabling

Best Practices for Rack Cabling

There is no perfect or correct method for routing cables. However, there are some
best practices for optimal organization and access.
Dell EMC racks have features that simplify cable routing, including:
-Power distribution unit (PDU) channels in each rack flange route power cables to
each system.
-Cable clips are mounted in the PDU channels to keep cables out of the way and
prevent cords from becoming tangled.
Cables are routed out of the rack in two ways on a standard configuration:
-Through the cable exit at the bottom of the back doors
-Through the adjustable cable slot at the top of the rack and into a cable raceway

Opening and Closing Top Cable Slot

Follow the instructions below to open the top cable slot before routing cables
through the slot.

This section of the page outlines the steps to open and close the top cable slot.

1.Open the back doors.


2.Loosen the wingnuts underneath the cable slot cover.
3.Slide the cable slot cover toward the front of the rack.
4.After routing the cables through the top of the rack, close any air gaps in the
cable slot by pulling the slot cover toward the back of the rack. Use the wingnuts
to secure the cover.

Removing and Installing Back-Door Stabilizer Bars

The top and bottom bars that are used to stabilize the back doors can be removed,
making it easier to route cables through the top and bottom of the rack.

To remove and install the stabilizer bars:


1.Open the back doors.
2.Pull and hold the plungers on each side of the bar, and pull the bar up and away
from the rack.
3.After routing cables, replace the bars by aligning the tabs on the bars with the
corresponding holes in the rack. Then, push in and down until the plungers lock
into place.

Dell EMC Rails and Rail Conversion (Dell VersaRails, RapidRails, ReadyRails, and
Cloud Rails)

Overview

Dell EMC rail kits were designed for racks with square mounting holes for tool-less
installation. They are also compatible with racks from other vendors, including
square-hole, round-hole, and threaded-hole racks. Dell EMC provides a rail-sizing
matrix for different rails according to mounting interface, rail type, rack types
supported, rail adjustability range, and rail depth.

Rail Types

Sliding Rails
Sliding rails enable users to fully extend a system out of the rack for service. To
help manage the numerous cables associated with rack-mounted servers, users can use
the cable management arm (CMA) with the sliding rails. The rails can be attached on
either the right or left side without tools.
Stab-in/Drop-in sliding rails for 4-post racks (New for 14G systems)
-Supports drop-in or stab-in installation of the chassis to the rails
-Support for tool-less installation in 19" EIA-310-E compliant square and
unthreaded round hole racks, including all generations of Dell racks. Also support
tooled installation in threaded round hole 4-post racks
-Required for installing R640 in a Dell EMC Titan or Titan-D rack
-Support full extension of the system out of the rack to enable serviceability of
key internal components
-Support for optional cable management arm (CMA)
-Minimum rail mounting depth without the CMA: 714 mm
-Minimum rail mounting depth with the CMA: 845 mm
-Square-hole rack adjustment range: 603 to 915 mm
-Round-hole rack adjustment range: 603 to 915 mm
-Threaded-hole rack adjustment range: 603 to 915 mm

Rail Types

Static Rails
The static rails support a wider variety of racks than sliding rails. However, they
do not support serviceability in the rack and are thus not compatible with the CMA.
Static rails features summary
-Supports Stab-in installation of the chassis to the rail
-Support tool-less installation in 19" EIA-310-E compliant square or unthreaded
round hole 4-post racks including all generations of Dell racks
-Support tooled installation in 19" EIA-310-E compliant threaded hole 4-post and 2-
post racks
-Minimum rail mounting depth: 622 mm
-Square-hole rack adjustment range: 608 to 879 mm
-Round-hole rack adjustment range: 594 through 872 mm
-Threaded-hole rack adjustment range: 608 to 890 mm

If installing to 2-Post (Telco) racks, the ReadyRails Static rails (B4) must be
used. Both sliding rails support mounting in 4-post racks only.

If installing to Titan or Titan-D racks, the Stab-in/Drop-in Sliding rails (B13)


must be used. This rail collapses down sufficiently to fit in racks with mounting
flanges spaced about 24 inches apart from front to back. The Stab-in/Drop-in
Sliding rail allows bezels of the servers and storage systems to be in alignment
when installed in these racks.

Cable Management Arm


The optional cable management arm (CMA) organizes and secures the cords and cables
exiting the back of a server. It unfolds to enable the server to extend out of the
rack without having to detach the cables.
The CMA mounts to either side of the sliding rails without the use of tools or the
need for conversion. However, it is recommended that the CMA be mounted on the side
opposite the power supplies for easier access to the power supplies and rear hard
drives (if applicable) during service.

Mounting Interface: Versa Rails


How to Install Dell EMC Versa Rails
1.Determine where in the rack to install the system.
2.Align the VersaRail with the rack's front vertical rail. Each unit on the
vertical rail consists of one 0.125-inch spacing and two 0.250-inch spacings. The
0.125-inch spacing is the beginning of a unit. Align the bottom of the VersaRail
with the 0.125-inch spacing on the vertical rails of the rack. The first hole in
the VersaRail aligns with second hole from the 0.125-inch spacing on the vertical
rail. The vertical rails in the rack may have round or square holes.
3.Position the VersaRail's mounting flange against the inner side of the rack's
front vertical rail so the bottom of the VersaRail is aligned with the 0.125-inch
spacing on the rack's front vertical rail. (See the picture.) Use the mounting
holes, as shown, to secure the front of the VersaRail to the rack. Use the second
mounting hole from the bottom of the VersaRail to secure the system to the rack.
4.Secure the front of the VersaRail to the rack using three Phillips-head screws and
three conical washers.
5.Slide the back VersaRail's mounting flange backward until the back mounting
flange fits snugly against the inner side of the back vertical rail.
6.With the VersaRail level, secure the back of the VersaRail to the rack using
three Phillips-head screws and three conical washers.
7.Repeat step 2 through step 6 to install the second VersaRail.

Mounting Interface: Rapid Rails


How to Install Dell EMC Rapid Rails: Step 1

1.At the front of the rack cabinet, position one of the RapidRails slide assemblies
so that its mounting-bracket flange fits between the marks on the rack.
The top mounting hook on the slide assembly's front mounting bracket flange should
enter the top hole between the marks on the vertical rails.
2.Push the slide assembly forward until the top mounting hook enters the top square
hole on the vertical rail. Then push down on the mounting-bracket flange until the
mounting hooks seat in the square holes and the push button pops out and clicks.
3.At the back of the cabinet, pull back on the mounting-bracket flange until the
top mounting hook is in the top square hole. Then push down on the flange until the
mounting hooks seat in the square holes and the push button pops out and clicks.
4.Repeat step 1 through step 3 for the slide assembly on the other side of the
rack.
Although the racking and cable management steps are listed separately, for ease in
cable management users will want to begin the cable management installation process
after this step in the racking instructions.
5.Pull the two slide assemblies out of the rack until they lock in the fully
extended position.
If the user is installing more than one system, install the first system in the
lowest available position in the rack.
Never pull more than one component out of the rack at a time. Also, due to the size
and weight of the system, it is recommended to have more than one person install
the system in the slide assemblies.
6.Place one hand on the front-bottom of the system and the other hand on the back-
bottom of the system. Then lift the system into position in front of the extended
slides.
7.Tilt the back of the system down while aligning the back shoulder screws on the
sides of the system with the back slots on the slide assemblies, and engage the
back shoulder screws into their slots.
8.Lower the front of the system and engage the front and middle shoulder screws in
their slots. (The middle slot is just behind the yellow system-release latch.) When
all the shoulder screws are properly seated, the yellow latch on each slide
assembly will click and lock the system into the slide assembly.
9.Press up on the green slide-release latch at the side of each slide to move the
system completely into the rack.
10.Push in and turn the captive thumbscrews on each side of the front chassis panel
to secure the system to the rack.
This completes the Dell EMC Rapid Rail installation.

Mounting Interface: Ready Rails


How to Install Dell EMC Ready Rails: Step 1

If possible, remove both rack sides to allow for easier access during installation
of the rails.
1.Choose an appropriate U space in which to install the rails.
2.Take one of the rails (right rail shown), align the front rail pegs with the
corresponding holes in the appropriate rack U space, and press inward to snap the
front of the rail into place.
3.Repeat step 2 with the rear attachment of the rail.
4.Repeat step 2 through step 3 for the rail assembly on the other side of the rack.
Although the racking and cable management steps are listed separately, for ease in
cable management the user will want to begin the cable management installation
process after this step.
5.Pull the two slide assemblies out of the rack until they lock in the fully
extended position.
Never pull more than one component out of the rack at a time. Also, due to the size
and weight of the system, it is recommended to have more than one person install
the system in the slide assemblies.
6.Lower the system in between the rails until the system's shoulder screws are all
engaged in their respective rail slots. Then slide the system backward to lock the
shoulder screws into the rails.
7.Press the side-release latches.
8.Slide the system inward on the rails until it locks into place in the rack.
This completes the Dell EMC Ready Rail installation steps.

Mounting Interface: Ready Rails II


The ReadyRails™ II mounting interface supports tool-less installation in 4-post
square-hole and unthreaded round-hole racks and native support for tooled
installation in threaded-hole racks. Installing this mounting interface in a
square-hole rack enables the bracket to be placed flush against the mounting post,
while installation in a round-hole rack results in a slight offset of approx. 6 mm
from the mounting post, which also results in an approx. 6-mm bezel offset.



ReadyRails II in Square-Hole Rack

ReadyRails II in Round-Hole Rack

Mounting Interface: Cloud Rails


How to Install Dell EMC Cloud Rails: Step 1
1.Use two screws to secure the rails to the front of the rack and four screws for
the back of the rack.
2.Align the keyhole slots on the chassis rails with the corresponding pins on either side of the system. Slide the chassis rails toward the front of the system until they lock into place. Secure the chassis rails to the system using screws.
3.Align and insert the ends of the chassis rails into the ends of the cloud rails.
Push the system inward until the chassis rails lock into place.
4.Tighten the thumbscrews to secure the ears of the system to the rack.
The connectors on the back of the system have icons indicating which cable to plug
into each connector (keyboard, mouse, or monitor).
This completes the Dell EMC Cloud Rail installation.

Rail Conversion: RapidRails to VersaRails


This page explains how to convert the ninth-generation PowerEdge™ rails from Rapid
Rails to Versa Rails. Converting from Versa Rails to Rapid Rails follows the same
procedure in reverse order.
Rail Conversion: Step 1
1.Locate the rail conversion bracket on the end of the rail.
2.Lift the release lever on the rotating mounting bracket.
3.Rotate the bracket 90 degrees counterclockwise.
4.Rotate the bracket 180 degrees counterclockwise.
5.Align the bracket with the two posts, and then place it on the posts.
6.Rotate the bracket 90 degrees clockwise until it snaps into place.

Dell EMC Power Distribution Units (PDUs)


Dell EMC Basic Rack Power Distribution Unit

Dell EMC Basic Rack Power Distribution Unit (PDU) products are stand-alone power
distribution devices that provide power for devices within a rack.
Dell EMC PDUs provide reliable power distribution throughout a server rack
enclosure without inhibiting access to the components within the enclosure.

Dell EMC Metered Rack Power Distribution Unit


Dell EMC Metered Rack Power Distribution Unit (rPDU) products are stand-alone,
network-manageable power distribution devices that monitor the current, voltage,
and power for the Rack PDU.
Customers can manage a rack PDU through its web interface, CLI, or Simple Network
Management Protocol (SNMP).
It also features LED and LCD display options. LED versions provide a 2-digit
display on the PDU while LCD versions have a fully featured, dual-color backlit LCD
for more detailed information at a glance.
Dell EMC rack PDUs have the following features:
-Voltage, current, and power monitoring for the device and each phase as applicable
-Configurable alarm thresholds that provide network and visual alarms to help avoid
overloading circuits
-Three levels of user access accounts: Administrator, Device User, and Read-Only
User
-Event and data logging. The event log is accessible by Telnet, Secure CoPy (SCP),
File Transfer Protocol (FTP), serial connection, or web browser (using HTTPS access
with SSL or using HTTP access). The data log is accessible by web browser, SCP, or
FTP.

Toolless Mounting

To install the rack PDU using the toolless mounting method, first install it in the
rear of an enclosure, in the cable channel directly behind the rear vertical
mounting rails.
Follow the next steps to perform a toolless PDU installation into any standard EIA-
310 rack or enclosure:
1.Slide both mounting pegs into the holes located in the channel in the rear panel
of the enclosure.
2.Snap the rack PDU into place by pushing it downward until it locks into position.
Two PDUs can be mounted on one side of the enclosure, or one on each side, by using
the toolless mounting method.

Vertical Mount and Half-Height PDU Installation


To install a rack PDU vertically using the mounting brackets, first install it on a
vertical mounting rail on your rack or enclosure.
(Not compatible with 4210 and 2410 racks.)

Standard Orientation
The standard mounting orientation for the PDU is 180 degrees. This is a snap-in,
toolless installation. Two factory-installed mounting pegs are inserted in the
mounting keyholes on the wall of the rack.

90-Degree Orientation
The PDU can also be mounted in a 90-degree mounting orientation. For this
configuration, two deep mounting pegs (provided with the PDU) are user installed
before mounting the PDU into the rack.

Horizontal Mounting

To install a rack PDU using the horizontal mounting brackets, first install the
brackets on the rack PDU and then attach the PDU to the rack using caged nuts
(provided with your enclosure).
Perform the following steps to mount the Rack PDU horizontally in any standard EIA-
310 rack or enclosure:
1.Identify the location for the PDU in the horizontal rail position that is closest
to the systems to which it supplies power.
2.Position the PDU so that the mounting hooks enter the square holes on the
horizontal rail.
3.Push down the PDU until the mounting hooks seat in the square holes and the
release button pops out and clicks.

Installing Strain Relief


Dell rPDUs have the option to install strain-relief brackets. Models with a high
density of outlets only offer side-mounted brackets. The remaining models offer
both side- and front-mounted brackets.
Installing the rPDU

Standard Mount
Dell EMC Metered Rack Power Distribution Unit (rPDUs) can be installed into Dell
EMC PowerEdge Racks by inserting the preinstalled mounting pegs into the key-shaped
slots of the PDU tray.

Deep Mount
Sometimes, customers may want to mount their rPDUs at 90 degrees to the tray. The
PDU comes with extra mounting pegs and screws to install to the PDU's side to
enable them to do so. Dell recommends the customers remove factory-installed pegs
from the back of the PDU in this scenario.

Grounding Wire
The PDU should be grounded to the rack with the grounding kit. Connect the
grounding wire to the PDU's ground bonding point connection and an exposed metal
component of the rack.

Frame Connection
Frame connection hardware (figure callouts): M5 x 12 pan-head screw (black), star washer, ground bonding point connection, 10-32 x 0.5-inch pan-head screw (silver), star washer.
Dell EMC recommends grounding the rPDU to the rack frame with the ground wire
provided in the grounding wire kit.

Dell EMC Uninterruptible Power Supply


Dell EMC UPS backup systems protect equipment from downtime, damage, and data loss
due to power problems. During a power outage, the Dell EMC UPS backup power
supplies enable customers to maintain power long enough to save data and shut down
PCs, servers, and other equipment properly. Dell EMC Uninterruptible Power Supplies
also protect against power surges and disruptive line noise.

Dell EMC UPS products provide signaling through RS-232 cable or USB cable. A
network interface card can be added to provide browser accessibility, email
notifications, access security, event logging, and server shut-down controls.

Features
Providing outstanding performance and reliability, the Dell EMC UPS offers unique benefits, including:
-4U size that fits any standard 48 cm (19") rack
-"Start-on battery" capability for turning on the UPS without utility power
-Replacement of batteries without turning off
-Extended run time with an Extended Battery Module (EBM)
-Two standard communication ports (USB and DB-9)
-Optional Dell Network Management Card with enhanced communication capabilities
-Dell UPS Management Software for power monitoring and graceful shutdowns

Dell EMC Keyboard/Video/Mouse (KVM) Switches


Racking: Rack Mounting Dell EMC Switches and KVMs
Things to Note:
-Before installing the remote console switch and other components in the rack,
stabilize the rack in a permanent location.
-Start rack mounting the equipment at the bottom of the rack and then work to the
top.
-Avoid uneven loading of or overloading racks.
-Connect only to the power source specified on the unit. When multiple electrical
components are installed in a rack, ensure that the total component power ratings
do not exceed circuit capabilities. Overloaded power sources and extension cords
present fire and shock hazards.

Dell EMC 1U Rackmount LED Console

The Dell EMC 1U Rackmount LED Console enables users to mount a data center
administrator's control station directly into a data center rack without
sacrificing rack space that is needed for servers and other peripherals. A
Rackmount LED Console is typically attached to a keyboard/video/mouse (KVM) console
switch to manage the setup, administration, and maintenance of rack-mounted
servers.

The 1U Rackmount LED Console is deployed in one of two configurations:


-Connected directly to a server in the rack
-Connected directly to a KVM that is connected to one or more servers in the rack

Dell EMC 1U Rackmount LED Console Key Benefits


Key benefits of the existing Dell EMC KMM rack consoles include:
-Support for USB2 and USB3 HID peripherals at the local console
-18.5-inch widescreen, LED-backlit LCD panel
-Toolless installation for Dell racks and non-Dell unthreaded round-holed racks
-Custom cable management arm
-Capability of integrating Dell EMC KVMs (such as small-form factor)
-Ability to mount designated Dell EMC branded KVMs in the same 1U space as the KMM

Module 5: Dell EMC Networking Product Service


This topic covers basic networking-related terms with which you should be
familiar. It is not intended to be an exhaustive review of all terms. Rather, it
highlights some of the most common and important ones you encounter when working
with networking equipment.
For general networking industry terms, go to commonly used websites like TechTarget.com's "What Is" database, found at http://whatis.techtarget.com.

Networking Terminology
The following sections describe the terms and vocabulary that are typically found in the networking environment:

NIC - Network Interface Card. A network adapter on a circuit board that plugs into
a computer's internal bus architecture.

10base-T - Similar to the standard telephone cabling and also known as Twisted Pair Ethernet, 10BASE-T is a 10 Mbps CSMA/CD (carrier sense multiple access with collision detection) Ethernet LAN that works on Category 3 or better twisted-pair cables. 10BASE-T cables can be up to 100 m in length.
1GbaseT - 1GbaseT is also known as 1000BASE-T or 802.3z/802.3ab. It is a later Ethernet technology that utilizes all four copper wire pairs in a Category 5 (Cat 5 or Cat 5e) cable and is capable of transferring 1 Gbps.
10GbaseT - 10 Gigabit Ethernet (10GbE) was first standardized as IEEE 802.3ae, published in 2002, and 10GBASE-T (10GbE over twisted-pair copper) followed as IEEE 802.3an. Both support up to 10 Gbps transmissions. 10GbE defines only full-duplex point-to-point links that are connected by network switches, unlike previous Ethernet standards. Half-duplex operation, CSMA/CD (carrier sense multiple access with collision detection), and hubs do not exist in 10GbE.
RJ-45 - An RJ-45 is an 8-pin connection used for Ethernet network adapters.
This connector is most commonly connected to the end of Cat 5 or Cat 6 cable, which
is connected between a computer network card and a network device such as a network
router.

SFP - Small form-factor pluggable (SFP) is a specification for a new generation of optical modular transceivers. The devices are designed for use with small form factor (SFF) connectors, and offer high speed and physical compactness. They are hot-swappable.
SFP+ - SFP+ is an enhanced version of the SFP that supports data rates up to 10
Gbps. SFP+ supports 8 Gbps Fibre Channel, 10 GbE, and Optical Transport Network
standard OTU2. It is a popular industry format supported by many network component
vendors.
QSFP - Quad SFP (QSFP) is a compact, hot-pluggable transceiver also used for data
communications applications.
QSFP+ - QSFP+ evolved as the standard to support 4x10Gbps or 40 Gbps data rates per SFF-8436. Compared with QSFP+, QSFP products support the same Quad Small Form-factor Pluggable format at a lower data rate, so there is no change in the product solution. Nowadays, QSFP+ has gradually replaced QSFP and is widely used because it provides higher bandwidth.

Broadcast - A broadcast describes a message or data sent to more than one person or
device.
Broadcast Domain - A broadcast domain is a logical division of a computer network,
in which all nodes can reach each other by broadcast at the data link layer. A
broadcast domain can be within the same LAN segment, or it can be bridged or
switched to other LAN segments.

Collision - An instance when two or more networking devices attempt to send a carrier signal at the same time onto a shared network. When collisions are encountered, the network devices stop sending, wait, and then try again.
Collision Domain - A collision domain is a section of a network connected by a shared medium or through repeaters where data packets can collide with one another when being sent, particularly when using early versions of Ethernet.

Hub - A common connection point for devices in a network. Hubs are commonly used to
connect segments of a LAN. A hub contains multiple ports. When a packet arrives at
one port, it is copied to the other ports so that all segments of the LAN can see
all packets.
Switch - In networks, a switch is a device that filters and forwards packets
between LAN segments. Switches operate at the data link layer (layer 2) and
sometimes the network layer (layer 3) of the OSI Reference Model, and therefore
support any packet protocol. LANs that use switches to join segments are called
switched LANs or, in the case of Ethernet networks, switched Ethernet LANs.

Layer 2 Switch (MAC) - An L2 switch does switching only. This means that it uses
MAC addresses to switch the packets from the incoming port to the destination port
(and only the destination port). It maintains a MAC address table so that it can
remember which ports have which MAC address associated.
Layer 3 Switch (IP) - An L3 switch also does switching exactly like an L2 switch. The L3 means that it also has an identity at Layer 3. Practically, this means that an L3 switch is capable of having IP addresses and routing. For intra-VLAN communication, it uses the MAC address table. For inter-VLAN communication, it uses the IP routing table.
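As an illustration only, the following commands are common on Dell EMC Networking N-series switches (exact syntax and output vary by model and OS version) and show where each table lives: the MAC address table used for intra-VLAN switching and the routing table used for inter-VLAN routing.
console# show mac address-table
console# show ip route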

Wireless Access Point - A wireless access point (WAP) is a networking hardware device that allows a WiFi-compliant device to connect to a wired network. The WAP usually connects to a router (via a wired network) as a stand-alone device, but it can also be an integral component of the router itself.

MAC - A medium access control (MAC) address is a physical hardware address, written in hexadecimal format, that is uniquely assigned to each network interface device on a computer network. The addresses are usually assigned by the hardware manufacturer, and these IDs are considered burned into the firmware of the network access hardware.
MAC Table - A MAC address table, sometimes called a Content Addressable Memory
(CAM) table, is used on Ethernet switches to determine where to forward traffic on
a LAN.

IP Address - A unique string of numbers separated by periods that identifies each computer using the Internet Protocol (IP) to communicate over a network.
Subnet Mask - Short for subnetwork mask, a subnet mask is data used for bitwise operations on a network of IP addresses that has been divided into two or more groups. This process, known as subnetting, enables each device within a subnetwork to communicate, while still allowing the exchange of information between subnets via the use of a network router. Dividing a network into subnets can improve security and balance overall network traffic.
Gateway Address - A gateway address is an address used as an entry point into
another network. For example, 192.168.0.1 could be used as a gateway. The gateway
is commonly the address of a network device such as a network router.
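As a minimal sketch only, assigning a management address, subnet mask, and default gateway on a Dell EMC Networking switch typically looks like the following. The address, mask, and gateway values are hypothetical, and the exact prompts and syntax vary by switch model and OS version.
console# configure
console(config)# interface vlan 1
console(config-if)# ip address 192.168.0.10 255.255.255.0
console(config-if)# exit
console(config)# ip default-gateway 192.168.0.1
Here 255.255.255.0 is the subnet mask that defines the local subnet, and 192.168.0.1 is the gateway address used to reach other networks.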

Firewall - A firewall is a software utility or hardware device that limits outside network access to a computer or local network by blocking or restricting network ports. Firewalls are a great step for helping prevent unauthorized access to a company or home network.
Proxy Host - A proxy host is a computer that offers a computer network service to
allow clients to make indirect network connections to other network services. A
client connects to the proxy host, and then requests a connection, file, or other
resource available on a different server. The proxy provides the resource either by
connecting to the specified server or by serving it from a cache.

Multi-Homed - Multi-homed describes a computer host that has multiple IP addresses on connected networks. A multi-homed host is physically connected to multiple data links that can be on the same or different networks. For example, a computer with a Windows Server and multiple IP addresses can be referred to as "multi-homed" and may serve as an IP router.

Computer Network
Devices - Communication devices - Transmission media
A computer network is a collection of computers and devices that are connected
together through communication devices and transmission media.
Usually, the connections between computers in a network are made using physical
wires or cables.
However, some connections are wireless, using radio waves or infrared signals.

Network Components

Scenario: There are a couple of devices in your office that are off the network
grid. You have been given a task to identify the necessary hardware to set up a
network that meets the following requirements:
-File sharing from the server to all the PCs
-Printer sharing for every personal computer and server
-Local area network only with no Internet connectivity

Network Components Device and Functions


Based on the requirements that are given above, the network
diagram should look something like the one below. A router or an L3 switch is not
required in this case because all of the devices are in the same local area network
(LAN). A modem is not needed because this is an intranet-only network with no
Internet connection.

Each of the devices that is connected to the network has a network adapter.
A network adapter is a device that enables a computer to talk with another
computer/network. The data-link protocol uses unique hardware addresses (MAC
address) encoded on the card chip to discover other systems on the network so that
it can transfer data to the right destination.


Cable is one form of transmission media that can transmit communication signals. The wired network topology uses a special type of cable to connect computers on a network.

A switch is a telecommunication device grouped as one of the computer network components. It uses physical device addresses in each incoming message so that it can deliver the message to the right destination or port.

Networking Cables
Networking cables are used to connect one network device to another or to connect
two or more computers to share a printer, a scanner, and so on. Different types of
network cables like coaxial cable, optical fiber cable, or twisted-pair cables are
used depending on the network's topology, protocol, and size.

Coaxial Cable
Coaxial lines confine the electromagnetic wave inside the cable, between the center
conductor and the shield. The transmission of energy in the line occurs totally
through the dielectric inside the cable between the conductors. Coaxial lines can
be bent and twisted (subject to limits) without negative effects, and they can be
strapped to conductive supports without inducing unwanted currents in them.

Advantages
-Higher bandwidth: 400 to 600 MHz
-Up to 10800 voice conversations
-Can be tapped easily (pros and cons)
-Less susceptible to interference than twisted pair
Disadvantages
-High attenuation rate makes it expensive over long distance
-Bulky

Fiber Optic Cable


An optical fiber cable consists of a center glass core surrounded by several layers
of protective material. The outer insulating jacket is made of Teflon or PVC to
prevent interference. Optical fiber deployment is more expensive than copper, but
offers higher bandwidth and can cover longer distances.

Advantages
-Greater capacity/bandwidth
-Smaller size and lighter weight
-Lower attenuation
-Immunity to environmental interference
-Highly secure due to tap difficulty and lack of signal radiation
Disadvantages
-Expensive over short distance
-Requires highly skilled installers
-Difficult to add additional nodes

Twisted-Pair Cable
Twisted pair cabling is a form of wiring in which pairs of wires (the forward and
return conductors of a single circuit) are twisted together for the purposes of
canceling out electromagnetic interference (EMI) from other wire pairs and from
external sources. This type of cable is used for home and corporate Ethernet
networks.

Advantages
-Inexpensive and readily available
-Flexible and lightweight
-Easy to work with and install
Disadvantages
-Susceptibility to interference and noise
-Attenuation problem
-For analog, repeaters needed every 5–6 km
-For digital, repeaters needed every 2–3 km
-Relatively low bandwidth (300Hz)

Campus vs. Data Center Routing and Switching


There is a prevailing misconception that "campus" routing and switching is the same as "data center" routing and switching (and vice versa). So, what is the difference between campus and data center switching and routing?

Campus
The campus is where users (employees) connect to the network, along with all of the
devices those employees use.
-Connects users and their devices such as:
Laptop
Desktop
Mobile phones
-Mixed wired and wireless Ethernet
-Generally speeds from 100 Mbps to 1 Gbps
-L2 and L3 switches
-Type of applications: Email, web, and video applications that have a wide range of bandwidth and delay sensitivity requirements

Data Center
The data center is where devices connect to the network. It consists of mainly rack
servers, load balancers, firewalls, and other such devices designed to process and
exchange data.
-Connects devices such as:
Rack servers
Load balancer
Firewalls
-Generally speeds from 1 Gbps to 10 Gbps
-Primarily L2 (within the DC)
-Type of applications: Database management, virtual machine management, and file
transfer applications that have simple bandwidth (albeit large in quantity) and
little delay sensitivity

When considered in this regard, it should be clear that campus and data center
routing and switching are actually very different. The routers and switches you
select should be designed to maximize their potential, based on these very different sets of requirements.

Network Interface Card

A network interface card (NIC) connects your computer to a local data network or
the Internet. The card translates computer data into electrical signals and sends
it through the network; the signals are compatible with the network so computers
can reliably exchange information.
Since Ethernet is so widely used, most modern computers have a NIC built into the
motherboard. A separate network card is not required unless some other type of
network is used.
Key Functions:
-Acts as the middleman between computer and network
-Converts the data from your personal computer into electronic signals and vice versa
-Provides a network address for your personal computer in a networking environment

Layer 2 Switch
Layer 2 switching uses the media access control address (MAC address) from the host's network interface cards (NICs) to decide where to forward frames.
It is hardware-based, which means switches use application-specific integrated
circuits (ASICs) to build and maintain filter tables (also known as MAC address
tables or CAM tables).
Layer 2 switching provides the following:

-Hardware-based bridging (MAC)


-Wirespeed/non-blocking forwarding
-Low latency

What is Network Teaming?


(Diagram: host, LAN switch, and storage.)
If you have a server with multiple NICs, you can take advantage of this by grouping
them together. This is called "Network Teaming". The following slides contain more
information about the teaming mode that is available for both Intel and Broadcom
NICs, and Windows Server.
The benefits of network teaming are:
-Fault tolerance
-Link aggregation
-Load balancing

Before running driver and firmware updates to NICs, be prepared to restore teaming
settings.

Teaming Modes (Intel)


The available teaming modes for Intel NICs are described below.
AFT
Adapter Fault Tolerance
A "failed" primary adapter passes its MAC and Layer 3 address to the failover (secondary) adapter.

SFT
Switch Fault Tolerance
This uses two adapters connected to two switches to provide a fault-tolerant
network connection in the event that the first adapter, cabling, or switch fails.
Only two adapters can be assigned to an SFT team.

ALB
Adaptive Load Balancing
This increases network bandwidth by allowing transmission over two to eight ports
to multiple destination addresses, and incorporates fault tolerance.

LACP (IEEE 802.3ad)


Link Aggregation Control Protocol (LACP) (802.3ad)
This standard has been implemented in two ways:
-Static Link Aggregation (SLA):
Equivalent to EtherChannel or Intel Link Aggregation
Must be used with an 802.3ad, FEC/GEC, or Intel Link Aggregation capable
switch.
-Dynamic mode
Requires 802.3ad Dynamic capable switches
Active aggregators in software determine team membership between the switch
and the ANS software (or between switches)
There is a maximum of two aggregators per server, and you must choose either
maximum bandwidth or maximum adapters

Both 802.3ad modes include adapter fault tolerance and load balancing capabilities.
However, in Dynamic mode, load balancing is within only one team at a time.

Teaming Modes (Broadcom)

Smart Load Balancing and Failover


Smart Load Balancing™ and Failover are the Broadcom implementation of load
balancing based on IP flow.
-Balance IP traffic across multiple adapters (team members) in a bi-directional
manner
-Automatic fault detection and dynamic failover to another team member or to a hot standby member

Link Aggregation (802.3ad)


All adapters in the team are configured to receive packets for the same MAC
address. The outbound load-balancing scheme is determined by the Broadcom Advanced
Server Program (BASP) driver. The team link partner determines the load-balancing
scheme for inbound packets.

Generic Trunking (FEC/GEC)


The Generic Trunking (FEC/GEC/802.3ad-Draft Static) type of teaming supports load
balancing and failover for both outbound and inbound traffic.

Teaming Modes (Windows)

Switch-independent teaming
This configuration does not require the switch to participate in the teaming. In
switch-independent mode, the switch does not know that the network adapter is part
of a team in the host; because of this, the adapters may be connected to different
switches. Switch independent modes of operation do not require that the team
members connect to different switches; they merely make it possible.

Active/Standby Teaming: Some administrators prefer not to take advantage of the bandwidth aggregation capabilities of NIC Teaming. These administrators choose to use one NIC for traffic (active) and one NIC to be held in reserve (standby) to come into action if the active NIC fails. To use this mode, set the team to switch-independent teaming. Active/Standby is not required to get fault tolerance; fault tolerance is always present anytime there are at least two network adapters in a team.

Switch-dependent teaming
This configuration requires the switch to participate in the teaming. Switch-
dependent teaming requires all the members of the team to be connected to the same
physical switch.

There are two modes of operation for switch-dependent teaming:


-Generic or static teaming (IEEE 802.3ad draft v1):
This mode requires configuration on both the switch and the host to identify which
links form the team. Because this is a statically configured solution, there is no
additional protocol to assist the switch and the host to identify incorrectly
plugged cables or other errors that could cause the team to fail to perform. This
mode is typically supported by server-class switches.
-Dynamic teaming (IEEE 802.1ax, LACP):
This mode is also commonly referred to as IEEE 802.3ad, as it was developed in the
IEEE 802.3ad committee before being published as IEEE 802.1ax. IEEE 802.1ax works
by using the Link Aggregation Control Protocol (LACP) to dynamically identify links
that are connected between the host and a given switch. This enables the automatic
creation of a team and, in theory but rarely in practice, the expansion and
reduction of a team simply by the transmission or receipt of LACP packets from the
peer entity. Typical server-class switches support IEEE 802.1ax, but most require
the network operator to administratively enable LACP on the port.
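For switch-dependent dynamic teaming, the switch-side ports must be placed into an LACP-enabled port channel. The following is a hedged sketch only; the interface names and channel-group number are examples, and the exact syntax varies by switch model and OS version. On many Dell EMC Networking switches it looks like:
console# configure
console(config)# interface range Gi1/0/1-2
console(config-if)# channel-group 1 mode active
The active keyword enables LACP negotiation with the teamed NICs in the host.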

Network Configuration Tools

Intel ANS
Intel® Advanced Network Services (ANS) is installed by default along with Intel®
PROSet for Windows Device Manager.
When you install the Intel PROSet software and Intel ANS, more tabs are automatically added to the supported Intel adapters in Device Manager.
The following tabs are added when the Intel ANS for Windows Device Manager is
installed:
-Boot Options
-Link Speed
-Advanced
-Teaming
-VLANs

Broadcom BACS
Broadcom Advanced Control Suite (BACS) provides management and useful information
about the Broadcom network adapter installed in the system.
BACS enables you to perform the following tasks:
-Detailed tests
-Diagnostics
-Analyses
-View and modify the property values
-View traffic statistics
For network teaming, you can use Broadcom Advanced Server Program (BASP), which
runs within Broadcom Advanced Control Suite itself.

Inter-Switch Connection

Most Dell EMC Networking switches are capable of operating either as stand-alone devices or in tandem with matched devices connected together as a stack.
The stack behaves as a single switch but has the port capacity of the combined devices.

Characteristics of Switch Stacking

Here are the characteristics of switch stacking:


-One unit in the stack is selected as a Master Unit (MU)
-All of the switches in the stack work on the IP address of the MU
-All protocols and applications run on the MU
-Forwarding databases reside on the MU

All units have the same config file for redundancy. The config file is pushed to
all units.

Stack Topology - Ring

Ring Topology
All devices in the stack are connected to each other forming a loop.
Each device in the stack accepts data and sends it to the device to which it is
connected. Packets continue through the stack until they reach their destination.

Failure Behavior
If one device or link in the ring is broken, the system automatically switches to a
stacking failover topology without any system downtime.
After the stacking issues are resolved, the device can be reconnected to the stack
without interruption, and then the ring topology is restored.

Stack Failover Topology - Chain

Stack Failover Topology


A stacking failover topology occurs if a cable or device failure occurs in the
stack. In this topology, devices operate in a chain.
The stack master reconfigures and determines where the packets are sent after the
change.
Each unit is connected to two neighboring devices, except for the top and bottom
units.

Firmware Updates

When a device boots, it decompresses the system image from the flash memory area
and runs it. When a new image is downloaded, it is saved in the area that is
allocated for a secondary system image copy.
On each boot, the device decompresses and runs from the system image that is
currently indicated as active. The active image can be set by an administrator.
Image Upgrade Methods:
-Trivial File Transfer Protocol (TFTP)
-Serial with XMODEM
-USB (only applicable to some models)

Updating the Firmware and Boot Code


Instructions for upgrading firmware are included in the release notes for each
specific revision and model. It is essential that only the release notes for the
version upgrade being attempted are used, as there may be differences between
upgrade procedures.
Specific instructions are not included on this page because of the criticality of
following the release notes of the firmware being installed.
Perform the following steps to obtain the instructions for a firmware upgrade:
1.Browse for the switch model from dell.com/support.
2.Go to Drivers & downloads for that particular model.
3.Click the Network drop-down button, and download the latest firmware.
4.Unzip the file and look for the PDF instructions to upgrade the firmware.

You may not always be responsible for this activity. However, you are required to
understand the information and process on this page. Refer to local policy and
procedure for your specific responsibilities.

Stack Firmware Synchronization

(Diagram: the master unit synchronizes firmware upgrades or downgrades to stack units 1 through 3.)
When you add a new switch to a stack, the Stack Firmware Synchronization feature
automatically synchronizes the firmware version with the version running on the
stack master.
The synchronization operation may result in either an upgrade or a downgrade of
firmware on the mismatched stack member.
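On many Dell EMC Networking N-series models, Stack Firmware Synchronization is enabled from global configuration mode with a command similar to the following. This is shown only as an assumption for illustration; confirm the exact command in the CLI reference guide for your model.
console# configure
console(config)# boot auto-copy-sw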

Upgrading Switch Stack Firmware


Because every unit in a stack conforms to the configuration of the master unit,
only one firmware download is required for the whole stack.
This is the command line to push the firmware from master unit to stack unit:
copy image unit {1-12 | all}
For example, if you were to update stack unit 3 only:
console# copy image unit 3
To push firmware from the master unit to all the stack units simultaneously:
console# copy image unit all
The update operation can take some time. Expect 3 to 5 minutes per stack unit.

Unit Insertion and Removal

The most troublesome part of replacing a switch is if it is part of a stack; replacing a stand-alone unit is less complicated.
Use the stacking menus to set the stacking characteristics of the device. The
changes to these attributes are applied only after the device is reset.
If needed, the new stack unit participates in a Master Unit election to resolve any
unit number conflict.
You can insert and remove switches to/from the current stack without cycling the
power. A new Master Switch will only be re-elected if the Master Switch was removed
from the stack. The entire network might be affected when a topology change occurs,
because a stack reconfiguration takes place.
Insertion
When adding a switch to an existing stack, ensure that no other cables besides the
stacking cables connect the new switch to the existing stack. This helps to prevent
any interference from Spanning Tree Protocol (STP).
Make sure that the new switch is configured for stacking and powered off. The stacking links are not hot-pluggable and must be connected while the unit
is off. When the cables are connected, the new unit can be powered on. It becomes a
stack member and receives a unit ID.
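After the new unit powers on and joins, a quick way to confirm stack membership and unit IDs (the command name may vary slightly by model and OS version) is:
console# show switch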

Removal
When removing a unit from the stack, you must disconnect all the links (both
logical and physical) on the stack member to be removed.

Also, be sure to take the following actions:


-Remove all the STP participating ports and wait for STP to stabilize
-Remove all the member ports of any port channels (LAGs) so that there will not be any control traffic destined to those ports
-Statically reroute any traffic going through the removed unit

Back Up Configuration
The following command format is applicable only to Dell Networking OS versions. If using third-party OS software, refer to the configuration guides
provided with those releases.
If the original switch is still functioning and the customer has not backed up a
copy, connect a laptop to the management port of the switch. Make sure that the laptop has a functioning TFTP server installed.
To back up the configuration:
1.Assign the laptop an IP address on the switch network to enable communication. Establish a console connection to the switch's Command Line Interface (CLI).
2.In the console session, use the copy running-config tftp://hostip/filepath
command to back up the configuration to the laptop's TFTP server.
If the defective switch will not power on and the customer has not made a backup,
the configuration is lost and there is no way to retrieve it.
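For example, if the laptop's TFTP server were reachable at 192.168.10.5 (a hypothetical address and filename, shown only to illustrate the command format given above):
console# copy running-config tftp://192.168.10.5/switch-backup.cfg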

Restore Configuration
The following command format is applicable only to Dell Networking OS versions. If using third-party OS software, refer to the configuration guides
provided with those releases.
To restore the backup configuration:
1.Move the laptop to the replacement switch, and establish a console session and an
Ethernet connection.
2.Place the config file in the root directory of a TFTP server (if it is not
already) on a directly connected laptop. Use the command copy
tftp://hostip/filepath running-config to copy the configuration to the replacement
switch.
3.Verify that all the configuration has been registered in the running
configuration of the replacement system. You can do a "diff" between the output of
show running-configuration and the customer-given configuration.
4.Save the configuration with the command copy run start.
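Using the same hypothetical TFTP server address and filename as in the backup example, the restore sequence would look like the following (illustration only; follow the exact steps above):
console# copy tftp://192.168.10.5/switch-backup.cfg running-config
console# show running-configuration
console# copy run start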

Dell EMC Networking Product Line

Dell Networking Portfolio


The Dell EMC Networking products come in several families. The naming follows the existing Dell Force10 naming system:
C-series for chassis-based data-center access switches
S-series for rack switches
Z-series for distributed core switches
N-series for fabric and access switches
W-series for wireless access points

Dell PowerConnect Legacy Switches


PowerConnect was a Dell series of network switches. The PowerConnect "classic"
switches are based on Broadcom or Marvell Technology Group fabric and firmware.
Starting in 2013, Dell rebranded their networking portfolio to Dell Networking,
which covers both the legacy PowerConnect products and the Force10 products.

28xx Series
The PowerConnect 2808, 2816, 2824, and 2848 are dual-mode unmanaged or web-managed all-Gigabit workgroup switches (10/100/1000). They have eight, 16, 24, or 48 ports respectively. On the 2824 and 2848, there are two extra small form-factor pluggable (SFP) transceiver ports for fiber-optic connectivity. Switches ship in a plug-and-play unmanaged mode and can be managed through a graphical user interface.

35xx Series
The PowerConnect 3524(P) and 3548(P) are managed 10/100 switches with gigabit uplinks and Power over Ethernet (PoE) options for applications such as voice over IP (VoIP), denoted by a "P" at the end of the part number.
All switches in this family support resilient stacking and have management and
security capabilities.

52xx Series
The PowerConnect 5200 series of managed switches comprised the 5212 (12 copper Gb
ports and four SFP ports) and the 5224 (24 copper Gb ports and four SFP ports).

55xx Series
The PowerConnect 5500 series switches, the successor of the 5400 series, are based
on Marvell technology. They offer 24 or 48 GbE ports with (-P series) or without
PoE. The 5500 series has built-in stacking ports, using HDMI cables, and the range offers
two standard 10Gb SFP+ ports for uplinks to core or distribution layer. All 5500
series models (5524, 5524P, 5548, and 5548P) can be combined in a single stack with
up to eight units per stack. The 5500 series uses standard HDMI cables
(specification 1.4 category 2 or better) to stack with a total bandwidth of 40 Gbps
per switch.

7000 Series
The PowerConnect 7024 and 7048 have the same ports as the 6224/6248, have QoS features for iSCSI, and incorporate 802.3az Energy Efficient Ethernet. The 7000 series offers the same types of ports: 1G on the front plus optional 10G and stacking modules on the rear. Unlike the 6200 series, firmware for the 7000 series supports new functionality via versions 4.x and 5.x, like its 10G siblings in the 8024 and 8100 series.

8000 Series
The PowerConnect 8024 and 8024F were rack-mounted switches offering 10 GbE on copper or optical fiber using enhanced small form-factor pluggable transceiver (SFP+) modules on the 8024F. On the 8024, the last four ports (21-24) are combo ports, with the option to use the four SFP+ slots for fiber-optic connections for longer-distance uplinks to core switches.
8100 Series
The PowerConnect 8100 series switches offered 24 or 48 10Gb ports and zero or two built-in 40Gb QSFP ports. All models also have one extension-module slot with either two QSFP 40Gb ports, four SFP+ 10Gb ports, or four 10GbaseT ports.
It is a small (1U) switch with a high port density and can be used as distribution
or (collapsed) core switch for campus networks and for use in the data center.

M Series
The PowerConnect M-series switches are classic Ethernet switches based on Broadcom
fabric for use in the blade server enclosure M1000e. These internal interfaces can
be one or 10 Gbps.

VEP4600

The VEP4600 platform is a one rack unit, x86-based networking platform running virtualized universal customer premise equipment (uCPE) functions, and basic switching/routing functions as a top-of-rack device.
Features include:
-Two 10GbE SFP+ ports and four 1000Base-T ports
-One MicroUSB-B console port
-Two USB Type-A ports for more file storage
-One board management controller (BMC)
-Two RJ45, RS-232 serial-console ports
-One 10/100/1000BaseT RJ45 Ethernet management port for the processor
-One 10/100/1000BaseT RJ45 Ethernet management port for the BMC
-One or two AC hot-swappable redundant power supplies, depending on the
configuration
-Four or five hot-swappable fan modules, depending on the configuration
-Standard 1U platform

Dell EMC Networking W-Series Wireless Network Controllers


Dell EMC Networking W-Series controllers offer high-performance, secure, and
reliable wireless access and management across indoor, outdoor, and remote
locations.

Dell EMC Networking W-Series 7200 Controllers


-For large enterprise
-WiFi LANs recognize new cloud-based and mobile applications
-Provide visibility for IT to prioritize mobile apps by user

Dell EMC Networking W-Series 7000 Controllers


-For smaller campuses and branch offices
-Provide simple and cost-effective configuration
-Encryption and visibility for the all-wireless workplace

Dell EMC Networking W-Series 3000 Controllers


-For medium to large enterprise
-Wireless networks offer enhanced authentication, encryption, and radio management
-Layer 2/3 networking features

Dell EMC Networking W-Series Wireless Network Controllers

Model
Details

W-7240
32,768 users, 2048 APs for large-scale campus enterprise

W-7220
24,576 users, 1024 APs for medium to large campus

W-7210
16,384 users, 512 APs for medium campus enterprise

W-7205
8192 users, 256 APs for small campus enterprise

W-7030
4096 users, supports 64 APs for campus and branch

W-7024
2048 users, supports 32 APs for campus and branch

W-7010
2048 users, supports 32 APs for campus and branch

W-7005
1024 users, supports 16 APs for campus and branch

-Optimized for 802.11ac


-Unify policy management for creation of a simple, cost-effective, and all-wireless
workplace
-Offer coverage of up to 6 million ft2.
-Ensure scalable, secure, and easily managed mobility services for branch offices
-Provide high-capacity WLAN for users, devices, and application delivery to campus
environments

W-Series Advantage:
-ClientMatch automatically steers devices to best AP
-Integrated Adaptive Radio Management™ technology
-Advanced Cellular Coexistence minimizes interference from other networks

Dell EMC Networking N-Series


Dell EMC Networking N-Series is a family of campus access and aggregation switches, with models offering PoE+, 1GbE, and 10GbE, designed for modernizing and scaling network infrastructure.

Key Features:
-Range of models: 24 and 48 port, 1GbE and 10GbE, RJ45 and SFP+, Layer 2/Layer 3,
and PoE+
-Loop-free redundancy without spanning tree using MLAG
-Common operating system with consistent management and CLI/GUI across all models
-Standards-based and interoperable, even with proprietary protocols such as RPVST+ and CDP
-High throughput and capacity to handle unexpected workloads
-Scales easily with up to 12-unit stacking

N2000 / N3000 / N4000 Series


The switches in the Dell EMC Networking N2000/N3000/N4000 series are stackable
Layer 2 and 3 switches that extend the Dell EMC Networking LAN switching product
range. These switches include the following features:
-1U form factor, rack-mountable chassis design
-Support for all data-communication requirements for a multilayer switch, including
Layer 2 switching, IPv4 routing, IPv6 routing, IP multicast, quality of service,
security, and system management features
-High availability with hot-swappable stack members
The Dell EMC Networking N2000 / N3000 / N4000 series includes 13 switch models: N2024, N2024P, N2048, N2048P, N3024, N3024F, N3024P, N3048, N3048P, N4032, N4032F, N4064, and N4064F.

N1500 Series
N1500 switches extend enterprise features to small and midsized businesses by
using a Layer 2+ feature set and offering high availability for smaller managed
networks. N1500 switches offer simple management and scalability through a 40 Gbps
(full-duplex) high availability stacking architecture that enables management of up
to four switches from a single IP address.
-Up to 48 line-rate GbE RJ-45 ports and four integrated 10GbE SFP+ ports
-Support for 24 ports of PoE+ in 1RU or up to 48 ports of PoE+ with an optional
external power supply
-Up to 200 1GbE ports in a 4-unit stack for high density and high availability in
Independent Distribution Facility (IDFs), Main Distribution Facility (MDFs), and
wiring closets
-Non-stop forwarding and fast failover in stack configurations

The Dell EMC Networking N1500 series includes four switch models: N1524, N1524P,
N1548, and N1548P.

Dell EMC Open Networking –ON Series


Dell EMC ON Series switches offer more agility, more choices, and lower costs than proprietary networks through select, open-standards-based ON (open networking) switches with flexible features:
-Disaggregated-hardware/software solutions bring new levels of freedom and
flexibility to data centers.
-Support for Open Network Install Environment (ONIE) enables zero-touch
installation of alternate network operating systems.
-Alternate choice of network operating system helps simplify data-center fabric
orchestration and automation.
-A broad ecosystem of open-source and Linux-based applications and tools provides more options to optimize and manage the network.

N-Series ON Switches
The following switches provide fully managed Layer 2 switching with Open Networking capabilities:

N1100
Dell EMC Networking introduces a series of new switches that are designed to
provide low cost, basic managed Gigabit-Ethernet switches at near Fast Ethernet
price points. Models provide choices in base switch-port counts and capabilities
that are perfectly suited as affordable managed-switch solutions for both Fast
Ethernet and Gigabit Ethernet customers.
The N1100 series switches offer the following features:
-Full management (remote and local) consistent with N-series switches
-Full industry-standard CLI, consistent with N-series switches
-10/100/1000Mbps RJ45 full and half-duplex fixed ports
-Switches feature scalability through interconnection of more N1000 series switches
-Internal, fixed power supply (no external redundant or PoE PSU support)
-Power over Ethernet+ (30.8 W IEEE 802.3at) and Power over Ethernet (15.4 W)
support
-Aesthetically pleasing and operationally friendly design, ideal for many locations, targeting reduced noise and quiet operation with fan-less and/or variable-speed fan designs
-New half RU width 8-port N-series rack form factor models using X-series mounting
options adaptable for surface top, wall/ceiling mount, and standard rack mounting
-Energy-Efficient Ethernet design
-Quickly deployable by customers requiring little-to-no networking knowledge
-IPv4 & IPv6 management

N2128 PX-ON
Dell EMC Networking N2128PX-ON Network Switch offers 1GbE and 2.5GbE multigigabit connectivity.
The N2128 PX-ON switch offers the following features:
-24 x RJ45 10/100/1000Mb auto-sensing PoE+ ports (optional external power supply needed to provide power to all 24 ports at 30.8W)
-2 x RJ45 10/100/1000/2500Mb auto-sensing PoE 60W ports
-2 x integrated 10GbE SFP+ ports
-2 x dedicated rear stacking-ports
-1 x integrated power supply (1000W AC)

N3132 PX-ON
Dell EMC Networking N3132PX-ON Campus-Network Switch offers 1GbE, 2.5GbE, and 5GbE
multigigabit connectivity for customers looking to deploy faster connectivity on
CAT5e networks with Power over Ethernet+ and Universal Power over Ethernet
capability. This switch is a perfect complementary offering to the Dell Networking
W-Series W-AP334/5 and W-IAP334/5 802.11ac Wave 2 2.5GbE Wireless Access Points.
The N3132PX-ON will be added to Open Manage Network Manager with OMNM 7.0 and
HiveManager NG in a future release.
The N3132 PX-ON switch offers the following features:
-24 x 100MB/1GbE RJ45 ports.
-4 x SFP+ ports.
-8 x 2.5GbE/5GbE (NBase-T) speeds over Cat 5e cable; 4 of these ports support 10G
on Cat 6a.
-40GbE uplink or 2 x 21Gb Stacking module.
-Stacking supported with N3000.
-PoE+ and Universal PoE support (on 8 multi-gig ports).
-ONIE boot loader support.
-No TAA.

Dell EMC Networking S-Series

Dell EMC Networking S-Series product line offers a range of modular and fixed-
configuration 1/10/40GbE systems. These are designed principally for data-center
top-of-rack (ToR) and aggregation applications.
The S-Series switches are:
-1-GbE switches: S3124/S3124F/S3124P, S3148/S3148P
-10-GbE switches: S4128F-ON, S4148F-ON, S4128T-ON, S4148T-ON, S4148FE, S4148U, S4248FB-ON, S4248FBL-ON
-25-GbE switches: S5048F-ON, S5148F-ON, S5232F-ON, S5248F-ON, S5296F-ON
It also includes the S6000 high-density 40-GbE virtualization switch, the S5000 modular
converged SAN/LAN switch, and the S4810 or S4820T ToR switches supporting 1 GbE, 10
GbE, and 40-GbE capabilities.

Dell EMC Networking S-Series 1-GbE Switches

Dell EMC Networking S-Series 1-GbE switches are optimized for high-performance
data-center environments:
-Delivers low latency, superb performance, and high density with hardware and
software redundancy
-Offers Active fabric designs using S- or Z-Series core switches to create a two-
tier, 1/10/40-GbE data-center network architecture
-Provides ideal solutions for ToR applications in enterprise, Web 2.0, and cloud service providers' data-center networks

1-GbE S-Series Switches


S3100-Series switches are a low-cost, wiring closet switch/router product for copper connections to 1G endpoints with Power over Ethernet Plus (PoE+) capability on 1G access ports. The series includes capabilities for 1/10GE switching with 1G
copper links for campus network endpoints and 10G ports for uplinks to
core/aggregation switches.

S3124
The S3124 switch offers the following features:
Twenty-four Gigabit Ethernet (10/100/1000BASE-T) RJ45 ports that support
autonegotiation for speed, flow control, and duplex
Two combo SFP ports
Two SFP+ 10G ports

S3124F
The S3124F switch offers the following features:
Twenty-four Gigabit Ethernet 100BASE-FX/1000BASE-X SFP ports
Two 1G copper combo ports
Two SFP+ 10G ports

S3124P
The S3124P switch offers the following features:
Twenty-four Gigabit Ethernet (10/100/1000BASE-T) RJ-45 ports for copper that
support auto-negotiation for speed, flow control, and duplex
Two combo SFP ports
Two SFP+ 10G ports
Supports PoE+
Two fixed, mini Serial Attached SCSI (mini-SAS) stacking ports to connect up to six switches

S3148
The S3148 switch offers the following features:
Forty-eight Gigabit Ethernet 10/100/1000BASE-T RJ-45 ports
Two SFP 1G combo ports
Two SFP+ 10G ports
20G expansion slot that supports an optional small form-factor pluggable plus
(SFP+) or 10GBase-T module
Two fixed mini Serial Attached SCSI (mini-SAS) stacking ports to connect up to twelve S3100 series switches

S3148P
The S3148P switch offers the following features:
Forty-eight Gigabit Ethernet (10BASE-T, 100BASE-TX, 1000BASE-T) RJ-45 ports that
support auto-negotiation for speed, flow control, and duplex
Two combo SFP ports
Two SFP+ 10G ports
Supports PoE+
Two fixed, mini-SAS stacking ports to connect up to six switches

Dell EMC Networking S-Series 10-GbE Switches


Deploy modern workloads and applications that are designed for the open networking
era with an optimized data-center top-of-rack (ToR) networking solution that:
-Includes the S4048-ON and S5000 10-GbE switches and the S4820T and S4048T-ON
1/10GbE BASE-T switches
-Delivers low latency, superb performance, and high density with hardware and
software redundancy
-Offers Active fabric designs using S- or Z-Series core switches to create a two-
tier, 1/10/40-GbE data-center network architecture
-Provides an ideal solution for applications in high-performance data-center and
computing environments

S4128F-ON/S4148F-ON

S4100-ON Series switches (S4128F-ON and S4148F-ON) are one rack unit (RU), full-featured, fixed form-factor top-of-rack (ToR) 10/25/40/50/100GbE switches for 10G servers. Each
includes small form-factor pluggable plus (SFP+), quad small form-factor pluggable
plus (QSFP+), and quad small form-factor pluggable (QSFP28) ports.
The features of S4128F-ON/S4148F-ON switches are listed:
-S4128F-ON: 28 fixed 10-GbE SFP+ ports, 2 fixed 100-GbE QSFP28 ports
-S4148F-ON: 48 fixed 10-GbE SFP+ ports, 2 fixed QSFP+ ports, 4 fixed 100-GbE QSFP28
ports
-1 x MicroUSB-B serial console management port
-1 x RJ45 serial console management port
-1 x universal serial bus (USB) Type-A port for more file storage
-1 x 2 Core Rangeley C2338 central processing unit (CPU), with 4-GB DDR3 SDRAM and
one 16-GB mSATA/M.2 SSD module
-7-segment stacking indicator
-Temperature monitoring
-Real-time clock (RTC) support
-Hot-plug redundant power supplies
-Removable fans
-Standard 1U chassis

S4128T-ON/S4148T-ON

S4128T-ON and S4148T-ON are one rack unit (RU), full-featured, fixed form-factor,
top-of-rack (ToR) 10/25/40/50/100GbE switches for 10GBase-T servers with copper
BaseT RJ-45 ports. They include small form-factor pluggable plus (SFP+), quad small form-
factor pluggable plus (QSFP+), and quad small form-factor pluggable 28 (QSFP28)
ports.
The features of S4128T-ON/S4148T-ON switches are listed:
-S4128T-ON: 28 fixed copper 10GBase-T RJ-45 ports, two fixed 40-GbE QSFP+
ports
-S4148T-ON: 48 fixed copper 10GBase-T RJ-45 ports, two fixed 40GbE QSFP+
ports, four 100GbE QSFP28 ports
-1 x MicroUSB-B serial console management port
-1 x RJ45 serial console management port
-1 x universal serial bus (USB) Type-A port for more file storage
-1 x 2 Core Rangeley C2338 central processing unit (CPU), with 4-GB DDR3 SDRAM and
one 16-GB mSATA/M.2 SSD module
-7-segment stacking indicator
-Temperature monitoring
-Real-time clock (RTC) support
-Hot-plug redundant power supplies
-Removable fans
-Standard 1U chassis
S4248FB-ON/S4248FBL-ON
The S4200-ON Series (S4248FB-ON/S4248FBL-ON) switches are one rack unit (RU), full-featured,
high-density 10/25/40/100GbE switches. They include 40 small form-factor pluggable
plus (SFP+) ports, six quad small form-factor pluggable 28 (QSFP28) ports (100 GbE,
4x25GbE, 40 GbE, and 4x10GbE), and two QSFP+ ports for 40/100GbE aggregation and
1/10GbE top-of-rack (ToR) and end-of-row (EoR) applications.
The features of S4248FB-ON/S4248FBL-ON switches are listed:
-Forty 1/10GbE fixed SFP+ ports
-Two QSFP+ ports supporting 40 GbE or 4x10GbE breakout
-Six QSFP28 ports
-One RJ45 console port
-One USB Type-A 2.0 port for more file storage
-One ESD Jack
-TCAM: on-board Rangeley central processing unit (CPU) system with 32-GB DDR III
RAM, 64-GB iSLC mSATA SSD
-Non-TCAM: on-board Rangeley CPU system with 8-GB DDR III RAM, 16-GB iSLC mSATA SSD
-Two hot-swappable redundant power supplies
-Five hot-swappable fan modules
-Standard 1U switch
-The S4200-ON Series supports the following configurations:
-48 x 10 GbE + 6 x 100 GbE
-40 x 10 GbE + 8 x 40 GbE
-48 x 10 GbE + 12 x 50 GbE
-48 x 10 GbE + 24 x 25 GbE
-72 x 10 GbE

S4148U-ON
S4148U–ON system is a one rack unit (RU), full-featured fixed form-factor top-of-
rack (ToR) 10/25/40/50/100GbE switch for 10G servers. It includes small form-factor
pluggable plus (SFP+), small form-factor pluggable 28 (SFP28), quad small form-
factor pluggable plus (QSFP+), and quad small form-factor pluggable 28 (QSFP28) ports.
The unique features of S4148U-ON are listed:
-24 x 1/10 GbE SFP+ ports
-24 x unified SFP+/SFP28 ports for Fibre Channel and Ethernet
-2 x fixed 40-GbE QSFP+ ports
-4 x unified QSFP28 ports for Fibre Channel and Ethernet
-1 x MicroUSB-B serial console management port
-1 x RJ45 serial console management port
-1 x universal serial bus (USB) Type-A port for more file storage
-1 x 2 Core Rangeley C2338 central processing unit (CPU), with 4-GB DDR3 SDRAM and
one 16-GB mSATA/M.2 SSD module
-7-segment stacking indicator
-Temperature monitoring
-Real-time clock (RTC) support
-Hot-plug redundant power supplies
-Removable fans
-Standard 1U chassis

S4148FE-ON
S4100-ON Series S4148FE-ON is a one rack unit (RU), full-featured fixed form-factor
top-of-rack (ToR) 10/25/40/50/100GbE switch for 10G servers. It includes small
form-factor pluggable plus (SFP+), quad small form-factor pluggable plus (QSFP+),
and quad small form-factor pluggable 28 (QSFP28) ports. The S4148FE-ON also includes
unified (Fibre Channel and Ethernet) 10-GbE SFP+ and QSFP28 ports.
The unique features of S4148FE-ON are listed:
-The first 24 ports are unified ports.
-Two fixed 40-GbE QSFP+ ports
-Four fixed 100-GbE QSFP28 ports
-Seven-segment stacking indicator
-One micro-USB-B console port
-One USB type-A port
-Support for LRM optics
Note: For specific port profile details, see the OS10 Enterprise Edition User
Guide.

Dell EMC Networking S-Series 25/40/50/100GbE Switches

Gain the flexibility to transform data centers with high-capacity network fabrics
that are easy to deploy and cost-effective, providing a clear path to a software-
defined data center. These switches offer:
-High density for 40GbE deployments in ToR, middle-of-row, and end-of-row
deployments
-A choice of S6000-ON and S6010-ON 40-GbE switches and the S6100-ON
10/25/40/50/100GbE modular switch
-S6100-ON modules that include: 16-port 40GbE QSFP+; 8-port 100GbE QSFP28; combo
module with four 100GbE CXP ports and four 100GbE QSFP28 ports
-An ideal solution for modern workloads and applications that are designed for the
open networking era

S5048F-ON/S5148F-ON

The S5048F-ON switch is a one rack unit, full-featured, fixed form-factor top-of-
rack (ToR) compact 10/25/40/50/100-GbE switch. It has 10/25-GbE links for server
connections and 40/50/100-GbE links for clustering—virtual link trunking (VLT) and
stacking—and uplinks to aggregation and core switches. The switch includes two hot-
swappable AC or DC power-supply units (PSUs) and four hot-swappable fan units.
The S5048F-ON switch offers the following features:
-Standard 1U switch
-On-board Rangeley central processing unit (CPU) system with 8-GB DDR III RAM, 16-
GB iSLC mSATA SSD
-Individual port groups can be configured as either Ethernet or Fibre Channel,
enabling a single S4148U-ON to switch native Ethernet and native Fibre Channel
simultaneously
-S5048F-ON has 48 x 1 GbE/10 GbE/25 GbE SFP28 ports
-S5148F-ON has 48 x 10 GbE and 25-GbE SFP28 ports
-6 x 40 GbE and 100-GbE QSFP28 ports
-1 x MicroUSB-B console port
-1 x RJ45 serial console port
-1 x USB Type-A port for more file storage
-1 x 10/100/1000BaseT Ethernet management port
-2 x hot-swappable redundant power supplies
-4 x hot-swappable fan modules
S5232F-ON, S5248F-ON, S5296F-ON
S5200F-ON Series switch is a full-featured fixed form-factor top-of-rack (ToR)
compact 10/25/40/50/100/200GbE switch for data center networks. It includes small
form-factor pluggable plus (SFP+), small form-factor pluggable 28 (SFP28), quad
small form-factor pluggable 28 (QSFP28), and quad small form-factor pluggable
double density (QSFP-DD) ports. S5200F-ON Series switch is a 10/25/40/50/100/200GbE
switch with 10/25GbE links for server connections and 40/50/100GbE links for
clustering—virtual link trunking (VLT) and stacking—and uplinks to aggregation and
core switches. The switch includes two hot-swappable AC or DC power supply units
(PSUs) and four hot-swappable fan units.
The three switches present in this series are:
1. S5232F-ON – one rack unit
2. S5248F-ON – one rack unit
3. S5296F-ON – two rack units

The S5200F-ON Series offer the following features:


-S5232F-ON: thirty-two 100GbE QSFP28 ports and two 10GbE SFP+ ports – Standard 1U
switch
-S5248F-ON: forty-eight 25GbE SFP28 ports, four 100GbE QSFP28 ports, and two 200GbE
QSFP-DD ports – Standard 1U switch
-S5296F-ON: ninety-six 25GbE SFP28 ports and eight 100GbE QSFP28 ports – Standard
2U switch
-One MicroUSB-B console port
-One RJ45 serial port
-One serial management USB Type-A port for more file storage
-Four-core Intel Denverton central processing unit (CPU) system with 16-GB SDRAM
and 64-GB SSD
-One 10/100/1000BaseT Ethernet management port
-Temperature monitoring
-Software-readable thermal monitoring
-Real-time clock (RTC) support
-Two hot-plug redundant power supplies
-Power management monitoring
-Four hot-pluggable replaceable fan modules
-2-hole ground lug
-S5296F-ON—USB male-female extension cable
-The S5200F-ON Series supports the following configurations:
-96 x 10 GbE + 8 x 100 GbE
-96 x 25 GbE + 8 x 100 GbE
-128 x 10 GbE
-128 x 25 GbE
-64 x 50 GbE
-32 x 40 GbE
-32 X 100 GbE

Z9500

The Dell EMC Networking Z-Series of core/aggregation switches provides optimal
flexibility, performance, density, and power efficiency.
The Z9500 is part of Dell Networking’s Z-Series family of Data Center Core
Switches.
The Z9500 switch is a compact 3RU, high-performance, low-latency 10/40GbE
switch. It runs the Dell EMC Networking Operating System, providing features and
capabilities that are widely deployed.
The Z9500 switch has the following features:
-132 fixed Quad Small Form-Factor Pluggable (QSFP+) ports on the I/O side of the
switch
-Four slots on the utility side of the switch for hot-swappable power supplies
-Five slots on the utility side of the switch for hot-swappable fan modules
-A management and console interface on the I/O side of the switch

Z9100-ON

The Z9100-ON is a 1RU compact 10/25/40/50/100 GbE switch. The system includes 32
fixed quad small form-factor pluggable 28 (QSFP28) ports for 40/100 GbE aggregation and
10/25/40/50 GbE top-of-rack (ToR) and end-of-row (EoR) applications. The Z9100–ON
system includes two hot-swappable AC or DC power supply units (PSUs) and five hot-
swappable fan units.
The Z9100–ON comes with the following feature set:
-Standard 1U chassis
-Two hot-plug redundant power supplies
-Rangeley CPU system with 8-GB DDR III RAM
-QSFP ports support 10/25/40/50/100 GbE
-Two 1000M/10G SFP+ ports
-One MicroUSB-B console port
-One 10/100/1000 BaseT Ethernet management port

Z9264F-ON

The Z9264F-ON switch is a two rack unit (RU), full-featured, fixed form-factor top-
of-rack (ToR) and end-of-row 1/10/25/40/50/100-GbE switch. In addition, the Z9264F-ON
supports two SFP+ ports at 1/10/100 Mbps with 10 GbE multimode and single-mode
options. The switch includes two hot-swappable AC or DC power supply units (PSUs)
and four hot-swappable fan units.
The Z9264F-ON switch offers the following features:
-Standard 2U switch
-Denverton C3538 4-Core CPU system with 16-GB DDR4 memory and 32-GB SSD storage
-64 x QSFP28 ports that support 1, 10, 25, 40, 50, and 100 GbE
-2 x SFP+ that support 1 GbE, 10 GbE, and 100 GbE
-1 x MicroUSB type-B console port
-1 x RJ45 serial console port
-1 x Serial management USB type-A port for more file storage
-1 x 10/100/1000BaseT Ethernet management port
-2 x hot-swappable redundant power supplies
-4 x hot-swappable fan modules

Dell EMC Networking C-Series

The Dell EMC Networking C-Series is a family of chassis-based switches that enables you to
create an expandable, high-density infrastructure with tremendous throughput.
-Alleviate bottlenecks and congestion, and simplify infrastructure with 10/40GbE
and modern protocols.
-Enable high-performance back-end infrastructure for user mobility with PoE+ where
needed.
-Readily support Unified Communications and Collaboration (UC&C) and VDI workloads
across a global workforce with scalable user density and performance on demand.

Expand and configure your chassis based on your needs.


-Dell EMC Networking C7004—Add up to four line cards.
-Dell EMC Networking C7008—Add up to eight line cards.

C7008

The Dell EMC Networking C7008 is a 13U chassis featuring up to eight line card I/O
slots that are coupled with a backplane driving up to 1.536 Tbps bandwidth. It is
designed to provide inherent reliability, network control, and scalability for
high-performance Ethernet environments.
-Switch fabric capacity of up to 1.536 Tbps and up to 952 Mpps L2/L3 packet
forwarding capacity
-1+1 Route Processor Module (RPM) design
-Continuous runtime data plane monitoring and advanced in-service command-line
interface (CLI) diagnostic functions
-Power supply redundancy with load-sharing power bus enabling uninterrupted VoIP
calls during a power supply failure
-Over 300 line-rate 10/100/1000Base-T ports with full 30.8 W Class 4 PoE+ support
in a 13U chassis
-Up to 128 10GbE ports for RJ-45 installations or up to 64 10GbE ports with
pluggable SFP+ or XFP modules
-Full complement of standards-based Layer 2, IP version 4 (IPv4), and IP version 6
(IPv6) for unicast and multicast applications
-5-microsecond switching latency under full load for 64-byte frames
-A suite of security, access control, and wiring closet edge features for
enterprise networks
-Robust Dell EMC Networking operating system enables resiliency and efficient
operation

C7004
The Dell EMC Networking C7004 is a 9U chassis featuring up to four line card I/O
slots that are coupled with a backplane driving up to 768 Gbps bandwidth. The Dell
Networking C7004 is designed to provide inherent reliability, network control, and
scalability for high-performance Ethernet environments.

-Switch fabric capacity of up to 768 Gbps and up to 476 Mpps L2/L3 packet
forwarding capacity
-1+1 Route Processor Module (RPM) design
-Continuous runtime data plane monitoring and advanced in-service command-line
interface (CLI) diagnostic functions
-Power supply redundancy with load-sharing power bus enabling uninterrupted VoIP
calls during a power supply failure
-Over 200 line-rate 10/100/1000Base-T ports with full 30.8 W Class 4 PoE+ support
in a 9U chassis
-Up to 64 10GbE ports for RJ-45 installations or up to 32 10GbE ports with
pluggable SFP+ or XFP modules
-Suite of security, access control, and wiring closet edge features for enterprise
networks
-Full complement of standards-based Layer 2, IPv4, and IPv6 for unicast and
multicast applications
-5-microsecond switching latency under full load for 64-byte frames
-Robust Dell EMC Networking operating system for resiliency and efficient operation

C9010 Network Director

The C9010 switch is part of Dell EMC Networking's next-generation LAN solution,
providing a scalable switch that offers a path to higher density 10 GbE and 40 GbE
capability. The user can deploy the C9010 switch as an access or aggregation/core
switch for installations in which a modular switch is preferred. For larger port
requirements, users can also connect the C1048P port extender as access devices.
The C9010 switch supports up to 248 1GbE, 248 10GbE, or 60 40GbE ports with a
combination of port speeds and media types, such as copper, fiber, and direct
attach copper (DAC). It is an 8U chassis (18 in./45.72 cm depth) that fits into a
standard 19 in. (48.26 cm) rack or cabinet. The C9010 chassis supports the
following components:
-Two full-width route processor modules (RPMs) with four 1/10 GbE SFP+ uplinks per
module
-10 half-width Ethernet line cards of the following types:
-6-port 40 GbE QSFP+
-24-port 1/10 GbE SFP+
-Three hot-swappable fan modules with side-to-side airflow (draws air through
ventilation holes on the right side of the chassis and expels air through
ventilation holes on the left side)
-Four 1450/2900W AC PSUs

C1048P Port Extender


The Dell EMC Networking C1048P Port Extender is a stackable GbE port extender (PE)
for campus access deployments. The C1048P is a 1RU port extender for the C9000
series modular chassis and requires a C9010 or other C9000 series switch to
operate. The C1048P functions as a remote line card physically connected to, and
provisioned by, a C9010 over 10 GbE uplinks. You can connect up to 40 C1048P port
extenders to a single C9000 series switch.

Module 6: Dell EMC Storage Products Service


General Purpose Storage Technologies in Enterprise


Overview
In the Enterprise Data Center, servers provide the computing horsepower to
accomplish great feats.

Storage technologies do not have the same appeal as servers and computing
platforms, but they make up for it by providing the data, or more accurately the
availability and integrity of the data. The single purpose of storage technologies is
to ensure that the data (be it an operating system, applications, or databases) used
by those servers is fully available and usable.

In this lesson, you learn general information about common storage technologies
used by the IT industry and Dell EMC in the data center.
Estimated Course Duration
This General Purpose Storage Technologies in the Enterprise lesson is estimated to
take 20-25 minutes to complete.

Vocabulary - Agreeing on Terms


This topic reviews storage-related terms with which you should be familiar. It is
not intended to be an exhaustive review of all terms.

For an authoritative reference, go to the Storage Networking Industry Association's
(SNIA) online dictionary that is found at:
http://www.snia.org/education/dictionary/

For general storage industry terms, you can consult TechTarget.com's "WhatIs"
database that is found at:
http://whatis.techtarget.com/

Terms and Vocabulary


The following terms are commonly used storage vocabulary. Review each term before
continuing.

SCSI
SCSI is the block storage protocol 'bus' communication standard used to communicate
controller commands between hosts and their peripherals, such as hard drives.

SCSI has been in use since the late '70s and was declared a standard in 1986. Over
that time, it has changed from a parallel communication bus (SCSI-1) to the more
reliable and now faster serial bus (SCSI-3 used by SAS). Originally the SCSI
standard defined both hardware & command structure. Today, it focuses mainly on
command structure. As such, SCSI is arguably the most widely implemented I/O
interconnect in use today.
(source: SNIA.org & ANSI)

If you're interested in finding more information about SCSI, visit the SCSI Trade
Association at SCSITA.org and T10.org for hardware specifications.

SATA
Serial Advanced Technology Attachment (SATA) is used mainly in the consumer
space. SATA drives are normally half-duplex, and generally speaking, have higher
latency and lower reliability specifications.

Physically, SATA connectors are recognized by having a clear separation between the
data connectors and power connectors.

SAS
Serial Attached SCSI (SAS) is a SCSI interface standard for attaching hosts to SCSI
devices, including SAS and SATA disk and tape drives. (source: SNIA.org)

SAS uses the SCSI-3 protocol, which defines serial point-to-point communication and
allows SCSI communication to occur over longer distances, and more reliably to more
devices, than the older parallel SCSI bus.

SAS drives are full-duplex (dual ported), and have lower latency and better
reliability specifications than SATA. SAS controllers also can support SATA
devices, but if a SATA device is attached, the connection operates at half-
duplex and can reduce the operating speed of all devices in the chain.
NOTE: For more information on SAS and SCSI, you can view the "SAS Solutions"
reference content available from Educate Dell by using the global search tool for
the "Dell SAS Solutions".

Initiator
The initiator is the system component that originates an Input/Output (I/O) command
over an I/O interconnect.

When within the context of SCSI, it is the endpoint that originates the SCSI I/O
command sequence.
(source: SNIA.org)

Controller
Within a storage context, the term controller refers to the storage system I/O
adapter controller of a disk or tape that performs the controller logic command
decoding and execution, host data transfer, serialization and deserialization of
data, error detection and correction, and overall management of device operations.
(source: SNIA.org)

There are different types of I/O adapters such as network interface controller,
SCSI controller, RAID controller, etc.

HBA
Host Bus Adapter, or HBA, is a more general term for an I/O adapter that connects a
computer to an I/O interconnect, or bus. See Controller.

Target
The target is the system component that receives a SCSI I/O command sequence over
an I/O interconnect from the initiator.
(source: SNIA.org)

Disk Drive
"Hard Disk" means a traditional drive with rotating magnetic and optical disks, as
well as solid-state disks, or non-volatile electronic storage elements. It does not
include specialized devices such as write-once-read-many (WORM) optical disks or
RAM disks.

SSD
A Solid State Drive, or SSD, is a disk drive whose storage capability is provided
by solid-state electronics. The form factors and interfaces for solid-state drives
are typically the same as for traditional disk drives.

Spindle
This term means a disk drive that contains rotational magnetic media. This is
often even further differentiated by specifying the 'spindle speed'. For example,
"My storage unit has six 10k spindles, six 15k spindles, and four SSDs."

Form Factor (FF)


Dell sells and supports both 3.5" and 2.5" sizes of hard drives (both SATA and
SAS/NL-SAS). Aside from the obvious physical differences (size and speed), the
drives are used in the same fashion. A 2.5" hard drive can replace a 3.5" hard
drive under certain circumstances, for example when the 3.5" drive is unavailable
and/or discontinued (both are SAS, both are the same capacity, and the replacement
drive's specifications meet or exceed those of the 3.5" drive). However, in so
doing, Dell must provide the correct hard drive carrier and cabling to accommodate
the form factor of the drive.
RAID
Redundant Array of Independent Disks (RAID)

RAID provides a way of storing data on multiple disks with the ability to
survive a drive failure without losing data. Also, by striping data across
multiple disks with mathematical redundancy, RAID normally greatly improves I/O
operations per second (IOPS) in accessing and storing data.
(source: http://SearchStorage.TechTarget.com/definition/RAID)

Software RAID / Hardware RAID


RAID can be implemented across multiple drives with purely software methods (as is
common in Linux environments), or with purely hardware methods, as seen, for
example with Dell's PowerEdge Expandable RAID Controller (PERC). Software RAID is
managed at the operating system level, while Hardware RAID is managed by the
controller that handles the I/O to the disks. In larger enterprises, hardware
storage arrays receive I/O and implement RAID on the storage device itself
(example, Dell Storage SC series).

IOPS
Input/output OPerations per Second (IOPS)

Backplane
The term backplane is not unique to storage technologies. A backplane is a printed
circuit board (PCB) that is designed to be a core component that other components
plug into. Within a storage enclosure, or a simple server with multiple internal
hard drives, the backplane normally refers to the PCB component that hard
drives and/or hard drive carriers plug into, which facilitates target-initiator
communication (that is, the SAS channel(s) or SCSI bus).

JBOD
An acronym meaning "Just a Bunch Of Disks". It refers to an unintelligent
enclosure that usually consists of a chassis (including power supplies, cooling, and an
internal backplane), hard drives, and I/O modules that allow the JBOD to be
connected to a host or intelligent storage array and/or other JBODs.

Enclosure
See JBOD. Other synonymous terms include disk enclosure, disk chassis, shelf, and
pod (legacy) among others.

EMM
EMM, or Enclosure Management Module, is commonly used to describe I/O modules found
in JBODs (unintelligent storage arrays). However, not all vendors employ this
term. Some may use Expansion Module, or simply I/O Module/Card. Dell EMC is one of
the vendors that uses EMM to describe the I/O modules found in external enclosures,
and usually indicates that the enclosure is a JBOD for host attachment or for disk
expansion for a storage array.

DAS - Direct Attached Storage. This is the general term to describe a host that is
connected to storage directly, from a storage controller to the actual storage or
storage enclosure, using a block level protocol (for example, a PCI SAS card
attached internally and/or externally to an enclosure).

SAN - Storage Area Network. This is the general term to describe a host that is
connected to storage over a network through a storage controller/HBA using a block
level protocol (for example, using a LAN card but transmitting iSCSI, or using an
HBA communicating over Fibre Channel).
NAS - Network Attached Storage. This is a term used to refer to storage devices
that connect to a network using a file level protocol and provide file access
services to computer systems (example, CIFS or NFS shares). Note that a NAS device
can be using DAS or SAN for its storage needs, however the hosts connected to the
NAS device do not care about this; they only connect using file access services.

Block Level Protocol


When connecting to a storage device directly or over storage networks, the traffic
is "block level". For hosts directly connected to the storage, the protocol used
is serialized SCSI (example, SAS). For hosts attached over a storage network
(SAN), the protocol can vary, but usually involves a transport protocol that
encapsulates SCSI commands, such as iSCSI or Fibre Channel. Therefore, this is
termed "block level" communication, as SCSI only is aware of an initiator, its
target, and the block of data that it is transmitting.

To simplify, if SCSI commands are being transported from an initiator to a target,


then you can safely refer to this as a "block level protocol".
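
As a toy illustration of what encapsulating SCSI commands means, the sketch below
(Python; the class names are invented and this is not the actual iSCSI PDU or Fibre
Channel frame layout) shows a SCSI command riding inside a transport envelope:

from dataclasses import dataclass

@dataclass
class ScsiCommand:            # the block-level payload: operation + logical block address
    opcode: str               # e.g., "READ(10)" or "WRITE(10)"
    lba: int                  # starting logical block address
    blocks: int               # number of 512-byte blocks

@dataclass
class TransportFrame:         # conceptual stand-in for an iSCSI PDU or FC frame
    transport: str            # "iSCSI", "FC", "SAS", ...
    initiator: str
    target: str
    payload: ScsiCommand      # the SCSI command rides inside the transport

if __name__ == "__main__":
    cmd = ScsiCommand("READ(10)", lba=2048, blocks=8)
    frame = TransportFrame("iSCSI", "host-hba-0", "array-port-3", cmd)
    print(frame)              # the network only sees the envelope; SCSI sees blocks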

File Level Protocol


When connecting to storage over a network file protocol, the traffic is deemed
"file level protocol". File Level traffic hits the network before SCSI command
directives can be created. The most common and ubiquitous examples of file level
traffic are found in the CIFS (Common Internet File System) and NFS (Network File
System) network file system protocols.

Within the context of storage, storage devices that transmit data over SMB and CIFS
for communication with hosts transmit file level data.

Storage Tiers
Per SNIA, Storage Tiering indicates any storage that is partitioned into multiple
distinct classes based on price, performance, organizational usage, or other
attributes.

Data may be dynamically moved among classes in a tiered storage implementation
based on access activity or other considerations.

Disk Storage
Disk storage's primary role is real-time data storage and retrieval. Hard disk and
solid-state technologies fulfill this role by offering different tiers of storage.

At Dell EMC, parallel SCSI devices have all reached end of service life, so we will
not review those devices. Next, we will briefly review SATA and SAS technologies.
At the time of this writing, Dell EMC sells and supports hard disk drives (HDD) in
1.8", 2.5", and 3.5" form factors.

Differences Between SATA and SAS


There are major technical and performance differences between SAS and SATA drives.

SATA drives are geared for a general-purpose consumer application, whereas SAS
drives are designed for mission-critical enterprise applications.

The following table identifies the major differences between the two:

Performance Profile
  SATA: General purpose (normal front-office applications and usage; higher latency,
  slower spindle speeds)
  SAS: Enterprise/mission critical (enterprise back-office applications such as
  databases; lower latency, higher spindle speeds)

Duty Cycle
  SATA: 8 hours/day, 5 days/week
  SAS: 24 hours/day, 7 days/week

Initiators
  SATA: Supports a single initiator
  SAS: Can support multiple initiators

Number of Communication Ports/Channels
  SATA: Single-ported
  SAS: Dual-ported

I/O Stream
  SATA: Half-duplex
  SAS: Full-duplex

Storage Protocol
  SATA: ATA command set (native)*
  SAS: SCSI command set

Error Recovery
  SATA: Limited (due to ATA command set)
  SAS: Robust (due to SCSI command set)

Communication Path (maximum cable length)
  SATA: 1 meter*
  SAS: 8 meters

Nearline SAS and SAS-SSD Drives


Two other types of drives that fall within the SAS arena are:
Nearline SAS (NL-SAS)
-Same drive mechanics and reliability of an Enterprise SATA disk
-Standard SAS interface (NOT a SAS interposer board that is attached to a SATA
disk)
-Lower spindle speed than SAS disks (such as 7.2k RPM for NL-SAS)
SAS Solid-State Drives (SAS-SSD)
-Standard SAS interface
-Higher I/O capability (~3,000 to 8,000 IOPS as compared to ~180 IOPS for a 15k-RPM
SAS-HDD; see the estimate after this list)
-Solid-State NAND memory can only be written to a finite number of times
-Available in Read-Intensive and Write-Intensive (equipped with more and/or more
reliable cells to accommodate more writes)
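
For context on where a figure like ~180 IOPS for a 15k-RPM drive comes from, the
following rough estimate (Python, illustrative only; the average seek time is an
assumed value, not a published Dell specification) derives random IOPS from seek
time plus rotational latency.

def estimate_hdd_iops(rpm, avg_seek_ms):
    """Rough random-I/O ceiling for a rotating drive:
    IOPS ~ 1 / (average seek time + half a rotation)."""
    rotational_latency_ms = (60_000 / rpm) / 2   # half a revolution, in ms
    service_time_ms = avg_seek_ms + rotational_latency_ms
    return 1000 / service_time_ms

if __name__ == "__main__":
    # Assumed ~3.5 ms average seek for a 15k-RPM SAS drive (illustrative value).
    print(round(estimate_hdd_iops(15_000, 3.5)))   # ~182 IOPS
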
NVM Express SSDs (and PCIe SSD)

To get even higher performance, some systems can be equipped with SSD drives that
contain their own controller and plug directly into the PCI bus.

The image here represents a PCIe SSD that plugs directly into the PCI bus and is
offered by Dell EMC partner Fusion I/O.
PCIe SSD drives can provide orders of magnitude higher I/O performance than SAS-based
drives, whether SSD or rotating media:
-Deliver over 500 times the IOPS of rotating media
-Offer eight to ten times the performance of SAS SSDs

Express Flash: Front Access PCIe SSD


Dell EMC provides its own PCIe SSDs that are hot-pluggable and front-accessible,
unlike the card-based PCIe SSDs such as Fusion I/O.

This is done by providing an NVMe PCIe card to extend the PCI bus to specific external
slots.

PowerEdge platforms that are equipped with Express Flash provide the benefits of
NVM Express without compromising the ability to keep the host operating while
replacing a PCIe SSD in its externally accessible carrier.
-Provides tiered storage option to support high I/O throughput
-Connects directly through PCIe
-Dell EMC provides front-accessible, hot-pluggable form factor
-Same performance gains as in normal PCIe SSD

Self-Encrypting Drives (SED)


SEDs are drives that have integrated circuitry to automatically encrypt all data
written to the drive.
Self-Encrypting Drive (SED)
-A media encryption key encrypts the data
-An authentication key unlocks the drive and decrypts the media encryption key
-Typically requires a key management server (can be on the host or elsewhere)
SED drives can use traditional rotating media or solid state.

Connecting to Storage
The drives that you encounter within Dell EMC enterprise servers and direct-
attached storage equipment will be SATA, SAS, Near-Line SAS, or SSD.

Drives are either directly connected to a cable or connected through a carrier to a
backplane.

Review the types of connectors that you encounter when servicing hard drives for
Dell.

Internal Hard Disk SAS Connector


This connector is found on the SAS drive. It plugs directly into the backplane when
seated with the proper hard drive carrier.

Internal Hard Disk SATA Connector


This connector is found on the SATA hard drive. Note the physical break between the
data lines (left) and the power lines (right).
Internal SAS Backplane Connector (SFF-8482)
This unified connector is the SFF-8482. This connector allows for both SATA and SAS
to be plugged into it. Some controllers support both SATA and SAS, with decreased
performance. Other controllers, like the PERC H810, do not support mixed
technologies on the same data path.
This type of connector can be found on the backplane of many Dell PowerEdge
systems.

SAS External Connectors


DAS environments use Mini-SAS and Mini-SAS High-Density (HD) connectors. These
cables typically run from the host to the enclosure, and in between enclosures.
Dell EMC uses Mini-SAS on enclosures. Dell EMC RAID controllers generally use the
Mini-SAS connectors, although some third-party controllers will use the Mini-SAS HD
connectors.

3.5" Hard Drive Carrier


Although many systems use the same type of drive carrier, not all use the same
carrier.

2.5" Hard Drive Carrier

2.5" to 3.5" Hard drive Carrier

1.8" Hard Drive Carrier

RAID Levels

RAID 0
Commonly known as "Disk Striping," RAID 0 stripes data across two or more drives
with no redundancy. It offers high performance and full use of the drives'
capacity, but the failure of any single drive results in the loss of all data in
the virtual disk.

RAID 1
Known as "Disk Mirroring," RAID 1 consists of two drives, each of which contains
identical data.
-Excellent read speed and write speeds comparable to single drive
-In the event of disk failure, data remains available with little performance
impact
-Rebuild process involves simply copying data from live disk to replacement disk

RAID-10
RAID-10 involves striping multiple RAID-1 mirror sets (two or more) together to
improve performance and increase redundancy.

RAID 2, 3 and 4
These RAID levels are not typically seen in production (if at all). All of these
RAID types stripe data with parity protection as part of the stripe. Each requires a
minimum of three drives to operate as well.

RAID 2 writes stripes of data at the bit-level (one bit per drive in a stripe) and
uses the Hamming Code algorithm to correct 1-bit errors, and detect 2-bit errors.

RAID-3 writes stripes of data at the byte-level (one byte per drive in a stripe)
and uses parity for error correction. All parity is written to the same drive
(Dedicated Parity Drive).

RAID-4 writes stripes of data at the block-level (one block of data per drive in a
stripe) and uses parity for error correction. All parity is written to the same
drive (Dedicated Parity Drive).

RAID 5
RAID 5 is implemented with a minimum of three drives and has an efficiency of
(N - 1) / N (where N equals the total number of disks in the logical disk).

Commonly referred to as Striping with Distributed Parity, RAID-5 is relatively
efficient, and reliably redundant with smaller-size disks, up to one disk failure.
If one disk fails, performance suffers, as the controller has to read the parity
block and perform calculations to determine the missing piece of data. Writing to a
degraded RAID-5 also suffers significantly. When implemented in a solution with a
large number of very large capacity disks, the rebuild times become significant
(sometimes days or even weeks, depending on the number of disks and performance
load of the system).

One way to help improve performance of a RAID-5 with many disks is to implement a
RAID-50.

RAID 50
RAID 50 is a type of "nested RAID." It allows the use of multiple underlying RAID-5
sets and stripes them using RAID-0. Although, in theory, RAID-50 can survive
multiple disk failures, if two disk failures occur in the same underlying RAID-5
set, all data can be considered lost.

RAID 6
RAID 6 is similar to a RAID 5. It is also striping with parity, yet it stripes with
dual parity (commonly referred to as P- and Q-parity) and distributes the parity
across all disks. RAID 6 can survive up to two disk failures. When operating in
a degraded mode, performance is significantly impacted, although caching
controllers can help minimize the impact.

RAID-60
RAID 60, like RAID-50, is also a nested RAID. In this implementation, a virtual
disk, in theory, can survive more than two disk failures, as long as no more than
two of the failed disks belong to the same underlying RAID-6 set.
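
The capacity and fault-tolerance trade-offs described above can be made concrete
with a small calculation. The following is a minimal sketch in Python (illustrative
only, not a PERC or OpenManage utility) that computes usable capacity and the number
of guaranteed drive failures each common RAID level tolerates, assuming equal-sized
drives.

# Minimal sketch: usable capacity and guaranteed fault tolerance for common RAID
# levels, assuming N equal-sized drives of `drive_tb` terabytes each. Illustrative
# only; it ignores hot spares, metadata, and vendor overhead.

def raid_summary(level, n_drives, drive_tb, span_size=None):
    """Return (usable_tb, guaranteed_failures_tolerated) for a virtual disk."""
    if level == 0:                       # striping, no redundancy
        return n_drives * drive_tb, 0
    if level == 1:                       # mirroring (two drives)
        return drive_tb, 1
    if level == 5:                       # single distributed parity
        return (n_drives - 1) * drive_tb, 1
    if level == 6:                       # dual distributed parity (P and Q)
        return (n_drives - 2) * drive_tb, 2
    if level == 10:                      # stripe of two-drive mirrors
        return (n_drives // 2) * drive_tb, 1
    if level in (50, 60):                # nested: stripe of RAID-5/6 spans
        parity = 1 if level == 50 else 2
        spans = n_drives // span_size
        return spans * (span_size - parity) * drive_tb, parity
    raise ValueError("unsupported RAID level")

if __name__ == "__main__":
    for args in [(5, 4, 2), (6, 4, 2), (10, 4, 2), (50, 8, 2, 4)]:
        usable, failures = raid_summary(*args)
        print(f"RAID-{args[0]:<2} on {args[1]} x {args[2]} TB -> "
              f"{usable} TB usable, tolerates {failures} guaranteed failure(s)")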

PowerEdge Expandable RAID Controllers (PERC)


A Redundant Array of Independent Disks is managed by a specially designed
controller (RAID controller). It manages the physical disk drives and presents them
to the host operating system as if they were a single logical unit.

As you already know, the PowerEdge Expandable RAID Controller is Dell's proprietary
RAID controller for PowerEdge products. PERCs exist either as PCI interface cards
or as integrated onboard controllers.

PERC-supported features include hot spares, hot-swappable drives, drive roaming
identification, and simplified scalability. Some PERCs also have the ability to
migrate from one level of RAID to another. Drive roaming identification recognizes
drives that have been moved into different physical slots, and simplified
scalability helps in the addition of new drives.

RAID Terms

RAID Controller
The device that manages the physical disk drives and presents them as a logical (or
virtual) disk to the host operating system.

Hot Spare
A physical disk that is assigned a standby role by the RAID controller which will
be used to rebuild and/or copy data of the failed/failing disk.
The disk is recognized by the RAID controller and only used in the event that one
of the physical disks that is part of a logical drive fails. When a failure occurs,
rather than wait for a physical disk to be replaced, the hot-spare becomes
available immediately and the data rebuild process occurs at time of failure.

Modern RAID controllers can detect an imminent hard disk failure and pre-emptively
copy data from the marginal drive to the hot-spare and reduce the risk of data loss
even further.

Hot Swap
When a hard drive can be removed and inserted into a system without having to
reboot or restart the system or host. This implies that the system can recognize
the new device without rebooting. If the device is capable of being inserted
"live," but the OS or system are unable to bring it online by design, that would be
considered "hot-pluggable."

Drive Roaming
When a RAID controller supports drive roaming, this implies the ability to
recognize a previously recognized disk regardless of which slot it is plugged
into (that is, the drive can "roam" from one slot to another and not impact the
ability of the system to recognize the entire logical disk and operate properly).

RAID Expansion
The ability to add a physical drive to a logical drive's total, increasing the
available capacity offered to the host. The operating system must also support the
ability to recognize additional space.

RAID Migration
The ability to convert a logical drive from one RAID protection type to another.
This is only supported on certain controllers (all PERC), and can only occur when
the target RAID level is at equal or greater capacity than the existing RAID level.
For example, you can migrate from a four-drive RAID-6 to a four-drive RAID-5, but
not to a four-drive RAID-10.

DAS, NAS, and SAN


Many options are available to add storage. Internal storage focuses on adding more
capacity within a server. It is relatively simple and the most inexpensive in the
short-term.

A server or multiple servers can scale disk capacity by using directly attached
external storage enclosures (DAS). DAS scalability is dependent upon the storage
enclosure that is added. Entry level DAS is more scalable than internal storage but
is still limited. The more DAS devices in the environment, the more management
overhead.

A Storage Area Network (SAN) and a Network Attached Storage (NAS) provide better
scalability, availability, and a single point of management for efficient storage
consolidation. In these environments, the RAID functionality and management are
removed from the host and handled by the SAN or NAS appliance. DAS, NAS, and SAN
are not exclusive in terms of providing a solution. These often work together,
usually in the form of managed tiering of data.
When comparing DAS, NAS and SAN, you can see the similarities and differences. DAS
communicates using a block-level protocol such as SCSI or ATA. SANs also
communicate with a block level protocol, but over a switch network. SCSI and ATA
do not exist in the network, except when they are encapsulated by a transport
protocol, such as Fibre Channel or iSCSI. NAS, on the other hand, does not perform
any encapsulation of a storage protocol over the network. NAS uses a standard
network file system (for example CIFS or NFS) for all communication with the host.
In this course, we focus on Entry-Level DAS.

DAS is simple, inexpensive, and scalable. Manageability can be challenging and host
connections limited. Examples of DAS are MD1000/2000 and 3000/3220.

DAS - Initiator-Target Communication


When the host needs to read or write data, the application sends a request to the
storage device through the operating system's device driver (or "controller
driver"). This driver converts the request into a SCSI command, which travels from
the initiator to the target. The target sends the data back to the controller with
a signal that it has completed the command. The initiator continues the sequence
until all data to be read or written is complete.
1. Application requests to read/write data
2. Host interprets command to read/write data, routes to appropriate initiator
3. Initiator receives request and converts to block protocol commands
4. Initiator sends request through interconnect to the target address
5. Target receives request and processes it, sending an acknowledgement back to the
initiator when complete
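
The five-step sequence above can be pictured as a simple request/acknowledge loop.
The Python below is a toy model of that flow; the class and method names are
invented for illustration and do not correspond to any Dell driver or real SCSI
library.

# Toy model of the DAS read/write sequence described above.
# All names are illustrative; this is not a real SCSI stack.

class Target:
    def __init__(self):
        self.blocks = {}                       # block address -> data

    def handle(self, command, lba, data=None):
        if command == "WRITE":
            self.blocks[lba] = data
            return "ACK"
        if command == "READ":
            return self.blocks.get(lba, b"\x00" * 512)
        return "CHECK CONDITION"               # unsupported command

class Initiator:
    def __init__(self, target):
        self.target = target                   # the interconnect is implied here

    def write(self, lba, data):                # convert request to a block command
        return self.target.handle("WRITE", lba, data)

    def read(self, lba):
        return self.target.handle("READ", lba)

if __name__ == "__main__":
    controller = Initiator(Target())           # e.g., a RAID controller and a JBOD disk
    print(controller.write(0, b"hello"))       # -> ACK
    print(controller.read(0))                  # -> b'hello'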

Direct Attached Storage


Direct Attached Storage (DAS) has three simple requirements:
-An initiator (such as the PERC H810, commonly referred to as a controller)
-Interconnect (or cable)
-The target (the disk enclosures or JBOD)

DAS - Initiator
Initiator - Devices that originate I/O communication
The initiator initiates communication with target devices to either read or write
to a storage device (originates I/O communication).

The picture is an example of an initiator that is used in a DAS environment: the
PERC H810 RAID controller.
Some initiators are considered bootable devices if they contain a BIOS/ROM that
supports a host's architecture; others require the operating system to be
initialized and the driver loaded prior to use.

DAS - Target

Initiator - The device that originates I/O communication


Target - The device that receives and responds to I/O requests given by the
initiator.
The target of the I/O communication is the SAS device that responds to the initiator.
Most of the time, it is a hard disk within an enclosure or attached to a backplane.
However, other SAS devices, such as tape drives, are also considered targets.
This is an image of several JBODs each containing disks that serve as targets in
the SAS chain. Each disk within the enclosure or attached to the backplane is
considered a target. The tape device is also considered a target.
DAS - Interconnect
Initiator - Devices that originate I/O Communication
Target - The device that receives and responds to I/O requests given by the
initiator.
Interconnect - The medium through which the initiator communicates with the target.
The interconnect is the physical path the I/O travels. In the image, this is
represented by the SAS cable and the ports to which it connects.

The interconnect can be considered to start at the port of the initiator, run
through the backplane and/or cable, and end at the target's port or connector.

In the graphic below, you can see the interconnect in the form of an internal
backplane. The drives are attached internally to the backplane, and the
interconnect consists of the drive connectors on the backplane, the backplane
itself, and the cables that run back to the controller.

The application talks to the data through the operating system file system.

The operating system talks to the initiator through the controller's device driver.

The initiator originates all communication to the target by packaging the data
within a block level protocol, such as SCSI, and transporting it over the
interconnect to the target through a transport specification, such as SAS. The
target receives the data, decodes the command, and then responds in reverse order.

Note: Some applications bypass the file system and write directly to the disk
through a "raw" mapping. Even in these instances, the conversion to SCSI (or ATA)
commands still has to occur in the device driver.

Backup, Archive, and Recover Overview


In this section, we discuss how Backup, Archive, and Recovery fit into the data
center, along with the technologies and types of equipment.

We review:
-Data Backup Applications
-Backup to Tape Solutions
Linear Tape Open (LTO) Technology
Tape Drive Hardware
Tape Media
-Backup to Disk Solutions
-Data Recovery
-Data Archival

Backup and Archival


In every business, information is required in order for the business to succeed.
Beyond this, many countries have regulatory requirements for companies to keep and
retain records for a specified amount of time.

Businesses not only need access to their data, but also must ensure that the data
is recoverable if production data goes off-line. They also need to keep historical
records to satisfy regulatory and business requirements.
Backup, Archive, and Recovery - Terminology

Backup
A collection of data stored on (removable) nonvolatile storage media for recovery in
case the original copy of the data is lost or becomes inaccessible.

Restore
The act of recreating production data (that is, restoring production data).
Note that this term should be taken within context, as restoring could range from
restoring a single version of a document or a record within a database to the
restoration of an entire server or application function.

Recover
Usually synonymous with restore.
Depending on context, recover could imply more than data, especially when used in
the context of a "disaster recovery plan," where not only the data, but also
hardware may need to be recovered/replaced in order to restore a data center's full
functionality.

Archive
A collection of data (the data itself, possibly with associated metadata) in a
storage system whose primary purpose is the long-term preservation and retention of
that data.
Note, this data is usually considered cold and accessible in "near-time."

Backup, Archive, and Recovery - Terminology

Real-time Data (or Active Data)


Data is intended to be available immediately upon request without the need to move
it from lower tiered "near-online" storage (for example, data that resides on a
disk or disk enclosure that is mapped/attached to a host).

Near-online data (or near-time data)


Data is accessible with some delay in gaining access (that is, usually waiting a
number of seconds for access).

Off-line data (or archived data)


Data that may not be accessible for an extended time, measured in hours or even
days (for example, automated restoration from CD or Tape, or possibly from a remote
site).

LTO
Linear Tape Open. An open, standard single-reel magnetic tape technology, developed
by IBM, HPE, and Quantum. Also called Ultrium or LTO Ultrium.

Tape Drive
A data storage device that reads and writes data on magnetic tape.
Unlike a hard disk drive, which provides direct access storage, a tape drive uses
the sequential access storage method to access its data.

Sequential access to data


A storage technology that is read and written in a serial (one after the other)
fashion. Magnetic tape is the common sequential access storage device.

Backup Application
The backup application or appliance (purpose-built hardware with integrated backup
software) is the engine that facilitates data backup and recovery.
It can monitor a host in real time and copy all changes to a backup device, or
simply provide an agent that resides on the host to facilitate backing up
production data to tape or disk without impacting productivity.
Backup applications are outside the normal scope of a break/fix service. The field
technician must be familiar with how a backup is performed to validate any backup
hardware that may have been replaced.

Back Up to Tape or Disk


Backup applications are the engine that manages the backup. The backup device,
tape, or disk houses the data itself.
Tape backups use Linear Tape Open (LTO) technology. Other forms of backup, such as disk, are usually
a part of a tiered storage strategy. These may house backup data for a short time
before archiving to tape and/or offsite.

Linear Tape Open


The LTO Ultrium technology was introduced in 2000 and has gone through seven
generations since inception. The chart describes the characteristics of the LTO
Ultrium (single-reel methodology) tape drives and cartridges. Key points to keep in
mind:
-Any given LTO drive can read its own generation of tape and the previous two
generations of tape (N, N-1, & N-2)
-Any given LTO drive can write its own generation of tape and the immediate
previous generation of tape (N & N-1); see the compatibility sketch after this list
-LTO Hardware Encryption was introduced with the LTO-4 generation
-LTO Write Once Read Many (WORM) Capabilities was introduced with LTO-3 generation
-Logical partitioning was introduced with LTO-5
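
The read/write compatibility rules above are easy to express programmatically. Here
is a minimal sketch in Python (illustrative only) that applies the N, N-1, N-2 read
rule and the N, N-1 write rule.

def lto_compatibility(drive_gen, cartridge_gen):
    """Apply the LTO backward-compatibility rule described above:
    read N, N-1, N-2; write N and N-1 only."""
    diff = drive_gen - cartridge_gen
    can_read = 0 <= diff <= 2
    can_write = 0 <= diff <= 1
    return can_read, can_write

if __name__ == "__main__":
    for cart in (7, 6, 5, 4):
        readable, writable = lto_compatibility(7, cart)
        print(f"LTO-7 drive with LTO-{cart} media: read={readable}, write={writable}")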

Back Up to Disk
With the voluminous increase in the amount of data, the time required for a typical
backup to tape has increased. With current backup speeds of 160 MB/s to 180 MB/s,
backups for large enterprise customers can span multiple tapes and take roughly
8 hours for 5 TB of data.
To help alleviate this, backup administrators now employ a staged strategy: back up
data to lower-tiered disk first; from these disks, the data then goes to tape or to
a backup and archival company.
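
As a quick sanity check of that backup window, the following sketch (Python,
illustrative only, assuming a sustained streaming rate with no compression or
start/stop overhead) converts a data-set size and throughput into an estimated
backup time.

def backup_hours(data_tb, throughput_mb_s):
    """Estimated streaming backup time, ignoring compression and start/stop overhead."""
    data_mb = data_tb * 1_000_000          # decimal TB -> MB
    return data_mb / throughput_mb_s / 3600

if __name__ == "__main__":
    print(f"{backup_hours(5, 180):.1f} hours")   # ~7.7 hours for 5 TB at 180 MB/s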

Entry Level Dell EMC Storage Product Lines

Entry-Level Dell Storage


Dell EMC entry-level storage product lines provide simple solutions that are
designed to increase a host's capacity in the simplest fashion: expansion by
directly attaching an extra disk, or over the network with a simple Windows Storage
Server (NAS) solution. Simple and robust data protection options are provided with
Dell LTO and RDX removable media solutions.
This is a summary of these three entry-level storage product lines and their
respective features:

MD Storage
MD Storage is used when:
-Single-host or two-host (using split mode) applications
-Capacity intensive application
-Internal storage capacity insufficient
-Capacity growth rate exceeds internal capacity specifications
NX Storage (NAS)
-Windows Storage Server 2012 R2-based NAS Appliance
-Uses Preconfigured Standard Dell Server Hardware
-For customers needing Medium to High NAS Capacity
-Expandable with DAS attachment or SAN Backend Attachment
-Supports common networked file system protocols (CIFS and NFS)

Removable Media
LTO Customer Requirements
-High-performing
-Standards-based
-High-capacity
-Integrated Encryption
RDX (RD1000 Media) Customer Requirements
-Lowest cost
-Essential capacity
-Not Complex
-Removable Media

PowerVault MD Storage

Dell EMC PowerVault MD DAS solutions are highly reliable and expandable.
PowerVault MD DAS has the following common characteristics:
-Support for mainstream applications
-Optimized performance for streaming applications
-Capacity Intensive and Expandable
-Available from 12-drive to 84-drive enclosures
-Ability to expand capacity by daisy chaining enclosures (see technical
specifications for supported configurations)
-All types of drives supported (SSD, 15k, 10k, 7.2k) and available for 2.5" and
3.5" form factor (FF)

Dell PowerVault MD Storage Product Reference


Field technicians rely on reference pages for technical specifications and details
for installing, setting up, and configuring solutions.
Reminder: To access reference material, use your normal Dell EMC logon. Open one
or more of the reference pages, locate the Field Service Information section, and
identify where to find the procedures for the part replacements that you will
encounter in the field.

MD1200/MD1220
This 2U JBOD operates at 6-Gbps SAS.

MD1400/MD1420
This 2U JBOD operates at 12-Gbps SAS.

MD1280
This chassis weighs 137 lbs without any drives installed.

ME4 Series Introduction


The PowerVault ME4 Series is the next generation entry-level block storage array
that is optimized for SAN and DAS environments. It replaces the MD3, PS, and
SCv2000 series as the entry-level array.
-The ME4 Series is a configurable hybrid, all flash, or all hard drive solution.
-Simplicity and flexibility are built into the ME4 Series architecture. The ME4
arrays are customer installable in approximately 15 minutes.
-The customer can complete initial configuration of the array quickly with a new
intuitive web-based management UI called ME Storage Manager (MESM).
The backend is 12-Gb SAS
All software is included
ME4 arrays may be configured, monitored, and managed with the MESM or with
the integrated CLI

ME4 Series Product Portfolio

Portfolio Description
The PowerVault ME4 Series is a great addition to the Dell EMC
Midrange and Entry Systems portfolio, creating an unbeatable set of storage
solution offerings. The portfolio is optimized across a spectrum of cost, scale,
performance, efficiency, and data protection.
The portfolio includes:
-Dell EMC Unity software-defined storage appliance that enables customers to start
with a no cost community edition
-The low cost and automated ML3 tape library
-The economical and efficient SC Series
-The simple and unified Dell EMC Unity
-Now the entry-level block-only storage solution—the ME4 Series

Dell EMC Midrange and Entry Portfolio



Dell EMC ME4 Series Family



-The ME4 series consists of two 2U base arrays (ME4012 and ME4024), and a 5U base
array (ME4084). These systems support 12, 24, and 84 drives in the base,
respectively.
-Each array:
Is configurable with as much flash as needed
Supports the same multi-protocols
Includes a 12 Gb SAS back end
Ships with all-inclusive software
-The ME4 Series also includes 3 expansion enclosure options. The ME412 and ME424
are the 12 drive and 24 drive 2U options, and the ME484 is the 5U 84 drive
expansion option.
-Support for NX servers with Microsoft Storage File/Print server

Dell EMC PowerVault NX NAS (Windows-Based)

Dell EMC entry-level NAS servers are powered by Microsoft's Windows Storage Server
(WSS). The latest releases are powered by the WSS 2012 R2 version, though older NAS
appliances running WSS 2008 R2 may still be supported depending on warranty status.
Dell EMC Windows-based NAS appliances provide these common benefits:
-PowerEdge Server Hardware (rebranded to PowerVault)
Same FRU procedures
Same part numbers as the leveraged PowerEdge platform
BIOS identity updated with NX personality
-Windows-based Operating System
Same core operating system as Windows
License modifications enable unlimited connections
License modifications prevent the operating system from being used as an
application server
Less expensive than standard Windows operating system
-Common Internet File System (CIFS) services
-Network File System (NFS) Services

WSS is built on the Microsoft Windows Server operating system. It provides file-
based shared storage for applications such as Microsoft Hyper-V, Microsoft SQL
Server, or any host that uses network file services.
Systems using WSS are typically referred to as Network Attached Storage (NAS)
Appliances.

Leveraging PowerEdge Platforms for PowerVault NX Appliances

Same Dell EMC PowerEdge Platform


Dell's Windows Storage Server powered NAS appliances all use common PowerEdge
hardware in specific configurations.
The locations of all components are the same on an NX platform as on a PE platform,
including the Service Tag information.

Identity Update
The most important thing about the NX platform and the leveraged PE hardware is the
need for the BIOS to be updated with the proper personality.
Just like OEM-rebranding, this is considered Dell confidential, and the personality
flash file is not generally available to the public. This is only available to
certain internal Dell personnel and onsite service providers.
Because different generations may have differences, always refer to the product
reference guide for the steps in acquiring and updating the BIOS of an NX platform.

Same FRU Procedures as Dell EMC PowerEdge Platform


NX appliances have their own reference pages, but you'll notice that the FRU
procedures are identical to their PowerEdge counterpart.

Dell Windows Storage Server (WSS)-based NAS Appliances


Field technicians rely on reference pages for technical specifications and details
for installing, setting up, and configuring Dell solutions.

Below are the currently shipping WSS-based NAS Appliances. Open two or more of the
reference pages below and find the Field Service Information and the "Personality
Module" / BIOS update procedures.

(Hint: On older reference pages, use the search box to find personality update
procedure by searching "personality" or "rebrand".)

NX3300
Powered by WSS 2012 R2
NX3230
Powered by WSS 2012 R2
NX3200
Powered by WSS 2008 R2
NX430
Powered by WSS 2012 R2
NX400
Powered by WSS 2008 R2
NOTE: As new products are introduced, there will be an associated reference page
for the given product. Always search the reference pages for the latest
information.
Dell EMC PowerVault Tape and Removable Media

LTO tape drives are high-performance, high-capacity data-storage devices. They are
used to back up and restore data and to archive and retrieve files in an Open
Systems environment. The drives can be integrated into a system (internal model) or
can be provided as a separately packaged desktop unit (external model). Seven
generations of tape drives are in the LTO series.
RDX is available for essential backup use-case scenarios where high-capacity is not
a requirement and ease of use is more important. The RD1000, which uses RDX
removable disk media, offers increased portability and durability along with backup
software that provides data protection for faster data restores and reduced
downtime.

LTO Product Lines


Field technicians rely on reference pages for technical specifications and details
for installing, setting up, and configuring Dell solutions.

Except for the PowerVault 114X, LTO drives do not have component FRUs. They are the
entry-level LTO solutions.

Review the PowerVault 114X reference page, and identify the tape drive options
available and the total number of FRU procedures.

PowerVault 114X
LTO7 Tape Drive
PowerVault LTO6
PowerVault LTO5
PowerVault LTO4

NOTE: As new products are introduced, there will be an associated reference page
for the given product. Always search the reference pages for the latest
information.

RDX Product Lines

RD1000
For customers who only require the ability to back up essential data, or who do not
have a high-capacity requirement, Dell EMC provides the RDX-based RD1000. It has
been sold in the rack-mounted 114X chassis.
There are no FRUs for the RD1000, other than the whole unit.
