A processor is the logic circuitry that responds to and processes the basic
instructions that drive a computer. The term processor has generally replaced the
term central processing unit (CPU). The processor in a personal computer or
embedded in small devices is often called a microprocessor.
The Dell™ PowerEdge™ 13G server features the Intel® Xeon® processor v3 (Haswell) or v4 (Broadwell) product family, while the 14G server features the Intel® Xeon® Processor Scalable Family, offering an ideal combination of performance, power efficiency, and cost.
NOTE: The PowerEdge C6420 offers the Intel® Xeon® Processor Scalable Family with an Omni-Path fabric solution.
To learn more about different Intel processors, you can visit http://ark.intel.com and browse for the processor that you want. Information such as scalability, supported memory, speeds, and bandwidth availability can be found on this site.
The Dell PowerEdge 14G servers support DDR4 registered dual in-line memory modules
(RDIMMs), load-reduced DIMMs (LRDIMMs) and non-volatile dual in-line DIMM-Ns
(NVDIMM-Ns).
System memory holds the instructions that the processor executes and the data with which those instructions work. System memory is an important part of the main processing subsystem of the PC, along with the processor, cache, system board, and chipset.
Types of Memory
DDR4
DDR4 is the next evolution in dynamic RAM (DRAM), bringing even higher performance
and more robust control features while improving energy economy for enterprise,
micro-server, tablet, and ultrathin client applications.
-Reduced Memory Power Demand at only 1.2V
-Larger DIMM capacities with 4Gb to 16Gb
-Higher data rates, running at 667 MHz–1.6 GHz (see the worked example below)
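The short Python calculation below is an illustrative example, assuming a DDR4-2400 module on a standard 64-bit memory channel, of how the "double data rate" naming and module bandwidth relate to the I/O clock:

io_clock_mhz = 1200                   # I/O bus clock of a DDR4-2400 DIMM, in MHz
data_rate_mt_s = io_clock_mhz * 2     # double data rate: two transfers per clock = 2400 MT/s
bandwidth_mb_s = data_rate_mt_s * 8   # 64-bit (8-byte) channel = 19200 MB/s (PC4-19200)
print(data_rate_mt_s, bandwidth_mb_s) # 2400 19200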
DDR3L
At only 1.35V, DDR3L is the low-voltage version of DDR3.
-Reduces power and cost
-Low voltage does not mean lower performance
RDIMM
RDIMMs provide better signal integrity, population of more DIMM channels, and
better performance for heavier workloads.
-Higher latency than UDIMM
-Lower electrical load on the controller
-For customers who want large amounts of memory and high reliability, availability,
and serviceability (RAS)
-Better signal integrity
-Population of more DIMMs
-Better performance
UDIMM
Unregistered DIMMs (UDIMMs) are unbuffered, low-density, and low-latency DIMMs that
do not include a register or a buffer chip.
-Lower price and slightly lower power consumption suit customers who want cost and power savings.
-Smaller capacity limits the system memory to 24GB.
LRDIMM
-Allow for greater bandwidth density
-Use a buffer chip in addition to a register
NVDIMM
Non-volatile dual in-line memory module (NVDIMM)
-Random-access memory that allows data to remain in place when power is removed from the system.
iDRAC
An integrated Dell Remote Access Controller (iDRAC) with Lifecycle Controller is
embedded in every Dell PowerEdge server. It provides functionality that helps you
deploy, update, monitor, and maintain Dell PowerEdge servers with or without a
systems management software agent. Because it is embedded within each server from
the factory, the Dell iDRAC with Lifecycle Controller requires no operating system
or hypervisor to work.
It enables streamlined local and remote server management, and it reduces or
eliminates the need for administrators to physically visit the server—even if the
server is not operational.
Please refer to the iDRAC9 with Lifecycle Controller reference page for more information.
PERC
The PowerEdge Expandable RAID Controller (PERC) is Dell’s line of RAID controllers.
RAID controllers manage RAID arrays, which are physical disk drives presented to the computer as logical units. This allows data to be divided and replicated among multiple drives for increased performance and redundancy.
PERC can be located on the system board or installed as a stand-alone PCI card. The RAID controllers can also be managed within the BIOS and through Dell OpenManage™ software within the operating system.
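As a conceptual illustration of how a controller divides and replicates data, the minimal Python sketch below (not PERC firmware, and greatly simplified) stripes blocks across drives in the RAID 0 style and mirrors them in the RAID 1 style:

def stripe(data, drives, block=4):
    # RAID 0 style: split data into blocks and distribute them round-robin across drives.
    arrays = [bytearray() for _ in range(drives)]
    for i in range(0, len(data), block):
        arrays[(i // block) % drives] += data[i:i + block]
    return arrays

def mirror(data, drives=2):
    # RAID 1 style: write identical copies of the data to every drive.
    return [bytearray(data) for _ in range(drives)]

payload = b"PowerEdge RAID example payload"
print(stripe(payload, drives=2))   # the data interleaved block-by-block across two drives
print(mirror(payload))             # two identical copies, providing redundancy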
PERC 9
The PERC 9 is a refresh of the Dell PERC portfolio in support of Dell’s 13G
PowerEdge servers. It encompasses both hardware changes and firmware updates, while
continuing to support SAS, SATA, and Solid State Drive (SSD) devices internal to
Dell PowerEdge servers. The PERC 9 series of cards includes:
-PERC H330
-PERC H730
-PERC H730P
-PERC H830
-PERC FD33xD/FD33xS
See the User Guide to view the PERC 9 series and their specifications.
You can also check out the PERC 9.2 new features (Updates > PERC 9.2).
PERC 10
The PERC 10 is a refresh of the Dell PERC portfolio in support of Dell’s 14G
PowerEdge servers, encompassing both hardware changes and firmware updates, while
continuing to support SAS, SATA, and Solid State Drive (SSD) devices internal to
Dell PowerEdge servers.
SAS Backplane
The SAS backplane is the interface between the hard-disk drives and the controller. It consists of a group of connectors into which the drives connect. The actual location of a hard drive is determined by its physical address on the backplane.
All SAS backplanes are designed and validated to meet 12 Gbps SAS and 6 Gbps SATA
requirements.
Hot Spare Feature
The system supports the Hot Spare feature, which significantly reduces the power overhead associated with power supply redundancy.
When the Hot Spare feature is enabled, one of the redundant power supplies is
switched to a sleep state. The active power supply supports 100% of the load, thus
operating at higher efficiency.
The power supply in the sleep state monitors output voltage of the active power
supply. If the output voltage of the active power supply drops, the power supply in
the sleep state returns to an active output state.
If having both power supplies active is more efficient than having one power supply
in a sleep state, the active power supply can also activate a sleeping power
supply.
You can configure the Hot Spare feature using the iDRAC settings. For more
information on iDRAC settings, see the iDRAC User’s Guide at
dell.com/support/manuals.
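The decision logic described above can be pictured with the minimal Python sketch below. It is illustrative only (the real behavior lives in PSU and iDRAC firmware), and the voltage and load thresholds are assumed example values:

SLEEP_THRESHOLD_V = 11.6     # assumed wake-up threshold for a nominal 12 V rail

def spare_should_sleep(active_output_v, load_pct):
    # Wake the spare immediately if the active supply's output is sagging.
    if active_output_v < SLEEP_THRESHOLD_V:
        return False
    # If sharing the load across both supplies would be more efficient, wake the spare.
    if load_pct > 50.0:      # assumed crossover point for this example
        return False
    # Otherwise keep the spare asleep and let the active supply carry 100% of the load.
    return True

print(spare_should_sleep(12.1, load_pct=30))  # True: spare stays asleep
print(spare_should_sleep(11.2, load_pct=30))  # False: spare wakes up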
PDB
The Power Distribution Board (PDB) provides hot-plug logic and power distribution
for the system. Each hot-pluggable power supply has its own AC power input to
provide AC power redundancy. A PDB does not spread the power load between multiple
active power supplies. The PDB provides redundancy and hot swapping only, allowing
a backup power supply to give power if the active supply fails.
PCIe
Peripheral Component Interconnect Express (PCI Express), officially abbreviated as
PCIe, is a high-speed serial computer expansion bus standard, designed to replace
the older PCI, PCI-X, and AGP bus standards.
PCIe has numerous improvements over the older standards, including higher maximum
system bus throughput, lower I/O pin count, smaller physical footprint, better
performance scaling for bus devices, and a more detailed error detection and
reporting mechanism.
PRC
Peripheral riser cards (PRCs) are considered extensions of the system board and
allow attachment of additional circuit boards, such as host bus adapters (HBAs) or
network interface cards (NICs) to the system board. The riser board plugs into the
riser connector on the system board.
NDC
Beginning with 12G servers, Dell uses a custom Network Daughter Card (NDC) to house
the complete LAN on Motherboard (LOM) subsystem. In these systems, the LOM is
provided on the NDC as part of Dell PowerEdge Select Network Adapters family.
There are two form factors of Select Network Adapters, one for blade servers and
one for rack servers. The Blade Select Network Adapter provides dual-port 10GbE
from various suppliers. The Rack Select Network Adapter provides a selection of
1GbE and 10GbE port options, such as 1000BASE-T, 10GBASE-T, and 10Gb SFP+.
The Select Network Adapter form factor continues to deliver the value of LOM
integration with the system, including BIOS integration and shared ports for
manageability while providing flexibility.
GPGPU
GPGPU stands for general-purpose computing on graphics processing units, also known as GPU computing. Graphics Processing Units (GPUs) are high-performance many-core processors capable of very high computation and data throughput.
GPU vs. CPU
A simple way to understand the difference between a GPU and a CPU is to compare how
they process tasks.
A CPU consists of a few cores optimized for sequential serial processing. A GPU has
a massively parallel architecture consisting of thousands of smaller, more
efficient cores designed for handling multiple tasks simultaneously.
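The contrast can be sketched in Python with NumPy: the explicit loop touches one element at a time (the serial, CPU-style model), while the vectorized expression applies one operation across the whole array at once (the data-parallel model that a GPU scales to thousands of cores). This is only an analogy, not GPU code:

import numpy as np

a = np.arange(100_000, dtype=np.float32)
b = np.arange(100_000, dtype=np.float32)

# Serial, CPU-style: process one element per iteration
serial = np.empty_like(a)
for i in range(len(a)):
    serial[i] = a[i] + b[i]

# Data-parallel style: one operation expressed over all elements at once
parallel = a + b

assert np.allclose(serial, parallel)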
IDSDM
The Internal Dual SD Module (IDSDM) provides the same feature set as the previous embedded hypervisor SD card solution, with the added feature of a mirrored SD card. Starting with 13G, the IDSDM is optional rather than standard.
In 14G, the Internal Dual microSD Module (IDSDM) and vFlash card are combined into
a single card module.
The IDSDM contains two SD ports for fail-safe purposes. If one card fails, the other serves as a backup, reducing system downtime. There is also an internal USB port on the system board for a USB key.
The IDSDM card provides the following major functions:
-Dual SD interface, maintained in a mirrored configuration (primary and secondary
SD)
-Provides full RAID-1 functionality
-Dual SD cards are not required; the module can operate with only one card but will
provide no redundancy
-Enables support for Secure Digital eXtended Capacity (SDXC) cards
-USB interface to host system
-I2C interface to host system and onboard Electrically Erasable Programmable ROM
(EEPROM) for out-of-band status reporting
-Onboard LEDs show status of each SD card
TPM
The Trusted Platform Module (TPM) is used to generate/store keys,
protect/authenticate passwords, and create/store digital certificates, specifically
to shield unencrypted keys and platform authentication information (secrets) from
software-based attacks. TPM can also be used to enable the BitLocker™ hard drive
encryption feature in Windows Server 2008. TPM is also used for Intel® Trusted
Execution Technology (Intel® TXT).
TPM Notes
Caution: Do not attempt to remove the TPM plug-in module from the old motherboard.
Once the TPM plug-in module is installed, it is cryptographically bound to that
specific motherboard. Any attempt to remove an installed TPM plug-in module breaks
the cryptographic binding, and it cannot be re-installed or installed on another
motherboard.
Customers may upgrade a system board's TPM module from one version to another. In
this case, the old TPM module is removed and discarded.
If the system is configured with a TPM chip, a new TPM chip has to be dispatched. A
field engineer needs to install a new TPM chip after system board replacement.
If you are using the TPM with an encryption key, you may be prompted to create a
recovery key during program or System Setup. Be sure to create and safely store
this recovery key. If you replace this system board, you must supply the recovery
key when you restart your system or program before you can access the encrypted
data on your hard drives.
What Is iDRAC?
The integrated Dell Remote Access Controller (iDRAC), which is embedded within each
server from the factory, is designed to make server administrators more productive
and improve the overall availability of Dell servers. iDRAC alerts administrators
to server issues, helps them perform remote server management, and reduces the need
for physical access to the server.
Lifecycle Controller
The Dell Lifecycle Controller provides advanced embedded systems management to
perform systems management tasks such as deployment, configuration, updating,
maintenance, and diagnosis using a GUI.
Quick Sync 2
The Quick Sync-enabled bezel must be ordered at the time of purchase. After point of sale, an upgrade is not available.
OpenManage Mobile (OMM) 2.0 application is required to interact with the Quick Sync
2 module.
The wireless capability is enabled by an external button referred to as the Activation Button and is deactivated by pressing the button again (or upon disconnect/timeout). The Activation Button is located on the front of the mechanical assembly and, when pressed, starts transmitting and receiving.
You can also view the list of videos below that shows how to configure the system using iDRAC8 and Lifecycle Controller.
iDRAC Simulator (Desktop version)
To run the simulator:
1.Ensure you have administrator rights on your computer
2.Download the file: iDRAC9_sim.zip
3.Unzip iDRAC9_sim.zip to root (C:\) of your OS installation and run the node-
v6.11.1-x64.msi installer
4.Move/copy the provided shortcuts to your desired working location
5.Go to the Start menu and open the node.js command prompt. The first line in the command window should be:
Your environment has been set up for using Node.js 6.11.1 (x64) and npm
6.From the c:\ prompt, type cd c:\iDRAC_Simulator then run start.bat. In the command window you will see:
c:\iDRAC_Simulator>node simulate.js
Simulator running @port: 3000
Do NOT close/exit the command prompt
7.Launch iDRAC9 Simulator using the provided shortcut and login with the following
credentials:
Username: root
Password: calvin
BMC Interfaces
The following BMC interfaces allow you to configure and manage the system through
the BMC:
-The BMC Management Utility allows remote, out-of-band LAN and/or serial port power control, event log access, and console redirection.
-The Remote Access Configuration Utility in x9xx systems enables configuring the BMC in a pre-operating system environment.
-The Dell OpenManage Deployment Toolkit SYSCFG utility provides a powerful command-line configuration tool.
-Dell OpenManage Server Administrator allows remote, in-band access to event logs, power control, and sensor status information and provides the ability to configure the BMC.
-Command-Line Interface (CLI) tools provide a command-line tool for sensor status information, System Event Log (SEL) access, and power control.
For the complete list of supported systems and operating systems, see the
readme.txt file in the root installation folder.
Specifically, Dell BMC supports Virtual KVM & Media, while the Lifecycle Controller
Pre-Boot environment has been removed.
Visit the Dell Support website to select the BMC version that you are working on.
Then, download the User Guide > Using the BMC Management Utility > Installation
Procedures section to learn how to install and uninstall the BMC Management
Utility.
Installation Prerequisites
Before using or installing the BMC Management Utility, there are a few procedures the user needs to perform:
-Configuring the BIOS
-Configuring the Baseboard Management Controller
-Configuring the BMC with the Dell OpenManage Deployment Toolkit (DTK) SYSCFG utility
-Configuring the BMC with Dell OpenManage Server Administrator
User Guide
Access the User's Guide from the Dell Support website to learn how to perform the above configurations:
1.Go to the Dell Support website.
2.Click View Product > Software & Security > Remote Enterprise Systems Management > Baseboard Management Controller Management Utilities > BMC Version.
3.Click the Manuals drop-down button.
4.Click the available hyperlink to download or view the user manual in a web browser.
5.Look under the Configuring Your Managed System section.
A RAID controller is used for controlling a RAID array. It manages the physical
disk drives as logical units.
A PowerEdge RAID Controller (PERC) is Dell's proprietary RAID controller.
PERC 9
The PERC 9 is a refresh of the Dell PERC portfolio in support of Dell’s 13G
PowerEdge servers. It encompasses both hardware changes and firmware updates, while
continuing to support SAS, SATA, and Solid State Drive (SSD) devices internal to
Dell PowerEdge servers. The PERC 9 series of cards includes:
PERC H330
PERC H730
PERC H730P
PERC H830
PERC FD33xD/FD33xS
See the User Guide to view the PERC 9 series and their specifications.
You can also check out PERC 9.2 new features (Updates > PERC 9.2).
PERC 10
The PERC 10 is a refresh of the Dell PERC portfolio in support of Dell’s 14G
PowerEdge servers, encompassing both hardware changes and firmware updates, while
continuing to support SAS, SATA, and Solid State Drive (SSD) devices internal to
Dell PowerEdge servers.
Please refer to PowerEdge RAID Controller Card (PERC) 10 reference page (Manuals >
PERC 10 User Guide) for more information.
When you replace a failed PERC card, you need to ensure successful recovery of
existing storage arrays and validate with the customer that the configuration is
being rebuilt.
Follow the procedure below to import the configuration of an existing HDD RAID
array to a replacement PERC card through the PERC BIOS utility.
-Power the system on and press <Ctrl> + <R> to enter the PERC BIOS.
-Press <F2> with the new card selected.
-Use the Up and Down Arrow keys to highlight the Foreign Configuration option and press <Enter>.
-Use the Up and Down Arrow keys to navigate to the Import option and press <Enter> again.
-Choose OK when asked to confirm. The BIOS states that the import has completed successfully.
-Confirm that the imported virtual disk appears under the VD Management section.
Introduction
Dell leverages PowerEdge server hardware and rebrands it (changes its identity) into a Storage product. Leveraging allows the use of the same hardware and replacement parts across several platforms.
Rebranded systems use a Personality Module file to update the system with the
appropriate brand identity (or personality).
The labor ticket contains the system model name, Service Tag, and, in most cases, a
URL to the system's product reference page in Educate Dell.
Because some parts are used in multiple products, do not rely on a part number
description to confirm your model name and type.
Verify that Easy Restore properly restored the backup information. If not, manually
restore the Personality Module file using product reference material instructions.
PowerEdge 14th-generation servers with iDRAC use Easy Restore to automate the system board replacement process.
After replacement, a prompt displays on the boot screen. This allows the restoration of the:
Service Tag.
Licenses.
Enterprise UEFI diagnostics.
System configuration settings—BIOS, iDRAC, and NIC.
Always verify that Easy Restore properly rebranded the system. If Easy Restore
fails, manually update using the product's reference material.
Easy Restore does not back up other data such as firmware images, vFlash data, or add-in card data.
Verify Updates
After replacing the system board, updating the Personality Module, and rebooting,
verify the branding changes on the BIOS/POST screen. You can also verify the
branding changes in the iDRAC.
If you do not see the correct system name, take the following corrective actions:
-Ensure you completed all after-service processes described in the product reference materials.
-Manually install the Personality Module file (if you used Easy Restore).
-Call Dell Technical Support for assistance.
Common scenarios where the DSP needs to use SLI to troubleshoot the customer's
system include:
Running an unsupported or corrupt operating system
Having misconfigured RAID
Experiencing an inability to install the tools into your existing operating system
Operating system not booting and a backup is needed
Flashing BIOS outside operating system
Resetting Service Tag information
The need to run additional Dell tools outside of the operating system
Before creating a bootable USB key for SLI, please make sure:
- You are using a minimum of 4GB of free space on the USB key.
- You have administrative rights.
Scenario:
DSP Tony is assigned to replace a failed HDD on a server. Once the HDD is replaced,
Tony logs in to the operating system but notices the replacement HDD is not online.
Since an unsupported operating system is used, Tony needs to shut down the server
and boot into the SLI USB key in order to run DSET and retrieve the log to
troubleshoot the issue with the replacement HDD.
Play the video and learn how to boot into the USB Key and retrieve the log.
NOTE: This video includes audio. Please enable your system's audio playback.
Retrieving Logs Using Lasso
Dell Lasso is a Windows-based client and server utility that automates the
collection of hardware, software, and storage logs and configuration from servers,
storage arrays, Fibre Channel switches, Ethernet and FCoE switches, attached hosts,
enclosures, management and monitoring software, and wireless controllers.
After Lasso collects the data, it parses the data into Extensible Markup Language
(XML) and HyperText Markup Language (HTML) formats.
The parsed output is then packaged along with the collected data and encrypted. The result is saved as a .zip file on the local system.
Optionally, you can enable Lasso to automatically upload the report to Dell
Technical Support. Dell uses this data as part of the Systems Maintenance Service
(SMS) to determine hardware, software, and firmware versions for compatibility,
troubleshoot problems in storage devices, and upgrade the existing equipment. Lasso
tracks and waits for completion of each process and notifies the user of any
failures during collection.
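As a rough illustration of the packaging step (this is not Lasso itself; the directory and file names are hypothetical, and encryption and the optional upload are omitted), the Python sketch below zips a folder of collected logs into a single archive on the local system:

import zipfile
from pathlib import Path

def package_logs(source_dir, archive_path):
    # Add every file under source_dir to a single compressed .zip archive.
    with zipfile.ZipFile(archive_path, "w", compression=zipfile.ZIP_DEFLATED) as zf:
        for path in Path(source_dir).rglob("*"):
            if path.is_file():
                zf.write(path, arcname=path.relative_to(source_dir))

package_logs("collected_data", "collection_report.zip")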
Scenario:
In this scenario, DSP Tony will approach a Dell EqualLogic PS6000 storage unit and
use the diagnostic tool Lasso to verify the status of the RAID array and identify
the faulty hard drive.
NOTE: This video includes audio. Please enable your system's audio playback.
The Dell EMC PowerEdge™ rack enclosures are designed to hold and protect server,
network, and data storage equipment.
Dell EMC racks have installation procedures that are listed in the reference
material or user manual. For field technicians, it is important to access or
download rack reference material prior to an installation.
The Dell EMC PowerEdge rack enclosures are offered in three height options: 24U (2420), 42U (4220), and 48U (4820). Each of these racks is available in the standard 600 mm x 1070 mm dimensions to fit within a 2-tile floor plan layout.
Wide versions of each rack will be marketed with a W (4820W, 4220W), and deep
versions with a D (4820D, 4220D).
The W and D features include:
-Deeper chassis (1070 mm deep)
-Increased cabling space
-Easier access to IT equipment
-Improved thermal attributes
-Higher door perforation to 80%
-Improved hot/cold air separation
-More power distribution schemes
Accessories:
-Rackmount 17" consoles
-32-port digital KVM console switch
These racks were designed to hold and protect server, network, and data storage
equipment and provide efficient cooling. The Energy Smart rack is a design that is
targeted for raised-floor data centers.
Key features of Dell EMC Energy Smart rack enclosures include:
-600 mm wide x 1200 mm deep-form factor
-Solid polycarbonate panel in the front door with metal extension forms and
vertical plenum to direct air evenly to installed equipment
-Exceptional design that facilitates more cabling and power distribution options
-Static load rating of 2,500 lb. for 46U/40U
-Large open base for airflow and cable entry and exit
-Rack-top cable exits with adjustable sliding door and removable tail-bar
-Reinforced frame for stability
Rack Cabling
There is no perfect or correct method for routing cables. However, there are some
best practices for optimal organization and access.
Dell EMC racks have features that simplify cable routing, including:
-Power distribution unit (PDU) channels in each rack flange route power cables to
each system.
-Cable clips are mounted in the PDU channels to keep cables out of the way and
prevent cords from becoming tangled.
Cables are routed out of the rack in two ways on a standard configuration:
-Through the cable exit at the bottom of the back doors
-Through the adjustable cable slot at the top of the rack and into a cable raceway
The top cable slot must be opened before routing cables through it.
The top and bottom bars that are used to stabilize the back doors can be removed,
making it easier to route cables through the top and bottom of the rack.
Dell EMC Rails and Rail Conversion (Dell VersaRails, RapidRails, ReadyRails, and
Cloud Rails)
Overview
Dell EMC rail kits were designed for racks with square mounting holes for tool-less
installation. They are also compatible with racks from other vendors, including
square-hole, round-hole, and threaded-hole racks. Dell EMC provides a rail-sizing
matrix for different rails according to mounting interface, rail type, rack types
supported, rail adjustability range, and rail depth.
Rail Types
Sliding Rails
Sliding rails enable users to fully extend a system out of the rack for service. To
help manage the numerous cables associated with rack-mounted servers, users can use
the cable management arm (CMA) with the sliding rails. The rails can be attached on
either the right or left side without tools.
Stab-in/Drop-in sliding rails for 4-post racks (New for 14G systems)
-Supports drop-in or stab-in installation of the chassis to the rails
-Support for tool-less installation in 19" EIA-310-E compliant square and unthreaded round hole racks, including all generations of Dell racks. Also supports tooled installation in threaded round hole 4-post racks
-Required for installing R640 in a Dell EMC Titan or Titan-D rack
-Support full extension of the system out of the rack to enable serviceability of
key internal components
-Support for optional cable management arm (CMA)
-Minimum rail mounting depth without the CMA: 714 mm
-Minimum rail mounting depth with the CMA: 845 mm
-Square-hole rack adjustment range: 603 to 915 mm
-Round-hole rack adjustment range: 603 to 915 mm
-Threaded-hole rack adjustment range: 603 to 915 mm
Rail Types
Static Rails
The static rails support a wider variety of racks than sliding rails. However, they
do not support serviceability in the rack and are thus not compatible with the CMA.
Static rails features summary
-Supports Stab-in installation of the chassis to the rail
-Support tool-less installation in 19" EIA-310-E compliant square or unthreaded
round hole 4-post racks including all generations of Dell racks
-Support tooled installation in 19" EIA-310-E compliant threaded hole 4-post and 2-
post racks
-Minimum rail mounting depth: 622 mm
-Square-hole rack adjustment range: 608 to 879 mm
-Round-hole rack adjustment range: 594 through 872 mm
-Threaded-hole rack adjustment range: 608 to 890 mm
If installing to 2-Post (Telco) racks, the ReadyRails Static rails (B4) must be
used. Both sliding rails support mounting in 4-post racks only.
1.At the front of the rack cabinet, position one of the RapidRails slide assemblies
so that its mounting-bracket flange fits between the marks on the rack.
The top mounting hook on the slide assembly's front mounting bracket flange should
enter the top hole between the marks on the vertical rails.
2.Push the slide assembly forward until the top mounting hook enters the top square
hole on the vertical rail. Then push down on the mounting-bracket flange until the
mounting hooks seat in the square holes and the push button pops out and clicks.
3.At the back of the cabinet, pull back on the mounting-bracket flange until the
top mounting hook is in the top square hole. Then push down on the flange until the
mounting hooks seat in the square holes and the push button pops out and clicks.
4.Repeat step 1 through step 3 for the slide assembly on the other side of the
rack.
Although the racking and cable management steps are listed separately, for ease in
cable management users will want to begin the cable management installation process
after this step in the racking instructions.
5.Pull the two slide assemblies out of the rack until they lock in the fully
extended position.
If the user is installing more than one system, install the first system in the
lowest available position in the rack.
Never pull more than one component out of the rack at a time. Also, due to the size
and weight of the system, it is recommended to have more than one person install
the system in the slide assemblies.
6.Place one hand on the front-bottom of the system and the other hand on the back-
bottom of the system. Then lift the system into position in front of the extended
slides.
7.Tilt the back of the system down while aligning the back shoulder screws on the
sides of the system with the back slots on the slide assemblies, and engage the
back shoulder screws into their slots.
8.Lower the front of the system and engage the front and middle shoulder screws in
their slots. (The middle slot is just behind the yellow system-release latch.) When
all the shoulder screws are properly seated, the yellow latch on each slide
assembly will click and lock the system into the slide assembly.
9.Press up on the green slide-release latch at the side of each slide to move the
system completely into the rack.
10.Push in and turn the captive thumbscrews on each side of the front chassis panel
to secure the system to the rack.
This completes the Dell EMC RapidRails installation.
If possible, remove both rack sides to allow for easier access during installation
of the rails.
1.Choose an appropriate U space in which to install the rails.
2.Take one of the rails (right rail shown), align the front rail pegs with the
corresponding holes in the appropriate rack U space, and press inward to snap the
front of the rail into place.
3.Repeat step 2 with the rear attachment of the rail.
4.Repeat step 1 through step 3 for the rail assembly on the other side of the rack.
Although the racking and cable management steps are listed separately, for ease in
cable management the user will want to begin the cable management installation
process after this step.
5.Pull the two slide assemblies out of the rack until they lock in the fully
extended position.
Never pull more than one component out of the rack at a time. Also, due to the size
and weight of the system, it is recommended to have more than one person install
the system in the slide assemblies.
6.Lower the system in between the rails until the system's shoulder screws are all
engaged in their respective rail slots. Then slide the system backward to lock the
shoulder screws into the rails.
7.Press the side-release latches.
8.Slide the system inward on the rails until it locks into place in the rack.
This completes the Dell EMC ReadyRails installation steps.
Dell EMC Basic Rack Power Distribution Unit (PDU) products are stand-alone power
distribution devices that provide power for devices within a rack.
Dell EMC PDUs provide reliable power distribution throughout a server rack
enclosure without inhibiting access to the components within the enclosure.
Toolless Mounting
To install the rack PDU using the toolless mounting method, first install it in the
rear of an enclosure, in the cable channel directly behind the rear vertical
mounting rails.
Follow the next steps to perform a toolless PDU installation into any standard EIA-
310 rack or enclosure:
1.Slide both mounting pegs into the holes located in the channel in the rear panel
of the enclosure.
2.Snap the rack PDU into place by pushing it downward until it locks into position.
Two PDUs can be mounted on one side of the enclosure, or one on each side, by using
the toolless mounting method.
Standard Orientation
The standard mounting orientation for the PDU is 180 degrees. This is a snap-in,
toolless installation. Two factory-installed mounting pegs are inserted in the
mounting keyholes on the wall of the rack.
90-Degree Orientation
The PDU can also be mounted in a 90-degree mounting orientation. For this
configuration, two deep mounting pegs (provided with the PDU) are user installed
before mounting the PDU into the rack.
Horizontal Mounting
To install a rack PDU using the horizontal mounting brackets, first install the
brackets on the rack PDU and then attach the PDU to the rack using caged nuts
(provided with your enclosure).
Perform the following steps to mount the Rack PDU horizontally in any standard EIA-
310 rack or enclosure:
1.Identify the location for the PDU in the horizontal rail position that is closest
to the systems to which it supplies power.
2.Position the PDU so that the mounting hooks enter the square holes on the
horizontal rail.
3.Push down the PDU until the mounting hooks seat in the square holes and the
release button pops out and clicks.
Standard Mount
Dell EMC Metered Rack Power Distribution Unit (rPDUs) can be installed into Dell
EMC PowerEdge Racks by inserting the preinstalled mounting pegs into the key-shaped
slots of the PDU tray.
Deep Mount
Sometimes, customers may want to mount their rPDUs at 90 degrees to the tray. The PDU comes with extra mounting pegs and screws that install on the PDU's side to enable this. Dell recommends that customers remove the factory-installed pegs from the back of the PDU in this scenario.
Grounding Wire
The PDU should be grounded to the rack with the grounding kit. Connect the
grounding wire to the PDU's ground bonding point connection and an exposed metal
component of the rack.
Frame Connection
Frame connection: M5 x 12 pan head screw (black) with star washer.
Ground bonding point connection: 10-32 x 0.5" pan head screw (silver) with star washer.
Dell EMC recommends grounding the rPDU to the rack frame with the ground wire
provided in the grounding wire kit.
Dell EMC UPS products provide signaling through RS-232 cable or USB cable. A
network interface card can be added to provide browser accessibility, email
notifications, access security, event logging, and server shut-down controls.
Features
The UPS provides outstanding performance and reliability; its unique benefits include:
-4U size that fits any standard 48 cm (19") rack
-"Start-on battery" capability for turning on the UPS without utility power
-Replacement of batteries without turning off
-Extended run time with an Extended Battery Module (EBM)
-Two standard communication ports (USB and DB-9)
-Optional Dell Network Management Card with enhanced communication capabilities
-Dell UPS Management Software for power monitoring and graceful shutdowns
The Dell EMC 1U Rackmount LED Console enables users to mount a data center
administrator's control station directly into a data center rack without
sacrificing rack space that is needed for servers and other peripherals. A
Rackmount LED Console is typically attached to a keyboard/video/mouse (KVM) console
switch to manage the setup, administration, and maintenance of rack-mounted
servers.
Networking Terminology
The following tabs illustrate the terms and vocabulary that is typically found in
the networking environment:
NIC - Network Interface Card. A network adapter on a circuit board that plugs into
a computer's internal bus architecture.
10base-T - Similar to standard telephone cabling and also known as Twisted Pair Ethernet, 10BASE-T is a 10 Mbps CSMA/CD (carrier sense multiple access with collision detection) Ethernet LAN that works on Category 3 or better twisted-pair cables. 10BASE-T cables can be up to 100 m in length.
1GbaseT - 1GbaseT is also known as 1000BASE-T or 802.3z/802.3ab. It is a later Ethernet technology that uses all four copper wire pairs in a Category 5 (Cat 5 or Cat 5e) cable and is capable of transferring 1 Gbps.
10GbaseT - 10GbaseT is also known as 802.3ae. It is a new standard that was
published in 2002 and supports up to 10 Gbps transmissions. 10 GbE defines only
full-duplex point-to-point links that are connected by network switches, unlike
previous Ethernet standards. Half-duplex operation, CSMA/CD (carrier sense multiple
access with collision detection), and hubs do not exist in 10GbE.
RJ-45 - An RJ-45 is an 8-pin connection used for Ethernet network adapters.
This connector is most commonly connected to the end of Cat 5 or Cat 6 cable, which
is connected between a computer network card and a network device such as a network
router.
Broadcast - A broadcast describes a message or data sent to all devices on a network segment.
Broadcast Domain - A broadcast domain is a logical division of a computer network,
in which all nodes can reach each other by broadcast at the data link layer. A
broadcast domain can be within the same LAN segment, or it can be bridged or
switched to other LAN segments.
Hub - A common connection point for devices in a network. Hubs are commonly used to
connect segments of a LAN. A hub contains multiple ports. When a packet arrives at
one port, it is copied to the other ports so that all segments of the LAN can see
all packets.
Switch - In networks, a switch is a device that filters and forwards packets
between LAN segments. Switches operate at the data link layer (layer 2) and
sometimes the network layer (layer 3) of the OSI Reference Model, and therefore
support any packet protocol. LANs that use switches to join segments are called
switched LANs or, in the case of Ethernet networks, switched Ethernet LANs.
Layer 2 Switch (MAC) - An L2 switch does switching only. This means that it uses
MAC addresses to switch the packets from the incoming port to the destination port
(and only the destination port). It maintains a MAC address table so that it can
remember which ports have which MAC address associated.
Layer 3 Switch (IP) - An L3 switch also does switching exactly like a L2 switch.
The L3 means that it has an identity from the L3 layer. Practically this means that
an L3 switch is capable of having IP addresses and routing. For intra-VLAN
communication, it uses the MAC address table. For extra-VLAN communication, it uses
the IP routing table.
MAC - A media access control (MAC) address is a unique hardware (physical) address, written in hexadecimal, that is given to each network interface device on a computer network. The addresses are usually assigned by the hardware manufacturer, and these IDs are considered burned into the firmware of the network access hardware.
MAC Table - A MAC address table, sometimes called a Content Addressable Memory
(CAM) table, is used on Ethernet switches to determine where to forward traffic on
a LAN.
Computer Network
A computer network is a collection of computers and devices that are connected
together through communication devices and transmission media.
Usually, the connections between computers in a network are made using physical
wires or cables.
However, some connections are wireless, using radio waves or infrared signals.
Network Components
Scenario: There are a couple of devices in your office that are off the network
grid. You have been given a task to identify the necessary hardware to set up a
network that meets the following requirements:
-File sharing from the server to all the PCs
-Printer sharing for every personal computer and server
-Local area network only with no Internet connectivity
Each of the devices that is connected to the network has a network adapter.
A network adapter is a device that enables a computer to talk with another
computer/network. The data-link protocol uses unique hardware addresses (MAC
address) encoded on the card chip to discover other systems on the network so that
it can transfer data to the right destination.
Cable is one form of transmission media that can transmit communication signals. A wired network topology uses a specific type of cable to connect computers on the network.
Networking Cables
Networking cables are used to connect one network device to another or to connect two or more computers to share a printer, a scanner, and so on. Different types of network cables, such as coaxial cable, optical fiber cable, or twisted-pair cable, are used depending on the network's topology, protocol, and size.
Coaxial Cable
Coaxial lines confine the electromagnetic wave inside the cable, between the center
conductor and the shield. The transmission of energy in the line occurs totally
through the dielectric inside the cable between the conductors. Coaxial lines can
be bent and twisted (subject to limits) without negative effects, and they can be
strapped to conductive supports without inducing unwanted currents in them.
Advantages
-Higher bandwidth: 400 to 600 MHz
-Up to 10800 voice conversations
-Can be tapped easily (pros and cons)
-Less susceptible to interference than twisted pair
Disadvantages
-High attenuation rate makes it expensive over long distance
-Bulky
Optical Fiber Cable
Advantages
-Greater capacity/bandwidth
-Smaller size and lighter weight
-Lower attenuation
-Immunity to environmental interference
-Highly secure due to tap difficulty and lack of signal radiation
Disadvantages
-Expensive over short distance
-Requires highly skilled installers
-Difficult to add additional nodes
Twisted-Pair Cable
Twisted pair cabling is a form of wiring in which pairs of wires (the forward and
return conductors of a single circuit) are twisted together for the purposes of
canceling out electromagnetic interference (EMI) from other wire pairs and from
external sources. This type of cable is used for home and corporate Ethernet
networks.
Advantages
-Inexpensive and readily available
-Flexible and lightweight
-Easy to work with and install
Disadvantages
-Susceptibility to interference and noise
-Attenuation problem
-For analog, repeaters needed every 5–6 km
-For digital, repeaters needed every 2–3 km
-Relatively low bandwidth (300Hz)
Campus
The campus is where users (employees) connect to the network, along with all of the
devices those employees use.
-Connects users and their devices, such as laptops, desktops, and mobile phones
-Mixed wired and wireless Ethernet
-Speeds generally from 100 Mbps to 1 Gbps
-L2 and L3 switches
-Type of applications: Email, web, and video applications that have a wide range of bandwidth and delay sensitivity requirements
Data Center
The data center is where devices connect to the network. It consists of mainly rack
servers, load balancers, firewalls, and other such devices designed to process and
exchange data.
-Connects devices such as rack servers, load balancers, and firewalls
-Speeds generally from 1 Gbps to 10 Gbps
-Primarily L2 (within the DC)
-Type of applications: Database management, virtual machine management, and file transfer applications that have simple bandwidth requirements (albeit large in quantity) and little delay sensitivity
When considered in this regard, it should be clear that campus and data center
routing and switching are actually very different. The routers and switches you
select should be designed to maximize their potential, based on these very
different sets of requirements.
A network interface card (NIC) connects your computer to a local data network or
the Internet. The card translates computer data into electrical signals and sends
it through the network; the signals are compatible with the network so computers
can reliably exchange information.
Since Ethernet is so widely used, most modern computers have a NIC built into the
motherboard. A separate network card is not required unless some other type of
network is used.
Key Functions:
-Acts as the middleman between computer and network
-Converts the data from your personal computer into electronic signals and vice versa
-Provides a network address for your personal computer in a networking environment
Layer 2 Switch
Layer 2 switching uses the media access control address (MAC address) from the host's network interface cards (NICs) to decide where to forward frames.
It is hardware-based, which means switches use application-specific integrated
circuits (ASICs) to build and maintain filter tables (also known as MAC address
tables or CAM tables).
Layer 2 switching provides the following:
-Hardware-based bridging (based on MAC addresses)
-Wire-speed performance
-Low latency
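A minimal Python sketch of this table-driven forwarding (illustrative only, using a hypothetical four-port switch): the switch learns the source MAC on the ingress port, forwards frames for known destinations out of a single port, and floods unknown destinations:

mac_table = {}              # MAC address -> port number (the CAM/filter table)
ALL_PORTS = [1, 2, 3, 4]    # assumed four-port switch for the example

def handle_frame(src_mac, dst_mac, in_port):
    # Learn (or refresh) which port the source MAC address lives on.
    mac_table[src_mac] = in_port
    # Known unicast destination: forward out of that single port only.
    if dst_mac in mac_table and mac_table[dst_mac] != in_port:
        return [mac_table[dst_mac]]
    # Unknown destination (or broadcast): flood out of every other port.
    return [p for p in ALL_PORTS if p != in_port]

print(handle_frame("aa:aa:aa:aa:aa:01", "aa:aa:aa:aa:aa:02", in_port=1))  # flood: [2, 3, 4]
print(handle_frame("aa:aa:aa:aa:aa:02", "aa:aa:aa:aa:aa:01", in_port=2))  # learned: [1]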
NIC Teaming
Before running driver and firmware updates to NICs, be prepared to restore teaming settings.
SFT
Switch Fault Tolerance
This uses two adapters connected to two switches to provide a fault-tolerant
network connection in the event that the first adapter, cabling, or switch fails.
Only two adapters can be assigned to an SFT team.
ALB
Adaptive Load Balancing
This increases network bandwidth by allowing transmission over two to eight ports
to multiple destination addresses, and incorporates fault tolerance.
Both 802.3ad (link aggregation) modes include adapter fault tolerance and load balancing capabilities. However, in Dynamic mode, load balancing is within only one team at a time.
Switch-independent teaming
This configuration does not require the switch to participate in the teaming. In
switch-independent mode, the switch does not know that the network adapter is part
of a team in the host; because of this, the adapters may be connected to different
switches. Switch independent modes of operation do not require that the team
members connect to different switches; they merely make it possible.
Switch-dependent teaming
This configuration requires the switch to participate in the teaming. Switch-
dependent teaming requires all the members of the team to be connected to the same
physical switch.
Intel ANS
Intel® Advanced Network Services (ANS) is installed by default along with Intel®
PROSet for Windows Device Manager.
When you install the Intel PROSet software and Intel ANS, more tabs are automatically added to the supported Intel adapters in Device Manager.
The following tabs are added when the Intel ANS for Windows Device Manager is
installed:
-Boot Options
-Link Speed
-Advanced
-Teaming
-VLANs
Broadcom BACS
Broadcom Advanced Control Suite (BACS) provides management and useful information
about the Broadcom network adapter installed in the system.
BACS enables you to perform the following tasks:
-Run detailed tests, diagnostics, and analyses
-View and modify property values
-View traffic statistics
For network teaming, you can use Broadcom Advanced Server Program (BASP), which
runs within Broadcom Advanced Control Suite itself.
Inter-Switch Connection
Most Dell EMC Networking switches are capable of operating either as stand-alone devices or in tandem with matched devices connected together as a stack.
The stack behaves as a single switch but has the port capacity of the combined devices.
All units have the same config file for redundancy. The config file is pushed to
all units.
Ring Topology
All devices in the stack are connected to each other forming a loop.
Each device in the stack accepts data and sends it to the device to which it is
connected. Packets continue through the stack until they reach their destination.
Failure Behavior
If one device or link in the ring is broken, the system automatically switches to a
stacking failover topology without any system downtime.
After the stacking issues are resolved, the device can be reconnected to the stack
without interruption, and then the ring topology is restored.
Firmware Updates
When a device boots, it decompresses the system image from the flash memory area
and runs it. When a new image is downloaded, it is saved in the area that is
allocated for a secondary system image copy.
On each boot, the device decompresses and runs from the system image that is
currently indicated as active. The active image can be set by an administrator.
Image Upgrade Methods:
-Trivial File Transfer Protocol (TFTP)
-Serial with XMODEM
-USB (only applicable to some models)
You may not always be responsible for this activity. However, you are required to
understand the information and process on this page. Refer to local policy and
procedure for your specific responsibilities.
Upgrades / Downgrades
When you add a new switch to a stack, the Stack Firmware Synchronization feature
automatically synchronizes the firmware version with the version running on the
stack master.
The synchronization operation may result in either an upgrade or a downgrade of
firmware on the mismatched stack member.
Removal
When removing a unit from the stack, you must disconnect all the links (both
logical and physical) on the stack member to be removed.
Back Up Configuration
The following command format is applicable only to Dell Networking OS versions. If using third-party OS software, refer to the configuration guides provided with those releases.
If the original switch is still functioning and the customer has not backed up a copy, connect a laptop to the management port of the switch. Make sure that the laptop has a functioning TFTP server installed.
To back up the configuration:
1.Assign the laptop an IP address on the switch network to enable communication, and establish a console connection to the switch's Command Line Interface (CLI).
2.In the console session, use the copy running-config tftp://hostip/filepath
command to back up the configuration to the laptop TFTP server.
If the defective switch will not power on and the customer has not made a backup,
the configuration is lost and there is no way to retrieve it.
Restore Configuration
The following command format is applicable only to Dell Networking OS versions. If using third-party OS software, refer to the configuration guides provided with those releases.
To restore the backup configuration:
1.Move the laptop to the replacement switch, and establish a console session and an
Ethernet connection.
2.Place the config file in the root directory of a TFTP server (if it is not
already) on a directly connected laptop. Use the command copy
tftp://hostip/filepath running-config to copy the configuration to the replacement
switch.
3.Verify that all the configuration has been registered in the running configuration of the replacement system. You can do a "diff" between the output of show running-configuration and the customer-provided configuration (see the sketch after this list).
4.Save the configuration with the command copy run start.
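A minimal Python sketch of that comparison (file names are hypothetical examples, and any diff tool works equally well), assuming the replacement switch's show running-configuration output has been saved to a text file:

import difflib

with open("customer_backup.cfg") as f:
    backup = f.readlines()
with open("replacement_running_config.txt") as f:
    running = f.readlines()

# Print a unified diff; no output means the configurations match.
for line in difflib.unified_diff(backup, running,
                                 fromfile="customer_backup.cfg",
                                 tofile="replacement_running_config.txt"):
    print(line, end="")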
28xx Series
The PowerConnect 2808, 2816, 2824, and 2848 are dual-mode unmanaged or web-managed all-Gigabit workgroup switches (10/100/1000). They have eight, 16, 24, or 48 ports respectively. On the 2824 and 2848, there are two additional small form-factor pluggable (SFP) transceiver slots for fiber-optic connectivity. Switches ship in a plug-and-play unmanaged mode and can be managed through a graphical user interface.
35xx Series
The PowerConnect 3524(P) and 3548(P) are managed 10/100 switches with gigabit uplinks and Power over Ethernet (PoE) options for applications such as voice over IP (VoIP); PoE is available on the models denoted with a "P" at the end of the part number. All switches in this family support resilient stacking and have management and security capabilities.
52xx Series
The PowerConnect 5200 series of managed switches comprised the 5212 (12 copper Gb
ports and four SFP ports) and the 5224 (24 copper Gb ports and four SFP ports).
55xx Series
The PowerConnect 5500 series switches, the successors of the 5400 series, are based on Marvell technology. They offer 24 or 48 GbE ports with (-P series) or without PoE. The 5500 series has built-in stacking ports using HDMI cables, and the range offers two standard 10Gb SFP+ ports for uplinks to core or distribution layers. All 5500 series models (5524, 5524P, 5548, and 5548P) can be combined in a single stack with up to eight units per stack. The 5500 series uses standard HDMI cables (specification 1.4 category 2 or better) to stack, with a total bandwidth of 40 Gbps per switch.
7000 Series
The PowerConnect 7024 and 7048 have the same ports as the 6224/6248, have QoS features for iSCSI, and incorporate 802.3az Energy Efficient Ethernet. The 7000 series offers the same type of ports: 1G on the front, with optional 10G and stacking modules on the rear side. Unlike the 6200 series, firmware for the 7000 series supports new functionality via versions 4.x and 5.x, like its 10G counterparts in the 8024 and 8100 series.
8000 Series
The PowerConnect 8024 and 8024F were rack-mounted switches offering 10 GbE over copper or, on the 8024F, optical fiber using enhanced small form-factor pluggable transceiver (SFP+) modules. On the 8024, the last four ports (21-24) are combo ports; the four SFP+ slots can be used for fiber-optic connections for longer-distance uplinks to core switches.
8100 Series
The PowerConnect 8100 series switches offered 24 or 48 10Gb ports and zero or two built-in 40Gb QSFP ports. All models also have one extension-module slot with either two 40Gb QSFP ports, four 10Gb SFP+ ports, or four 10GBASE-T ports. It is a small (1U) switch with a high port density and can be used as a distribution or (collapsed) core switch for campus networks and in the data center.
M Series
The PowerConnect M-series switches are classic Ethernet switches based on Broadcom fabric for use in the M1000e blade server enclosure. The internal interfaces can be 1 or 10 Gbps.
VEP4600
Model - Details
W-7240 - 32,768 users, 2048 APs for large-scale campus enterprise
W-7220 - 24,576 users, 1024 APs for medium to large campus
W-7210 - 16,384 users, 512 APs for medium campus enterprise
W-7205 - 8192 users, 256 APs for small campus enterprise
W-7030 - 4096 users, supports 64 APs for campus and branch
W-7024 - 2048 users, supports 32 APs for campus and branch
W-7010 - 2048 users, supports 32 APs for campus and branch
W-7005 - 1024 users, supports 16 APs for campus and branch
W-Series Advantage:
-ClientMatch automatically steers devices to best AP
-Integrated Adaptive Radio Management™ technology
-Advanced Cellular Coexistence minimizes interference from other networks
Key Features:
-Range of models: 24 and 48 port, 1GbE and 10GbE, RJ45 and SFP+, Layer 2/Layer 3,
and PoE+
-Loop-free redundancy without spanning tree using MLAG
-Common operating system with consistent management and CLI/GUI across all models
-Standards-based and interoperable, even with proprietary protocols such as RPVST+ and CDP
-High throughput and capacity to handle unexpected workloads
-Scales easily with up to 12-unit stacking
N1500 Series
N1500 switches extend enterprise features to small and midsized businesses by using a Layer 2+ feature set and offering high availability for smaller managed networks. N1500 switches offer simple management and scalability through a 40 Gbps (full-duplex) high availability stacking architecture that enables management of up to four switches from a single IP address.
-Up to 48 line-rate GbE RJ-45 ports and four integrated 10GbE SFP+ ports
-Support for 24 ports of PoE+ in 1RU or up to 48 ports of PoE+ with an optional
external power supply
-Up to 200 1GbE ports in a 4-unit stack for high density and high availability in intermediate distribution frames (IDFs), main distribution frames (MDFs), and wiring closets
-Non-stop forwarding and fast failover in stack configurations
The Dell EMC Networking N1500 series includes four switch models: N1524, N1524P,
N1548, and N1548P.
N-Series ON Switches
The following switches are fully managed Layer 2 switching with Open Networking
capabilities:
To learn more about these switches, click each tab.
N1100
Dell EMC Networking introduces a series of new switches designed to provide
low-cost, basic managed Gigabit Ethernet switching at near Fast Ethernet price
points. Models provide choices in base switch-port counts and capabilities that are
well suited as affordable managed-switch solutions for both Fast Ethernet and
Gigabit Ethernet customers.
The N1100 series switches offer the following features:
-Full management (remote and local) consistent with N-series switches
-Full industry-standard CLI, consistent with N-series switches
-10/100/1000Mbps RJ45 full and half-duplex fixed ports
-Switches feature scalability through interconnection of more N1000 series switches
-Internal, fixed power supply (no external redundant or PoE PSU support)
-Power over Ethernet+ (30.8 W IEEE 802.3at) and Power over Ethernet (15.4 W)
support
-Aesthetically pleasing, operationally friendly design that is ideal for many
locations, targeting reduced noise and quiet operation through fan-less and/or
variable-speed fan designs
-New half-RU-width 8-port N-series form factor models that use X-series mounting
options for surface-top, wall/ceiling, and standard rack mounting
-Energy-Efficient Ethernet design
-Quick deployment by customers with little to no networking knowledge
-IPv4 & IPv6 management
N2128 PX-ON
The Dell EMC Networking N2128PX-ON network switch offers 1GbE and 2.5GbE
multigigabit connectivity.
The N2128 PX-ON switch offers the following features:
-24 x RJ45 10/100/1000Mb auto-sensing PoE+ ports (optional external power supply
needed to provide power to all 24 ports at 30.8W)
-2 x RJ45 10/100/1000/2500Mb auto-sensing PoE 60W ports
-2 x integrated 10GbE SFP+ ports
-2 x dedicated rear stacking-ports
-1 x integrated power supply (1000W AC)
N3132 PX-ON
The Dell EMC Networking N3132PX-ON campus network switch offers 1GbE, 2.5GbE, and
5GbE multigigabit connectivity for customers looking to deploy faster connectivity
on CAT5e networks with Power over Ethernet+ and Universal Power over Ethernet
capability. This switch is a perfect complementary offering to the Dell Networking
W-Series W-AP334/5 and W-IAP334/5 802.11ac Wave 2 2.5GbE wireless access points.
The N3132PX-ON will be added to OpenManage Network Manager with OMNM 7.0 and
HiveManager NG in a future release.
The N3132 PX-ON switch offers the following features:
-24 x 100Mb/1GbE RJ45 ports.
-4 x SFP+ ports.
-8 x 2.5GbE/5GbE (NBase-T) speeds over Cat 5e cable; 4 of these ports support 10G
on Cat 6a.
-40GbE uplink or 2 x 21Gb Stacking module.
-Stacking supported with N3000.
-PoE+ and Universal PoE support (on 8 multi-gig ports).
-ONIE boot loader support.
-No TAA.
The Dell EMC Networking S-Series product line offers a range of modular and fixed-
configuration 1/10/40GbE systems. These are designed principally for data-center
top-of-rack (ToR) and aggregation applications.
The S-Series switches are:
-1-GbE switches: S3124/S3124F/S3124P, S3148/S3148P
-10-GbE switches: S4128F-ON, S4148F-ON, S4128T-ON, S4148T-ON, S4148FE-ON, S4148U-ON,
S4248FB-ON, S4248FBL-ON
-25-GbE switches: S5048F-ON, S5148F-ON, S5232F-ON, S5248F-ON, S5296F-ON
The product line also includes the S6000 high-density 40-GbE virtualization switch,
the S5000 modular converged SAN/LAN switch, and the S4810 and S4820T ToR switches
supporting 1-GbE, 10-GbE, and 40-GbE capabilities.
Dell EMC Networking S-Series 1-GbE switches are optimized for high-performance
data-center environments:
-Delivers low-latency, superb performance, and high density with hardware and
software redundancy
-Offers Active fabric designs using S- or Z-Series core switches to create a two-
tier, 1/10/40-GbE data-center network architecture
-Provides ideal solutions for ToR applications in enterprise, Web 2.0, and cloud
service provider data-center networks
S3124
The S3124 switch offers the following features:
Twenty-four Gigabit Ethernet (10/100/1000BASE-T) RJ45 ports that support
auto-negotiation for speed, flow control, and duplex
Two combo SFP ports
Two SFP+ 10G ports
S3124F
The S3124F switch offers the following features:
Twenty-four Gigabit Ethernet 100BASE-FX/1000BASE-X SFP ports
Two 1G copper combo ports
Two SFP+ 10G ports
S3124P
The S3124P switch offers the following features:
Twenty-four Gigabit Ethernet (10/100/1000BASE-T) RJ-45 ports for copper that
support auto-negotiation for speed, flow control, and duplex
Two combo SFP ports
Two SFP+ 10G ports
Supports PoE+
Two fixed mini Serial Attached SCSI (mini-SAS) stacking ports to connect up to six
switches
S3148
The S3148 switch offers the following features:
Forty-eight Gigabit Ethernet 10/100/1000BASE-T RJ-45 ports
Two SFP 1G combo ports
Two SFP+ 10G ports
20G expansion slot that supports an optional small form-factor pluggable plus
(SFP+) or 10GBase-T module
Two fixed mini Serial Attached SCSI (mini-SAS) stacking ports to connect up to
twelve S3100 series switches
S3148P
The S3148P switch offers the following features:
Forty-eight Gigabit Ethernet (10BASE-T, 100BASE-TX, 1000BASE-T) RJ-45 ports that
support auto-negotiation for speed, flow control, and duplex
Two combo SFP ports
Two SFP+ 10G ports
Supports PoE+
Two fixed mini-SAS stacking ports to connect up to six switches
S4128F-ON/S4148F-ON
The S4100-ON Series switches (S4128F-ON and S4148F-ON) are one rack unit (RU),
full-featured, fixed form-factor top-of-rack (ToR) 10/25/40/50/100GbE switches for
10G servers. They include small form-factor pluggable plus (SFP+), quad small
form-factor pluggable plus (QSFP+), and quad small form-factor pluggable 28
(QSFP28) ports.
The features of S4128F-ON/S4148F-ON switches are listed:
-S4128F-ON: 28 fixed 10-GbE SFP+ ports, 2 fixed 100-GbE QSFP28 ports
-S4148F-ON: 48 fixed 10-GbE SFP+ ports, 2 fixed QSFP+ ports, 4 fixed 100-GbE QSFP28
ports
-1 x MicroUSB-B serial console management port
-1 x RJ45 serial console management port
-1 x universal serial bus (USB) Type-A port for more file storage
-1 x 2 Core Rangeley C2338 central processing unit (CPU), with 4-GB DDR3 SDRAM and
one 16-GB mSATA/M.2 SSD module
-7-segment stacking indicator
-Temperature monitoring
-Real-time clock (RTC) support
-Hot-plug redundant power supplies
-Removable fans
-Standard 1U chassis
S4128T-ON/S4148T-ON
S4128T-ON and S4148T-ON are one rack unit (RU), full-featured, fixed form-factor,
top-of-rack (ToR) 10/25/40/50/100GbE switches for 10GBase-T servers with copper
RJ45 connections. They include small form-factor pluggable plus (SFP+), quad small
form-factor pluggable plus (QSFP+), and quad small form-factor pluggable 28
(QSFP28) ports.
The features of S4128T-ON/S4148T-ON switches are listed:
-S4128T-ON: 28 fixed copper 10GBase-T RJ45 ports, two fixed 40-GbE QSFP+ ports
-S4148T-ON: 48 fixed copper 10GBase-T RJ45 ports, two fixed 40-GbE QSFP+ ports,
four 100-GbE QSFP28 ports
-1 x MicroUSB-B serial console management port
-1 x RJ45 serial console management port
-1 x universal serial bus (USB) Type-A port for more file storage
-1 x 2 Core Rangeley C2338 central processing unit (CPU), with 4-GB DDR3 SDRAM and
one 16-GB mSATA/M.2 SSD module
-7-segment stacking indicator
-Temperature monitoring
-Real-time clock (RTC) support
-Hot-plug redundant power supplies
-Removable fans
-Standard 1U chassis
S4248FB-ON/S4248FBL-ON
The S4200-ON Series switches (S4248FB-ON/S4248FBL-ON) are one rack unit (RU),
full-featured, high-density 10/25/40/100GbE switches. They include 40 small
form-factor pluggable plus (SFP+) optics, six quad small form-factor pluggable 28
(QSFP28) optics (100 GbE, 4x25GbE, 40 GbE, and 4x10GbE), and two QSFP+ optics for
40/100GbE aggregation and 1/10GbE top-of-rack (ToR) and end-of-row (EoR)
applications.
The features of S4248FB-ON/S4248FBL-ON switches are listed:
-Forty 1/10GbE fixed SFP+ ports
-Two QSFP+ ports supporting 40 GbE or 4x10GbE breakout
-Six QSFP28 ports
-One RJ45 console port
-One USB Type-A 2.0 port for more file storage
-One ESD Jack
-TCAM: on-board Rangeley central processing unit (CPU) system with 32-GB DDR III
RAM, 64-GB iSLC mSATA SSD
-Non-TCAM: on-board Rangeley CPU system with 8-GB DDR III RAM, 16-GB iSLC mSATA SSD
-Two hot-swappable redundant power supplies
-Five hot-swappable fan modules
-Standard 1U switch
The S4200-ON Series supports the following configurations (a port-count sketch
follows this list):
-48 x 10 GbE + 6 x 100 GbE
-40 x 10 GbE + 8 x 40 GbE
-48 x 10 GbE + 12 x 50 GbE
-48 x 10 GbE + 24 x 25 GbE
-72 x 10 GbE
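The totals in this list follow from breaking the fixed QSFP+ and QSFP28 ports out
into lower-speed lanes. The following minimal sketch (illustrative Python only, not
a Dell tool; the function name and mode strings are assumptions) tallies logical
ports for a few of the configurations under that reading:

```python
# Hypothetical tally of S4200-ON logical ports from the fixed-port list above:
# 40 x SFP+ (10GbE), 2 x QSFP+ (40GbE or 4x10GbE breakout),
# 6 x QSFP28 (100GbE, 4x25GbE, 40GbE, or 4x10GbE breakout).
def port_counts(qsfp_plus_mode="40g", qsfp28_mode="100g"):
    """Return a dict of logical port counts by speed for one chassis."""
    counts = {"10GbE": 40}                      # the 40 fixed SFP+ ports

    if qsfp_plus_mode == "4x10g":
        counts["10GbE"] += 2 * 4                # each QSFP+ breaks out to 4 x 10GbE
    else:
        counts["40GbE"] = counts.get("40GbE", 0) + 2

    if qsfp28_mode == "4x10g":
        counts["10GbE"] += 6 * 4
    elif qsfp28_mode == "4x25g":
        counts["25GbE"] = 6 * 4
    elif qsfp28_mode == "40g":
        counts["40GbE"] = counts.get("40GbE", 0) + 6
    else:                                       # leave all six ports at 100GbE
        counts["100GbE"] = 6

    return counts

print(port_counts("4x10g", "100g"))    # {'10GbE': 48, '100GbE': 6} -> 48x10GbE + 6x100GbE
print(port_counts("4x10g", "4x10g"))   # {'10GbE': 72}              -> 72x10GbE
print(port_counts("40g", "40g"))       # {'10GbE': 40, '40GbE': 8}  -> 40x10GbE + 8x40GbE
```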
S4148U-ON
The S4148U-ON system is a one rack unit (RU), full-featured, fixed form-factor
top-of-rack (ToR) 10/25/40/50/100GbE switch for 10G servers. It includes small
form-factor pluggable plus (SFP+), small form-factor pluggable 28 (SFP28), quad
small form-factor pluggable plus (QSFP+), and quad small form-factor pluggable 28
(QSFP28) ports.
The unique features of S4148U-ON are listed:
-24 x 1/10 GbE SFP+ ports
-24 x unified SFP+/SFP28 ports for Fibre Channel and Ethernet
-2 x fixed 40-GbE QSFP+ ports
-4 x unified QSFP28 ports for Fibre Channel and Ethernet
-1 x MicroUSB-B serial console management port
-1 x RJ45 serial console management port
-1 x universal serial bus (USB) Type-A port for more file storage
-1 x 2 Core Rangeley C2338 central processing unit (CPU), with 4-GB DDR3 SDRAM and
one 16-GB mSATA/M.2 SSD module
-7-segment stacking indicator
-Temperature monitoring
-Real-time clock (RTC) support
-Hot-plug redundant power supplies
-Removable fans
-Standard 1U chassis
S4148FE-ON
The S4100-ON Series S4148FE-ON is a one rack unit (RU), full-featured, fixed
form-factor top-of-rack (ToR) 10/25/40/50/100GbE switch for 10G servers. It
includes small form-factor pluggable plus (SFP+), quad small form-factor pluggable
plus (QSFP+), and quad small form-factor pluggable 28 (QSFP28) ports. The
S4148FE-ON also includes unified (Fibre Channel and Ethernet) 10-GbE SFP+ and
QSFP28 ports.
The unique features of S4148FE-ON are listed:
-The first 24 ports are unified ports
-Two fixed 40-GbE QSFP+ ports
-Four fixed 100-GbE QSFP28 ports
-Seven-segment stacking indicator
-One micro-USB-B console port
-One USB type-A port
-Support for LRM optics
Note: For specific port profile details, see the OS10 Enterprise Edition User
Guide.
Gain the flexibility to transform data centers with high-capacity network fabrics
that are easy to deploy and cost-effective, providing a clear path to a
software-defined data center. These switches offer:
-High density for 40GbE deployments in ToR, middle-of-row, and end-of-row
deployments
-A choice of S6000-ON and S6010-ON 40-GbE switches and the S6100-ON
10/25/40/50/100GbE modular switch
-S6100-ON modules that include: 16-port 40GbE QSFP+; 8-port 100GbE QSFP28; combo
module with four 100GbE CXP ports and four 100GbE QSFP28 ports
-An ideal solution for modern workloads and applications that are designed for the
open networking era
S5048F-ON/S5148F-ON
The S5048F-ON switch is a one rack unit, full-featured, fixed form-factor top-of-
rack (ToR) compact 10/25/40/50/100-GbE switch. It has 10/25-GbE links for server
connections and 40/50/100-GbE links for clustering (virtual link trunking (VLT) and
stacking) and for uplinks to aggregation and core switches. The switch includes two hot-
swappable AC or DC power-supply units (PSUs) and four hot-swappable fan units.
The S5048F-ON switch offers the following features:
-Standard 1U switch
-On-board Rangeley central processing unit (CPU) system with 8-GB DDR III RAM, 16-
GB iSLC mSATA SSD
-Individual port groups can be configured as either Ethernet or Fibre Channel,
enabling a single S4148U to switch native Ethernet and native Fibre Channel
simultaneously
-S5048F-ON has 48 x 1 GbE/10 GbE/25 GbE SFP28 ports
-S5148F-ON has 48 x 10 GbE and 25-GbE SFP28 ports
-6 x 40 GbE and 100-GbE QSFP28 ports
-1 x MicroUSB-B console port
-1 x RJ45 serial console port
-1 x USB Type-A port for more file storage
-1 x 10/100/1000BaseT Ethernet management port
-2 x hot-swappable redundant power supplies
-4 x hot-swappable fan modules
S5232F-ON, S5248F-ON, S5296F-ON
S5200F-ON Series switch is a full-featured fixed form-factor top-of-rack (ToR)
compact 10/25/40/50/100/200GbE switch for data center networks. It includes small
form-factor pluggable plus (SFP+), small form-factor pluggable 28 (SFP28), quad
small form-factor pluggable 28 (QSFP28), and quad small form-factor pluggable
double density (QSFP-DD) ports. S5200F-ON Series switch is a 10/25/40/50/100/200GbE
switch with 10/25GbE links for server connections and 40/50/100GbE links for
clustering—virtual link trunking (VLT) and stacking—and uplinks to aggregation and
core switches. The switch includes two hot-swappable AC or DC power supply units
(PSUs) and four hot-swappable fan units.
The three switches present in this series are:
1. S5232F-ON: one rack unit
2. S5248F-ON: one rack unit
3. S5296F-ON: two rack units
Z9500
Z9100-ON
The Z9100-ON is a 1RU compact 10/25/40/50/100 GbE switch. The system includes 32
fixed quad small form-factor pluggable 28 (QSFP28) optics for 40/100 GbE
aggregation and 10/25/40/50 GbE top-of-rack (ToR) and end-of-row (EoR)
applications. The Z9100-ON system includes two hot-swappable AC or DC power supply
units (PSUs) and five hot-swappable fan units.
The Z9100-ON comes with the following feature set:
-Standard 1U chassis
-Two hot-plug redundant power supplies
-Rangeley CPU system with 8-GB DDR III RAM
-QSFP ports support 10/25/40/50/100 GbE
-Two 1000M/10G SFP+ ports
-One MicroUSB-B console port
-One 10/100/1000 BaseT Ethernet management port
Z9264F-ON
The Z9264F-ON switch is a two rack unit (RU), full-featured, fixed form-factor top-
of-rack (ToR) and end-of-row 1/10/25/40/50/100-GbE switch. In addition, the
Z9264F-ON supports two SFP+ ports at 1/10/100 Mbps with 10 GbE multimode and single-mode
options. The switch includes two hot-swappable AC or DC power supply units (PSUs)
and four hot-swappable fan units.
The Z9264F-ON switch offers the following features:
-Standard 2U switch
-Denverton C3538 4-Core CPU system with 16-GB DDR4 memory and 32-GB SSD storage
-64 x QSFP28 ports that support 1, 10, 25, 40, 50, and 100 GbE
-2 x SFP+ that support 1 GbE, 10 GbE, and 100 GbE
-1 x MicroUSB type-B console port
-1 x RJ45 serial console port
-1 x Serial management USB type-A port for more file storage
-1 x 10/100/1000BaseT Ethernet management port
-2 x hot-swappable redundant power supplies
-4 x hot-swappable fan modules
The Dell EMC Networking C-series comprises chassis-based switches that enable you
to create an expandable, high-density infrastructure with tremendous throughput.
-Alleviate bottlenecks and congestion, and simplify infrastructure with 10/40GbE
and modern protocols.
-Enable high-performance back-end infrastructure for user mobility with PoE+ where
needed.
-Readily support Unified Communications and Collaboration (UC&C) and VDI workloads
across a global workforce with scalable user density and performance on demand.
C7008
The Dell EMC Networking C7008 is a 13U chassis featuring up to eight line card I/O
slots that are coupled with a backplane driving up to 1.536 Tbps bandwidth. It is
designed to provide inherent reliability, network control, and scalability for
high-performance Ethernet environments.
-Switch fabric capacity of up to 1.536 Tbps and up to 952 Mpps L2/L3 packet
forwarding capacity
-1+1 Route Processor Module (RPM) design
-Continuous runtime data plane monitoring and advanced in-service command-line
interface (CLI) diagnostic functions
-Power supply redundancy with load-sharing power bus enabling uninterrupted VoIP
calls during a power supply failure
-Over 300 line-rate 10/100/1000Base-T ports with full 30.8 W Class 4 PoE+ support
in a 13U chassis
-Up to 128 10GbE ports for RJ-45 installations or up to 64 10GbE ports with
pluggable SFP+ or XFP modules
-Full complement of standards-based Layer 2, IP version 4 (IPv4), and IP version 6
(IPv6) for unicast and multicast applications
-5-microsecond switching latency under full load for 64-byte frames
-A suite of security, access control, and wiring closet edge features for
enterprise networks
-Robust Dell EMC Networking operating system enables resiliency and efficient
operation
C7004
The Dell EMC Networking C7004 is a 9U chassis featuring up to four line card I/O
slots that are coupled with a backplane driving up to 768 Gbps bandwidth. The Dell
Networking C7004 is designed to provide inherent reliability, network control, and
scalability for high-performance Ethernet environments.
-Switch fabric capacity of up to 768 Gbps and up to 476 Mpps L2/L3 packet
forwarding capacity
-1+1 Route Processor Module (RPM) design
-Continuous runtime data plane monitoring and advanced in-service command-line
interface (CLI) diagnostic functions
-Power supply redundancy with load-sharing power bus enabling uninterrupted VoIP
calls during a power supply failure
-Over 200 line-rate 10/100/1000Base-T ports with full 30.8 W Class 4 PoE+ support
in a 9U chassis
-Up to 64 10GbE ports for RJ-45 installations or up to 32 10GbE ports with
pluggable SFP+ or XFP modules
-Suite of security, access control, and wiring closet edge features for enterprise
networks
-Full complement of standards-based Layer 2, IPv4, and IPv6 for unicast and
multicast applications
-5-microsecond switching latency under full load for 64-byte frames
-Robust Dell EMC Networking operating system for resiliency and efficient operation
The C9010 switch is part of Dell EMC Networking's next-generation LAN solution,
providing a scalable switch that offers a path to higher density 10 GbE and 40 GbE
capability. The user can deploy the C9010 switch as an access or aggregation/core
switch for installations in which a modular switch is preferred. For larger port
requirements, users can also connect the C1048P port extender as access devices.
The C9010 switch supports up to 248 1GbE, 248 10GbE, or 60 40GbE ports with a
combination of port speeds and media types, such as copper, fiber, and direct
attach copper (DAC). It is an 8U chassis (18 in./45.72 cm depth) that fits into a
standard 19 in. (48.26 cm) rack or cabinet. The C9010 chassis supports the
following components:
-Two full-width route processor modules (RPMs) with four 1/10 GbE SFP+ uplinks per
module
-10 half-width Ethernet line cards of the following types:
-6-port 40 GbE QSFP+
-24-port 1/10 GbE SFP+
-Three hot-swappable fan modules with side-to-side airflow (draws air through
ventilation holes on the right side of the chassis and expels air through
ventilation holes on the left side)
-Four 1450/2900W AC PSUs
Storage technologies do not have the same appeal as servers and computing
platforms. They make up for it by providing data, or more accurately, the
availability and integrity of that data. The single purpose of storage technologies
is to ensure that the data (be it an operating system, applications, or static
databases) used by those servers is fully available and usable.
In this lesson, you learn general information about the common storage technologies
used by the IT industry and Dell EMC in the data center.
Estimated Course Duration
This General Purpose Storage Technologies In The Enterprise Lesson is estimated to
take 20-25 minutes to complete.
For general storage industry terms, you can consult TechTarget.com's "What Is"
database, found at:
http://whatis.techtarget.com/
SCSI
SCSI is the block storage protocol 'bus' communication standard used to communicate
controller commands between a host and its peripherals, such as hard drives.
SCSI has been in use since the late '70s and was declared a standard in 1986. Over
that time, it has changed from a parallel communication bus (SCSI-1) to the more
reliable and now faster serial bus (SCSI-3, used by SAS). Originally, the SCSI
standard defined both hardware and command structure. Today, it focuses mainly on
command structure. As such, SCSI is arguably the most widely implemented I/O
interconnect in use today.
(source: SNIA.org & ANSI)
If you're interested in finding more information about SCSI, visit the SCSI Trade
Association at SCSITA.org and T10.org for hardware specifications.
SATA
Serial Advanced Technology Attachment (SATA) is used mainly in the consumer
space. SATA drives are normally half-duplex, and generally speaking, have higher
latency and lower reliability specifications.
Physically, SATA connectors are recognized by having a clear separation between the
data connectors and power connectors.
SAS
Serial Attached SCSI (SAS) is a SCSI interface standard for attaching hosts to SCSI
devices, including SAS and SATA disk and tape drives. (source: SNIA.org)
SAS uses the SCSI-3 protocol, which defines serial point-to-point communication and
allows SCSI communication to occur over longer distances and more reliably to more
devices than the older parallel SCSI bus.
SAS drives are full-duplex (dual ported), and have lower latency and better
reliability specifications than SATA. SAS controllers can also support SATA
devices, but if a SATA device is attached, the connection operates at half-duplex
and reduces the operating speed of all devices in the chain.
NOTE: For more information on SAS and SCSI, you can view the "SAS Solutions"
reference content available from Educate Dell by using the global search tool for
the "Dell SAS Solutions".
Initiator
The initiator is the system component that originates an Input/Output (I/O) command
over an I/O interconnect.
When within the context of SCSI, it is the endpoint that originates the SCSI I/O
command sequence.
(source: SNIA.org)
Controller
Within a storage context, the term controller refers to the storage system I/O
adapter controller of a disk or tape that performs the controller logic command
decoding and execution, host data transfer, serialization and deserialization of
data, error detection and correction, and overall management of device operations.
(source: SNIA.org)
There are different types of I/O adapters such as network interface controller,
SCSI controller, RAID controller, etc.
HBA
Host Bus Adapter, or HBA, is a more general term for an I/O adapter that connects a
computer to an I/O interconnect, or bus. See Controller.
Target
The target is the system component that receives a SCSI I/O command sequence over
an I/O interconnect from the initiator.
(source: SNIA.org)
Disk Drive
"Hard Disk" means a traditional drive with rotating magnetic and optical disks, as
well as solid-state disks, or non-volatile electronic storage elements. It does not
include specialized devices such as write-once-read-many (WORM) optical disks or
RAM disks.
SSD
A Solid State Drive, or SSD, is a disk drive whose storage capability is provided
by solid-state electronics. The form factors and interfaces for solid-state drives
are typically the same as for traditional disk drives.
Spindle
This term means a disk drive that contains rotational magnetic media. This is
often even further differentiated by specifying the 'spindle speed'. For example,
"My storage unit has six 10k spindles, six 15k spindles, and four SSDs."
RAID provides a way of storing data on multiple disks with the ability to survive a
drive failure without losing data. Also, placing data stripes across multiple disks
with mathematical redundancy normally greatly improves I/O operation (IOPS) speeds
when accessing and storing data.
(source: http://SearchStorage.TechTarget.com/definition/RAID)
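As a concrete illustration of the "mathematical redundancy" described above, the
minimal sketch below (plain Python for illustration only, not any controller's
firmware) shows the XOR-parity idea used by single-parity RAID levels: the parity
stripe is the XOR of the data stripes, so any one lost stripe can be rebuilt from
the survivors.

```python
# Illustration of parity-based redundancy (the idea behind single-parity RAID):
# the parity stripe is the bytewise XOR of the data stripes, so any one lost
# stripe can be rebuilt by XOR-ing everything that survives.
from functools import reduce

def xor_blocks(blocks):
    """Bytewise XOR of equal-length byte blocks."""
    return bytes(reduce(lambda a, b: a ^ b, chunk) for chunk in zip(*blocks))

data_stripes = [b"DATA-BLOCK-0", b"DATA-BLOCK-1", b"DATA-BLOCK-2"]
parity = xor_blocks(data_stripes)          # written to a fourth drive

# Simulate losing drive 1 and rebuilding its stripe from the survivors:
surviving = [data_stripes[0], data_stripes[2], parity]
rebuilt = xor_blocks(surviving)
assert rebuilt == data_stripes[1]
print(rebuilt)                             # b'DATA-BLOCK-1'
```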
IOPS
Input/output OPerations per Second (IOPS)
Backplane
The term backplane is not unique to storage technologies. A backplane is a printed
circuit board (PCB) that is designed to be a core component that other components
plug into. Within a storage enclosure or a simple server with multiple internal
hard drives, the backplane normally refers to the PCB component that hard drives
and/or hard drive carriers plug into, which facilitates target-initiator
communication (that is, the SAS channel(s) or SCSI bus).
JBOD
An acronym meaning "Just a Bunch Of Disks". It is in reference to an unintelligent
enclosure that usually consists of a chassis (including power supplies, cooling and
internal backplane), hard-drives and I/O modules that allow the JBOD to be
connected to a host or intelligent storage array and/or other JBODs.
Enclosure
See JBOD. Other synonymous terms include disk enclosure, disk chassis, shelf, and
pod (legacy) among others.
EMM
EMM, or Enclosure Management Module, is commonly used to describe I/O modules found
in JBODs (unintelligent storage arrays). However, not all vendors employ this
term. Some may use Expansion Module, or simply I/O Module/Card. Dell EMC is one of
the vendors that use EMM to describe the I/O modules found in external enclosures;
the term usually indicates that the enclosure is a JBOD for host attachment or for
disk expansion for a storage array.
DAS - Direct Attached Storage. This is the general term to describe a host that is
connected to storage directly from a storage controller to the actual storage or
storage enclosure using a block-level protocol (for example, a PCI SAS card
attached internally and/or externally to an enclosure).
SAN - Storage Area Network. This is the general term to describe a host that is
connected to storage over a network through a storage controller/HBA using a
block-level protocol (for example, using a LAN card transmitting iSCSI, or using an
HBA communicating over Fibre Channel).
NAS - Network Attached Storage. This is a term used to refer to storage devices
that connect to a network using a file-level protocol and provide file access
services to computer systems (for example, CIFS or NFS shares). Note that a NAS
device can use DAS or SAN for its own storage needs; however, the hosts connected
to the NAS device do not care about this, as they only connect using file access
services.
Within the context of storage, devices that communicate with hosts over SMB and
CIFS transmit file-level data.
Storage Tiers
Per SNIA, Storage Tiering indicates any storage that is partitioned into multiple
distinct classes based on price, performance, organizational usage, or other
attributes.
Disk Storage
Disk storage's primary role is real-time data storage and retrieval. Hard disk and
solid-state technologies fulfill this role by offering different tiers of storage.
At Dell EMC, parallel SCSI devices have all reached end of service life, so we will
not review those devices. Next, we will briefly review SATA and SAS technologies.
At the time of this writing, Dell EMC sells and supports hard disk drives (HDDs) in
1.8", 2.5", and 3.5" form factors.
SATA drives are geared for a general-purpose consumer application, whereas SAS
drives are designed for mission-critical enterprise applications.
SATA vs. SAS
Performance Profile
SATA: General purpose (normal front-office applications and usage; higher latency,
slower spindle speeds)
SAS: Enterprise/mission-critical (enterprise back-office applications such as
databases; lower latency, higher spindle speeds)
Duty Cycle
SATA: 8 hours/day, 5 days/week
SAS: 24 hours/day, 7 days/week
Initiators
SATA: Supports a single initiator
SAS: Can support multiple initiators
Number of Communication Ports/Channels
SATA: Single-ported
SAS: Dual-ported
I/O Stream
SATA: Half-duplex
SAS: Full-duplex
Storage Protocol
SATA: ATA command set (native)*
SAS: SCSI command set
Error Recovery
SATA: Limited (due to the ATA command set)
SAS: Robust (due to the SCSI command set)
Communication Path
SATA: 1 meter*
SAS: 8 meters
To get even higher performance, some systems can be equipped with SSD drives that
contain their own controller and plug directly into the PCI bus.
An image here represents a PCIe SSD that plugs directly into the PCI bus, and is
offered by Dell EMC partner Fusion I/O.
PCIe SSD drives can provide orders of magnitude higher I/O performance than
SAS-based drives (SSD or rotating media):
-Deliver over 500 times the IOPS of rotating media
-Offer eight to ten times the performance of SAS SSDs
This is done by providing an NVM Express (NVMe) PCIe card that extends the PCI bus
to specific external slots.
PowerEdge platforms that are equipped with Express Flash provide the benefits of
NVM Express without compromising the ability to keep the host operating while
replacing a PCIe SSD Hard Disk in its externally accessible carrier.
-Provides tiered storage option to support high I/O throughput
-Connects directly through PCIe
-Dell EMC provides front-accessible, hot-pluggable form factor
-Same performance gains as in normal PCIe SSD
Connecting to Storage
The drives that you encounter within Dell EMC enterprise servers and direct-
attached storage equipment will be SATA, SAS, Near-Line SAS, or SSD.
Click the tab to review the types of connectors that you encounter when servicing
hard drives for Dell.
RAID Levels
RAID 0
Known as "disk striping," RAID 0 splits data evenly across two or more drives with
no parity or mirroring. It offers excellent performance and full use of drive
capacity, but provides no redundancy: the failure of any single drive results in
the loss of all data in the virtual disk.
RAID 1
Known as "Disk Mirroring," RAID 1 consists of two drives, each of which contains
identical data.
-Excellent read speed and write speeds comparable to single drive
-In the event of disk failure, data remains available with little performance
impact
-Rebuild process involves simply copying data from live disk to replacement disk
RAID-10
RAID-10 involves striping multiple RAID-1 mirror sets (two or more) together to
improve performance and increase redundancy.
RAID 2, 3 and 4
These RAID levels are not typically seen in production (if at all). All of these
RAID types stripe data with error-protection information as part of the stripe, and
each requires a minimum of three drives to operate.
RAID 2 writes stripes of data at the bit-level (one bit per drive in a stripe) and
uses the Hamming Code algorithm to correct 1-bit errors, and detect 2-bit errors.
RAID-3 writes stripes of data at the byte-level (one byte per drive in a stripe)
and uses parity for error correction. All parity is written to the same drive
(Dedicated Parity Drive).
RAID-4 writes stripes of data at the block-level (one block of data per drive in a
stripe) and uses parity for error correction. All parity is written to the same
drive (Dedicated Parity Drive).
RAID 5
RAID 5 stripes data with distributed parity. It is implemented with a minimum of
three drives and has a space efficiency of (N - 1)/N (where N equals the total
number of disks in the logical disk).
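To see what the (N - 1)/N efficiency figure means in practice, the short sketch
below (illustrative Python, assuming identical drives and ignoring controller
overhead) computes usable capacity; it also covers the analogous (N - 2)/N figure
for the dual-parity RAID 6 described later in this section.

```python
# Usable capacity for striping-with-parity RAID levels, assuming identical
# drives and ignoring any controller or metadata overhead.
def usable_tb(drives, drive_tb, parity_drives):
    """(N - parity) / N of the raw capacity remains usable."""
    return (drives - parity_drives) * drive_tb

# Eight 4 TB drives:
print(usable_tb(8, 4, parity_drives=1))   # RAID 5: 28 TB usable (7/8 efficiency)
print(usable_tb(8, 4, parity_drives=2))   # RAID 6: 24 TB usable (6/8 efficiency)
```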
One way to help improve performance of a RAID-5 with many disks is to implement a
RAID-50.
RAID 50
RAID 50 is a type of "nested RAID." It allows the use of multiple underlying RAID-5
sets and stripes them using RAID-0. Although, in theory, RAID-50 can survive
multiple disk failures, if two disk failures occur in the same underlying RAID-5
set, all data can be considered lost.
RAID 6
RAID 6 is similar to a RAID 5. It is also striping with parity, yet it stripes with
dual parity (commonly referred to as P- and Q-parity) and distributes the parity
across all disks. RAID 6 can survive up to two disk failures. When operating in a
degraded mode, performance is significantly impacted, although caching controllers
can help minimize the impact.
RAID-60
RAID 60, like RAID-50, is also a nested RAID. In this implementation, a virtual
disk, in theory, can survive more than two disk failures, as long as no more than
two of the failed disks belong to the same underlying RAID-6 set.
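That failure rule can be made concrete with a small check: group the failed drives
by the underlying RAID-6 set they belong to and confirm that no set has lost more
than two members. The sketch below is illustrative Python with an assumed,
hypothetical drive-numbering scheme; it does not describe actual PERC behavior.

```python
# RAID 60 failure rule: the nested virtual disk survives as long as no single
# underlying RAID 6 set has lost more than two of its member drives.
from collections import Counter

def raid60_survives(failed_drives, drives_per_set):
    """failed_drives: failed drive indices numbered 0..N-1 across the virtual
    disk; drives_per_set: members in each underlying RAID 6 set."""
    failures_per_set = Counter(d // drives_per_set for d in failed_drives)
    return all(count <= 2 for count in failures_per_set.values())

# Two underlying 6-drive RAID 6 sets (drives 0-5 and 6-11):
print(raid60_survives([0, 5, 7], drives_per_set=6))      # True: 2 + 1 failures
print(raid60_survives([0, 1, 2], drives_per_set=6))      # False: 3 in one set
print(raid60_survives([0, 1, 6, 7], drives_per_set=6))   # True: 2 + 2 failures
```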
As you already know, the PowerEdge Expandable RAID Controller is Dell's proprietary
RAID controller for PowerEdge products. PERCs exist either as PCI interface cards
or as integrated onboard controllers.
RAID Terms
RAID Controller
The device that manages the physical disk drives and presents them as a logical (or
virtual) disk to the host operating system.
Hot Spare
A physical disk that is assigned a standby role by the RAID controller and that is
used to rebuild and/or copy the data of a failed or failing disk.
The disk is recognized by the RAID controller and only used in the event that one
of the physical disks that is part of a logical drive fails. When a failure occurs,
rather than wait for a physical disk to be replaced, the hot-spare becomes
available immediately and the data rebuild process occurs at time of failure.
Modern RAID controllers can detect an imminent hard disk failure and pre-emptively
copy data from the marginal drive to the hot-spare and reduce the risk of data loss
even further.
Hot Swap
When a hard drive can be removed and inserted into a system without having to
reboot or restart the system or host. This implies that the system can recognize
the new device without rebooting. If the device is capable of being inserted
"live," but the OS or system are unable to bring it online by design, that would be
considered "hot-pluggable."
Drive Roaming
When a RAID controller supports drive roaming, it can recognize a previously
configured disk regardless of which slot it is plugged into (that is, the drive can
"roam" from one slot to another without impacting the ability of the system to
recognize the entire logical disk and operate properly).
RAID Expansion
The ability to add a physical drive to a logical drive's total, increasing the
available capacity offered to the host. The operating system must also support the
ability to recognize additional space.
RAID Migration
The ability to convert a logical drive from one RAID protection type to another.
This is only supported on certain controllers (all PERCs), and can only occur when
the target RAID level provides equal or greater capacity than the existing RAID level.
For example, you can migrate from a four-drive RAID-6 to a four-drive RAID-5, but
not to a four-drive RAID-10.
A server or multiple servers can scale disk capacity by using directly attached
external storage enclosures (DAS). DAS scalability is dependent upon the storage
enclosure that is added. Entry level DAS is more scalable than internal storage but
is still limited. The more DAS devices in the environment, the more management
overhead.
A Storage Area Network (SAN) and Network Attached Storage (NAS) provide better
scalability, availability, and a single point of management for efficient storage
consolidation. In these environments, the RAID functionality and management are
removed from the host and handled by the SAN or NAS appliance. DAS, NAS, and SAN
are not exclusive in terms of providing a solution. These often work together,
usually in the form of managed tiering of data.
When comparing DAS, NAS and SAN, you can see the similarities and differences. DAS
communicates using a block-level protocol such as SCSI or ATA. SANs also
communicate with a block level protocol, but over a switch network. SCSI and ATA
do not exist in the network, except when they are encapsulated by a transport
protocol, such as Fibre Channel or iSCSI. NAS, on the other hand, does not perform
any encapsulation of a storage protocol over the network. NAS uses a standard
network file system (for example CIFS or NFS) for all communication with the host.
In this course, we focus on Entry-Level DAS.
DAS is simple, inexpensive, and scalable. Manageability can be challenging and host
connections limited. Examples of DAS are MD1000/2000 and 3000/3220.
DAS - Initiator
Initiator - Devices that originate I/O communication
The initiator initiates communication with target devices to either read or write
to a storage device (originates I/O communication).
DAS - Target
The interconnect can be considered to start at the port of the initiator, run
through the backplane and/or cable, and end at the target's port or connector.
In the graphic below, you can see the interconnect in the form of an internal
backplane. The drives are attached internally to the backplane, and the
interconnect consists of the drive connectors on the backplane, the backplane
itself, and the cables that run back to the controller.
The application talks to the data through the operating system file system.
The operating system talks to the initiator through the controller's device driver.
The initiator originates all communication to the target by packaging the data
within a block level protocol, such as SCSI, and transporting it over the
interconnect to the target through a transport specification, such as SAS. The
target receives the data, decodes the command, and then responds in reverse order.
Note: some applications bypass the file system and write directly to the disk
through a "raw" mapping. Even in these instances, the conversion to SCSI (or ATA)
commands still has to occur in the device driver.
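To tie the write path above together, here is a purely conceptual sketch
(illustrative Python; the class and method names are invented and do not correspond
to any real driver, file system, or SCSI stack) of application data being packaged
into a block-level command by the initiator and decoded by the target:

```python
# Conceptual sketch of the write path described above; not a real I/O stack.
class Target:
    """The device end: decodes block commands and acts on its media."""
    def __init__(self):
        self.blocks = {}
    def receive(self, command, lba, payload):
        if command == "WRITE":
            self.blocks[lba] = payload
        return "GOOD"                  # status returned back up the stack

class Initiator:
    """The controller end: packages requests as block-level commands."""
    def __init__(self, target):
        self.target = target           # direct call stands in for the SAS/backplane link
    def write(self, lba, payload):
        return self.target.receive("WRITE", lba, payload)

class FileSystem:
    """The OS end: maps files to blocks and calls the device driver."""
    def __init__(self, initiator):
        self.initiator = initiator
    def write_file(self, name, data):
        lba = abs(hash(name)) % 1024   # toy block mapping
        return self.initiator.write(lba, data)

fs = FileSystem(Initiator(Target()))   # application -> FS -> initiator -> target
print(fs.write_file("report.txt", b"quarterly numbers"))   # GOOD
```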
We review:
-Data Back up Applications
-Back up to Tape Solutions
Linear Tape Open (LTO) Technology
Tape Drive Hardware
Tape Media
-Back up to Disk Solutions
-Data Recovery
-Data Archival
Businesses not only need access to their data, but also must ensure that the data
is recoverable if production data goes off-line. They also need to keep historical
records to satisfy regulatory and business requirements.
Backup, Archive, and Recovery - Terminology
Backup
A collection of data stored on (removable) nonvolatile storage media for recovery
in case the original copy of the data is lost or becomes inaccessible.
Restore
The act of recreating production data (that is, restoring production data).
Note that this term should be taken within context, as restoring could be as simple
as a version of a document or record within a database, to the restoration of an
entire server or application function.
Recover
Usually synonymous with restore.
Depending on context, recover could imply more than data, especially when used in
the context of a "disaster recovery plan," where not only the data, but also
hardware may need to be recovered/replaced in order to restore a data center's full
functionality.
Archive
A collection of data (the data itself, possibly with associated metadata) in a
storage system whose primary purpose is the long-term preservation and retention of
that data.
Note, this data is usually considered cold and accessible in "near-time."
LTO
Linear Tape Open. An open, standard single-reel magnetic tape technology, developed
by IBM, HPE, and Quantum. Also called Ultrium or LTO Ultrium.
Tape Drive
A data storage device that reads and writes data on a magnetic tape.
Unlike a hard disk drive, which provides direct-access storage, a tape drive uses
the sequential-access storage method to access its data.
Backup Application
The backup application or appliance (purpose-built hardware with integrated backup
software) is the engine that facilitates data backup and recovery.
It can monitor a host in real time and copy all changes to a backup device, or
simply provide an agent that resides on the host to facilitate backing up
production data to tape or disk without impacting productivity.
Backup applications are outside the normal scope of a break/fix service. The field
technician must be familiar with how a backup is performed to validate any backup
hardware that may have been replaced.
Back Up to Disk
With the voluminous increase in the amount of data, the time for a typical
backup-to-tape process has increased. With the fastest current backup speeds of
160 MB/s to 180 MB/s, backups for large enterprise customers can take multiple
tapes and roughly 8 hours for 5 TB of data.
To help alleviate this, backup administrators now employ a staged strategy: back up
data to lower-tiered disk first; from these disks, the data goes to tape or to a
backup and archival company.
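As a rough check of the backup-window figures quoted above (assuming a sustained
transfer rate and ignoring tape changes and catalog overhead), a short calculation
reproduces the roughly eight-hour estimate:

```python
# Backup-window estimate: data volume divided by sustained throughput.
# Assumes decimal terabytes and ignores tape-change and catalog overhead.
def backup_hours(data_tb, throughput_mb_per_s):
    data_mb = data_tb * 1_000_000
    return data_mb / throughput_mb_per_s / 3600

print(round(backup_hours(5, 180), 1))   # ~7.7 hours at 180 MB/s
print(round(backup_hours(5, 160), 1))   # ~8.7 hours at 160 MB/s
```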
MD Storage
MD Storage is used when:
-One or two (using split mode) host applications
-Capacity-intensive applications
-Internal storage capacity is insufficient
-Capacity growth rate exceeds internal capacity specifications
NX Storage (NAS)
-Windows Storage Server 2012 R2-based NAS Appliance
-Uses Preconfigured Standard Dell Server Hardware
-For customers needing Medium to High NAS Capacity
-Expandable with DAS attachment or SAN Backend Attachment
-Supports common networked file system protocols (CIFS and NFS)
Removable Media
LTO Customer Requirements
-High-performing
-Standards-based
-High-capacity
-Integrated Encryption
RDX (RD1000 Media) Customer Requirements
-Lowest cost
-Essential capacity
-Not Complex
-Removable Media
PowerVault MD Storage
Dell EMC PowerVault MD DAS solutions are highly reliable and expandable.
PowerVault MD DAS has the following common characteristics:
-Support for mainstream applications
-Optimized performance for streaming applications
-Capacity Intensive and Expandable
-Available from 12-drive to 84-drive enclosures
-Ability to expand capacity by daisy chaining enclosures (see technical
specifications for supported configurations)
-All types of drives supported (SSD, 15k, 10k, 7.2k) and available for 2.5" and
3.5" form factor (FF)
MD1200/MD1220
This 2U JBOD operates at 6-Gbps SAS.
MD1400/MD1420
This 2U JBOD operates at 12-Gbps SAS.
MD 1280
This chassis weighs 137 lbs without any drives installed.
-The ME4 series consists of two 2U base arrays (ME4012 and ME4024), and a 5U base
array (ME4084). These systems support 12, 24, and 84 drives in the base,
respectively.
-Each array:
Is configurable with as much flash as needed
Supports the same multi-protocols
Includes a 12 Gb SAS back end
Ships with all-inclusive software
-The ME4 Series also includes 3 expansion enclosure options. The ME412 and ME424
are the 12 drive and 24 drive 2U options, and the ME484 is the 5U 84 drive
expansion option.
-Support for NX servers with Microsoft Storage File/Print server
Dell EMC entry-level NAS servers are powered by Microsoft's Windows Storage Server
(WSS). The latest releases are powered by WSS 2012 R2, though older NAS appliances
running WSS 2008 R2 may still be supported depending on warranty status.
Dell EMC Windows-based NAS appliances provide these common benefits:
-PowerEdge Server Hardware (rebranded to PowerVault)
Same FRU procedures
Same part numbers as the leveraged PowerEdge platform
BIOS identity updated with NX personality
-Windows-based Operating System
Same core operating system as Windows
License modifications enable unlimited connections
License modifications prevent operating system from being used as an
application server
Less expensive than standard Windows operating system
-Common Internet File System (CIFS) services
-Network File System (NFS) Services
WSS is built on the Microsoft Windows Server operating system. It provides file-
based shared storage for applications such as Microsoft Hyper-V, Microsoft SQL
Server, or any host that uses network file services.
Systems using WSS are typically referred to as Network Attached Storage (NAS)
Appliances.
Identity Update
The most important thing about the NX platform and the leveraged PE hardware is the
need for the BIOS to be updated with the proper personality.
Just like OEM-rebranding, this is considered Dell confidential, and the personality
flash file is not generally available to the public. This is only available to
certain internal Dell personnel and onsite service providers.
Because different generations may have differences, always refer to the product
reference guide for the steps in acquiring and updating the BIOS of an NX platform.
Below are the currently shipping WSS-based NAS Appliances. Open two or more of the
reference pages below and find the Field Service Information and the "Personality
Module" / BIOS update procedures.
(Hint: On older reference pages, use the search box to find personality update
procedure by searching "personality" or "rebrand".)
NX3300
Powered by WSS 2012 R2
NX3230
Powered by WSS 2012 R2
NX3200
Powered by WSS 2008 R2
NX430
Powered by WSS 2012 R2
NX400
Powered by WSS 2008 R2
NOTE: As new products are introduced, there will be an associated reference page
for the given product. Always search the reference pages for the latest
information.
Dell EMC PowerVault Tape and Removable Media
Except for the PowerVault 114X, LTO drives do not have component FRUs. They are the
entry-level LTO solutions.
Review the PowerVault 114X reference page, and identify the tape drive options
available and the total number of FRU procedures.
PowerVault 114X
LTO7 Tape Drive
PowerVault LTO6
PowerVault LTO5
PowerVault LTO4
NOTE: As new products are introduced, there will be an associated reference page
for the given product. Always search the reference pages for the latest
information.
RD1000
For customers who only require the ability to back up essential data, or who do not
have a high-capacity requirement, Dell EMC provides the RDX-based RD1000. It has
been sold in the rack-mounted 114X chassis.
There are no FRUs for the RD1000, other than the whole unit.