Avamar Data Store Hardware Installation - SRG
Copyright © 2017 Dell Inc. or its subsidiaries. All Rights Reserved. Dell, EMC, and other trademarks are trademarks of Dell Inc. or its subsidiaries.
Other trademarks may be the property of their respective owners. Published in the USA.
THE INFORMATION IN THIS PUBLICATION IS PROVIDED “AS IS.” DELL EMC MAKES NO REPRESENTATIONS OR WARRANTIES OF ANY
KIND WITH RESPECT TO THE INFORMATION IN THIS PUBLICATION, AND SPECIFICALLY DISCLAIMS IMPLIED WARRANTIES OF
MERCHANTABILITY OR FITNESS FOR A PARTICULAR PURPOSE.
Use, copying, and distribution of any DELL EMC software described in this publication requires an applicable software license. The trademarks, logos,
and service marks (collectively "Trademarks") appearing in this publication are the property of DELL EMC Corporation and other parties. Nothing
contained in this publication should be construed as granting any license or right to use any Trademark without the prior written permission of the party
that owns the Trademark.
AccessAnywhere, Access Logix, AdvantEdge, AlphaStor, AppSync, ApplicationXtender, ArchiveXtender, Atmos, Authentica, Authentic Problems, Automated Resource Manager, AutoStart, AutoSwap, AVALONidm, Avamar, Aveksa, Bus-Tech, Captiva, Catalog Solution, C-Clip, Celerra, Celerra Replicator, Centera, CenterStage, CentraStar, EMC CertTracker, CIO Connect, ClaimPack, ClaimsEditor, Claralert, CLARiiON, ClientPak, CloudArray, Codebook Correlation Technology, Common Information Model, Compuset, Compute Anywhere, Configuration Intelligence, Configuresoft, Connectrix, Constellation Computing, CoprHD, EMC ControlCenter, CopyCross, CopyPoint, CX, DataBridge, Data Protection Suite, Data Protection Advisor, DBClassify, DD Boost, Dantz, DatabaseXtender, Data Domain, Direct Matrix Architecture, DiskXtender, DiskXtender 2000, DLS ECO, Document Sciences, Documentum, DR Anywhere, DSSD, ECS, eInput, E-Lab, Elastic Cloud Storage, EmailXaminer, EmailXtender, EMC Centera, EMC ControlCenter, EMC LifeLine, EMCTV, Enginuity, EPFM, eRoom, Event Explorer, FAST, FarPoint, FirstPass, FLARE, FormWare, Geosynchrony, Global File Virtualization, Graphic Visualization, Greenplum, HighRoad, HomeBase, Illuminator, InfoArchive, InfoMover, Infoscape, Infra, InputAccel, InputAccel Express, Invista, Ionix, Isilon, ISIS, Kazeon, EMC LifeLine, Mainframe Appliance for Storage, Mainframe Data Library, Max Retriever, MCx, MediaStor, Metro, MetroPoint, MirrorView, Mozy, Multi-Band Deduplication, Navisphere, Netstorage, NetWitness, NetWorker, EMC OnCourse, OnRack, OpenScale, Petrocloud, PixTools, Powerlink, PowerPath, PowerSnap, ProSphere, ProtectEverywhere, ProtectPoint, EMC Proven, EMC Proven Professional, QuickScan, RAPIDPath, EMC RecoverPoint, Rainfinity, RepliCare, RepliStor, ResourcePak, Retrospect, RSA, the RSA logo, SafeLine, SAN Advisor, SAN Copy, SAN Manager, ScaleIO, Smarts, Silver Trail, EMC Snap, SnapImage, SnapSure, SnapView, SourceOne, SRDF, EMC Storage Administrator, StorageScope, SupportMate, SymmAPI, SymmEnabler, Symmetrix, Symmetrix DMX, Symmetrix VMAX, TimeFinder, TwinStrata, UltraFlex, UltraPoint, UltraScale, Unisphere, Universal Data Consistency, Vblock, VCE, Velocity, Viewlets, ViPR, Virtual Matrix, Virtual Matrix Architecture, Virtual Provisioning, Virtualize Everything, Compromise Nothing, Virtuent, VMAX, VMAXe, VNX, VNXe, Voyence, VPLEX, VSAM-Assist, VSAM I/O PLUS, VSET, VSPEX, Watch4net, WebXtender, xPression, xPresso, Xtrem, XtremCache, XtremSF, XtremSW, XtremIO, YottaYotta, Zero-Friction Enterprise Storage.
The Avamar Data Store can be deployed in a number of different configurations: either a single-node configuration, where one node alone performs all Avamar functions, or a multi-node configuration, where a group of nodes works together as a single Avamar server. The nodes themselves are available in a number of different capacity options, which allows for a great deal of flexibility in the types of Avamar servers that can be built.
The hard drives are located at the front of the node. The number of hard drives varies by node type, so you can quickly identify the type of node by counting the drives on the front. Hard drives are hot-swappable and can be easily replaced if one fails.
The back of the node contains two redundant power supplies. These can also be removed and replaced if needed.
The higher-capacity node types, the M2400 and S2400, contain an SSD card toward the left side of the back of the node. It is located behind the SSD bay door and can also be replaced if needed.
Each node has two network ports on the I/O panel on the left. Additionally, each node has two SLICs, each of which provides four network ports. These SLICs are also replaceable. The SLIC on the left, which is used for backup and replication traffic, can optionally use optical connections.
Inside the node are several internal components. Six fans are located toward the front and can be removed for replacement. Underneath the cooling shroud are the DIMMs and CPUs.
If a single node were to fail, its data would become inaccessible or lost. For this reason, single-node servers require replication, with two exceptions. The S2400, or Business Edition, single-node server, which has RAID 6 protection, does not require replication. Also, a single-node server with an integrated Data Domain may be configured to back up Avamar checkpoint data to that Data Domain. A single-node server with this configuration does not require replication.
The S2400 “Business Edition” is unique in that it uses RAID 6 protection instead of RAID 1. This allows the node to tolerate a dual-drive failure without data loss. The S2400 hardware can only be used in a single-node configuration.
The utility node is dedicated to scheduling and managing background Avamar server jobs. The hostname and IP address of the utility node serve as the identity of the Avamar server for access and client/server communication. The utility node uses its own node type, which has only two disks of 2 TB each. Because utility nodes are dedicated to running these essential services, they cannot be used to store backups.
Instead, backup data is load balanced across multiple storage nodes. Storage nodes use the same hardware as a single-node server: the M600, M1200, and M2400. The same hardware is used, just configured with different software. As with a single-node server, this allows each node to have a licensable capacity of 2.0 TB, 3.9 TB, or 7.8 TB. Note that the S2400 “Business Edition” node hardware cannot be used as a storage node.
Every multi-node server also has two internal network switches that provide communication between
nodes. These switches are redundant so that the failure of one does not stop the Avamar server.
When racking the server, the utility node is placed at the bottom, and the storage nodes are stacked above it from the bottom up. The internal switches are placed at the top, with an empty slot in between for the power cords. Empty slots are left in the rack so that additional storage nodes can be added for future capacity expansion.
Some customers like to keep replication and management traffic on separate networks. Instead of using
the regular customer network connections, replication traffic can optionally use a separate connection, or
two connections for high availability. Similarly, administration traffic can use still another connection, or
two for high availability.
Additionally, RMC connections can be made either through a dedicated port or by sharing a port with backup traffic.
Fortunately, Avamar nodes have a lot of network ports that can accommodate all these connections.
However, you must be sure that network cables are connected to the correct ports.
The ports on each SLIC are referred to as NIC0, NIC1, NIC2, and NIC3, from bottom to top.
Ports are often referred to by combining the SLIC and NIC designations. For example, SLIC1NIC0 refers to the bottom port in the right SLIC.
Each port is mapped to an internal name. SLIC0NIC0 is mapped to eth1, SLIC0NIC1 is mapped to eth2, and so on across both SLICs.
The NICs on the rear I/O panel on the left are used for RMC traffic. The one on the left is for dedicated RMC connections, while the one on the right is for shared RMC.
If the customer desires to separate their replication traffic, this will be done over SLIC0NIC2. If high
availability is required for replication, use SLIC0NIC3.
If the customer desires to separate their management and administration traffic, this will be done over
SLIC1NIC0. If high availability is required for management and administration, use SLIC1NIC1.
Notice that all backup and replication traffic, no matter what the configuration, uses SLIC0. This SLIC can optionally use optical ports for higher throughput.
First, all nodes need to be connected to both internal Avamar switches. These internal switches are used
for all internal traffic sent from one node to another. Each Avamar node is connected to both switches.
SLIC1NIC2 connects to internal switch A, on the bottom. SLIC1NIC3 connects to internal switch B, on the
top. An easy way to remember this is that the top NIC of SLIC1 on the node connects to the top switch.
As with the single-node server, SLIC0NIC0 is used to connect to the customer switch and SLIC0NIC1 can
connect to a second switch for high availability. However, every node must connect to the customer
network using these ports. Since each node has at least one connection to the customer switch, it is
important to ensure that the customer has enough ports available, especially when installing an Avamar
server with many nodes.
The customer connections are bonded together as bond0, and the internal connections are bonded together as bond1. Each bond has its own IP address. These bonds are configured at the OS level only and operate in active/passive mode.
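As an illustration of what an active/passive bond might look like at the OS level, here is a minimal SLES-style sketch of an ifcfg file for bond0, assuming the default dedicated-RMC layout (eth1 primary, eth2 standby) and a placeholder address. The actual files are created by the installation procedure; treat this only as a reading aid.

    # /etc/sysconfig/network/ifcfg-bond0 -- illustrative sketch only
    # (placeholder IP; real values come from the SolVe-generated procedure)
    STARTMODE='auto'
    BOOTPROTO='static'
    IPADDR='192.168.100.10/24'          # placeholder customer-network address
    BONDING_MASTER='yes'
    BONDING_MODULE_OPTS='mode=active-backup miimon=100'   # active/passive bonding
    BONDING_SLAVE0='eth1'               # SLIC0NIC0 - primary backup port
    BONDING_SLAVE1='eth2'               # SLIC0NIC1 - high-availability port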
If the customer requires separate connections for replication and management traffic, connections are only
made on the utility node using the same ports as in the single-node configuration. SLIC0NIC2 and
SLIC0NIC3 are used for replication, and SLIC1NIC0 and SLIC1NIC1 are for management.
If RMC is used, it typically uses a dedicated RMC port on each node. In this case, the RMC connection is
made to the port on the left. This port is dedicated to RMC and does not work for anything else.
However, some environments want to use RMC without using up additional ports. To do so, RMC can share
a port with backup traffic. The RMC shared port on the right is enabled for RMC and also mapped
internally to eth0 for regular network use. To connect the server for shared RMC, the backup connection is
moved from SLIC0NIC0 to the RMC shared port. This is done for all nodes. Both RMC and backup traffic
will use this port. Also, if High Availability is used, the secondary connection on all nodes is made to
SLIC0NIC0 instead. Since the backup connections now use eth0 and eth1, rather than eth1 and eth2, the
bonding configuration will have to be changed as well.
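Continuing the illustrative sketch above, with shared RMC the bond0 member interfaces shift down by one; assuming the same SLES-style file, only the slave entries change:

    # Illustrative bond0 slave entries when the shared RMC port carries backup traffic.
    # eth0 is the shared RMC port; eth1 is SLIC0NIC0 (now the standby connection).
    BONDING_SLAVE0='eth0'
    BONDING_SLAVE1='eth1'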
The internal connections, replication, and management connections, if used, remain the same whether the
RMC port is shared or dedicated.
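To summarize the cabling discussion, a consolidated sketch of the port-to-interface roles is shown below. The eth3 through eth8 names are an extrapolation of the "SLIC0NIC0 is eth1, SLIC0NIC1 is eth2, and so on" pattern and should be confirmed against the SolVe procedure for your node generation.

    # Assumed SLIC-to-interface mapping (eth3-eth8 extrapolated; confirm in SolVe).
    # SLIC0NIC0 -> eth1   backup (primary)
    # SLIC0NIC1 -> eth2   backup (high availability)
    # SLIC0NIC2 -> eth3   replication (primary)
    # SLIC0NIC3 -> eth4   replication (high availability)
    # SLIC1NIC0 -> eth5   management (primary)
    # SLIC1NIC1 -> eth6   management (high availability)
    # SLIC1NIC2 -> eth7   internal switch A
    # SLIC1NIC3 -> eth8   internal switch B
    # Rear I/O shared RMC port -> eth0
    ip addr show    # list the interfaces present on the node to verify the names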
Drag and drop the ends of each wire on the left into the proper ports on the right. The activity will only
proceed when the correct connections are made.
It is important to always generate a new procedure for every installation. SolVe Desktop is updated
periodically and will have the most relevant procedures.
This is an overview of the Avamar Data Store Hardware Installation process. Each of these steps will be
explained in more detail.
During the hardware installation, the nodes are first placed on the rack and, if it’s a multi-node server, you
will perform internal network cabling. Then, you will configure initial network settings and copy Avamar
software packages onto the server utility or single node. If the server is a multi-node, you will test the
internal switches and set the IP addresses and other basic networking for each storage node. If advanced
networking settings are used, you will run dpnnetutil to configure them. You will also install a few system
tools, configure the power button, and update firmware. You will configure RMC on each node as required
by the customer. Once the server is installed, you will connect it to the customer network.
You should always use new procedures from SolVe Desktop to perform hardware installation. SolVe
Desktop is updated much more frequently than this course. The content presented in this course should
only be used as a guide.
Occasionally, a node may have to be re-kickstarted. This may be required if there are configuration errors
on the node. To re-kickstart a node, always follow a new procedure from SolVe Desktop. The procedure
tells you to download the Avamar kickstart ISO file from the Avamar FTP site. Burn this ISO file to a DVD,
insert it into an external USB DVD drive, connect the drive to the node, and boot the node. You need to edit
the BIOS settings to boot from the external DVD drive. Once the node is booting from the DVD drive,
select the hardware configuration of the node from the menu. The OS will be installed in a few minutes.
The installation will begin on the utility node, or the single node. The nodes do not have any local ports for
a keyboard or monitor, so you will have to use the serial port on the rear I/O panel. Connect your laptop to
the serial port, and use PuTTY to connect. You will have to configure PuTTY properly for serial
connections. Set the baud rate, data bits, stop bits, parity, and flow control. Also, configure PuTTY to use
Linux keyboard sequences.
Edit the bondconf.xml file in the Avamar var directory shown. Ensure that the entry for the backup network,
bond0, includes the proper port names. If shared RMC is used, change the entries to eth0 and eth1.
You may need to modify the bondconf.xml file on storage nodes as well. You may either modify them
manually or copy the bondconf.xml file from the utility node once networking has been configured on all
nodes.
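As a sketch of the copy approach, assuming the file lives under /usr/local/avamar/var (the usual Avamar var directory) and using placeholder storage-node addresses:

    # Illustrative only; the path and node addresses are placeholders.
    # Push the utility node's bondconf.xml to each storage node.
    for node in 192.168.255.2 192.168.255.3 192.168.255.4; do
        scp /usr/local/avamar/var/bondconf.xml root@${node}:/usr/local/avamar/var/
    done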
Run the yast command. From the main menu, select Network Devices and then Network Settings. Then, select the primary backup port and edit it. Here you will provide an IP address, subnet mask, and hostname. YaST is also used to set DNS parameters and gateway information.
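YaST is menu-driven, so there is no single command to capture here, but the module can be launched directly and the results verified afterward with standard commands (shown below as a convenience, not as part of the official procedure):

    yast lan                 # opens Network Devices > Network Settings directly
    # After saving, confirm the settings took effect:
    ip addr show             # IP address and subnet mask on the configured port
    hostname -f              # fully qualified hostname
    cat /etc/resolv.conf     # DNS servers and search domains
    ip route show            # default gateway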
After the transfer completes, run an md5sum check on each file to ensure that none were corrupted. Also, unzip the Avamar bundle zip file.
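For example, assuming the packages were copied to a staging directory (the directory and file names below are placeholders), the check and extraction might look like this, with each md5sum value compared against the published checksums:

    cd /tmp/avamar-packages        # hypothetical staging directory
    md5sum *                       # compare each value against the published checksum
    unzip AvamarBundle-*.zip       # placeholder name for the Avamar bundle zip file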
Run the command and examine the output. A successful test will report that no errors were found and it
will also report the correct number of storage nodes, including the spare if one is present.
A failed test will report errors. For example, this output shows that 11 nodes are connected to switch A, but
only 10 are connected to switch B. Most likely, one of the nodes was not connected to switch B.
The network test also configures the switches with the IP addresses shown.
Also, use the yast utility to set the IP Address, subnet mask, hostname, DNS parameters, and gateway.
Be sure to configure the correct primary backup port. For dedicated RMC, configure eth1. For shared
RMC, configure eth0.
Dpnnetutil is included in the Avamar Bundle zip file. To use it, install the utility from its .run file, and run it
using the dpnnetutil command. Dpnnetutil will prompt for various networking information including the
internal subnet, NAT, VLAN, and hostnames.
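A minimal sketch of the install-and-run sequence, assuming the dpnnetutil .run file has been extracted from the bundle into the current directory (the exact file name varies by release):

    # File name is a placeholder; use the actual .run file from the Avamar bundle.
    chmod +x dpnnetutil-*.run
    ./dpnnetutil-*.run       # installs the utility
    dpnnetutil               # prompts for internal subnet, NAT, VLAN, and hostnames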
You must also configure the power/reset button behavior on each node. Enter the ipmitool command as
shown. This will configure the button to reset the node if it is pressed for less than 10 seconds, and power
off the node if it is held for more than 10 seconds.
Also, you must update each node’s firmware to the latest version. Firmware updates must be placed on a
hidden FAT32 partition called “/firmware”. Upload the firmware zip file, mount the hidden firmware
partition, and extract the contents of the zip file into that partition. Reboot the node to apply the changes.
This must be done for all nodes.
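As a rough sketch of that sequence, with the partition device and zip file name as placeholders (the SolVe procedure identifies the actual device and package):

    # Placeholders: /dev/sdX1 is the hidden FAT32 firmware partition,
    # firmware_update.zip is the uploaded firmware package.
    mkdir -p /firmware
    mount /dev/sdX1 /firmware                     # mount the hidden firmware partition
    unzip /tmp/firmware_update.zip -d /firmware   # extract the update into it
    umount /firmware
    reboot                                        # firmware is applied during the reboot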
Also, if separate replication and management networks are used, connect those using the proper ports.
Connect the High Availability replication and management networks as well, if they are being used.
Finally, establish a serial connection to each node and verify that it can ping other devices on the
customer network.
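For example, from the serial session on each node, reachability can be verified with simple ping tests (the addresses and hostname below are placeholders for the customer's gateway, DNS server, and a named host):

    ping -c 4 192.168.100.1               # customer network gateway (placeholder)
    ping -c 4 192.168.100.53              # customer DNS server (placeholder)
    ping -c 4 backup-host.example.com     # a host by name, to confirm DNS resolution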