Avamar Data Store Hardware Installation - SRG

The document provides installation guidelines for the Avamar Data Store hardware, detailing its components, configurations, and networking requirements. It explains the differences between single-node and multi-node setups, including node types, capacities, and the importance of replication for data protection. Additionally, it outlines the necessary network connections for optimal performance and high availability in various configurations.


Welcome to Avamar Data Store Hardware Installation.

Copyright © 2017 Dell Inc. or its subsidiaries. All Rights Reserved. Dell, EMC, and other trademarks are trademarks of Dell Inc. or its subsidiaries.
Other trademarks may be the property of their respective owners. Published in the USA.

THE INFORMATION IN THIS PUBLICATION IS PROVIDED “AS IS.” DELL EMC MAKES NO REPRESENTATIONS OR WARRANTIES OF ANY
KIND WITH RESPECT TO THE INFORMATION IN THIS PUBLICATION, AND SPECIFICALLY DISCLAIMS IMPLIED WARRANTIES OF
MERCHANTABILITY OR FITNESS FOR A PARTICULAR PURPOSE.

Use, copying, and distribution of any DELL EMC software described in this publication requires an applicable software license. The trademarks, logos,
and service marks (collectively "Trademarks") appearing in this publication are the property of DELL EMC Corporation and other parties. Nothing
contained in this publication should be construed as granting any license or right to use any Trademark without the prior written permission of the party
that owns the Trademark.

AccessAnywhere Access Logix, AdvantEdge, AlphaStor, AppSync ApplicationXtender, ArchiveXtender, Atmos, Authentica, Authentic Problems,
Automated Resource Manager, AutoStart, AutoSwap, AVALONidm, Avamar, Aveksa, Bus-Tech, Captiva, Catalog Solution, C-Clip, Celerra, Celerra
Replicator, Centera, CenterStage, CentraStar, EMC CertTracker. CIO Connect, ClaimPack, ClaimsEditor, Claralert ,CLARiiON, ClientPak,
CloudArray, Codebook Correlation Technology, Common Information Model, Compuset, Compute Anywhere, Configuration Intelligence, Configuresoft,
Connectrix, Constellation Computing, CoprHD, EMC ControlCenter, CopyCross, CopyPoint, CX, DataBridge , Data Protection Suite. Data Protection
Advisor, DBClassify, DD Boost, Dantz, DatabaseXtender, Data Domain, Direct Matrix Architecture, DiskXtender, DiskXtender 2000, DLS ECO,
Document Sciences, Documentum, DR Anywhere, DSSD, ECS, elnput, E-Lab, Elastic Cloud Storage, EmailXaminer, EmailXtender , EMC Centera,
EMC ControlCenter, EMC LifeLine, EMCTV, Enginuity, EPFM. eRoom, Event Explorer, FAST, FarPoint, FirstPass, FLARE, FormWare, Geosynchrony,
Global File Virtualization, Graphic Visualization, Greenplum, HighRoad, HomeBase, Illuminator , InfoArchive, InfoMover, Infoscape, Infra, InputAccel,
InputAccel Express, Invista, Ionix, Isilon, ISIS,Kazeon, EMC LifeLine, Mainframe Appliance for Storage, Mainframe Data Library, Max Retriever, MCx,
MediaStor , Metro, MetroPoint, MirrorView, Mozy, Multi-Band Deduplication,Navisphere, Netstorage, NetWitness, NetWorker, EMC OnCourse,
OnRack, OpenScale, Petrocloud, PixTools, Powerlink, PowerPath, PowerSnap, ProSphere, ProtectEverywhere, ProtectPoint, EMC Proven, EMC
Proven Professional, QuickScan, RAPIDPath, EMC RecoverPoint, Rainfinity, RepliCare, RepliStor, ResourcePak, Retrospect, RSA, the RSA logo,
SafeLine, SAN Advisor, SAN Copy, SAN Manager, ScaleIO Smarts, Silver Trail, EMC Snap, SnapImage, SnapSure, SnapView, SourceOne, SRDF,
EMC Storage Administrator, StorageScope, SupportMate, SymmAPI, SymmEnabler, Symmetrix, Symmetrix DMX, Symmetrix VMAX, TimeFinder,
TwinStrata, UltraFlex, UltraPoint, UltraScale, Unisphere, Universal Data Consistency, Vblock, VCE. Velocity, Viewlets, ViPR, Virtual Matrix, Virtual
Matrix Architecture, Virtual Provisioning, Virtualize Everything, Compromise Nothing, Virtuent, VMAX, VMAXe, VNX, VNXe, Voyence, VPLEX, VSAM-
Assist, VSAM I/O PLUS, VSET, VSPEX, Watch4net, WebXtender, xPression, xPresso, Xtrem, XtremCache, XtremSF, XtremSW, XtremIO, YottaYotta,
Zero-Friction Enterprise Storage.

Revision Date: March 2017

Revision Number: MR-1WP-ADS.7.4.1

Copyright © 2017 Dell Inc. Avamar Data Store Hardware Installation 1


This course covers the knowledge necessary to install and configure EMC Avamar Data Store hardware.



This module focuses on the Avamar Data Store product, its main hardware components, and networking
configurations.



Avamar Data Store is the physical hardware edition of Avamar. It is a pre-packaged solution that includes
the Avamar software installed onto approved hardware.

The Avamar Data Store can be deployed in a number of different configurations: either a single-node configuration, where one node alone performs all Avamar functions, or a multi-node configuration, where a group of nodes works together as a single Avamar server. The nodes themselves are available in a number of different capacity options, which allow for a great deal of flexibility in the types of Avamar servers that can be built.



The node is the building block for the Avamar Data Store. Although there are a number of different types
of nodes, they share many of the same components, many of which are replaceable.

In the front of the node are the hard drives. The number of hard drives varies between node types, so you can quickly identify the type of node by counting the hard drives on the front. Hard drives are hot-swappable and can be easily replaced if there is a failure.

The back of the node contains two redundant power supplies. These can also be removed and replaced, if needed.

The higher-capacity node types, the M2400 and S2400, contain an SSD card toward the left side of the back of the node. It is located behind the SSD bay door and can also be replaced if needed.

Each node has two network ports on the I/O panel on the left. Additionally, each node has two SLICs, which provide four network ports each. These SLICs are also replaceable. The SLIC on the left, which is used for backup and replication traffic, has the option of using optical connections.

Inside the node are several internal components. Six fans are located toward the front and can be removed for replacement. Underneath the cooling shroud are the DIMMs and CPUs.



The Avamar single-node server consists of one node with both utility and storage components installed. It hosts the Administration Server, also known as the Management Console Server (MCS), which manages the Avamar server. It also hosts the gsan process, which processes backup data.

If a single node were to fail, its data would become inaccessible or lost. For this reason, single-node servers require replication, with two exceptions. The S2400, or Business Edition, single-node server does not require replication because it has RAID 6 protection. Also, a single-node server with an integrated Data Domain may be configured to back up Avamar checkpoint data to that Data Domain; a single node with this configuration does not require replication.



With the Gen4T version of the hardware, there are four options for a single-node Data Store: the M600, M1200, M2400, and S2400 “Business Edition”. The main difference between them is the number of 2 TB hard drives each contains, which yields three choices of licensable capacity: 2.0 TB, 3.9 TB, and 7.8 TB. You can easily see the number of drives on the front of the node. Both the M2400 and S2400 also contain a solid state drive.

The S2400 “Business Edition” is unique in that it uses RAID 6 protection instead of RAID 1, which allows it to survive a dual drive failure without the loss of data. The S2400 hardware can only be used in a single-node configuration.



A multi-node Avamar server contains two types of nodes: a utility node and multiple storage nodes.

The utility node is dedicated to scheduling and managing background Avamar server jobs. Its hostname and IP address serve as the identity of the Avamar server for access and client/server communication. The utility node uses its own node type, which has only two disks of 2 TB each. Because utility nodes are dedicated to running these essential services, they cannot be used to store backups.

Instead, backup data is load balanced across multiple storage nodes. Storage nodes use the exact same hardware as a single-node server (the M600, M1200, and M2400), just configured with different software. As with a single node, this allows each storage node to have a licensable capacity of 2.0 TB, 3.9 TB, or 7.8 TB. Note that the S2400 “Business Edition” node hardware cannot be used as a storage node.



A multi-node server consists of one utility node, and between 3 and 16 storage nodes. Storage nodes
must all be of the same capacity. Mixing different capacity types in a single Avamar server is not
supported. In addition, a spare storage node can optionally be added to a multi-node server. This allows
faster node replacement.
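The configuration rules above (3 to 16 storage nodes, all of the same capacity) can be sketched as a small shell check. `check_layout` is a hypothetical helper for illustration, not an Avamar tool:

```shell
# Hypothetical sanity check for a proposed multi-node layout.
# Arguments: the licensable capacity of each storage node, in TB.
check_layout() {
  local n=$# first=$1 cap
  if [ "$n" -lt 3 ] || [ "$n" -gt 16 ]; then
    echo "invalid: need 3-16 storage nodes (got $n)"; return 1
  fi
  for cap in "$@"; do
    # Mixing capacity types in one Avamar server is not supported.
    if [ "$cap" != "$first" ]; then
      echo "invalid: mixed capacities are not supported"; return 1
    fi
  done
  echo "valid: $n x $first TB storage nodes"
}

check_layout 3.9 3.9 3.9    # prints "valid: 3 x 3.9 TB storage nodes"
```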

Every multi-node server also has two internal network switches that provide communication between
nodes. These switches are redundant so that the failure of one does not stop the Avamar server.

When racking the server, the utility node is placed at the bottom, with the storage nodes placed from the
bottom to the top. The internal switches are placed at the top with an empty spot in between for the power
cords. Empty slots are left in the rack so that additional storage nodes can be added for future capacity
expansion.



The Avamar NDMP Accelerator is an optional node that is used as a bridge between an Avamar server and an NDMP-compliant NAS device. It is used to back up data from the NAS device. Data streams through the NDMP Accelerator, and no user data is stored on it. A single accelerator can support multiple NAS storage devices. The NDMP Accelerator uses the same hardware as a utility node.



In this activity, you must create a valid Avamar multi-node server with three storage nodes. Drag and drop
the components on the left into the proper positions in the rack on the right. Not all of the components will
be used. The activity will only complete when the correct placements are made.



There are a number of network connections that need to be made for the Avamar Data Store. At a minimum, each Avamar server needs to be connected to the customer’s network switch and, in the case of a multi-node server, to the internal Avamar network. Optionally, secondary connections to a second customer switch can be made to provide high availability.

Some customers like to keep replication and management traffic on separate networks. Instead of using the regular customer network connections, replication traffic can optionally use a separate connection, or two connections for high availability. Similarly, administration traffic can use yet another connection, or two for high availability.

Additionally, RMC connections can be made either with its own dedicated port, or by sharing a port with
backup traffic.

Fortunately, Avamar nodes have enough network ports to accommodate all of these connections. However, you must be sure that the network cables are connected to the correct ports.



Each Avamar node, no matter what its capacity, has two SLICs that provide four NICs each. The SLICs are referred to as “SLIC0” on the left and “SLIC1” on the right.

The ports on each SLIC are referred to as NIC0, NIC1, NIC2, and NIC3, from bottom to top.

Ports are often referred to by combining these two notations. For example, SLIC1NIC0 refers to the
bottom port in the right SLIC.

Each port is mapped to an internal name: SLIC0NIC0 is mapped to eth1, SLIC0NIC1 to eth2, and so on across both SLICs.
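Because the mapping is sequential, it reduces to simple arithmetic. The helper below is a hypothetical illustration of the scheme just described (eth0 is reserved for the shared RMC port on the rear I/O panel, which is covered later in this module):

```shell
# Hypothetical helper mirroring the documented naming scheme:
# SLIC0 NIC0-NIC3 map to eth1-eth4, SLIC1 NIC0-NIC3 map to eth5-eth8.
port_to_eth() {
  local slic=$1 nic=$2
  echo "eth$((1 + slic * 4 + nic))"
}

port_to_eth 1 0    # SLIC1NIC0, the bottom port of the right SLIC -> eth5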

The NICs on the rear I/O panel on the left are used for RMC traffic. The one on the left is for dedicated RMC connections, while the one on the right is for shared connections.



On a single-node server, the minimum connection requirement is to have one connection to the
customer’s network switch. This is done through the SLIC0NIC0 port. If High Availability is desired, the
SLIC0NIC1 port is connected to a second customer switch. These two ports are bonded together as
bond0.

If the customer desires to separate their replication traffic, this will be done over SLIC0NIC2. If high
availability is required for replication, use SLIC0NIC3.

If the customer desires to separate their management and administration traffic, this will be done over
SLIC1NIC0. If high availability is required for management and administration, use SLIC1NIC1.

Notice that all backup and replication traffic, no matter the configuration, uses SLIC0. This SLIC can optionally use optical ports for higher throughput.
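For orientation only: on SLES, an active/passive bond such as bond0 is described by an ifcfg file. The fragment below is a hand-written sketch with placeholder values, not the file the Avamar installation procedure actually generates:

```
# /etc/sysconfig/network/ifcfg-bond0 -- illustrative sketch only
STARTMODE='onboot'
BOOTPROTO='static'
IPADDR='192.0.2.10/24'                                # placeholder address
BONDING_MASTER='yes'
BONDING_MODULE_OPTS='mode=active-backup miimon=100'   # active/passive
BONDING_SLAVE0='eth1'                                 # SLIC0NIC0, primary backup port
BONDING_SLAVE1='eth2'                                 # SLIC0NIC1, high availability link
```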



For a multi-node server, many of the network connections use the same ports as in a single-node server. However, there are some differences.

First, all nodes need to be connected to both internal Avamar switches. These internal switches are used
for all internal traffic sent from one node to another. Each Avamar node is connected to both switches.
SLIC1NIC2 connects to internal switch A, on the bottom. SLIC1NIC3 connects to internal switch B, on the
top. An easy way to remember this is that the top NIC of SLIC1 on the node connects to the top switch.

As with the single-node server, SLIC0NIC0 is used to connect to the customer switch and SLIC0NIC1 can
connect to a second switch for high availability. However, every node must connect to the customer
network using these ports. Since each node has at least one connection to the customer switch, it is
important to ensure that the customer has enough ports available, especially when installing an Avamar
server with many nodes.

The customer connections are bonded together as bond0 and internal connections are bonded together as
bond1. Each bond has its own IP address. These bonds are on the OS only and are Active/Passive.

If the customer requires separate connections for replication and management traffic, connections are only
made on the utility node using the same ports as in the single-node configuration. SLIC0NIC2 and
SLIC0NIC3 are used for replication, and SLIC1NIC0 and SLIC1NIC1 are for management.



Multi-node ADS servers have two switches for internal networking: a primary switch A and a secondary switch B for failover. The utility node connects to port 1 on both switches. Storage nodes also connect to both switches, using ports 2 through 18, in the order that they appear in the rack, from bottom to top: storage node 1 at the bottom of the rack connects to port 2, storage node 2 just above it connects to port 3, and so on. Ports 21 and 22 are used as a crossover connection between the two switches. Port 24 is reserved as a service port and is used during the installation of the Avamar server. The internal switches do not connect to the customer network.



Remote Management Console (RMC) is a service in each node that provides tools to monitor,
troubleshoot, and potentially repair any node over the network. If used, the RMC connection must be
made to one of the two RMC ports on the rear I/O panel on each node. The port used depends on whether
RMC will use a dedicated port, or whether it will share a port with backup traffic.

If RMC is used, it typically uses a dedicated RMC port on each node. In this case, the RMC connection is
made to the port on the left. This port is dedicated to RMC and does not work for anything else.

However, some environments want to use RMC without using up additional ports. To do so, RMC can share a port with backup traffic. The RMC shared port on the right is enabled for RMC and is also mapped internally to eth0 for regular network use. To connect the server for shared RMC, the backup connection is moved from SLIC0NIC0 to the RMC shared port on all nodes; both RMC and backup traffic will use this port. Also, if high availability is used, the secondary connection on all nodes is made to SLIC0NIC0 instead. Since the backup connections now use eth0 and eth1, rather than eth1 and eth2, the bonding configuration will have to be changed as well.

The internal connections, replication, and management connections, if used, remain the same whether the
RMC port is shared or dedicated.



In this activity, you must connect the network cables for a multi-node Avamar server. Be sure to connect
the cables to the correct ports on the nodes and the internal switches.

Drag and drop the ends of each wire on the left into the proper ports on the right. The activity will only
proceed when the correct connections are made.



Shown here are the types of manuals that are listed on the Dell EMC support site for Avamar Data Store.



SolVe Desktop is a program that generates procedures for various support tasks for Dell EMC products.
For Avamar, SolVe generates procedures for installation, upgrades, FRUs and more. When launching
SolVe, you are asked to provide various information relating to the task, such as software versions,
customer requirements, and installation options. A customized procedure is then created based on this
information.

It is important to always generate a new procedure for every installation. SolVe Desktop is updated
periodically and will have the most relevant procedures.



This module focused on the Avamar Data Store product, its main hardware components, and networking
configurations.



This module focuses on hardware installation of the Avamar Data Store.



The goal of the Avamar Data Store Hardware Installation process is to have an Avamar server that is
racked, cabled, powered on with network connectivity, and loaded with Avamar software installation files.
At the end of this procedure, the server is ready for the Avamar software to be installed and configured.

This is an overview of the Avamar Data Store Hardware Installation process. Each of these steps will be
explained in more detail.

During the hardware installation, the nodes are first placed in the rack and, if it is a multi-node server, you will perform the internal network cabling. Then, you will configure initial network settings and copy the Avamar software packages onto the utility node or single node. If the server is a multi-node, you will test the internal switches and set the IP addresses and other basic networking for each storage node. If advanced networking settings are used, you will run dpnnetutil to configure them. You will also install a few system tools, configure the power button, and update firmware. You will configure RMC on each node as required by the customer. Once the server is installed, you will connect the server to the customer network.

You should always use new procedures from SolVe Desktop to perform hardware installation. SolVe
Desktop is updated much more frequently than this course. The content presented in this course should
only be used as a guide.



All Avamar nodes are shipped to the customer site with the SLES 11 OS already installed. This operating
system is installed from a kickstart installation package specifically designed for Avamar. After the node is
kickstarted, it will have the operating system installed along with some basic Avamar directories and files
such as the /usr/local/avamar directory. The root and admin user accounts are also configured by the kickstart process; both have a default password of changeme. The disks are also partitioned and mounted to the proper directories.

Occasionally, a node may have to be re-kickstarted. This may be required if there are configuration errors on the node. To re-kickstart a node, always follow a new procedure from SolVe Desktop. The procedure tells you to download the Avamar kickstart ISO file from the Avamar FTP site. Burn this ISO file to a DVD, insert it into an external USB DVD drive, connect the drive to the node, and boot the node. You need to edit the BIOS settings to boot from the external DVD drive. Once the node is booting from the DVD drive, select the hardware configuration of the node from the menu. The OS will be installed in a few minutes.



First, nodes need to be placed on the rack. Typically, nodes are installed onto an EMC Titan rack, but an
existing customer rack may also be used. In either case, follow the rack instructions to properly secure the
nodes into the rack. For a multi-node server, be sure to place the nodes and switches in the correct
positions: utility node at the bottom with storage nodes above. The switches are located at the top with a
1U space between them. Nodes are heavy and should always be installed by two people.



Connect both power supplies on each node to the power distribution units, or PDUs. The top power supply on each node connects to the PDU on the left, and the bottom power supply on each node connects to the PDU on the right. Combine and tie off the power cords with a Velcro strap. Depending on whether the power is single-phase, three-phase delta, or three-phase wye, the exact receptacle used on the PDU for each node will vary. See the output of SolVe Desktop for diagrams.



For a multi-node server, ensure that the network cabling between the nodes and the internal switches is properly connected. You will use two network cable bundles, one for each switch. Connect the first bundle to the lower switch, switch A. Fasten the cable bundle to the left wall of the rack so that the wall, not the switch connectors, bears the weight of the cables. Attach the individual cables to the appropriate node’s SLIC1NIC2 port. Connect the second cable bundle similarly to switch B and fasten it to the right wall of the rack. Connect these cables to the appropriate node’s SLIC1NIC3 port.



Now that the nodes are racked and cabled properly, power on each node by pressing the reset/power button on the rear I/O panel with a small tool or paperclip. The order in which the nodes are powered on is not important since they do not yet have any Avamar software installed. They are just individual nodes with a SLES operating system on them.

The installation will begin on the utility node, or the single node. The nodes do not have any local ports for a keyboard or monitor, so you will have to use the serial port on the rear I/O panel. Connect your laptop to the serial port and use PuTTY to connect. You will have to configure PuTTY properly for serial connections: set the baud rate, data bits, stop bits, parity, and flow control. Also, configure PuTTY to use Linux keyboard sequences.

Log into the node as user root with a password of changeme.



By default, the eth1 and eth2 network ports are bonded together for the backup network as bond0.
However, if the customer chooses to use shared RMC ports, the backup network uses eth0 and eth1
instead. In this case, the bonding configuration file needs to be modified.

Edit the bondconf.xml file in the Avamar var directory shown. Ensure that the entry for the backup network,
bond0, includes the proper port names. If shared RMC is used, change the entries to eth0 and eth1.

You may need to modify the bondconf.xml file on storage nodes as well. You may either modify them
manually or copy the bondconf.xml file from the utility node once networking has been configured on all
nodes.
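As a sketch of that edit, the substitution can be done with sed. The file contents below are a stand-in, not the real bondconf.xml layout, and the path is omitted deliberately; always follow the SolVe procedure for the authoritative change:

```shell
# Stand-in demo file; the real bondconf.xml lives in the Avamar var
# directory and has its own schema -- this only demonstrates the edit.
mkdir -p /tmp/bond-demo && cd /tmp/bond-demo
printf 'bond0 members: eth1 eth2\n' > bondconf.xml

# Shift the bond0 members for shared RMC: eth1 -> eth0, then eth2 -> eth1.
# The expressions run left to right, so the renamed port is not re-matched.
sed -i 's/eth1/eth0/g; s/eth2/eth1/g' bondconf.xml

cat bondconf.xml    # prints "bond0 members: eth0 eth1"
```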



On the utility or single node, use the YaST configuration utility to configure basic networking for the primary backup network port. Which port that is depends on whether dedicated or shared RMC is being used: for dedicated RMC configurations, the primary backup port is eth1; for shared RMC configurations, it is eth0.

Run the yast command. From the main menu select Network Devices and Network Settings. Then, select
the primary backup port, and edit it. Here you will provide an IP address, subnet mask, and hostname. YaST is also used to set DNS parameters and gateway information.



For the full installation to take place, the installation files and their md5sum files must be transferred to the
utility node, or the single node server. Place these files into the directory shown. The files include the main
installation AVP package, operating system security updates, and the Avamar bundle zip file. The Avamar
bundle zip file includes a number of different installation files including the avinstaller-bootstrap file,
dpnnetutil, and the network test script.

After the transfer completes, run an md5sum check on each file to ensure that it was not corrupted. Also, unzip the Avamar bundle zip file.
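The verification step might look like the following self-contained sketch; the filenames here are stand-ins for the actual packages:

```shell
# Self-contained demo of the md5sum check; "package.avp" stands in
# for the real installation files transferred to the node.
mkdir -p /tmp/ads-demo && cd /tmp/ads-demo
printf 'demo payload' > package.avp
md5sum package.avp > package.avp.md5   # in practice, supplied with the download

# Verify the file against its checksum; prints "package.avp: OK" if intact.
md5sum -c package.avp.md5
```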



One of the files that was extracted from the Avamar Bundle is the network test script. This script must be
run on multi-node installations. It will ensure that all internal network connections have been made to the
right ports on the right switches.

Run the command and examine the output. A successful test will report that no errors were found and it
will also report the correct number of storage nodes, including the spare if one is present.

A failed test will report errors. For example, this output shows that 11 nodes are connected to switch A, but
only 10 are connected to switch B. Most likely, one of the nodes was not connected to switch B.

The network test also configures the switches with the IP addresses shown.



In a multi-node server, you must configure the storage node networking. If the customer is using shared
RMC, the bondconf.xml file needs to be modified to reflect the changed port assignments for the backup
network. bond0 needs to include eth0 and eth1, instead of eth1 and eth2. Modify the file manually, or copy
the already modified file from the utility node.

Also, use the YaST utility to set the IP address, subnet mask, hostname, DNS parameters, and gateway. Be sure to configure the correct primary backup port: for dedicated RMC, configure eth1; for shared RMC, configure eth0.



Some advanced network configurations require you to run the dpnnetutil interactive utility. If the customer needs support for VLAN interfaces, wants to use Network Address Translation, or wants to use custom hostnames, you must use dpnnetutil to configure them. Also, if the default Avamar internal subnet of 192.168.255.0/24 conflicts with the customer network, dpnnetutil can configure a new subnet. If none of these conditions apply, you do not need to run dpnnetutil.

Dpnnetutil is included in the Avamar Bundle zip file. To use it, install the utility from its .run file, and run it
using the dpnnetutil command. Dpnnetutil will prompt for various networking information including the
internal subnet, NAT, VLAN, and hostnames.



There are a few system tools that do not come pre-installed with the Gen4T operating system. Install
these tools on each node by uploading the system tools zip file to the node, extracting it, and running its
install script.

You must also configure the power/reset button behavior on each node. Enter the ipmitool command as
shown. This will configure the button to reset the node if it is pressed for less than 10 seconds, and power
off the node if it is held for more than 10 seconds.

Also, you must update each node’s firmware to the latest version. Firmware updates must be placed on a
hidden FAT32 partition called “/firmware”. Upload the firmware zip file, mount the hidden firmware
partition, and extract the contents of the zip file into that partition. Reboot the node to apply the changes.
This must be done for all nodes.



To configure RMC, you must enable access to it, configure an IP address, subnet mask, and gateway for the RMM port, and set a password. This is all done with the ipmitool command. Within this command, each RMC port is referred to by a channel number: channel 1 refers to the shared RMC port, while channel 4 refers to the dedicated RMC port. Depending on whether dedicated or shared RMC is used, you will set an IP address for one port and set the other port’s IP address and default gateway to 0.0.0.0, which disables that port for RMC use. When configuring the password, note that the default root password is “calvin”.



At this point, the Avamar server hardware is fully installed and can be connected to the customer’s network switch. Connect each node to the customer network using the backup network port. If high availability is used, connect the secondary backup port to the customer’s secondary switch. Remember that the port used will depend on whether shared or dedicated RMC is in use.

Also, if separate replication and management networks are used, connect those using the proper ports.
Connect the High Availability replication and management networks as well, if they are being used.

Finally, establish a serial connection to each node, and verify that they can ping other devices on the
customer network.



The EMC Install Base Group must be notified of the Avamar installation effort. This is required to ensure
that the CS/Oracle database is up to date for each customer. The notification is done through the Business
Services Portal website. While completing the form, you will need to submit node serial numbers. These
are located on a tag that pulls out on the back of the node, on the right just below the power supplies.



This module focused on hardware installation of the Avamar Data Store.



This course covered the knowledge necessary to install and configure EMC Avamar Data Store hardware.

