
Hitachi Unified Compute Platform 1000 for VMware EVO:RAIL Installation and Reference Manual

Hitachi Unified Compute Platform

FASTFIND LINKS
Contents
Product Version
Getting Help

MK-92UCP077-02
© 2016 Hitachi Data Systems Corporation. All rights reserved.

No part of this publication may be reproduced or transmitted in any form or by any means, electronic or
mechanical, including photocopying and recording, or stored in a database or retrieval system for any purpose
without the express written permission of Hitachi, Ltd.

Hitachi Data Systems reserves the right to make changes to this document at any time without notice and
assumes no responsibility for its use. This document contains the most current information available at the
time of publication. When new or revised information becomes available, this entire document will be updated
and distributed to all registered users.

Some of the features described in this document might not be currently available. Refer to the most recent product announcement for information about feature and product availability, or contact Hitachi Data Systems Corporation at https://portal.hds.com.

Notice: Hitachi Data Systems products and services can be ordered only under the terms and conditions of
the applicable Hitachi Data Systems Corporation agreements. The use of Hitachi Data Systems products is
governed by the terms of your agreements with Hitachi Data Systems Corporation.

Notice on Export Controls. The technical data and technology inherent in this Document may be subject to
U.S. export control laws, including the U.S. Export Administration Act and its associated regulations, and may
be subject to export or import regulations in other countries. Reader agrees to comply strictly with all such
regulations and acknowledges that Reader has the responsibility to obtain licenses to export, re-export, or
import the Document and any Compliant Products.

Hitachi is a registered trademark of Hitachi, Ltd., in the United States and other countries. Hitachi Data
Systems is a registered trademark and service mark of Hitachi, Ltd., in the United States and other countries.

Microsoft Internet Explorer, Active Directory, Windows Server, and Windows are trademarks or registered
trademarks of Microsoft Corporation.

All other trademarks, service marks, and company names in this document or website are properties of their
respective owners.

Contents

Preface .................................................................................................. vii


Intended audience and qualifications .................................................................. viii
About this document ......................................................................................... viii
Document conventions ...................................................................................... viii
Accessing product documentation ......................................................................... ix
Getting help ........................................................................................................ ix
Comments .......................................................................................................... ix

Prerequisites and Checklist .................................................................... 2-1


Hitachi Unified Compute Platform 1000 for VMware EVO:RAIL Setup Checklist .......2-2

The Hitachi Unified Compute Platform 1000 for VMware EVO:RAIL Appliance ............ 3-1
Hardware .........................................................................................................3-2
Software ..........................................................................................................3-5
Networking .......................................................................................................3-5
EVO:RAIL Setup Workflow .................................................................................3-6
Step 1: Network Plan ..........................................................................3-6
Top-of-Rack 10GbE Switch ....................................................................3-7
Reserve VLANs (best practice) ..............................................................3-8
System ................................................................................................3-9
Management ..................................................................................... 3-10
vMotion and Virtual SAN ..................................................................... 3-13
Solutions ........................................................................................... 3-14
Workstation/Laptop (for configuration and management) ...................... 3-15
Out-of-Band Management (optional) .................................................... 3-16
Step 2: Set Up Switch ........................................................... 3-17
Understanding Switch Configuration .................................................... 3-17
Network Traffic .................................................................................. 3-18

Multicast Traffic................................................................................. 3-18
vSphere Security Recommendations.................................................... 3-19
Configure VLANs on your 10GbE Switch(es) ......................................... 3-19
Inter-switch Communication ............................................................... 3-20
Disable Link Aggregation .................................................................... 3-21
Step 3: Hands-on Lab (Optional) ....................................................... 3-21
Step 4: Cable & On .......................................................................... 3-21
Step 5: Management VLAN ............................................................... 3-23
Step 6: Connect & Configure ............................................................ 3-24
Avoiding Common Network Mistakes ......................................................... 3-26

Deployment and Initial Configuration ..................................................... 4-1


How to Configure EVO:RAIL .............................................................................. 4-2
Initial Configuration User Interface .................................................................... 4-4
Adding Appliances to an EVO:RAIL Cluster.......................................................... 4-8
Step 1: Plan .............................................................................................. 4-8
Step 2: Set Up Switch ................................................................................ 4-9
Step 3: Hands-on Lab................................................................................. 4-9
Step 4: Cable & On .................................................................................... 4-9
Step 5: Management VLAN ......................................................................... 4-9
Step 6: Add EVO:RAIL Appliance ............................................................... 4-10

Management and Maintenance .............................................................. 5-1


EVO:RAIL Management ..................................................................................... 5-2
Access EVO:RAIL Management ................................................................... 5-2
Access vSphere Web Client .................................................................. 5-2
Localize EVO:RAIL ............................................................................... 5-3
Creating and Managing Virtual Machines (VMs) ............................................ 5-3
Create VMs ......................................................................................... 5-3
Manage VMs ....................................................................................... 5-6
Monitoring an EVO:RAIL Cluster .................................................................. 5-8
Monitor Appliance Health ..................................................................... 5-8
View Events ...................................................................................... 5-10
Managing an EVO:RAIL Cluster .............................................. 5-10
License EVO:RAIL .............................................................................. 5-11
Update EVO:RAIL .............................................................................. 5-12
Add Appliances to an EVO:RAIL Cluster ............................................... 5-12
Shutdown an EVO:RAIL Cluster or Appliance ....................................... 5-12
EVO:RAIL Maintenance ................................................................................... 5-13
Logs ................................................................................................. 5-13
Hardware Replacement ...................................................................... 5-14

EVO:RAIL Network Configuration Table ..................................................... 1
EVO:RAIL Network Configuration Table ..................................................................2

JSON Configuration File ........................................................................... 1


JSON Configuration File ........................................................................................2
Upload Configuration File ......................................................................................2
JSON File Format and Valid Values ........................................................................3

Physical Requirements ............................................................................. 1

Virtual Machine Size by Guest Operating System ........................................ 1


Virtual Machine Profiles.........................................................................................2

Security Profile Details ............................................................................. 1


Audit only ............................................................................................................2
Risk Profile 1........................................................................................................3
Risk Profile 2........................................................................................................3
Risk Profile 3........................................................................................................4

EVO:RAIL Cluster Shutdown and Restart Procedures .................................. 1


EVO:RAIL Cluster Shutdown Procedure ..................................................................2
EVO:RAIL Cluster Restart Procedure ......................................................................4
EVO:RAIL single appliance shutdown procedure .....................................................5
EVO:RAIL single appliance restart procedure ..........................................................6

Customizing the EVO:RAIL Initial IP Address.............................................. 1


Customizing Initial IP Address ...............................................................................2

Preface
This document describes the deployment, management, ongoing configuration, and maintenance procedures that a data center administrator needs to perform when working with the Hitachi Unified Compute Platform 1000 for VMware EVO:RAIL solution.

Read this document carefully to understand how to replace FRUs, and maintain a copy for reference purposes.
• Intended audience and qualifications
• About this document
• Document conventions
• Accessing product documentation
• Getting help
• Comments

Intended audience and qualifications
This guide is intended for data center administrators and others responsible for
the deployment, configuration, management, and maintenance of Hitachi
Unified Compute Platform 1000 for VMware EVO:RAIL.

About this document


This document is based on VMware EVO:RAIL documentation release 2.0 and document revision 2.0.0-1, and is intended to be used with the document listed below. All referenced information, sections, and page numbers provided in this guide are accurate only for the specific revisions of the following document:
• QuantaPlex Series T41S-2U/T41SP-2U User’s Guide (See
www.quantaqct.com)

Document conventions
This document uses the following typographic conventions:

Regular text bold: In text, a keyboard key, parameter name, property name, hardware label, hardware button, or hardware switch; in a procedure, a user interface item.

Italic: Variable, emphasis, reference to a document title, or a called-out term.

screen text: Command name and option, drive name, file name, folder name, directory name, code, file content, system and application output, or user input.

< > angle brackets: Variable (used when italic is not enough to identify a variable).

[ ] square brackets: Optional value.

{ } braces: Required or expected value.

| vertical bar: Choice between two or more options or arguments.

This document uses the following icons to draw attention to information:

Tip: Provides helpful information, guidelines, or suggestions for performing tasks more effectively.

Important: Provides information that is essential to the completion of a task.

Caution: Warns that failure to take or avoid a specified action can result in adverse conditions or consequences (for example, loss of access to data).

WARNING: Warns the user of severe conditions, consequences, or both (for example, a destructive operation).

Accessing product documentation


The user documentation for the Hitachi RAID storage systems is available on the Hitachi Data Systems Portal: https://portal.hds.com. Check this site for the most current documentation, including important updates that may have been made after the release of the product.

Getting help
The Hitachi Data Systems customer support staff is available 24 hours a day, seven days a week. If you need technical support, log on to the Hitachi Data Systems Portal for contact information: https://portal.hds.com

Comments
Please send us your comments on this document: doc.comments@hds.com. Include the document title and number, including the revision level (for example, -07), and refer to specific sections and paragraphs whenever possible. All comments become the property of Hitachi Data Systems.

Thank you!

1
Prerequisites and Checklist
To ensure the correct functioning of Hitachi Unified Compute Platform 1000 for VMware EVO:RAIL and an optimal end-to-end user experience, understanding the recommendations and requirements is essential. Availability of resources and workload is critical for any environment, but even more so in a hyper-converged environment, as compute, networking, storage, and management are provided on the same platform.

• Hitachi Unified Compute Platform 1000 for VMware EVO:RAIL Setup Checklist

Hitachi Unified Compute Platform 1000 for VMware EVO:RAIL Setup Checklist

Before you proceed with the deployment and configuration of the Hitachi Unified Compute Platform 1000 for VMware EVO:RAIL appliance or appliances, review the "UCP for VMware EVO:RAIL Setup Checklist" below and fill in "Appendix A: EVO:RAIL Network Configuration Table" by following all of the steps listed in "EVO:RAIL Setup Workflow." This is essential for smooth deployment and configuration.

Hitachi UCP 1000 for VMware EVO:RAIL Setup Checklist

Before cabling EVO:RAIL

Step 1: Plan

Top-of-Rack 10GbE Switch
• Eight 10GbE ports (SFP+ or RJ-45) for each EVO:RAIL appliance in an EVO:RAIL cluster. You can have up to 16 appliances (64 server nodes) in an EVO:RAIL cluster.
• Decide if you will have a single or multiple switch setup for redundancy.
• Power and cooling specifications, operating conditions, and hardware configuration options as provided in Appendix C.

Reserve VLANs (best practice)
• One management VLAN with IPv6 multicast for traffic from EVO:RAIL, vCenter Server™, ESXi™, and any additional solutions provided with EVO:RAIL (default is untagged/native).
• One VLAN with IPv4 multicast for Virtual SAN™ traffic.
• One VLAN for vSphere® vMotion™.
• At least one VM Network.

System
• Time zone.
• Hostname or IP address of the NTP server(s) on your network (recommended).
• IP address of the DNS server(s) on your network (required, except in totally isolated environments).
• Optional: Domain, username, and password for Active Directory authentication.
• Optional: IP address, port, username, and password of your proxy server.

Management
• Decide on your ESXi host naming scheme.
• Decide on the hostnames for vCenter Server and for EVO:RAIL.
• Reserve one IP address for EVO:RAIL.
• Reserve one IP address for vCenter Server.
• IP address of the default gateway and subnet mask.
• Reserve four contiguous IP addresses for ESXi hosts for each appliance in an EVO:RAIL cluster.
• Select a single password for all ESXi hosts in the EVO:RAIL cluster.
• Select a single password for EVO:RAIL and vCenter Server.

vMotion and Virtual SAN
• Reserve four contiguous IP addresses and a subnet mask for vSphere vMotion for each appliance in an EVO:RAIL cluster.
• Reserve four contiguous IP addresses and a subnet mask for Virtual SAN for each appliance in an EVO:RAIL cluster.

Solutions
• Logging solution:
  - To use vRealize Log Insight: reserve one IP address and decide on the hostname.
  - To use an existing syslog server: find out the hostname or IP address of your third-party syslog server.
• HDS solutions: if other solutions are provided with your appliance, reserve one or two IP addresses.

Workstation/Laptop (for configuration and management)
• Client workstation/laptop (any operating system).
• A browser to access the EVO:RAIL user interface. (The latest versions of Firefox, Chrome, and Internet Explorer 10+ are all supported.)

Out-of-band management (optional)
• A 1GbE switch with four available ports per EVO:RAIL appliance, or enough extra capacity on the 10GbE switch.
• If needed, contact HDS for instructions to modify the default information, such as username, password, and IP address, for each node.

Step 2: Set Up Switch


Configure your 10GbE switch. This must be done BEFORE you connect or power on
EVO:RAIL.
• Configure your selected management VLAN from Step 1B (default is
untagged/native). See the Setup Guide
• Make sure that IPv6 multicast is configured/enabled on the management VLAN
(regardless of whether tagged or native).
• Configure your selected VLAN for Virtual SAN.
• Make sure that IPv4 multicast is configured/enabled on the Virtual SAN VLAN
(enabling IGMP snooping and querier is highly recommended).
• Configure your selected VLAN for vSphere vMotion.
• Configure your selected VLANs for VM Networks.
• In multi-switch environments, be sure to configure the management and Virtual SAN
VLANs to carry the multicast traffic between switches.
• Do not use link aggregation (LACP/EtherChannel) on 10GbE switch ports connected
to EVO:RAIL.

Step 3: Hands-On Lab (Optional)


• Test drive the user interface using the EVO:RAIL Hands-on Lab: www.vmware.com/go/evorail-lab

After planning and switch setup

Step 4: Cable & On

• Connect the 10GbE ports on EVO:RAIL to the 10GbE switch(es) as shown in the
cabling diagrams.
• Then power on all four nodes on your first EVO:RAIL appliance. See Figure 1 for
power buttons.
• Do not turn on any nodes in other appliances in an EVO:RAIL cluster until you have completed the full configuration of the first appliance. Each appliance must be added to an EVO:RAIL cluster one at a time.

Step 5: Management VLAN

• Optional: On each ESXi host, customize the management VLAN with the ID you
specified in Step “Reserve VLANs”.
Otherwise, the default is your switch’s Native VLAN for untagged traffic.

Step 6: Connect & Configure

• Connect a workstation/laptop to access the EVO:RAIL initial IP address on the EVO:RAIL management VLAN.
• Configure the network settings on your workstation/laptop to talk to EVO:RAIL.
• Browse to the EVO:RAIL initial IP address (the default is https://192.168.10.200) and build your appliance.
• Configure your corporate DNS server for all EVO:RAIL hostnames and IP addresses
unless you are in an isolated environment.
• Refer to the EVO:RAIL Management & Maintenance section in this document to
create VMs, add more appliances to your EVO:RAIL cluster, license your software,
and monitor your appliances.
Detailed instructions are provided in the "EVO:RAIL Setup Workflow," "Deployment and Initial Configuration," and "Management and Maintenance" sections.

2
The Hitachi Unified Compute Platform
1000 for VMware EVO:RAIL Appliance
The Hitachi Unified Compute Platform 1000 for VMware EVO:RAIL appliance combines compute, networking, storage, and management resources with industry-leading infrastructure, virtualization, and management products from VMware to form a radically simple hyper-converged infrastructure appliance offered by Hitachi Data Systems.

• Hardware
• Software
• Networking

Hardware
Hitachi Data Systems and VMware collaborated to create VMware EVO:RAIL, the industry's first hyper-converged infrastructure appliance for the software-defined data center (SDDC). It is powered by VMware software using an architecture built with Hitachi Data Systems and QuantaPlex hardware. Hitachi Data Systems (HDS), as a Qualified EVO:RAIL Partner (QEP), sells the hardware with integrated EVO:RAIL software, providing all hardware and software support to customers.

Each Hitachi Unified Compute Platform 1000 for VMware EVO:RAIL appliance
has four independent nodes with the following:
• 2 Intel Xeon Processors:
– E5-2620 v3 (6C 2.4GHz 85W) or
– E5-2650 v3 (10C 2.3GHz 105W) or
– E5-2680 v3 (12C 2.5GHz 120W)

Note: A UCP 1000 ROBO configuration will have a single CPU and half of the memory of a standard configuration.

• 16GB or 32GB DDR4-2133MHz Memory Module (Total of 64GB to 512GB)


• 1 × disk for the VMware ESXi boot device:
– SAS 10K RPM 300GB HDD or
– 64GB SATADOM
• 3 to 5 × SAS 10K RPM 1.2TB HDD for the VMware Virtual SAN™ datastore
• 1 × enterprise-grade SSD for read/write cache
– 400GB S3700 MLC or
– 800GB S3700 MLC
• 1 × Virtual SAN-certified pass-through disk controller
– LSI SAS3008 RAID controller mezzanine card
• 2 × 10GbE NIC ports:
– RJ-45 connections: Dual port 10GigE Base-T Intel X540 OCP Mezzanine
card
or
– SFP+ connections: Dual port 10GigE Intel 82599ES SFP+ OCP
Mezzanine Card
• 1 × 10/100 Base-T RJ45 port for remote (out-of-band) management

The EVO:RAIL appliance is designed with fault tolerance and high availability in mind. There are four independent nodes, each consisting of dedicated compute, network, and storage resources:
• 4 VMware ESXi hosts in a single appliance, enabling resiliency for hardware failures and planned maintenance
• 2 fully redundant power supplies
• 2 redundant 10GbE NIC ports per node
• Enterprise-grade ESXi boot device, HDDs, and SSD
• Fault-tolerant Virtual SAN datastore

EVO:RAIL can scale out to 16 appliances for a total of 64 VMware ESXi hosts,
and one Virtual SAN datastore backed by a single VMware vCenter server and
EVO:RAIL instance. Deployment, configuration, and management are handled
by EVO:RAIL, allowing the compute capacity and the Virtual SAN datastore to
grow automatically. New appliances are automatically discovered and easily
added to an EVO:RAIL cluster with a few mouse clicks. Figure 1 shows the
appliance front view and HDD configuration per node for one of the UCP 1000
options. Figure 2 shows the appliance rear view and I/O ports.

Figure 1. Appliance front view and HDD configuration per node.

Figure 2. Appliance rear view and I/O ports.

Software
EVO:RAIL delivers the first hyper-converged infrastructure appliance 100% powered by the proven suite of core products from VMware. The EVO:RAIL software bundle, preloaded onto Hitachi UCP 1000 hardware, is comprised of the following:
• EVO:RAIL Deployment, Configuration, and Management
• VMware vSphere Enterprise Plus
• VMware vCenter Server
• VMware Virtual SAN
• VMware vRealize Log Insight

EVO:RAIL is optimized for the new VMware user as well as for experienced
administrators. Minimal IT experience is required to deploy, configure, and
manage EVO:RAIL, allowing it to be used where there is limited or no IT staff
on-site. Since EVO:RAIL utilizes core products of VMware, administrators can
apply existing VMware knowledge, best practices, and processes.

EVO:RAIL leverages the same database as VMware vCenter Server, so any changes in EVO:RAIL initial configuration and management are also reflected in vCenter Server and vice-versa.

Networking
Each Hitachi Unified Compute Platform 1000 for VMware EVO:RAIL appliance
ships with eight RJ-45 or eight SFP+ NIC ports. Eight corresponding ports are
required for each EVO:RAIL appliance on the top-of-rack (TOR) switch or
switches. One port, either on the TOR switch or on a management VLAN that
can reach the TOR network, is required for a workstation/laptop with a web
browser for VMware EVO:RAIL configuration and management. Any other
ports on the appliance will be covered and disabled.

Many of the hardware components in EVO:RAIL are driven by VMware Virtual SAN requirements, although EVO:RAIL is more prescriptive than Virtual SAN in order for customers to have a true "appliance" experience.

The first EVO:RAIL node in a cluster creates a new instance of VMware vCenter
server, and all additional EVO:RAIL nodes in one or more appliances join that
first instance.

Note: A UCP 1000 ROBO Edition with a single CPU and half of the memory of a
standard configuration will be supported with 1GbE switches.

EVO:RAIL Setup Workflow
To successfully set up EVO:RAIL you must complete the steps in the setup
workflow in this order:

Before cabling EVO:RAIL:

1. Network Plan: Plan the network architecture, including the 10GbE switch configuration, VLANs, and IP addresses. Also consider a 1GbE switch for out-of-band management.

2. Set Up Switch: Configure the 10GbE switch. This must be done BEFORE you connect or power on EVO:RAIL.

3. Hands-on Lab (optional): Visit www.vmware.com/go/evorail-lab to test-drive the user interface.

After planning and switch setup:

4. Cable & On: Cable the appliance to the switch(es) as shown in the cabling diagrams, then turn on all four EVO:RAIL nodes.

5. Management VLAN: EVO:RAIL's management traffic is untagged by default. Use the ESXi command-line interface to customize the management VLAN on all four appliance nodes (see the sketch after this list).

6. Connect & Configure: Connect to the EVO:RAIL initial IP address via a workstation/laptop. Launch the EVO:RAIL user interface to build/configure the appliance.
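As an illustration of step 5, the management VLAN can typically be changed from the ESXi command line on each node. The following is a minimal sketch, not a procedure from this manual: the portgroup name ("Management Network") and the VLAN ID (10) are assumptions, so verify both against your hosts and your Network Configuration Table before use.

   # Tag the ESXi management portgroup with the chosen management VLAN ID
   # (the portgroup name and VLAN ID shown here are assumed values)
   esxcli network vswitch standard portgroup set -p "Management Network" -v 10
   # Confirm the change
   esxcli network vswitch standard portgroup list

Run the commands on all four nodes of the appliance, as noted in step 5.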

To ensure the correct functioning of EVO:RAIL, understanding the recommendations and requirements in this user guide is essential.

Step 1: Plan
A Hitachi UCP 1000 for VMware EVO:RAIL cluster consists of one or more appliances. The plan can include multiple appliances. Up to 16 appliances (64 server nodes) can be joined together in one EVO:RAIL cluster. If you have already configured enough IP addresses for expansion (which we recommend), all you do is supply the passwords that you created for the first appliance in the EVO:RAIL cluster. If you do not have enough IP addresses, follow the instructions in the section Adding Appliances to an EVO:RAIL Cluster. The EVO:RAIL user interface will prompt you to add the new IP addresses and the passwords.

Once you set up EVO:RAIL, the configuration cannot be changed easily. Consequently, we strongly recommend that you take care during this planning phase to decide on the configurations that will work most effectively for your organization. We want you to set up EVO:RAIL correctly when it arrives.

In this plan, consider the following:

• Top-of-Rack 10GbE switch

• Reserve VLANs (best practice)

• System

• Management

• vMotion and Virtual SAN

• Solutions

• Workstation/laptop

• Out-of-band management (optional)

Top-of-Rack 10GbE Switch

A Hitachi UCP 1000 for VMware EVO:RAIL appliance ships with either eight
SFP+ or RJ-45 NIC ports. Eight corresponding ports are required for each
EVO:RAIL appliance on one or more 10GbE switch(es). One port on the 10GbE
switch or one logical path on the EVO:RAIL management VLAN is required for
a workstation/laptop to access the EVO:RAIL user interface.

The 10GbE switch(es) must be correctly configured to carry IPv6 multicast traffic on the management VLAN. IPv4 multicast must be carried on the Virtual SAN VLAN. Multicast is not required on your entire network, just on the ports connected to EVO:RAIL.

Why multicast? EVO:RAIL has no backplane, so communication between its four nodes is facilitated via the 10GbE switch. This communication between the four nodes uses VMware Loudmouth auto-discovery capabilities, based on the RFC-recognized "Zero Network Configuration" protocol. New EVO:RAIL appliances advertise themselves on a network using the VMware Loudmouth service, which uses IPv6 multicast. This IPv6 multicast communication is strictly limited to the management VLAN that the nodes use for communication.

One or two switches? Decide if you plan to use one or two 10GbE switches
for EVO:RAIL. One switch is acceptable and is often seen in test/development
or remote/branch office (ROBO) environments. However, two or more 10GbE
switches are used for high availability and failover in production environments.
Because EVO:RAIL is an entire software-defined data center in a box, if one
switch fails you are at risk of losing availability of hundreds of virtual
machines.

Reserve VLANs (best practice)


EVO:RAIL groups traffic in the following categories: management, vSphere
vMotion, Virtual SAN, and Virtual Machine. Partners may provide other types
of traffic in their solutions. Traffic isolation on separate VLANs is highly
recommended (but not required) in EVO:RAIL. If you are using multiple 10GbE
switches, connect them via VLAN trunked interfaces and ensure that all VLANs
used for EVO:RAIL are carried across the trunk following the requirements in
this user guide.
By default, all management traffic is untagged and must be able to go over
a Native VLAN on your 10GbE switch or you will not be able to create the
appliance and configure the ESXi hosts. Management traffic includes all
EVO:RAIL, vCenter Server, and ESXi communication. The management VLAN
also carries traffic for optional solutions such as vRealize Log Insight and
partner solution VMs.
Alternately, you can configure a custom management VLAN to allow tagged
management traffic. When you receive the appliance, please follow the
instructions in Step 5 to change the management VLAN.
vSphere vMotion and Virtual SAN traffic cannot be routed. This traffic will
be tagged for the VLANs you specify in the EVO:RAIL initial configuration user
interface.
Dedicated VLANs are preferred to divide virtual machine traffic. For
example, you could have one VLAN for Development, one for Production, and
one for Staging. Each VM can be assigned to one or more VLANs.

Network Configuration Table, Row 1: Enter the management VLAN ID for EVO:RAIL, ESXi, vCenter Server, and Log Insight. If you do not plan to have a dedicated management VLAN and will accept this traffic as untagged, enter "0" or "Native VLAN".

Network Configuration Table, Row 30: Enter a VLAN ID for vSphere vMotion. (Enter 0 in the VLAN ID field for untagged traffic.)

Network Configuration Table, Row 34: Enter a VLAN ID for Virtual SAN. (Enter 0 in the VLAN ID field for untagged traffic.)

Network Configuration Table, Rows 35-38: Enter a VLAN ID and name for each VM network you want to create. You must create at least one VM network. (Enter 0 in the VLAN ID field for untagged traffic.)

TOR switch or switches: Configure your TOR switch or switches with these VLANs, and configure the corresponding VLANs between TOR switches and/or core switches.

System

EVO:RAIL can configure connections to external servers in your network. The only required selection is the time zone. An NTP server is highly recommended and a DNS server is required except in isolated environments. Active Directory and proxy servers are optional.

Time zone

A time zone is required. It is configured on vCenter Server and each ESXi host.
Network Configuration Table, Row 3: Enter your time zone.

NTP server or servers

An NTP server is not required, but is recommended. If you provide an NTP server, vCenter Server will be configured to use it. If you do not provide at least one NTP server, EVO:RAIL uses the time that is set on ESXi host #1, regardless of whether that time is correct or not.

Network Configuration Table, Row 4: Enter the hostname(s) or IP address(es) of your NTP server(s).

DNS server or servers

One or more external DNS servers are required for production use in a non-isolated environment. (This is not required in a completely isolated environment.) During initial configuration, EVO:RAIL sets up vCenter Server to resolve hostnames through the DNS server. If you have an external DNS server, enter the EVO:RAIL, vCenter Server, Log Insight, and ESXi hostnames and IP addresses in your corporate DNS server tables in Step 6d.

Make sure that the DNS IP address is accessible from the network to which
EVO:RAIL is connected and the address functions properly.
If the DNS server requires access via a gateway that is not reachable during initial configuration, do not enter a DNS IP address. Instead, add a DNS server after you have configured EVO:RAIL using the instructions in the VMware Knowledge Base (http://kb.vmware.com/kb/2107249).

Network Configuration Table, Row 5: Enter the IP address(es) for your DNS server(s). Leave blank if you are in an isolated environment.

Active Directory

Active Directory can optionally be entered in the EVO:RAIL initial configuration. However, to use Active Directory, you must perform the steps on the vSphere Web Client, as documented in vSphere: ESXi and vCenter Server 6.0 Documentation -> Add a vCenter Single Sign-On Identity Source.
Network Configuration Table, Rows 6-8: If you will be using Active Directory, perform the steps on the vSphere Web Client after EVO:RAIL is configured. Values entered in the EVO:RAIL user interface are not used.

Proxy Server

A proxy server is optional. If you have a proxy server on your network and vCenter Server needs to access services outside of your network, you need to supply the IP address, port, username, and password of the proxy server.

Network Configuration Table, Rows 9-12: Enter the proxy server IP address, port, username, and password.

Management

An EVO:RAIL cluster consists of one or more appliances, each with four ESXi
hosts. The cluster is managed by a single instance of EVO:RAIL and vCenter
Server. After initial configuration, you will access EVO:RAIL to manage your
cluster at the hostname or IP address that you specify.

Hostnames

Since EVO:RAIL is an appliance with four server nodes, it does not have a
single hostname. The asset tag attached to each appliance is used to display
the identity of the appliance in the EVO:RAIL user interface. Asset tags are 11
alphanumeric characters where the first three characters identify your selected
EVO:RAIL partner and the remaining characters uniquely identify the
appliance.

You must configure hostnames for EVO:RAIL, vCenter Server, and your ESXi
hosts. All ESXi hostnames are defined by a naming scheme that comprises: an
ESXi hostname prefix (an alphanumeric string), a separator (“None” or a dash
”-“), an iterator (Alpha, Num X, or Num 0X), and a top-level domain. The
Preview field in the EVO:RAIL user interface shows an example of the result
for the first ESXi host. For example, if the prefix is “host”, the separator is
“None”, the iterator is “Num 0X”, and the top-level domain is “local”, the first
ESXi hostname would be “host01.local”

The vCenter Server hostname is an alphanumeric string. The top-level domain is automatically applied to the vCenter Server hostname. (Example: vcenter.local)

Examples:
                       Example 1       Example 2            Example 3
   Prefix              host            myname               esxi-host
   Separator           None            -                    -
   Iterator            Num 0X          Num X                Alpha
   Domain              local           mydomain             company
   Resulting hostname  host01.local    myname-1.mydomain    esxi-host-a.company

Network Configuration Table, Rows 13-16: Enter an example of your desired ESXi host-naming scheme. Be sure to show your desired prefix, separator, iterator, and top-level domain.

Network Configuration Table, Row 17: Enter an alphanumeric string for the vCenter Server hostname.

Network Configuration Table, Row 18: Enter an alphanumeric string for the EVO:RAIL hostname.

IP Addresses

The Hitachi Unified Compute Platform 1000 for VMware EVO:RAIL appliance ships with a default set of IP addresses (see the Network Configuration Table). It can be configured on-site with different IP addresses.

You must configure the IP addresses for EVO:RAIL, vCenter Server, and your
ESXi hosts. When selecting your IP addresses, you must make sure that none
of them conflict with existing IP addresses in your network. Also make sure
that these IP addresses can reach other hosts in your network.

There are four ESXi hosts per appliance and each requires an IP address. We
recommend that you consider allocating additional ESXi IP addresses for future
appliances to join your EVO:RAIL cluster. If you have already configured
enough IP addresses for expansion, all you do is supply the passwords that
you created for the first appliance in the cluster. Because EVO:RAIL supports
up to sixteen appliances in a cluster, you can allocate up to 64 ESXi IP
addresses.

EVO:RAIL, vCenter Server, and the ESXi hosts all share the same subnet mask
and gateway. EVO:RAIL leverages the same database as vCenter Server, so
any changes in EVO:RAIL are reflected in vCenter Server and vice-versa.

In Release 2.x, EVO:RAIL and vCenter Server are separate virtual machines
and thus have separate IP addresses. They both use default ports (443), so
you do not specify a port number when you point a browser to reach them.

In Release 1.x, EVO:RAIL and vCenter Server share an IP address because they are contained in one virtual machine. EVO:RAIL is accessible on port 7443 (https://<evorail-ip-address>:7443) and vCenter Server is accessible via the vSphere Web Client on port 9443 (https://<evorail-ip-address>:9443).

Network Configuration Table, Rows 19-20: Enter the starting and ending IP addresses for the ESXi hosts. A continuous IP range is required, with a minimum of 4 IPs.

Network Configuration Table, Row 21: Enter the IP address for vCenter Server.

Network Configuration Table, Row 22: Enter the IP address for EVO:RAIL after it is configured. (In Release 1.x only, enter the same IP address that is in Row 21.)

Network Configuration Table, Rows 23-24: Enter the subnet mask and gateway for all management IP addresses.

Passwords

Passwords are required for ESXi hosts and for the EVO:RAIL / vCenter Server
virtual machines. Passwords must contain between 8 and 20 characters with at
least one lowercase letter, one uppercase letter, one numeric character, and
one special character. For more information about password requirements, see
the vSphere password documentation and vCenter Server password
documentation.

For ESXi hosts, the username is root; the pre-configuration password is Passw0rd! and the post-configuration password is the one you set in the EVO:RAIL user interface (Row 25).

For vCenter Server and EVO:RAIL, the username for both user interfaces is administrator@vsphere.local and the console username is root. The pre-configuration password for EVO:RAIL is Passw0rd! and the post-configuration password for both is the one you set in the EVO:RAIL user interface (Row 26).

Network Configuration Table, Rows 25-26: Please check that you know your passwords in these rows, but for security reasons, we suggest that you do not write them down.

vMotion and Virtual SAN

vSphere vMotion and Virtual SAN each require at least four IP addresses per
appliance. We recommend that you consider allocating additional IP addresses
for future appliances to join your EVO:RAIL cluster. If you have already
configured enough IP addresses for expansion, all you do is supply the
passwords that you created for the first appliance in the cluster.

Because EVO:RAIL supports up to sixteen appliances in a cluster, you can allocate up to 64 vMotion IP addresses and 64 Virtual SAN IP addresses.

Network Configuration Table, Rows 27-28: Enter the starting and ending IP addresses for vSphere vMotion. A continuous IP range is required, with a minimum of 4 IPs. Routing is not configured for vMotion.

Network Configuration Table, Row 29: Enter the subnet mask for vMotion.

Network Configuration Table, Rows 31-32: Enter the starting and ending IP addresses for Virtual SAN. A continuous IP range is required, with a minimum of 4 IPs. Routing is not configured for Virtual SAN.

Network Configuration Table, Row 33: Enter the subnet mask for Virtual SAN.

Solutions

EVO:RAIL supports initial configuration for solutions both from VMware and
from EVO:RAIL partners.

Logging Solution

EVO:RAIL is deployed with VMware vRealize Log Insight. Alternately, you may choose to use your own third-party syslog server(s). If you choose to use vRealize Log Insight, it will always be available by pointing a browser to the configured IP address with the username admin. (If you ssh to Log Insight instead of pointing your browser to it, the username is root.) The password, in either case, is the same password that you specified for vCenter Server/EVO:RAIL (Row 26).

The IP address for Log Insight must be on the same subnet as EVO:RAIL and
vCenter Server.

Network Configuration Table, Rows 39-40 or Row 41: Enter the hostname and IP address for vRealize Log Insight, or the hostname(s) of your existing third-party syslog server(s).

HDS Solutions VMs

EVO:RAIL supports the deployment of up to two VMs for other optional solutions provided by VMware or VMware partners. Users specify the IP address of a primary VM and an optional secondary VM. EVO:RAIL deploys and configures the VM(s) on the management VLAN (i.e., the same VLAN that EVO:RAIL, vCenter Server, the ESXi hosts, and Log Insight communicate on).

The IP address(es) must be on the same subnet as EVO:RAIL and vCenter Server.
Network Configuration Table, Row 42: Enter the IP address for the Primary VM from the HDS Solution (Hitachi Data Ingestor, or HDI).

Network Configuration Table, Row 43: Enter the IP address for the Secondary VM from the HDS Solution (Hitachi Compute Advisor, or HCA).

Workstation/Laptop (for configuration and management)

A workstation/laptop with a web browser for EVO:RAIL configuration and management is required. It must be either plugged into the 10GbE switch or able to logically reach the EVO:RAIL management VLAN from elsewhere on your network; for example, a jump server (https://en.wikipedia.org/wiki/Jump_server).

Reaching EVO:RAIL for the First Time

To access the first appliance in an EVO:RAIL cluster for the first time, you
must use the temporary EVO:RAIL initial IP address that was pre-configured
on the appliance, typically 192.168.10.200/24. You will change this IP address
in the EVO:RAIL initial configuration user interface to your desired permanent
address for your new EVO:RAIL cluster.

If you cannot reach the EVO:RAIL initial IP address, you will need to follow the
instructions in Step 6a to configure a custom IP address, subnet mask, and
gateway.

Note: We do not recommend using the default EVO:RAIL initial IP address (192.168.10.200/24) as your permanent EVO:RAIL IP address, because if you later add more appliances to the EVO:RAIL cluster, or if you create more clusters, the initial IP address will conflict with the existing cluster's IP address.

If you later need to change the EVO:RAIL IP address, contact your selected
EVO:RAIL partner.

Your workstation/laptop will need to be able to reach both the EVO:RAIL initial
IP address and the permanent IP address. The EVO:RAIL user interface will
remind you that you may need to reconfigure your workstation/laptop network
settings to access the new IP address. See the instructions in Step 6b.

Network Configuration Table, Row 2: Enter the EVO:RAIL initial IP address. Enter 192.168.10.200/24 if you can reach this address on your network. Otherwise, enter the custom IP address, subnet mask, and gateway that you will configure in Step 6a.

Network Configuration Table, Row 22: Check the permanent EVO:RAIL IP address that you entered in Step 1d. We recommend that you do not use the default 192.168.10.200/24.
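If your workstation cannot initially reach 192.168.10.200, one common approach is to temporarily add an address on that subnet. The sketch below is illustrative only and assumes a Linux workstation; the NIC name (eth0) and the spare address (192.168.10.150) are assumptions, not values from this manual:

   # Temporarily join the EVO:RAIL initial subnet (assumed NIC name and address)
   ip addr add 192.168.10.150/24 dev eth0
   # Verify that the appliance's initial address responds, then browse to it
   ping -c 3 192.168.10.200

After the appliance is reconfigured with its permanent address, remove the temporary address (ip addr del 192.168.10.150/24 dev eth0) and restore your normal workstation settings.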

Browser Support

Use a browser to talk to the EVO:RAIL user interface. The latest versions of Firefox, Chrome, and Internet Explorer 10+ are all supported.

If you use Internet Explorer 10+ and an administrator has set your browser to
“compatibility mode” for all internal websites (local web addresses), you will
get a warning message from EVO:RAIL. Contact your administrator to whitelist
URLs mapping to the EVO:RAIL user interface. Alternately, connect to the
EVO:RAIL user interface using either an IP address or a fully-qualified domain
name (FQDN) configured on the local DNS server.

Out-of-Band Management (optional)

Remote/lights-out management is available on each node through a BMC port. To use out-of-band management, connect the BMC port on each node to a separate switch to provide physical network separation. Although you could use four additional ports on your TOR 10GbE switch (if the TOR supports your BMC ports), it is more economical to use a lower-bandwidth switch.

When shipping VMware EVO:RAIL, the BMC ports are preconfigured for DHCP. The <ApplianceID> can be found on a pull-out tag located on the front of the physical appliance. The defaults are as follows:

BMC interface node 1: hostname = <ApplianceID>-01

BMC interface node 2: hostname = <ApplianceID>-02

BMC interface node 3: hostname = <ApplianceID>-03

BMC interface node 4: hostname = <ApplianceID>-04

The default user name and password are:

Username: admin

Password: admin

The password is case sensitive.

The BMC interface IP addresses can be assigned manually and the user name
and password can be modified as necessary.

The following are the instructions for BMC network changes on appliances using a Quanta T41S-2U server (an illustrative command-line alternative follows these procedures):
• To use the serial console and BIOS options:

Repeat this process for each node and reconnect using the new
management IP address.
1. Press <DEL> or <F2> during power-on to enter setup.
2. Go to the Server Mgmt tab and then scroll down to BMC network
configuration. Press <Enter>.

3. Scroll down to Configuration Address source and select either
“Static on next reset” or “Dynamic.” The default is Dynamic, with
the BMC interfaces obtaining their IP address through DHCP. If you
select Static, you need to set the IP address, Netmask, and Gateway
IP.
4. Press <F10> and select save and reset. Your server will reboot with
the new settings.
• To use the BMC Web UI (only if the BMC IP of the node is known):

Repeat this process for each node and reconnect using the new
management IP.
1. Open a browser and type the BMC IP in the browser bar.
2. Type the credentials.
3. Click the Configuration tab and click Network.
4. Under IPv4 Configuration, clear the Use DHCP check box, and type the
IP address, Netmask, and Gateway IP.
5. Click Save.
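If a management station has standard IPMI tooling available, the same change can also be made remotely. This sketch is not from this manual: it assumes the common ipmitool utility is present, and the LAN channel number (1) and the example addresses are assumptions that vary by platform, so confirm them against your hardware documentation:

   # Inspect the current BMC LAN settings (LAN channel 1 is an assumed value)
   ipmitool -I lanplus -H <current-bmc-ip> -U admin -P admin lan print 1
   # Switch the BMC from DHCP to a static address (example values shown)
   ipmitool -I lanplus -H <current-bmc-ip> -U admin -P admin lan set 1 ipsrc static
   ipmitool -I lanplus -H <current-bmc-ip> -U admin -P admin lan set 1 ipaddr 10.10.10.41
   ipmitool -I lanplus -H <current-bmc-ip> -U admin -P admin lan set 1 netmask 255.255.255.0
   ipmitool -I lanplus -H <current-bmc-ip> -U admin -P admin lan set 1 defgw ipaddr 10.10.10.1

As with the procedures above, repeat for each node and reconnect using the new management IP address.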

Step 2: Set Up Switch


In order for EVO:RAIL to function properly, you must configure the ports that EVO:RAIL will use on the 10GbE switch before you plug in EVO:RAIL and turn it on.

Understanding Switch Configuration

Various network topologies for 10GbE switch(es) and VLANs are possible with
EVO:RAIL. Complex production environments will have multiple core switches
and VLANs. For high-availability, use two 10GbE switches and connect one
port from each node to each 10GbE switch.

Ports on a switch operate in one of the following modes:

• Access mode – The port accepts only untagged packets and distributes the
untagged packets to all VLANs on that port. This is typically the default mode
for all ports.

• Trunk mode – When this port receives a tagged packet, it passes the packet to the VLAN specified in the tag. To accept untagged packets on a trunk port, you must first configure a single VLAN as a "Native VLAN": the one VLAN designated to carry all untagged traffic.

• Tagged-access mode – The port accepts only tagged packets.

Network Traffic

All EVO:RAIL traffic (except for out-of-band management) is on the 10GbE NICs. Each node in an EVO:RAIL appliance has two 10GbE network ports. Each port must be connected to a 10GbE switch that supports IPv4 multicast and IPv6 multicast. To ensure vSphere vMotion traffic does not consume all available bandwidth on the 10GbE port, EVO:RAIL limits vMotion traffic to 4Gbps.

EVO:RAIL traffic is separated as follows:

   Network                                  Requirements     1st 10GbE NIC   2nd 10GbE NIC
   Management (EVO:RAIL, vCenter Server,    IPv6 multicast   Standby         Active
     ESXi, Log Insight, HDS Solution)
   vSphere vMotion                          -                Standby         Active
   Virtual SAN                              IPv4 multicast   Active          Standby
   Virtual Machines                         -                Standby         Active

Out-of-band traffic is optionally used to manage hardware on the 1GbE port on each node.

Multicast Traffic

IPv4 multicast support is required for the Virtual SAN VLAN. IPv6 multicast is
required for the EVO:RAIL management VLAN. The 10GbE switch(es) that
connect to EVO:RAIL must allow for pass-through of multicast traffic on these
two VLANs.

EVO:RAIL creates very little traffic via IPv6 multicast for auto discovery and
management. It is optional to limit traffic further on your 10GbE switch by
enabling MLD Snooping and MLD Querier.

There are two options to handle Virtual SAN IPv4 multicast traffic. Either limit
multicast traffic by enabling both IGMP Snooping and IGMP Querier or disable
both of these features. We recommend enabling both IGMP Snooping and
IGMP Querier, if your 10GbE switch supports them.

IGMP Snooping software examines IGMP protocol messages within a VLAN to discover which interfaces are connected to hosts or other devices interested in receiving this traffic. Using the interface information, IGMP Snooping can reduce bandwidth consumption in a multi-access LAN environment to avoid flooding an entire VLAN. IGMP Snooping tracks ports that are attached to multicast-capable routers to help manage IGMP membership report forwarding. It also responds to topology change notifications. Disabling IGMP Snooping may lead to additional multicast traffic on your network.

IGMP Querier sends out IGMP group membership queries on a timed interval,
retrieves IGMP membership reports from active members, and allows updates
to group membership tables. By default, most switches enable IGMP Snooping,
but disable IGMP Querier.
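On many switches, IGMP Snooping and Querier are toggled globally and per VLAN. The following Cisco IOS-style sketch shows one way this might look, assuming VLAN 110 has been chosen for Virtual SAN; exact commands vary by vendor and model, so treat this as illustrative and consult your switch documentation:

   ! Enable IGMP snooping globally and on the Virtual SAN VLAN (VLAN 110 is assumed)
   ip igmp snooping
   ip igmp snooping vlan 110
   ! Enable the IGMP querier function so membership reports keep flowing
   ip igmp snooping querier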

vSphere Security Recommendations

Security recommendations for vSphere are found here:

http://pubs.vmware.com/vsphere-60/index.jsp?topic=%2Fcom.vmware.vsphere.security.doc%2FGUID-FA661AE0-C0B5-4522-951D-A3790DBE70B4.html

In particular, ensure that physical switch ports are configured with Portfast if
spanning tree is enabled. Because VMware virtual switches do not support
STP, physical switch ports connected to an ESXi host must have Portfast
configured to avoid loops within the physical switch network. If Portfast is not
set, performance and connectivity issues might arise.
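
For example, on a Cisco IOS-style switch, Portfast is applied per interface. This
is only a sketch; the interface name is a placeholder, and on a trunk port the
trunk keyword is required:

    interface TenGigabitEthernet1/0/1
     spanning-tree portfast trunk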

Configure VLANs on your 10GbE Switch(es)

The EVO:RAIL network can be configured with or without VLANs. For
performance and scalability, it is highly recommended to configure EVO:RAIL
with VLANs. Refer to the plan that you created in Step 1b for your VLAN IDs.

Configure the VLANs on your 10GbE switch as listed in the “Prerequisites
and Checklist” section:

• Configure your selected management VLAN (default is untagged/native).

• Make sure that IPv6 multicast is configured/enabled on the management
VLAN (regardless of whether tagged or native).

• Configure your selected VLAN for Virtual SAN.

• Make sure that IPv4 multicast is configured/enabled on the Virtual SAN
VLAN (enabling IGMP Snooping and Querier is highly recommended).

• Configure your selected VLAN for vSphere vMotion.

• Configure your selected VLANs for VM Networks.

Using the EVO:RAIL Network Configuration Table:

1. Configure the management VLAN (Row 1) on your 10GbE switch ports. If
you entered “Native VLAN”, then set the ports on the 10GbE switch to
accept untagged traffic. This is the default management VLAN setting on
EVO:RAIL appliances.

2. Regardless of whether you are using an untagged Native VLAN or a tagged
VLAN, you must set the management VLAN to allow IPv6 multicast traffic to
pass through. Depending on the type of switch you have, you may need to
turn on IPv6 and multicast directly on the port or on the VLAN. Be sure to
read the previous sections on Understanding Switch Configuration and
Multicast Traffic, and consult the switch manufacturer for further
instructions on how to configure these settings.

3. Configure a vSphere vMotion VLAN (Row 30) on your 10GbE switch ports.

4. Configure a Virtual SAN VLAN (Row 34) on your 10GbE switch ports and
set it to allow IPv4 multicast traffic to pass through.

5. Configure the VLANs for your VM Networks (Rows 35-38) on your 10GbE
switch ports.
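
Putting the VLAN plan together, the following is a minimal sketch of one
EVO:RAIL-facing switch port in Cisco IOS-style syntax. It assumes an
untagged/native management VLAN and the example VLAN IDs from the
configuration table (20 for vMotion, 30 for Virtual SAN, 110 and 120 for VM
Networks); your VLAN IDs and your switch's syntax will differ:

    interface TenGigabitEthernet1/0/1
     switchport mode trunk
     switchport trunk native vlan 1
     switchport trunk allowed vlan 1,20,30,110,120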

Inter-switch Communication

In a multi-switch environment, configure the ports used for inter-switch
communication to carry IPv6 multicast traffic for the EVO:RAIL management
VLAN. Likewise, carry IPv4 multicast traffic between switches for the Virtual
SAN VLAN. Consult your switch manufacturer’s documentation for how to do
this.

Disable Link Aggregation

Do not use link aggregation, including protocols such as LACP and
EtherChannel, on any ports directly connected to EVO:RAIL. The EVO:RAIL
engine uses an active/standby configuration (NIC teaming) for network
redundancy, as discussed in the section on Network Traffic.

Step 3: Hands-on Lab (Optional)


As you wait for your appliance to arrive, test drive the EVO:RAIL user interface
using the Hands-on Lab: https://www.vmware.com/go/evorail-lab

In EVO:RAIL initial configuration, you will:

• Learn how to configure and build a newly deployed EVO:RAIL appliance

In EVO:RAIL management, you will:

• Create virtual machines and manage lifecycle operations

• Explore management features, such as system health

• Simulate appliance discovery and scale out

• Simulate hardware failure and node replacement

Step 4: Cable & On


Rack and cable your EVO:RAIL appliance as shown in Figure 3 if you are using
one 10GbE switch. Use Figure 4 if you are using two 10GbE switches for
redundancy. The figures illustrate a vendor-independent, simple network
setup. The exact hardware configurations will vary, depending on your switch
manufacturer. After the appliance is cabled, power on all four nodes on the
first appliance in an EVO:RAIL cluster.

Do not turn on any nodes in other appliances in an EVO:RAIL cluster until you
have completed the full configuration of the first appliance. Each appliance
must be added to an EVO:RAIL cluster one at a time. See Adding Appliances
to an EVO:RAIL Cluster.

Figure 3. Rear view of one deployment of Hitachi Unified Compute Platform
1000 for VMware EVO:RAIL appliance connected to one TOR
switch.

Figure 4. Generic example of one deployment of EVO:RAIL connected to two
10GbE switches for redundancy.

Step 5: Management VLAN


If you have decided to use a tagged management VLAN for EVO:RAIL, you will
need to customize the management VLAN directly on each ESXi host via the
ESXi Command Line Interface (CLI) before using the EVO:RAIL user interface
to configure the appliance.

To customize the management VLAN before EVO:RAIL is initially configured,
changes are required for two different portgroups on all ESXi hosts. The first
portgroup is the ESXi “Management Network”, and the second portgroup is the
initial EVO:RAIL management network, called “VM Network”. During
configuration the second portgroup is renamed “vCenter Server Network”.

Log in to each of the four ESXi hosts via the console interface (DCUI).

• Press <F2> to log in with the username root and the password Passw0rd!

• Press <ALT-F1> to get to the ESXi shell.

Note: You might need to add Hot Keys on the BMC remote console for <ALT-
F1> and <ALT-F2>.

• Log in to the shell with the username root and the password Passw0rd!

• Execute the following ESXi commands with the <VLAN_ID> from Row 1 in
the EVO:RAIL Network Configuration Table:

esxcli network vswitch standard portgroup set -p "Management Network" -v <VLAN_ID>

esxcli network vswitch standard portgroup set -p "VM Network" -v <VLAN_ID>

/etc/init.d/loudmouth restart

• To verify the VLAN ID was set correctly, run the following command:

esxcli network vswitch standard portgroup list
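
Assuming a management VLAN ID of 10 was set, the output should show the
new VLAN ID against both portgroups, along these lines (an illustrative sketch;
the exact columns may vary slightly by ESXi version):

Name                  Virtual Switch  Active Clients  VLAN ID
--------------------  --------------  --------------  -------
Management Network    vSwitch0                     1       10
VM Network            vSwitch0                     0       10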

If your management VLAN is customized on-site, your backup configBundle
will not include the new VLAN. If your appliance is ever reset, the
management VLAN will have to be reconfigured.

Documentation for the vSphere/ESXi command line interface is provided at
http://pubs.vmware.com/vsphere-60/index.jsp#com.vmware.vsphere.scripting.doc/GUID-7F7C5D15-9599-4423-821D-7B1FE87B3A96.html

Step 6: Connect & Configure


If you have successfully followed all of the previous steps, your network setup
is complete and you are ready to connect to EVO:RAIL from your
workstation/laptop. This section is only used for the first appliance in an
EVO:RAIL cluster.
a. Connect a workstation/laptop to access the EVO:RAIL initial IP address on
your selected management VLAN. It must be either plugged into the 10GbE
switch or able to logically reach the EVO:RAIL management VLAN from
elsewhere on your network.
Use the temporary EVO:RAIL initial IP address that was pre-configured on
the appliance, 192.168.10.200/24 (Row 2 in the EVO:RAIL Network
Configuration Table).
However, if you cannot reach 192.168.10.200/24, you can change the
initial IP address directly on ESXi host #1, following the instructions in
Appendix G.
b. Configure the network settings on your workstation/laptop to talk to
EVO:RAIL.

Your workstation/laptop will need to be able to reach both the EVO:RAIL initial
IP address and your selected permanent EVO:RAIL IP address.

Example:

                                 EVO:RAIL              Workstation/laptop
                                 IP address/netmask    IP address        Subnet mask      Gateway
Initial (temporary)              192.168.10.200/24     192.168.10.150    255.255.255.0    192.168.10.254
Post-configuration (permanent)   10.10.10.100/24       10.10.10.150      255.255.255.0    10.10.10.254

It may be possible to give your workstation/laptop or your jump server two IP
addresses, which allows for a smoother experience. Depending on your
workstation/laptop, this can be implemented in several ways (such as dual-
homing or multi-homing). Otherwise, change the IP address on your
workstation/laptop when instructed to in the EVO:RAIL user interface and then
return to the EVO:RAIL user interface.
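
For example, a second address can usually be added without removing the
first. The following is a sketch using the example addresses above; the
interface names "Ethernet" and "eth0" are placeholders for your actual
adapter:

On Windows:
    netsh interface ipv4 add address "Ethernet" 192.168.10.150 255.255.255.0

On Linux:
    ip addr add 192.168.10.150/24 dev eth0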
c. In the next section, EVO:RAIL Initial Configuration, you will browse to
the EVO:RAIL initial IP address to configure and build your appliance.
d. Configure your corporate DNS server for all EVO:RAIL hostnames and IP
addresses unless you are in an isolated environment.

DNS is required for EVO:RAIL because some management operations, such as
importing an OVA file, require a FQDN for direct host access. To add a DNS
server after you have configured EVO:RAIL, see the VMware Knowledge Base
(http://kb.vmware.com/kb/2107249).

If you are in an isolated environment, you will need to use the DNS
server that is built into vCenter Server. To manage EVO:RAIL via your
workstation/laptop, configure your laptop’s network settings to use the
vCenter Server IP address (Row 21) for DNS. EVO:RAIL’s IP addresses and
hostnames are configured for you.

If you are using your corporate DNS server(s) for EVO:RAIL (Row 5), add the
hostnames and IP addresses for EVO:RAIL, vCenter Server, Log Insight, and
each ESXi host (see the naming scheme in Hostnames).

vMotion and Virtual SAN IP addresses are not configured for routing by
EVO:RAIL and there are no hostnames.

Example of EVO:RAIL hostnames and IP addresses for Release 2.x configured
on a DNS server (Release 1.x would not have an evorail.*.* entry):

esxi-host01.localdomain.local 192.168.10.1

esxi-host02.localdomain.local 192.168.10.2

esxi-host03.localdomain.local 192.168.10.3

esxi-host04.localdomain.local 192.168.10.4

evorail.localdomain.local 192.168.10.100

vcserver.localdomain.local 192.168.10.101

loginsight.localdomain.local 192.168.10.102
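
Once the records are in place, a quick spot-check from your workstation
confirms that the names resolve (using the example hostnames above;
substitute your own):

    nslookup evorail.localdomain.local
    nslookup esxi-host01.localdomain.local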

e. When your appliance is ready, refer to the Management & Maintenance
section in this document. From now on, you will connect to the EVO:RAIL
user interface using either the EVO:RAIL IP address (Row 22) or the fully-
qualified domain name (FQDN) (Row 18) that you have configured on your
DNS server (e.g. https://evorail.yourcompany.com). In Release 2.x, do not
specify a port because the default (443) is used. In Release 1.x, you must
specify port 7443 to reach the EVO:RAIL user interface and port 9443 to
reach the vSphere Web Client on vCenter Server.
When you add more appliances to an EVO:RAIL cluster, be sure to follow
the steps in Adding Appliances to an EVO:RAIL Cluster in this
document.

Avoiding Common Network Mistakes


To avoid common network mistakes with the VMware EVO:RAIL solution:
1. Follow all the network prerequisites described in this document. Otherwise
EVO:RAIL will install properly, but EVO:RAIL will not function correctly in
the future. You must fill in the EVO:RAIL Network Configuration Table.
2. If you have separate teams for network and servers in your data center,
you need to work together to design the network and configure the switch
or switches.
3. Some network configuration errors cannot be recovered from and you will
need HDS to reset your appliance to factory defaults. When EVO:RAIL is
reset to factory defaults, all data is lost.
4. Read the vendor instructions for your TOR switch.
a. Remember to configure multicast and don’t block IPv6 on your 10GbE
switch. Re-read the sections on 10GbE switches and VLANs in this
document.
b. Remember that management traffic will be untagged on the native
VLAN on your 10GbE switch, unless your appliance has been
customized for a specific management VLAN.
c. If you have two or more switches you must make sure that IPv4
multicast and IPv6 multicast traffic is transported between them.
5. Remember to connect and enable the ports.
a. Make sure your management gateway IP address is accessible. It is
required for vSphere High Availability (HA) to work correctly. You can use a
corporate gateway on your EVO:RAIL network segment, or you may be
able to configure your 10GbE L3 switch as the gateway. When vSphere
HA is not working, you will see a “network isolation address” error.
EVO:RAIL will continue to function, but it will not be protected by the
vSphere HA feature.

http://pubs.vmware.com/vsphere-60/index.jsp#com.vmware.vsphere.avail.doc/GUID-5432CA24-14F1-44E3-87FB-61D937831CF6.html
b. Make sure you can reach your DNS server from the EVO:RAIL, vCenter
Server, and ESXi network addresses. Then update your DNS server with
all EVO:RAIL hostnames and IP addresses.
c. If you have configured Active Directory, NTP servers, proxy servers, or
a third-party syslog server, you must be able to reach them from all of
your configured EVO:RAIL IP addresses.
d. You must determine all IP addresses before you can configure
EVO:RAIL. You cannot easily change the IP addresses after you have
configured EVO:RAIL.
6. Don’t try to plug your workstation/laptop directly into a server node on
EVO:RAIL; plug it into your network or 10GbE switch and make sure that
it is logically configured to reach EVO:RAIL.
7. If you are using SFP+, NIC and switch connectors and cables must use
the same wavelength. Contact HDS for the type of SFP+ connector on your
appliance.

3
Deployment and Initial Configuration
This chapter describes the steps to deploy the Hitachi Unified Compute
Platform 1000 for VMware EVO:RAIL appliance and its initial configuration:
 How to Configure EVO:RAIL
 Initial Configuration User Interface

How to Configure EVO:RAIL
Prerequisites:

Before you proceed with the initial configuration of your new EVO:RAIL
appliance:

• Please review the physical power, space, and cooling requirements for the
expected resiliency level of the appliance. This information is available in
the Physical Requirements section in this document.

• Please review the Prerequisites and Checklist section and fill in the
EVO:RAIL Network Configuration Table. This is essential to help ensure
smooth deployment and configuration.

• If you have not done so before, double-check your Network Configuration.
Procedures to confirm each of the following will vary according to your
network architecture:

o Confirm that you can ping (or point your browser to) the EVO:RAIL
initial IP address (Row 2). If not, return to Step 6.

o Confirm that your gateway is reachable (Row 24).

o Confirm that your DNS server(s) are reachable, unless you are in an
isolated environment (Row 5).

o Confirm that IPv4 multicast and IPv6 multicast are enabled for the
VLANs described in this document.

How to Configure EVO:RAIL

There are two ways to configure Hitachi UCP 1000 for VMware EVO:RAIL:

• Step-by-step: Using the step-by-step user interface (or “customize” in
Release 1.x)

• Configuration File: Uploading a JSON-formatted configuration file that
you created. See Appendix B for the file format and valid values.

EVO:RAIL verifies the configuration data, and then builds the appliance.
EVO:RAIL implements data services, creates the new ESXi hosts, and sets up
vCenter Server, EVO:RAIL, vMotion and Virtual SAN.

Use values from the rows of your EVO:RAIL Network Configuration Table
as follows in the user interface:

• System

Enter your time zone and your existing NTP and DNS server(s) from Rows 3-5.
Enter the Active Directory (optional) domain, username and password from
Rows 6-8. Enter the IP address, port, username, and password for your proxy
server (optional) from Rows 9-12.

• Management

Enter the ESXi host naming scheme from Rows 13-16, vCenter Server
hostname from Row 17, and EVO:RAIL hostname from Row 18. Enter the IP
addresses, subnet mask, and gateway for ESXi, vCenter Server, and EVO:RAIL
from Rows 19-24. Enter the ESXi hosts and vCenter Server/EVO:RAIL
passwords from Rows 25-26.

• vSphere vMotion

Enter the VLAN ID, IP addresses, and subnet mask for vSphere vMotion from
Rows 27-30.

• Virtual SAN

Enter the VLAN ID, IP addresses, and subnet mask for Virtual SAN from Rows
31-34.

• VM Networks

Enter the VLAN IDs and names for the VM Networks from Rows 35-38.

• Solutions

For logging, enter the IP address and hostname for vRealize Log Insight or for
an existing third-party syslog server (optional) in your network (Rows 39-41).
Enter two IP addresses from Rows 42-43 for the two HDS Solution VMs.

o Hitachi Data Ingestor (HDI)

o Hitachi Compute Advisor (HCA)

Initial Configuration User Interface
This describes the initial configuration procedures for Hitachi Unified Compute
Platform 1000 for VMware EVO:RAIL.

Use the information from your EVO:RAIL Network Configuration Table as
you follow these steps in the EVO:RAIL user interface.

1. Browse to the EVO:RAIL initial IP address; for example,
https://192.168.10.200 (Release 2.x) or https://192.168.10.200:7443
(Release 1.x). Ignore any browser warnings about security (for example,
by clicking “Advanced” and “Proceed”). You will then see the EVO:RAIL
welcome splash page.

2. Click Get Started. Then if you agree, accept the EVO:RAIL End-User
License Agreement (EULA).

3. In Release 2.x, we ask you if you have read this document and set up your
network correctly. If you have done so, check the following boxes and then
click Next.

• I set up my switch

• I set up my management VLAN

4. Click Step-by-step to configure hostnames, IP addresses, VLAN IDs, and
passwords. Using one of the two previous sections to guide you, enter your
values from the EVO:RAIL Network Configuration Table.

Alternatively, click Configuration File to upload a JSON-formatted
configuration file that you have created, and then click Upload
Configuration File in the lower right part of the webpage.

5. Carefully enter your data or review each configuration field.

6. Click the Review First (Release 2.x only) or Validate button. EVO:RAIL
verifies the configuration data, checking for conflicts.

7. After validation is successful, click the Build EVO:RAIL button.

8. The new IP address for EVO:RAIL will be displayed.

Note: Click Start Configuration. Ignore any browser messages about
security (for example, by clicking “Advanced” and “Proceed”).

Note: You may need to manually change the IP settings on your
workstation/laptop to be on the same subnet as the new EVO:RAIL IP
address (Row 22).

Note: If your workstation/laptop cannot connect to the new IP address that
you configured, you will get a message to fix your network and try again. If
you are unable to connect to the new IP address after 20 minutes,
EVO:RAIL will revert to its unconfigured state and you will need to re-enter
your configuration at the initial IP address (Row 2).

9. After the appliance build process starts, if you close your browser, you will
need to browse to the new IP address (Row 22).

10. Progress is shown as your appliance is built.

When you see the Hooray! page, click the Manage EVO:RAIL button to
continue to EVO:RAIL management. You should also bookmark this IP
address in your browser for future use.

From now on, refer to the Management & Maintenance section in this
document.

Post-Configuration changes required for UCP 1000 ROBO Edition
When using a Hitachi UCP 1000 ROBO Edition with 1GbE switches, the
vSphere vMotion bandwidth must be limited to ~400 Mbps.

Apply the following configuration ONLY if your appliance is the Hitachi UCP
1000 for VMware EVO:RAIL ROBO Edition.

Procedure:
1. Log in to vSphere Web Client using an account with administrator
privileges. You will be directed to vCenter Home.
2. From the Inventory Lists, select Networking.
3. Under Marvin-Datacenter, expand EVO:RAIL Distributed Switch.
4. Right-click the vSphere vMotion distributed port group.
5. Then click Edit Settings.
6. Click Traffic shaping and make the following changes:

                               Ingress traffic shaping   Egress traffic shaping
Status                         Enabled                   Enabled
Average bandwidth (kbits/s)    429496                    429496
Peak bandwidth (kbits/s)       429496                    429496
Burst size (KB)                10240                     10240

The result should match the values above.

7. Click OK.
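
If you prefer to script this change, the following PowerCLI sketch applies the
same values. It assumes the VMware PowerCLI VDS module; cmdlet and
parameter names should be verified against your PowerCLI version before
use. Note that PowerCLI takes bandwidth in bits per second and burst size in
bytes, so 429496 kbits/s becomes 429496000 and 10240 KB becomes
10485760:

    Connect-VIServer vcserver.localdomain.local
    $pg = Get-VDPortgroup -Name "vSphere vMotion"
    # Shape inbound (ingress) traffic on the port group
    $pg | Get-VDTrafficShapingPolicy -Direction Ingress |
        Set-VDTrafficShapingPolicy -Enabled $true -AverageBandwidth 429496000 -PeakBandwidth 429496000 -BurstSize 10485760
    # Shape outbound (egress) traffic the same way
    $pg | Get-VDTrafficShapingPolicy -Direction Egress |
        Set-VDTrafficShapingPolicy -Enabled $true -AverageBandwidth 429496000 -PeakBandwidth 429496000 -BurstSize 10485760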

Adding Appliances to an EVO:RAIL Cluster
Hitachi UCP 1000 for VMware EVO:RAIL can scale out to sixteen appliances for
up to 64 ESXi hosts all on one Virtual SAN datastore, backed by a single
vCenter Server and EVO:RAIL instance. Deployment, configuration, and
management are handled by EVO:RAIL, allowing the compute capacity and the
Virtual SAN datastore to grow automatically. New appliances are automatically
discovered and easily added to an EVO:RAIL cluster.

If you plan to scale out to multiple EVO:RAIL appliances in a cluster over time,
allocate extra IP addresses for each of the ESXi, vMotion, and Virtual SAN IP
pools when you configure the first appliance (twelve extra IP addresses per
appliance). Then when you add appliances to a cluster, you will only need to
enter the ESXi and EVO:RAIL / vCenter Server passwords.

Note: If you have multiple independent EVO:RAIL clusters, we recommend
using different VLAN IDs for Virtual SAN traffic and for management across
multiple EVO:RAIL clusters. Otherwise, all appliances on the same network will
see all multicast traffic.

Step 1: Plan
Use the Prerequisites and Checklist to make sure you are ready for
additional appliances. Work with your team to make any decisions that were
not made earlier.

 Eight 10GbE ports for each EVO:RAIL appliance on your 10GbE switch(es)

 Four contiguous IP addresses on the management VLAN for ESXi hosts for
each appliance

 Four contiguous IP addresses on the Virtual SAN VLAN for each appliance

 Four contiguous IP addresses on the vSphere vMotion VLAN for each
appliance

 Optional: Extra capacity and IP addresses for four out-of-band
management ports for each appliance

 Be sure you know the ESXi host and vCenter Server/EVO:RAIL passwords
that were configured on the first EVO:RAIL appliance in the cluster

Step 2: Set Up Switch
Before you plug in your new EVO:RAIL appliance, configure each port that will
be used by the new appliance just like you configured the switch for the first
EVO:RAIL appliance. Configure the management VLAN, Virtual SAN VLAN,
vMotion VLAN, and VM Network VLANs on each EVO:RAIL 10GbE port,
including IPv4 multicast and IPv6 multicast. Disable link aggregation
(LACP/EtherChannel) on each EVO:RAIL 10GbE port.

Step 3: Hands-on Lab


If you haven’t done so already, you can test-drive “Add Appliance” in the
EVO:RAIL Hands-on Lab.

Step 4: Cable & On


Rack your new appliance and connect the 10GbE ports on EVO:RAIL to the
10GbE switch(es). Power on all four nodes on your new EVO:RAIL appliance.

Only one appliance can be added at a time. To add multiple appliances, power
on one appliance at a time, making sure that each is properly configured
before powering on the next appliance. Remember that powering on an
appliance consists of powering on all four nodes in the appliance.

An EVO:RAIL Release 2.x appliance cannot be added into an existing
EVO:RAIL Release 1.x cluster, and an EVO:RAIL Release 1.x appliance cannot
be added into an existing EVO:RAIL Release 2.x cluster; EVO:RAIL prevents
these possibilities and does not display the Add EVO:RAIL Appliance
button. Please contact HDS Support to upgrade your EVO:RAIL appliance or
cluster from 1.x to 2.x.

Step 5: Management VLAN


If you have decided to use a tagged management VLAN for your EVO:RAIL
cluster, you must customize the management VLAN directly on each appliance
via the ESXi Command Line Interface.

The first appliance will not discover any ESXi hosts that are not on the same
management VLAN.

Log in to each ESXi host in the new appliance and follow the management
VLAN instructions for the first appliance in the cluster.

Step 6: Add EVO:RAIL Appliance
Go to the EVO:RAIL user interface to see the first EVO:RAIL appliance in the
cluster detect each node in the new appliance. If you have enough IP
addresses, all you do is enter the passwords and click the Add EVO:RAIL
Appliance button. EVO:RAIL will seamlessly configure all services on the new
appliance and fully integrate it into the cluster. Use the procedure below.

Be sure to add any ESXi hostnames that were not previously entered in your
corporate DNS, unless you are in a totally isolated environment.

Procedure

Whenever EVO:RAIL detects a new appliance, the following message and
button are displayed on the EVO:RAIL Management page:

To add an EVO:RAIL appliance, follow these steps:


1. Click Add EVO:RAIL Appliance.
2. If you allocated enough IP addresses when you configured your previous
appliance, the IP Pool input fields will be grayed out.
If you do not have enough IP addresses:
a. Type the New Starting ESXi IP Pool addresses and the Ending ESXi
IP Pool addresses for the ESXi IP pool for the new appliance.
b. Type the New Starting vMotion IP Pool addresses and the Ending
vMotion IP Pool addresses for the new appliance.
c. Type the New Starting Virtual SAN IP Pool addresses and the
Ending Virtual SAN IP Pool addresses for the new appliance.
3. Type the Appliance ESXi Password.
4. Type the Appliance vCenter Server Password.
5. Click Add EVO:RAIL Appliance.
6. When the process is complete, click Finish.

4
Management and Maintenance
This chapter describes day-to-day management and maintenance of the
Hitachi Unified Compute Platform 1000 for VMware EVO:RAIL appliance:
 EVO:RAIL Management
 EVO:RAIL Maintenance

EVO:RAIL Management
Once the initial EVO:RAIL Appliance is deployed and configured, EVO:RAIL
Management is where you will perform your day-to-day tasks, such as the
following:
• Access EVO:RAIL Management
• Creating and Managing Virtual Machines (VMs)
• Monitor Virtual Machines (VMs)
• Monitor the Status of an EVO:RAIL Cluster, appliance, and node
• Add EVO:RAIL Appliances to an EVO:RAIL Cluster
• Activate EVO:RAIL Licenses
• Localize the User Interface
• Update EVO:RAIL software components

Access EVO:RAIL Management

To access EVO:RAIL Management:


• From your EVO:RAIL workstation/laptop, open the EVO:RAIL IP address in a
browser: https://<evo:rail-ip-address>
• Log in with the user name administrator@vsphere.local and the current
vCenter Server password set during EVO:RAIL Initial Configuration.

The latest versions of Mozilla Firefox, Google Chrome, and Microsoft Internet
Explorer are all supported. The minimum recommended screen resolution is
1280 × 1024.

Access vSphere Web Client

To access the vSphere® Web Client:

1. Click VMS on the left sidebar.

2. Click the VMware vSphere Web Client icon in the upper right corner of
the EVO:RAIL Management VMS page

3. Log in with the user name administrator@vsphere.local and the current
vCenter Server password.

NOTE: From vSphere Web Client, do not use service datastores when
creating or moving VMs. These datastores are only to be used by Hitachi Data
Systems Support.

Localize EVO:RAIL

EVO:RAIL Management is available in English, French, German, Japanese,
Korean, Simplified Chinese, Traditional Chinese, Spanish, and Portuguese. To
choose your language, follow these steps:

1. Click Config on the left sidebar.

2. On the General tab, under the Choose your Language section, click
your selection.

Creating and Managing Virtual Machines (VMs)


Create VMs

EVO:RAIL Management streamlines the creation of virtual machines with
simple selections for the following:
• Guest operating system
• Virtual machine size
• Network segment (VLAN)
• Security options

In EVO:RAIL Management, use the following procedure to create a virtual
machine:
1. To begin the virtual machine creation process, on the left menu, click
Create VM (Figure 5). After beginning the process, you have the option to
Cancel/Start Over to return to the beginning of this process. If you close
out of the Create VM process, the next time you select Create VM it will
resume at the same step in the process.

Figure 5

2. In the Enter VM name text box, type the name for your virtual machine.
3. To load your ISO image file to be used for your guest operating system,
follow one of these procedures:
a. If you are uploading the ISO image file for local storage, click Upload
Image.
i. Click Choose File to open the standard file selection pop-up.
ii. Locate the ISO image file on your local file system and select it.
iii. Click Open to return to EVO:RAIL Management.
iv. Click Upload Image. At this point the ISO image will be copied to
the Virtual SAN datastore.
NOTE: If the file is too large to upload, see the resolution section of
VMware Knowledge Base Article 2109915.
b. If you are uploading the ISO image file from a network drive, click
Mount NFS/CIFS.
i. Click the protocol of the network drive you are accessing: NFS or
CIFS
ii. Enter the URL in the format [protocol]:/[host]/path/to/file.iso
iii. Click Mount Remote Image. At this point the ISO image will be
copied to the Virtual SAN datastore.

c. If you are using a previously uploaded guest operating system, you will
be able to re-use an existing image by clicking on its name.

NOTE: EVO:RAIL Management does not support importing existing pre-
built copies of virtual machines in OVA/OVF format. To work around this,
use the vSphere Web Client interface to create a virtual machine from an
existing OVA/OVF. The template can then be viewed in EVO:RAIL
Management as a virtual machine, which you can clone to make a virtual
machine from the template.

Figure 6

4. Confirm the Guest OS Version from the dropdown list, and then click
Continue.
5. Under Select a VM size, click a virtual machine (VM) size: Small,
Medium, or Large.
NOTE: EVO:RAIL has a set of predefined virtual machine sizes based on
standard VMware recommendations for each guest operating system. See
Appendix D.
6. Click Select VM Size.
7. Select one or more network segments the VM should connect to.
8. Click Select VM Networks.

Figure 7

9. Click the security policy: No Policy, Risk Profile 3, Risk Profile 2, or
Risk Profile 1.
These profiles are a collection of Virtual Machine Advanced Settings, based
on a particular Risk Profile from the vSphere 5.5 Security Hardening Guide,
http://vmware.com/security/hardening-guides. Technical settings for each
profile are given in Appendix E.
 No Policy means that no security configuration options are applied to
the virtual machine.
 Risk Profile 3 specifies guidelines that should be implemented in all
environments. These are VMware best practices for all data centers.
 Risk Profile 2 specifies guidelines for more sensitive environments
or small/medium/large enterprises that are subject to strict
compliance rules.
 Risk Profile 1 specifies guidelines for the highest security
environments, such as top-secret government or military
installations, or anyone with extremely sensitive data or in a highly
regulated environment.

NOTE: By selecting a more secure policy, you will lose some virtual
machine functionality. This includes loss of automated tools, the inability to
shrink virtual machine disks, persistent disk mode only, no logging or
performance information, blocked device interactions, and limited remote
console connections. See the Hardening Guides for more details.
10. Click Create and Start a New VM.
11. After the virtual machine is created, the EVO:RAIL Management interface
displays.

Manage VMs

EVO:RAIL Management allows users to view all virtual machines in a grid. To
access this:

• Click VMs on the left sidebar

Figure 8

Use the Filter By and Sort By menus in the upper right corner to arrange the
virtual machines. Use Search to find virtual machines by name. If you click on
a virtual machine, you have the following options available, depending on the
state of your virtual machine. Click on the VM to view all the options.

Figure 9

The following table describes which options are available when the VM is
powered on and when it is powered off.

VM Option               VM Powered On             VM Powered Off
Install VMware Tools    Yes (see Note below)      No
Rename VM               Yes                       Yes
Eject ISO               Yes, when an ISO is       No
                        mounted
Open Console            Yes                       No
Clone VM                Yes                       Yes
Suspend/Resume VM       Yes                       No
Delete VM               No                        Yes
Power Off/On            Yes                       Yes

NOTE: To install VMware Tools your guest OS must already be installed on
your virtual machine, and the guest OS ISO image must be ejected from the
virtual CD drive. When the VM is powered on and the ISO has been
unmounted, this option is active. When the VM is powered on and the ISO is
mounted, this option will be disabled, but hovering over it will display the
message “VMware Tools have been successfully mounted…”. Once VMware
Tools is installed, this icon will not appear.

Monitoring an EVO:RAIL Cluster


EVO:RAIL provides facilitated monitoring procedures including:

• Monitoring EVO:RAIL appliance health

• Viewing EVO:RAIL events

Monitor Appliance Health

EVO:RAIL Management simplifies live compute management with health
monitors for CPU, memory, storage, and virtual machine usage for an entire
EVO:RAIL cluster, individual appliances, and individual nodes. To access this:

• Click Health on the left sidebar

Figure 10

The overall health of the EVO:RAIL cluster is displayed. The status is displayed
for storage IOPS, CPU usage and memory usage as follows:

Color     Description

Red       • Over 85% storage used.
          • Over 85% CPU utilization.
          • Over 85% memory used.

Yellow    • 75% to 85% storage used.
          • 75% to 85% CPU utilization.
          • 75% to 85% memory used.

Green     • Below 75% storage used.
          • Below 75% CPU utilization.
          • Below 75% memory used.

The total storage information for the EVO:RAIL cluster is also shown: total
capacity, amount used, amount free, and amount provisioned. Each EVO:RAIL
appliance in the cluster is shown with a simple red, yellow, or green status.
You can click the EVO:RAIL appliance tabs at the top of the screen to drill
down to the details for that appliance. You can return to the Overall System
by clicking that tab at the top of the screen.

For each EVO:RAIL appliance you can see the status of the items above and
each node, HDD, SSD, ESXi boot disk, and NIC. If everything is normal, a
green checkmark is displayed; if there is a warning, a yellow triangle is
displayed. A critical alarm is indicated with a red triangle. All alarms and
warnings are from vCenter Server.

View Events

EVO:RAIL Management provides access to the list of events occurring in
EVO:RAIL Management, whether initiated by EVO:RAIL or by the user. This is
available from Events on the left sidebar. Events can be viewed by Critical or
Most Recent.

Enable Hi-Track Remote Monitoring System

Hi-Track provides the remote monitoring and support functionality for Hitachi
products. Hitachi strongly recommends you install and configure Hi-Track to
monitor the UCP 1000 system at your site.

For more information, download the Hi-Track documentation from the Hitachi
Data Systems Support Portal at https://portal.hds.com. Please follow the
registration instructions to access the portal and search for “UCP 1000 for
VMware EVO:RAIL”.

Managing an EVO:RAIL Cluster


EVO:RAIL provides facilitated management procedures including:

• Licensing an EVO:RAIL appliance

• Updating EVO:RAIL software

• Adding an EVO:RAIL appliance to a cluster

• Shutting down an EVO:RAIL cluster or appliance

License EVO:RAIL

To license the Hitachi Unified Compute Platform 1000 for VMware EVO:RAIL
appliance, follow these steps:
1. Get your Partner Activation Code (PAC) from Hitachi Data Systems.
2. Go to your Hitachi Data Systems activation portal to enter your PAC or
PACs.
3. You will receive an email message with your license key or license keys.
4. Click Config on the left menu.
5. On the Licensing tab, enter your license key(s) for each appliance in the
cluster:
6. Click License Appliance.

Figure 11

7. If the license is part of the VMware vSphere Loyalty Program, perform the
following steps:
a. Go to MyVMware to group the required set of licenses into a single
license key.
i. Create a folder within MyVMware with the licenses called EVORAIL.
ii. Send a service request to the VMware license team with the license
keys you need combined into a single license key.

b. Type the single license key obtained from the VMware license team, and
then click License Appliance.
c. Confirm that you want to use this license, and then click License
Appliance.

Note: No other licenses are needed. Your EVO:RAIL appliance is now fully
licensed.

Update EVO:RAIL

EVO:RAIL supports vCenter Server, ESXi™, and EVO:RAIL software patches
and upgrades. With a minimum of four independent ESXi hosts in an EVO:RAIL
cluster, updates are non-disruptive and require zero downtime.

Refer to the release specific EVO:RAIL Customer Release Notes for update
instructions.

Add Appliances to an EVO:RAIL Cluster

EVO:RAIL can scale out to sixteen appliances for up to 64 ESXi hosts and one
Virtual SAN datastore, backed by a single vCenter Server and EVO:RAIL
instance. Deployment, configuration, and management are handled by
EVO:RAIL, allowing the compute capacity and the Virtual SAN datastore to
grow automatically when appliances are added to a cluster. New appliances
are automatically discovered and easily added to an EVO:RAIL cluster with a
few mouse clicks.

Software Version Restrictions

We recommend that all VMware components (ESXi, vCenter Server, and
EVO:RAIL) are at the same version on all EVO:RAIL appliances in an EVO:RAIL
cluster.

Follow all the planning steps and procedures from Deployment and Initial
Configuration > Adding Appliances to an EVO:RAIL Cluster.

Shutdown an EVO:RAIL Cluster or Appliance

An EVO:RAIL cluster, with one or more appliances, can be shut down and
restarted. In addition, a single EVO:RAIL appliance in a multiple-appliance
EVO:RAIL cluster can be shut down and restarted. The procedures are
documented in Appendix F in this document.

EVO:RAIL Maintenance
EVO:RAIL provides facilitated maintenance procedures including:

• Logs

• Hardware Replacement

Logs

EVO:RAIL Management provides several ways to access log information:

• Support Logs

• vRealize Log Insight

Support Logs

EVO:RAIL Management combines diagnostic information for vCenter Server,
ESXi, and EVO:RAIL into one log bundle. This log bundle can be uploaded to
technical support as part of a support request (SR). To download the log
information from your EVO:RAIL appliance in a zip file, follow these steps:
1. Click Config on the left menu.
2. On the Support tab, click Generate Support Bundle.
NOTE: This process can take from 20 minutes to several hours. It is run in
the background, so you can continue to use EVO:RAIL Management.
3. After generating the Support Log, click Download Support Bundle.
The zip file will be downloaded to your EVO:RAIL workstation/laptop.
NOTE: When using Chrome as your web browser, a warning message will
be displayed indicating the download can be harmful. Press Keep when
prompted.

vRealize Log Insight

EVO:RAIL deploys with vRealize Log Insight. However, you may choose to use
your own third-party syslog server or servers.

To use vRealize Log Insight, do one of the following:

• If you use your browser to open vRealize Log Insight, type the configured
IP address in the address bar. The user name is admin.
• If you use SSH to open vRealize Log Insight, the user name is root.

The password, in either case, is the one you specified for vCenter Server. See
the VMware vRealize Log Insight documentation.
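
For example, with the Log Insight IP address from Row 40 of the configuration
table (192.168.10.202 in the examples; illustrative only):

    https://192.168.10.202       (browser; log in as admin)
    ssh root@192.168.10.202      (SSH; log in as root)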

If you configured an existing third-party syslog server, follow the instructions
supplied with that product.

Hardware Replacement

The Hitachi Unified Compute Platform 1000 for VMware EVO:RAIL appliance
supports the replacement of the following Field Replacement Units (FRUs):
• Node
• CPU
• Motherboard
• Memory
• Storage adapter
• ESXi boot device
• Hard disk drives
• Solid-state drives
• Power supply

Contact Hitachi Data Systems for specific hardware replacement procedures.
Do not use the Hardware Replacement or Re-Add Node automated
processes unless directed to by Hitachi Data Systems support.

A
EVO:RAIL Network Configuration Table
This appendix contains the EVO:RAIL Network Configuration Table:
 EVO:RAIL Network Configuration Table

EVO:RAIL Network Configuration Table

(Record your own values in the Customer Values column of your working copy.)

Row  Category                                 Description                                    Example
1    EVO:RAIL Appliance:                      Set a management VLAN on ESXi before you       Native VLAN
     Management VLAN ID (optionally modify)   configure EVO:RAIL; otherwise management
                                              traffic will be untagged on the switch's
                                              Native VLAN
2    EVO:RAIL Appliance:                      If you cannot reach the default EVO:RAIL       192.168.10.200
     Initial IP Address (optionally modify)   initial IP address (192.168.10.200/24),
                                              set an alternate IP address
3    System: Global settings                  Time zone                                      UTC
4                                             NTP server(s)
5                                             DNS server(s)
6    System: Active Directory (optional)      Domain
7                                             Username
8                                             Password
9    System: HTTP Proxy Settings (optional)   IP Address
10                                            Port
11                                            Username
12                                            Password
13   Management: Hostnames                    ESXi hostname prefix                           host
14                                            Separator                                      None
15                                            Iterator                                       0X
16                                            Top-level domain                               localdomain.local
17                                            vCenter Server hostname                        vcenter
18                                            EVO:RAIL hostname                              evorail
19   Management: Networking                   ESXi starting address for IP pool              192.168.10.1
20                                            ESXi ending address for IP pool                192.168.10.4
21                                            vCenter Server IP address                      192.168.10.201
22                                            EVO:RAIL IP address                            192.168.10.200
23                                            Subnet mask                                    255.255.255.0
24                                            Gateway                                        192.168.10.254
25   Management: Passwords                    ESXi “root”
26                                            vCenter Server & EVO:RAIL
                                              administrator@vsphere.local
27   vSphere vMotion                          Starting IP address                            192.168.20.1
28                                            Ending IP address                              192.168.20.4
29                                            Subnet mask                                    255.255.255.0
30                                            VLAN ID                                        20
31   Virtual SAN                              Starting IP address                            192.168.30.1
32                                            Ending IP address                              192.168.30.4
33                                            Subnet mask                                    255.255.255.0
34                                            VLAN ID                                        30
35   VM Networks                              VM Network name and VLAN ID                    Sales / 110
36                                            VM Network name and VLAN ID                    Marketing / 120
37                                            …
38                                            Unlimited number in Release 2.0
39   Solutions: Logging                       vRealize Log Insight hostname                  loginsight
40                                            vRealize Log Insight IP address                192.168.10.202
41                                            Syslog server (instead of Log Insight)
42   Solutions: HDS Solutions                 Primary VM IP address (HDI)                    192.168.10.211
43                                            Secondary VM IP address (HCA)                  192.168.10.212

• The sample default configuration values represent VMware and Hitachi
Data Systems defaults.

B
JSON Configuration File
This appendix contains instructions how to modify and upload the JSON
configuration file:
 JSON Configuration File
 Upload Configuration File
 JSON File Format and Valid Values

JSON Configuration File
Before configuring EVO:RAIL, read the EVO:RAIL Network Configuration Table.
A JSON Configuration file can be uploaded in the EVO:RAIL initial configuration
user interface. Sample JSON configuration files such as the one listed in this
appendix are found in VMware Knowledge Base articles.

Important Notes:
• The JSON file format may change between EVO:RAIL releases. Please
get the sample JSON file that corresponds to the software release with
which your appliance was built at Hitachi Data Systems. Then edit the
sample file for your configuration.
• EVO:RAIL expects the data in the configuration file in a specific format. Any
changes to the JSON format may result in unexpected behavior and/or crashes.
• There is no built-in capability to generate and export an updated copy of an
EVO:RAIL JSON configuration file from the user interface.

Upload Configuration File


Create a custom configuration file with the following steps:
1. Obtain a sample JSON file for the EVO:RAIL release that you will be
configuring from the VMware Knowledge Base:
http://kb.vmware.com/kb/2106961
2. Edit your new configuration file to insert the values from the EVO:RAIL
Network Configuration Table (Appendix A).
3. Make sure that the filename has a “json” extension.
4. Make sure that the file is in valid JSON format because EVO:RAIL will not
validate the syntax. For example, a missing comma will cause the
configuration file to fail.

EVO:RAIL validates the content of a correctly formatted JSON file in the
same manner that it validates manual entries, verifying data entry and
performing deep validation prior to building the appliance.

5. Make this file accessible from your workstation/laptop.

Deploy EVO:RAIL as usual by configuring your 10GbE switch, racking and
cabling your new EVO:RAIL appliance, and powering on all four EVO:RAIL
nodes.

Step through the Initial Configuration User Interface section to upload
your JSON configuration file.
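
Because EVO:RAIL will not validate JSON syntax for you (see step 4 above), it
is worth checking the file before uploading it. One quick, illustrative way,
assuming Python is available on your workstation (the filename is a
placeholder):

    python -m json.tool evorail-config.json

If the syntax is valid, the formatted JSON is printed back; otherwise the
parser reports the line and column of the problem (a missing comma, for
example).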

JSON File Format and Valid Values
The JSON configuration file must be properly formatted. Also, the values must
be valid for EVO:RAIL and for your network. The following list contains the
field restrictions, both color-coded and in list format:
1. Variables in red can be replaced with custom names or IP addresses. All
red fields are required.
 minIP, maxIP, netmask, gateway, IP: valid IP addresses and
netmask in your network
 vlanId: valid numeric VLAN ID, configured on your top-of-rack switch
 name: alphanumeric string to identify a VM network segment
 prefix: alphanumeric string for the first part of an ESXi hostname
 tld: valid top-level domain name in your network
 vCenter, evorail: alphanumeric string for the vCenter Server and
EVO:RAIL hostnames

2. Fields in purple contain multiple options as follows:

"separator": “” (no separator) or “-“ (dash)
 The general formula for the FQDN (fully qualified domain name) of an
ESXi host is: <hostname><separator><iterator>.<domain>
 When using “-“ as the separator, the FQDN of an ESXi host is:
<hostname>-<iterator>.<domain> (i.e. host-01.vsphere.local)
 When using “” as the separator, the FQDN of an ESXi host is:
<hostname><iterator>.<domain> (i.e. host01.vsphere.local)
"iterator": “NUMERIC_N” or “NUMERIC_NN” or “ALPHA”
 ALPHA means that the first host starts with A, the second host with B,
etc.
 NUMERIC_N means that the first host starts with “1”, the second host
with “2”, etc.
 NUMERIC_NN means that the first host starts with “01”, the second
host with “02”, etc.
"logging": “LOGINSIGHT” or “SYSLOG”
 LOGINSIGHT means that Log Insight will be used as the log collection
server. When this option is used, "loginsightServer" and
"loginsightHostname" must be filled out.
 SYSLOG means that an external log collection server will be used as
the log collection server. When this option is used, "syslogServerCSV”
must be filled out.

"timezone":
 Any value listed in
http://en.m.wikipedia.org/wiki/List_of_tz_database_time_zones in
the TZ column is accepted as valid input.
3. Fields in green are optional. If the field is not used, it should be left blank
with just opening and closing quotes, i.e. “”.
 activeDirectoryDomain, activeDirectoryUsername,
proxyUsername: alphanumeric strings
 proxyServer, proxyPort: IP address and port identifier
 ntpServerCSV, dnsServerCSV: a comma-separated list of IP
addresses or hostnames (NTP only)
4. The field in brown, syslogServerCSV, is required if the “logging” field is
set to “SYSLOG”. Up to two IP addresses (or FQDNs) are supported in this
field.
Otherwise, if the “logging” field is set to "LOGINSIGHT", syslogServerCSV
must be left blank.
5. Fields in yellow, loginsightServer and loginsightHostname, are
required if the “logging” field is set to “LOGINSIGHT”.
Otherwise, if the field is set to "SYSLOG", loginsightServer and
loginsightHostname must be left blank.
6. Fields that contain passwords should only be filled out by a customer from
the EVO:RAIL interface. They should not be pre-filled in the JSON file for
security reasons.
7. Do not modify the JSON syntax or any other field.

The following is a sample JSON file based on EVO:RAIL Release 2.0. Data is
mapped to the rows in the EVO:RAIL Network Configuration Table; the
“Rows …” annotations on the right are for reference only and are not part of
the file.

{
  "version": "2.0.0",
  "network": {
    "dhcp": false,
    "hosts": {
      "management": {
        "pools": [{
          "minIp": "192.168.10.1",
          "maxIp": "192.168.10.4"
        }],
        "netmask": "255.255.255.0",
        "gateway": "192.168.10.254"
      },                                                          Rows 19, 20, 23, 24
      "vsan": {
        "pools": [{
          "minIp": "192.168.30.1",
          "maxIp": "192.168.30.4"
        }],
        "netmask": "255.255.255.0",
        "vlanId": 30
      },                                                          Rows 31-34
      "vm": [{
        "name": "VM Network A",
        "vlanId": 110
      }, {
        "name": "VM Network B",
        "vlanId": 120
      }],                                                         Rows 35-38
      "vmotion": {
        "pools": [{
          "minIp": "192.168.20.1",
          "maxIp": "192.168.20.4"
        }],
        "netmask": "255.255.255.0",
        "vlanId": 20
      }                                                           Rows 27-30
    },
    "vcenter": {
      "ip": "192.168.10.200"
    },                                                            Rows 21-22
    "evorail": {
      "ip": "192.168.10.201"
    }
  },
  "hostnames": {
    "hosts": {
      "prefix": "host",
      "separator": "-",
      "iterator": "NUMERIC_NN"
    },                                                            Rows 13-16
    "tld": "localdomain.local",
    "vcenter": "vcserver",
    "evorail": "evorail"                                          Rows 17-18
  },
  "passwords": {
    "esxiPassword": "",
    "esxiPasswordConfirm": "",
    "vcPassword": "",
    "vcPasswordConfirm": "",                                      Rows 25-26
    "activeDirectoryDomain": "optional_leave_blank_if_not_needed",
    "activeDirectoryUsername": "optional_leave_blank_if_not_needed",
    "activeDirectoryPassword": "",
    "activeDirectoryPasswordConfirm": ""                          Rows 6-8
  },
  "global": {
    "logging": "LOGINSIGHT",                                      Rows 39-41
    "timezone": "UTC",                                            Row 3
    "loginsightServer": "required_if_logging_field_is_LOGINSIGHT-otherwise_it_must_be_blank",
    "loginsightHostname": "required_only_if_logging_field_is_LOGINSIGHT-otherwise_it_must_be_blank",      Rows 39-40
    "ntpServerCSV": "optional_leave_blank_if_not_needed",         Row 4
    "syslogServerCSV": "required_only_if_logging_field_is_SYSLOG-otherwise_it_must_be_blank",             Row 41
    "dnsServerCSV": "optional_leave_blank_if_not_needed",         Row 5
    "proxyServer": "optional_leave_blank_if_not_needed",
    "proxyPort": "optional_leave_blank_if_not_needed",
    "proxyUsername": "optional_leave_blank_if_not_needed",
    "proxyPassword": ""                                           Rows 9-12
  },
  "vendor": {
    "ovfs": [{
      "ip": "192.168.10.211"
    }, {
      "ip": "192.168.10.212"
    }]                                                            Rows 42-43
  }
}

In the “vendor” section, enter the two IP addresses required for the Hitachi
Solution VMs, similar to what is shown above.

C
Physical Requirements
For physical details about the Hitachi Unified Compute Platform 1000 for
VMware EVO:RAIL appliance, see the following document: “QuantaPlex
Series T41S-2U/T41SP-2U User’s Guide” (www.quantaqct.com).

D
Virtual Machine Size by Guest
Operating System
This appendix has a set of predefined virtual machine sizes based on standard
VMware recommendations for each guest operating system:
 Virtual Machine Profiles

Virtual Machine Profiles

Guest Operating System                  EVO:RAIL Size   vDisk   vCPU   Core   vMEM
Red Hat Enterprise Linux 7 (64-bit)     Small           16 GB   1      1      1 GB
Red Hat Enterprise Linux 6 (64-bit)     Medium          24 GB   2      1      2 GB
Red Hat Enterprise Linux 5 (64-bit)     Large           32 GB   4      1      6 GB
Ubuntu Linux (64-bit)
CentOS 4/5/6 (64-bit)

Guest Operating System                  EVO:RAIL Size   vDisk   vCPU   Core   vMEM
Microsoft Windows Server 2012 (64-bit)  Small           40 GB   1      1      1 GB
Microsoft Windows Server 2008 (64-bit)  Medium          60 GB   2      1      4 GB
                                        Large           80 GB   4      1      8 GB

Guest Operating System                  EVO:RAIL Size   vDisk   vCPU   Core   vMEM
Microsoft Windows Server 2003 (64-bit)  Small           16 GB   1      1      1 GB
                                        Medium          32 GB   2      1      4 GB
                                        Large           60 GB   4      1      8 GB

Guest Operating System                  EVO:RAIL Size   vDisk   vCPU   Core   vMEM
Microsoft Windows 10 (64-bit)           Small           32 GB   1      1      2 GB
                                        Medium          40 GB   2      1      4 GB
                                        Large           60 GB   2      1      8 GB

Guest Operating System                  EVO:RAIL Size   vDisk   vCPU   Core   vMEM
Microsoft Windows 8 (64-bit)            Small           32 GB   1      1      2 GB
                                        Medium          40 GB   2      1      4 GB
                                        Large           60 GB   2      1      8 GB

Guest Operating System                  EVO:RAIL Size   vDisk   vCPU   Core   vMEM
Microsoft Windows 7 (64-bit)            Small           32 GB   1      1      1 GB
                                        Medium          40 GB   2      1      4 GB
                                        Large           60 GB   2      1      8 GB

Guest Operating System                  EVO:RAIL Size   vDisk   vCPU   Core   vMEM
Microsoft Windows XP                    Small           16 GB   1      1      1 GB
                                        Medium          32 GB   2      1      2 GB
                                        Large           60 GB   2      1      4 GB

E
Security Profile Details
This appendix describes a set of security profiles. The policies that match
the VMware Security Hardening Guide for vSphere 6.0 appear in the virtual
machine advanced settings (key/value pairs) for each of the three security
risk profiles. For more detailed information on securing virtual machines, see
http://pubs.vmware.com/vsphere-60/index.jsp#com.vmware.vsphere.security.doc/GUID-CF45F448-2036-4BE3-8829-4A9335072349.html.
 Audit only
 Risk Profile 1
 Risk Profile 2
 Risk Profile 3

Audit only
The following parameters from the VMware Security Hardening Guide for
vSphere 6.0 are treated as audit only and are not added to the VM
configuration by EVO:RAIL:

ethernetn.filtern.name
floppyX.present
parallelX.present
pciPassthru*.present
sched.mem.pshare.salt
scsiX:Y.mode
serialX.present

Risk Profile 1
These guidelines should be implemented only in the highest-security
environments; for example, top-secret government or military installations,
or sites handling extremely sensitive data.

^ethernet[0-9]*.filter[0-9]*.name = null
isolation.bios.bbs.disable = true
isolation.device.connectable.disable = true
isolation.device.edit.disable = true
isolation.ghi.host.shellAction.disable = true
isolation.monitor.control.disable = true
isolation.tools.autoInstall.disable = true
isolation.tools.copy.disable = true
isolation.tools.diskShrink.disable = true
isolation.tools.diskWiper.disable = true
isolation.tools.dispTopoRequest.disable = true
isolation.tools.dnd.disable = true
isolation.tools.getCreds.disable = true
isolation.tools.ghi.autologon.disable = true
isolation.tools.ghi.launchmenu.change = true
isolation.tools.ghi.protocolhandler.info.disable = true
isolation.tools.ghi.trayicon.disable = true
isolation.tools.guestDnDVersionSet.disable = true
isolation.tools.hgfsServerSet.disable = true
isolation.tools.memSchedFakeSampleStats.disable = true
isolation.tools.paste.disable = true
isolation.tools.setGUIOptions.enable = false
isolation.tools.trashFolderState.disable = true
isolation.tools.unity.disable = true
isolation.tools.unity.push.update.disable = true
isolation.tools.unity.taskbar.disable = true
isolation.tools.unity.windowContents.disable = true
isolation.tools.unityActive.disable = true
isolation.tools.unityInterlockOperation.disable = true
isolation.tools.vixMessage.disable = true
isolation.tools.vmxDnDVersionGet.disable = true
logging = false
RemoteDisplay.vnc.enabled = false
Security.AccountUnlockTime = 900
tools.guestlib.enableHostInfo = false
tools.setInfo.sizeLimit = 1048576

Risk Profile 2
These guidelines should be implemented in more sensitive environments; for
example, those handling sensitive data or subject to stricter compliance
rules.

^ethernet[0-9]*.filter[0-9]*.name = null

isolation.device.connectable.disable = true
isolation.device.edit.disable = true
isolation.tools.autoInstall.disable = true
isolation.tools.copy.disable = true
isolation.tools.diskShrink.disable = true
isolation.tools.diskWiper.disable = true
isolation.tools.dnd.disable = true
isolation.tools.paste.disable = true
isolation.tools.setGUIOptions.enable = false
log.keepOld = 10
log.rotateSize = 100000
RemoteDisplay.vnc.enabled = false
Security.AccountUnlockTime = 900
tools.guestlib.enableHostInfo = false
tools.setInfo.sizeLimit = 1048576

Risk Profile 3
These guidelines should be implemented in all environments not subject to
stricter security.

^ethernet[0-9]*.filter[0-9]*.name = null
isolation.device.connectable.disable = true
isolation.device.edit.disable = true
isolation.tools.copy.disable = true
isolation.tools.diskShrink.disable = true
isolation.tools.diskWiper.disable = true
isolation.tools.dnd.disable = true
isolation.tools.paste.disable = true
isolation.tools.setGUIOptions.enable = false
log.keepOld = 10
log.rotateSize = 100000
RemoteDisplay.vnc.enabled = false
Security.AccountUnlockTime = 900
tools.setInfo.sizeLimit = 1048576
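
These key/value pairs are standard virtual machine advanced settings
(extraConfig), so they can also be applied outside the EVO:RAIL user
interface. The following is a minimal pyvmomi sketch, not the EVO:RAIL
implementation; the vCenter address, credentials, and VM name are
placeholders, and only a few Risk Profile 3 keys are shown:

# Sketch: apply risk-profile advanced settings to one VM with pyvmomi.
# The vCenter address, credentials, and VM name are placeholders.
import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

RISK_PROFILE_3 = {
    "isolation.device.connectable.disable": "true",
    "isolation.tools.copy.disable": "true",
    "isolation.tools.paste.disable": "true",
    "RemoteDisplay.vnc.enabled": "false",
    # ...remaining Risk Profile 3 keys from this appendix
}

ctx = ssl._create_unverified_context()  # lab use only; verify certificates in production
si = SmartConnect(host="vcserver.localdomain.local",
                  user="administrator@vsphere.local", pwd="...", sslContext=ctx)
content = si.RetrieveContent()
view = content.viewManager.CreateContainerView(
    content.rootFolder, [vim.VirtualMachine], True)
vm = next(v for v in view.view if v.name == "my-client-vm")

# Advanced settings are pushed as extraConfig key/value pairs in a reconfigure task.
spec = vim.vm.ConfigSpec(extraConfig=[
    vim.option.OptionValue(key=k, value=v) for k, v in RISK_PROFILE_3.items()])
vm.ReconfigVM_Task(spec=spec)  # returns a task; wait for it to complete
Disconnect(si)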

F
EVO:RAIL Cluster Shutdown and Restart Procedures
The following are manual procedures to shut down and restart an EVO:RAIL
cluster. There are two categories of virtual machines in these procedures:

System: VMware vCenter Server Appliance, VMware vRealize Log Insight, and any
Hitachi Data Systems solution virtual machines.

Client: All other virtual machines.

 EVO:RAIL Cluster Shutdown Procedure
 EVO:RAIL Cluster Restart Procedure
 EVO:RAIL Single Appliance Shutdown Procedure
 EVO:RAIL Single Appliance Restart Procedure

EVO:RAIL Cluster Shutdown Procedure
The following manual procedure is for graceful cluster shutdown of a complete
cluster (single or multiple appliances):
1. From EVO:RAIL Management, click VMS on the left sidebar.
2. Power off all client virtual machines with the following steps:
a. If a virtual machine is on, click Power Off/Shutdown.
b. A confirmation message appears; click Confirm Shutdown.
c. Repeat for each virtual machine.

Note: Do not power off any HDS Solutions virtual machines.

3. Click the VMware vSphere Web Client icon in the top-right corner, and
log in with administrator privileges. You will be in vCenter Home.
4. Disable vSphere DRS and vSphere High Availability with the following steps
(for a scripted alternative, see the sketch at the end of this procedure):
a. Return to vCenter Home.
b. From the Inventory Lists, select Clusters.
c. Select MARVIN-Virtual-SAN-Cluster-<id> in either the left or center
panes.
d. Select Manage, then Settings in the center pane.
e. Under Services, select vSphere DRS.
f. If the center pane says vSphere DRS is Turned ON, click Edit.
g. Uncheck Turn ON vSphere DRS.
h. Click OK.
i. Under Services, select vSphere HA.
j. If the center pane says vSphere HA is Turned ON, click Edit.
k. Uncheck Turn ON vSphere HA.
l. Click OK.
5. Migrate all the system VMs (VMware vCenter Server Appliance, vRealize
Log Insight, any EVO:RAIL partner VMs, and the EVO:RAIL Orchestration
Appliance) to the first ESXi host with the following steps:
a. From Inventory Trees, click Virtual Machines.
b. Right-click the virtual machine and select Migrate.
i. Select Migration Type: Change compute resource only
ii. Select a compute resource: the first ESXi host.
iii. Select Network: vCenter Server Network.
iv. Select vMotion Priority: Schedule vMotion with high priority.

Repeat for all system VMs that are not currently on the first ESXi host.
6. Enable automatic start for all the system VMs (VMware vCenter Server
Appliance, vRealize Log Insight, any EVO:RAIL partner VMs and EVO:RAIL
Orchestration Appliance) with the following steps:
a. Return to vCenter Home.
b. From Inventory Lists, select Hosts.
c. Select the first ESXi host, such as esxi-node01.vmworld.local.
d. Select Manage, then Settings in the center pane.
e. Under Virtual Machines select VM Startup/Shutdown.
f. Click Edit in the center pane.
g. Select Automatically start and stop the virtual machines with the
system.
h. Select each system VM and use the Up arrow to move each to the
Automatic Startup section in the following required order (see Figure
12):
i. VMware EVO:RAIL Orchestration Appliance
ii. VMware vCenter Server Appliance
iii. VMware vRealize Log Insight
iv. HDS Solution VMs: Hitachi Compute Advisor and Hitachi Data
Ingestor.
i. Click OK.

Figure 12

7. Power off all system VMs (VMware vCenter Server Appliance, vRealize Log
Insight, any EVO:RAIL partner VMs and EVO:RAIL Orchestration Appliance)
with the following steps:
a. Return to vCenter Home.
b. From Inventory Trees, click Virtual Machines.
c. Right-click the system virtual machine and select Power > Shut Down
Guest OS.
d. Power off vCenter Server Appliance last.
e. A message will appear asking to confirm the power off; click Yes.
f. Repeat until all system virtual machines are powered off.
8. For each appliance, power off all nodes. Because the vSphere Web Client is
no longer accessible, use one of the following methods:
a. Press the power button on each EVO:RAIL node, or
b. Use out-of-band management if ACPI is available, or
c. Use the vSphere C# Client to connect to each node and use the Shut
Down option.
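
For environments that prefer scripting, the DRS and HA toggles in step 4 can
also be flipped programmatically. The following is a minimal pyvmomi sketch,
not the documented procedure; the vCenter address and credentials are
placeholders, and the cluster is matched on the MARVIN-Virtual-SAN-Cluster-<id>
naming used above:

# Sketch: scripted alternative to step 4 -- disable DRS and HA on the cluster.
# The vCenter address and credentials are placeholders.
import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

ctx = ssl._create_unverified_context()  # lab use only
si = SmartConnect(host="vcserver.localdomain.local",
                  user="administrator@vsphere.local", pwd="...", sslContext=ctx)
content = si.RetrieveContent()
view = content.viewManager.CreateContainerView(
    content.rootFolder, [vim.ClusterComputeResource], True)
cluster = next(c for c in view.view
               if c.name.startswith("MARVIN-Virtual-SAN-Cluster"))

# modify=True merges this partial spec into the existing cluster configuration.
spec = vim.cluster.ConfigSpecEx(
    drsConfig=vim.cluster.DrsConfigInfo(enabled=False),
    dasConfig=vim.cluster.DasConfigInfo(enabled=False))
cluster.ReconfigureComputeResource_Task(spec=spec, modify=True)  # wait on the task
Disconnect(si)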

EVO:RAIL Cluster Restart Procedure
The cluster is restarted in the reverse order:
1. Power on each EVO:RAIL appliance, powering on Node 1 of the first
appliance last.

Note: All nodes must be turned on within a short time of each other or Virtual
SAN will think that the datastore is not working properly. It’s not a cause for
concern, but we don’t recommend taking a long break in the middle of
powering on the nodes in a cluster.

Note: It may take up to 15 minutes for all services to be fully restored and for
EVO:RAIL Management to be accessible.

2. Log in to vSphere Web Client using an account with administrator
privileges. You will be directed to vCenter Home.
3. Enable vSphere High Availability with the following steps:
a. From the Inventory Lists, select Clusters.
b. Select MARVIN-Virtual-SAN-Cluster-<id> in either the left or center
panes.
c. Select Manage, then Settings in the center pane.
d. Under Services, select vSphere HA.
e. If the center pane says vSphere HA is Turned OFF, click Edit.

f. Check Turn ON vSphere HA.
g. Click OK.
4. Enable vSphere DRS with the following steps:
a. Return to vCenter Home.
b. From the Inventory Lists, select Clusters.
c. Select MARVIN-Virtual-SAN-Cluster-<id> in either the left or
center panes.
d. Select Manage, then Settings in the center pane.
e. Under Services, select vSphere DRS.
f. If the center pane says vSphere DRS is Turned OFF, click Edit.
g. Check Turn ON vSphere DRS.
h. Click OK.
5. From EVO:RAIL Management, click VMS on the left sidebar.
6. For each client virtual machine, click Power On.

EVO:RAIL Single Appliance Shutdown Procedure

The following manual procedure is for graceful shutdown of a single
appliance in a multi-appliance cluster:
1. Identify the Appliance ID on the identification sticker on the physical
appliance.
2. From EVO:RAIL Management, identify the ESXi hostnames on the appliance
that will be shut down with the following steps:
a. Click the Health Icon on the left sidebar.
b. Click the Appliance ID that you identified in Step 1.
c. Identify the ESXi hostnames on the four nodes on the appliance you
are shutting down.

3. Click the VMware vSphere Web Client icon and log in using
administrator privileges.
4. Enable Maintenance Mode on the four ESXi hosts identified in Step 2 with
the following steps (for a scripted alternative, see the sketch at the end
of this procedure):
a. Click vCenter in the left pane and you will be in vCenter Home.
b. From Inventory Trees, select Hosts.
c. For each of the four ESXi hosts, right-click the host and select
Maintenance Mode > Enter Maintenance Mode.
d. Select the check box for Move powered-off and suspended virtual
machines to other hosts in the cluster.

e. Select Full data migration from the Virtual SAN data migration
dropdown.
f. Click OK.

Note: Complete this procedure on one ESXi host at a time. Wait until the
ESXi host has entered maintenance mode before proceeding to the next. Full
Data Migration can take a long time. See Place a Member of Virtual SAN
Cluster in Maintenance Mode and Virtual SAN – Maintenance Mode Monitoring
for more details.

5. Power off the four ESXi hosts with the following steps:
a. From Inventory Trees, select Hosts.
b. Right-click the ESXi host and select Shut Down.
c. Enter the reason for the shutdown and click OK.
d. Repeat for each ESXi host.
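
The maintenance-mode step above (step 4) can likewise be scripted. The
following is a minimal pyvmomi sketch for one host, requesting the same Full
data migration behavior for Virtual SAN; the vCenter address, credentials,
and ESXi hostname are placeholders:

# Sketch: enter maintenance mode on one ESXi host with Virtual SAN
# "Full data migration". Hostnames and credentials are placeholders.
import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

ctx = ssl._create_unverified_context()  # lab use only
si = SmartConnect(host="vcserver.localdomain.local",
                  user="administrator@vsphere.local", pwd="...", sslContext=ctx)
content = si.RetrieveContent()
view = content.viewManager.CreateContainerView(
    content.rootFolder, [vim.HostSystem], True)
host = next(h for h in view.view if h.name == "esxi-node01.vmworld.local")

# evacuateAllData corresponds to "Full data migration" in the Web Client.
spec = vim.host.MaintenanceSpec(
    vsanMode=vim.vsan.host.DecommissionMode(objectAction="evacuateAllData"))
host.EnterMaintenanceMode_Task(timeout=0, evacuatePoweredOffVms=True,
                               maintenanceSpec=spec)
# Wait for the task on each host before proceeding to the next, as noted above.
Disconnect(si)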

EVO:RAIL Single Appliance Restart Procedure

The appliance is restarted in the reverse order:
1. Power on the EVO:RAIL appliance.
2. Log in to vSphere Web Client using an account with administrator
privileges. You will be directed to vCenter Home.
3. Exit Maintenance Mode on the four ESXi hosts with the following steps:

a. Return to vCenter Home.
b. From Inventory Trees, select Hosts.
c. Right-click the ESXi host and select Maintenance Mode > Exit
Maintenance Mode.
d. Repeat for each ESXi host.

EVO:RAIL will detect that the ESXi hosts are available again. vMotion and
Virtual SAN will distribute compute and storage workloads.

G
Customizing the EVO:RAIL Initial IP Address
Follow the procedure below to customize the EVO:RAIL Initial IP Address:

 Customizing Initial IP Address

Customizing Initial IP Address
To customize the EVO:RAIL initial IP address, follow these instructions to set
the IP address, subnet mask, and gateway for the EVO:RAIL appliance instead
of the default initial address, 192.168.10.200/24.

You do not need to follow these instructions if you can reach the default
EVO:RAIL initial IP address and merely wish to change the post-configuration
IP address to something else. Instead, use the EVO:RAIL user interface to
enter the new IP address.

It will be easiest to select the IP settings that you want to use permanently for
your EVO:RAIL cluster. Then all you need to do is configure your
workstation/laptop once. Otherwise, just follow the notes in Step 8 of the
Initial Configuration user interface.

1. From your workstation/laptop, connect a VMware vSphere (C#) Client to
the IP address of ESXi host #1 using the root user and the password
specified during factory ESXi software installation, Passw0rd!

2. Click the Virtual Machines tab and select “EVO:RAIL Orchestration
Appliance” (Release 2.x). The VM should already be powered on. If not,
click the green play button to power it on and wait for it to boot.

3. Open the Console and log in as root with the default password Passw0rd!

4. Stop vmware-marvin:

/etc/init.d/vmware-marvin stop

5. Using the vami_set_network command, change the default IP address to a
custom IP address, subnet mask, and gateway using the syntax shown
below (all arguments are required).

 Use the EVO:RAIL Network Configuration Table, Row 2, for the
<new_IP>, <new_subnet_mask>, and <new_gateway>.

/opt/vmware/share/vami/vami_set_network eth0 STATICV4 <new_IP> <new_subnet_mask> <new_gateway>

6. Restart vmware-marvin and vmware-loudmouth on the EVO:RAIL
Orchestration Appliance in Release 2.x:

/etc/init.d/vmware-marvin restart

/etc/init.d/vmware-loudmouth restart
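
For example, a purely illustrative invocation of the command in step 5 (these
address values are made up; substitute the values from Row 2 of your Network
Configuration Table):

/opt/vmware/share/vami/vami_set_network eth0 STATICV4 192.168.16.100 255.255.255.0 192.168.16.1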

Hitachi Data Systems

Corporate Headquarters
2845 Lafayette Street
Santa Clara, California 95050-2639
U.S.A.
www.hds.com

Regional Contact Information


Americas
+1 408 970 1000
[email protected]

Europe, Middle East, and Africa
+44 (0) 1753 618000
[email protected]

Asia Pacific
+852 3189 7900
[email protected]

MK-92UCP077-02
