Hitachi Unified Compute Platform 1000 For Vmware Evo:Rail Installation and Reference Manual
FASTFIND LINKS
Contents
Product Version
Getting Help
MK-92UCP077-02
© 2016 Hitachi Data Systems Corporation. All rights reserved.
No part of this publication may be reproduced or transmitted in any form or by any means, electronic or
mechanical, including photocopying and recording, or stored in a database or retrieval system for any purpose
without the express written permission of Hitachi, Ltd.
Hitachi Data Systems reserves the right to make changes to this document at any time without notice and
assumes no responsibility for its use. This document contains the most current information available at the
time of publication. When new or revised information becomes available, this entire document will be updated
and distributed to all registered users.
Some of the features described in this document might not be currently available. Refer to the most recent
product announcement for information about feature and product availability, or contact Hitachi Data Systems
Corporation at https://portal.hds.com.
Notice: Hitachi Data Systems products and services can be ordered only under the terms and conditions of
the applicable Hitachi Data Systems Corporation agreements. The use of Hitachi Data Systems products is
governed by the terms of your agreements with Hitachi Data Systems Corporation.
Notice on Export Controls. The technical data and technology inherent in this Document may be subject to
U.S. export control laws, including the U.S. Export Administration Act and its associated regulations, and may
be subject to export or import regulations in other countries. Reader agrees to comply strictly with all such
regulations and acknowledges that Reader has the responsibility to obtain licenses to export, re-export, or
import the Document and any Compliant Products.
Hitachi is a registered trademark of Hitachi, Ltd., in the United States and other countries. Hitachi Data
Systems is a registered trademark and service mark of Hitachi, Ltd., in the United States and other countries.
Microsoft Internet Explorer, Active Directory, Windows Server, and Windows are trademarks or registered
trademarks of Microsoft Corporation.
All other trademarks, service marks, and company names in this document or website are properties of their
respective owners.
Contents
The Hitachi Unified Compute Platform 1000 for VMware EVO:RAIL Appliance ............. 3-1
Hardware .........................................................................................................3-2
Software ..........................................................................................................3-5
Networking .......................................................................................................3-5
EVO:RAIL Setup Workflow .................................................................................3-6
Step 1: Network Plan ..........................................................................3-6
Top-of-Rack 10GbE Switch ....................................................................3-7
Reserve VLANs (best practice) ..............................................................3-8
System ................................................................................................3-9
Management ..................................................................................... 3-10
vMotion and Virtual SAN ..................................................................... 3-13
Solutions ........................................................................................... 3-14
Workstation/Laptop (for configuration and management) ...................... 3-15
Out-of-Band Management (optional) .................................................... 3-16
Step 2: Set Up Switch ........................................... 3-17
Understanding Switch Configuration .................................................... 3-17
Network Traffic .................................................................................. 3-18
Multicast Traffic................................................................................. 3-18
vSphere Security Recommendations.................................................... 3-19
Configure VLANs on your 10GbE Switch(es) ......................................... 3-19
Inter-switch Communication ............................................................... 3-20
Disable Link Aggregation .................................................................... 3-21
Step 3: Hands-on Lab (Optional) ....................................................... 3-21
Step 4: Cable & On .......................................................................... 3-21
Step 5: Management VLAN ............................................................... 3-23
Step 6: Connect & Configure ............................................................ 3-24
Avoiding Common Network Mistakes ......................................................... 3-26
EVO:RAIL Network Configuration Table ..................................................... 1
Preface
This document describes the deployment, management, ongoing
configurations, and maintenance procedures that a data center administrator
needs to perform when working with the Hitachi Unified Compute Platform
1000 for VMware EVO:RAIL solution.
Read this document carefully to understand how to replace FRUs, and keep a
copy for reference purposes.
Intended audience and qualifications
About this document
Document conventions
Accessing product documentation
Getting help
Comments
Intended audience and qualifications
This guide is intended for data center administrators and others responsible for
the deployment, configuration, management, and maintenance of Hitachi
Unified Compute Platform 1000 for VMware EVO:RAIL.
Document conventions
This document uses the following typographic conventions:
Convention           Description
Regular text bold    In text: keyboard key, parameter name, property name, hardware label,
                     hardware button, hardware switch. In a procedure: user interface item.
< > angle brackets   Variable (used when italic is not enough to identify a variable).
This document uses the following icons to draw attention to information:
Caution: Warns that failure to take or avoid a specified action can result in
adverse conditions or consequences (for example, loss of access to data).
Getting help
The Hitachi Data Systems customer support staff is available 24 hours a day,
seven days a week. If you need technical support, log on to the Hitachi Data
Systems Portal for contact information: https://portal.hds.com
Comments
Please send us your comments on this document: [email protected].
Include the document title and number, including the revision level (for
example, -07), and refer to specific sections and paragraphs whenever
possible. All comments become the property of Hitachi Data Systems.
Thank you!
1
Prerequisites and Checklist
To ensure the correct functioning of Hitachi Unified Compute Platform 1000 for
VMware EVO:RAIL and an optimal end-to-end user experience, it is essential to
understand the recommendations and requirements in this chapter. Availability
of resources and workloads is critical in any environment, but even more so in
a hyper-converged environment, where compute, networking, storage, and
management are provided on the same platform.
Hitachi Unified Compute Platform 1000 for VMware
EVO:RAIL Setup Checklist
Before you proceed with the deployment and configuration of the Hitachi
Unified Compute Platform 1000 for VMware EVO:RAIL appliance or appliances:
• Review the “UCP for VMware EVO:RAIL Setup Checklist” below and fill
in “Appendix A: EVO:RAIL Network Configuration Table” by following
all of the steps listed in “EVO:RAIL Setup Workflow.” This is essential for a
smooth deployment and configuration.
Step 1: Plan

Top-of-Rack 10GbE Switch
• Eight 10GbE ports (SFP+ or RJ-45) for each EVO:RAIL appliance in an EVO:RAIL
cluster. You can have up to 16 appliances (64 server nodes) in an EVO:RAIL cluster.
• Decide if you will have a single- or multiple-switch setup for redundancy.
• Power and cooling specifications, operating conditions, and hardware configuration
options are provided in Appendix C.

Reserve VLANs (best practice)
• One management VLAN with IPv6 multicast for traffic from EVO:RAIL, vCenter
Server™, ESXi™, and any additional solutions provided with EVO:RAIL (default is
untagged/native).
• One VLAN with IPv4 multicast for Virtual SAN™ traffic.
• One VLAN for vSphere® vMotion™.
• At least one VM Network.

System
• Time zone.
• Hostname or IP address of the NTP server(s) on your network (recommended).
• IP address of the DNS server(s) on your network (required, except in totally
isolated environments).
• Optional: Domain, username, and password for Active Directory authentication.
• Optional: IP address, port, username, and password of your proxy server.

Management
• Decide on your ESXi host naming scheme.
• Decide on the hostnames for vCenter Server and for EVO:RAIL.
• Reserve one IP address for EVO:RAIL.
• Reserve one IP address for vCenter Server.
• IP address of the default gateway and subnet mask.
• Reserve four contiguous IP addresses for ESXi hosts for each appliance in an
EVO:RAIL cluster.
• Select a single password for all ESXi hosts in the EVO:RAIL cluster.
• Select a single password for EVO:RAIL and vCenter Server.

vMotion and Virtual SAN
• Reserve four contiguous IP addresses and a subnet mask for vSphere vMotion for
each appliance in an EVO:RAIL cluster.
• Reserve four contiguous IP addresses and a subnet mask for Virtual SAN for each
appliance in an EVO:RAIL cluster.

Solutions
• Logging solution:
To use vRealize Log Insight: reserve one IP address and decide on the hostname.
To use an existing syslog server: find out the hostname or IP address of your
third-party syslog server.
• HDS solutions: if other solutions are provided with your appliance, reserve one or
two IP addresses.

Workstation/Laptop (for configuration and management)
• Client workstation/laptop (any operating system).
• A browser to access the EVO:RAIL user interface. (The latest versions of Firefox,
Chrome, and Internet Explorer 10+ are all supported.)

Out-of-Band Management (optional)
• A 1GbE switch with four available ports per EVO:RAIL appliance, or enough extra
capacity on the 10GbE switch.
• If needed, contact HDS for instructions to modify the default information, such as
the username, password, and IP address for each node.

Step 4: Cable & On
• Connect the 10GbE ports on EVO:RAIL to the 10GbE switch(es) as shown in the
cabling diagrams.
• Then power on all four nodes on your first EVO:RAIL appliance. See Figure 1 for
the power buttons.
• Do not turn on any nodes in other appliances in an EVO:RAIL cluster until you have
completed the full configuration of the first appliance. Each appliance must be added
to an EVO:RAIL cluster one at a time.

Step 5: Management VLAN
• Optional: On each ESXi host, customize the management VLAN with the ID you
specified in step “Reserve VLANs”. Otherwise, the default is your switch’s Native
VLAN for untagged traffic.
2
The Hitachi Unified Compute Platform
1000 for VMware EVO:RAIL Appliance
The Hitachi Unified Compute Platform 1000 for VMware EVO:RAIL appliance
combines compute, networking, storage, and management resources together
with industry-leading infrastructure, virtualization, and management products
from VMware to form a radically simple hyper-converged infrastructure
appliance offered by Hitachi Data Systems.
Hardware
Software
EVO:RAIL Networking
Hardware
Hitachi Data Systems and VMware collaborated to create VMware EVO:RAIL,
the industry’s first hyper-converged infrastructure appliance for SDDC. It is
powered by VMware software using an architecture built with Hitachi Data
Systems and QuantaPlex hardware. Hitachi Data Systems (HDS), as a
Qualified EVO:RAIL Partner (QEP), sells the hardware with integrated
EVO:RAIL software, providing all hardware and software support to customers.
Each Hitachi Unified Compute Platform 1000 for VMware EVO:RAIL appliance
has four independent nodes with the following:
• 2 Intel Xeon Processors:
– E5-2620 v3 (6C 2.4GHz 85W) or
– E5-2650 v3 (10C 2.3GHz 105W) or
– E5-2680 v3 (12C 2.5GHz 120W)
Note: A UCP 1000 ROBO configuration has a single CPU and half of the memory
of a standard configuration.
The EVO:RAIL appliance is designed with fault tolerance and high availability
in mind. It has four independent nodes, each consisting of the following
dedicated compute, network, and storage resources:
• 4 VMware ESXi hosts in a single appliance, enabling resiliency against
hardware failures and during planned maintenance
• 2 fully redundant power supplies
• 2 redundant 10GbE NIC ports per node
• Enterprise-grade ESXi boot device, HDDs, and SSD
• Fault-tolerant Virtual SAN datastore
EVO:RAIL can scale out to 16 appliances for a total of 64 VMware ESXi hosts
and one Virtual SAN datastore, backed by a single VMware vCenter Server and
EVO:RAIL instance. Deployment, configuration, and management are handled
by EVO:RAIL, allowing the compute capacity and the Virtual SAN datastore to
grow automatically. New appliances are automatically discovered and easily
added to an EVO:RAIL cluster with a few mouse clicks. Figure 1 shows the
appliance front view and HDD configuration per node for one of the UCP 1000
options. Figure 2 shows the appliance rear view and I/O ports.
Figure 1
Figure 2
Software
EVO:RAIL delivers the first hyper-converged infrastructure appliance 100%
powered by the proven suite of core products from VMware. The EVO:RAIL
software bundle, preloaded onto Hitachi UCP 1000 hardware, comprises the
following:
• EVO:RAIL Deployment, Configuration, and Management
• VMware vSphere Enterprise Plus
• VMware vCenter Server
• VMware Virtual SAN
• VMware vRealize Log Insight
EVO:RAIL is optimized for the new VMware user as well as for experienced
administrators. Minimal IT experience is required to deploy, configure, and
manage EVO:RAIL, allowing it to be used where there is limited or no IT staff
on-site. Since EVO:RAIL utilizes core products of VMware, administrators can
apply existing VMware knowledge, best practices, and processes.
Networking
Each Hitachi Unified Compute Platform 1000 for VMware EVO:RAIL appliance
ships with eight RJ-45 or eight SFP+ NIC ports. Eight corresponding ports are
required for each EVO:RAIL appliance on the top-of-rack (TOR) switch or
switches. One port, either on the TOR switch or on a management VLAN that
can reach the TOR network, is required for a workstation/laptop with a web
browser for VMware EVO:RAIL configuration and management. Any other
ports on the appliance will be covered and disabled.
The first EVO:RAIL node in a cluster creates a new instance of VMware vCenter
server, and all additional EVO:RAIL nodes in one or more appliances join that
first instance.
Note: A UCP 1000 ROBO Edition with a single CPU and half of the memory of a
standard configuration will be supported with 1GbE switches.
EVO:RAIL Setup Workflow
To successfully set up EVO:RAIL you must complete the steps in the setup
workflow in this order:
Step 1: Plan

A Hitachi UCP 1000 for VMware EVO:RAIL cluster consists of one or more
appliances; the plan can include multiple appliances. Up to 16 appliances (64
server nodes) can be joined together in one EVO:RAIL cluster. If you have
already configured enough IP addresses for expansion (which we recommend),
all you do is supply the passwords that you created for the first appliance in
the EVO:RAIL cluster. If you do not have enough IP addresses, follow the
instructions in the section Adding Appliances to an EVO:RAIL Cluster; the
EVO:RAIL user interface will prompt you for the new IP addresses and the
passwords.
Once you set up EVO:RAIL, the configuration cannot be changed easily.
Consequently, we strongly recommend that you take care during this planning
phase to decide on the configurations that will work most effectively for your
organization. We want you to set up EVO:RAIL correctly when it arrives. In
this step, you will plan the following:
• System
• Management
• Solutions
• Workstation/laptop
A Hitachi UCP 1000 for VMware EVO:RAIL appliance ships with either eight
SFP+ or RJ-45 NIC ports. Eight corresponding ports are required for each
EVO:RAIL appliance on one or more 10GbE switch(es). One port on the 10GbE
switch or one logical path on the EVO:RAIL management VLAN is required for
a workstation/laptop to access the EVO:RAIL user interface.
One or two switches? Decide if you plan to use one or two 10GbE switches
for EVO:RAIL. One switch is acceptable and is often seen in test/development
or remote/branch office (ROBO) environments. However, two or more 10GbE
switches are used for high availability and failover in production environments.
Because EVO:RAIL is an entire software-defined data center in a box, if one
switch fails you are at risk of losing availability of hundreds of virtual
machines.
Network Configuration Table, Row 30: Enter a VLAN ID for vSphere vMotion.
(Enter 0 in the VLAN ID field for untagged traffic.)

Network Configuration Table, Row 34: Enter a VLAN ID for Virtual SAN.
(Enter 0 in the VLAN ID field for untagged traffic.)

Network Configuration Table, Rows 35-38: Enter a VLAN ID and name for each
VM network you want to create. You must create at least one VM network.
(Enter 0 in the VLAN ID field for untagged traffic.)
TOR switch or switches: Configure your TOR switch or switches with these
VLANs. Configure the corresponding VLANs between TOR switches and/or core
switches.
System
Time zone
A time zone is required. It is configured on vCenter Server and each ESXi host.
Network Configuration Table, Row 3: Enter your time zone.

Network Configuration Table, Row 4: Enter the hostname(s) or IP address(es)
of your NTP server(s).
One or more external DNS servers are required for production use in a non-
isolated environment. (This is not required in a completely isolated
environment.) During initial configuration, EVO:RAIL sets up vCenter Server to
resolve hostnames through the DNS server. If you have an external DNS server,
enter the EVO:RAIL, vCenter Server, Log Insight, and ESXi hostnames and IP
addresses in your corporate DNS server tables in Step 6d.
Make sure that the DNS IP address is accessible from the network to which
EVO:RAIL is connected and the address functions properly.
If the DNS server requires access via a gateway that is not reachable during
initial configuration, do not enter a DNS IP address. Instead, add a DNS server
after you have configured EVO:RAIL using the instructions in the VMware
Knowledge Base (http://kb.vmware.com/kb/2107249).
Network Configuration Table, Row 5: Enter the IP address(es) for your DNS
server(s). Leave blank if you are in an isolated environment.
Active Directory

To add an Active Directory identity source, see the ESXi and vCenter Server
6.0 documentation, “Add a vCenter Single Sign-On Identity Source.”

Network Configuration Table, Rows 6-8: If you will be using Active Directory,
perform the steps in the vSphere Web Client after EVO:RAIL is configured.
Values entered in the EVO:RAIL user interface are not used.
Proxy Server
A proxy server is optional. If you have a proxy server on your network and
vCenter Server needs to access services outside of your network, you need to
supply the IP address, a port, user name, and password of the proxy server.
Network Configuration Table, Rows 9-12: Enter the proxy server IP address,
port, username, and password.
Management
An EVO:RAIL cluster consists of one or more appliances, each with four ESXi
hosts. The cluster is managed by a single instance of EVO:RAIL and vCenter
Server. After initial configuration, you will access EVO:RAIL to manage your
cluster at the hostname or IP address that you specify.
Hostnames
Since EVO:RAIL is an appliance with four server nodes, it does not have a
single hostname. The asset tag attached to each appliance is used to display
the identity of the appliance in the EVO:RAIL user interface. Asset tags are 11
alphanumeric characters where the first three characters identify your selected
EVO:RAIL partner and the remaining characters uniquely identify the
appliance.
You must configure hostnames for EVO:RAIL, vCenter Server, and your ESXi
hosts. All ESXi hostnames are defined by a naming scheme that comprises: an
ESXi hostname prefix (an alphanumeric string), a separator (“None” or a dash
”-“), an iterator (Alpha, Num X, or Num 0X), and a top-level domain. The
Preview field in the EVO:RAIL user interface shows an example of the result
for the first ESXi host. For example, if the prefix is “host”, the separator is
“None”, the iterator is “Num 0X”, and the top-level domain is “local”, the first
ESXi hostname would be “host01.local”.
Examples:
Example 1 Example 2 Example 3
Prefix host myname esxi-host
Separator None - -
Iterator Num 0X Num X Alpha
Domain local mydomain company
Resulting hostname host01.local myname-1.mydomain esxi-host-a.company
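The iterator determines how host numbers are rendered: “Num 0X” zero-pads to
two digits, “Num X” does not, and “Alpha” uses letters. Purely as an
illustration (not part of the product), this shell sketch previews the
hostnames produced by the Example 1 scheme:

    # Illustrative only: prefix "host", separator "None",
    # iterator "Num 0X", top-level domain "local".
    for i in 1 2 3 4; do
        printf 'host%02d.local\n' "$i"    # -> host01.local ... host04.local
    done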
Network Configuration Table, Rows 13-16: Enter an example of your desired
ESXi host-naming scheme. Be sure to show your desired prefix, separator,
iterator, and top-level domain.

Network Configuration Table, Row 17: Enter an alphanumeric string for the
vCenter Server hostname.

Network Configuration Table, Row 18: Enter an alphanumeric string for the
EVO:RAIL hostname.
IP Addresses
The Hitachi Unified Compute Platform 1000 for VMware EVO:RAIL appliance ships
with a default set of IP addresses (see the Network Configuration Table). It
can be configured on-site with different IP addresses.
You must configure the IP addresses for EVO:RAIL, vCenter Server, and your
ESXi hosts. When selecting your IP addresses, you must make sure that none
of them conflict with existing IP addresses in your network. Also make sure
that these IP addresses can reach other hosts in your network.
There are four ESXi hosts per appliance and each requires an IP address. We
recommend that you consider allocating additional ESXi IP addresses for future
appliances to join your EVO:RAIL cluster. If you have already configured
enough IP addresses for expansion, all you do is supply the passwords that
you created for the first appliance in the cluster. Because EVO:RAIL supports
up to sixteen appliances in a cluster, you can allocate up to 64 ESXi IP
addresses.
EVO:RAIL, vCenter Server, and the ESXi hosts all share the same subnet mask
and gateway. EVO:RAIL leverages the same database as vCenter Server, so
any changes in EVO:RAIL are reflected in vCenter Server and vice-versa.
In Release 2.x, EVO:RAIL and vCenter Server are separate virtual machines
and thus have separate IP addresses. They both use default ports (443), so
you do not specify a port number when you point a browser to reach them.
Network Configuration Table, Rows 19-20: Enter the starting and ending IP
addresses for the ESXi hosts. A continuous IP range is required, with a
minimum of 4 IPs.

Network Configuration Table, Row 21: Enter the IP address for vCenter Server.

Network Configuration Table, Row 22: Enter the IP address for EVO:RAIL after
it is configured. (In Release 1.x only, enter the same IP address that is in
Row 21.)

Network Configuration Table, Rows 23-24: Enter the subnet mask and gateway
for all management IP addresses.
Passwords
Passwords are required for ESXi hosts and for the EVO:RAIL / vCenter Server
virtual machines. Passwords must contain between 8 and 20 characters with at
least one lowercase letter, one uppercase letter, one numeric character, and
one special character. For more information about password requirements, see
the vSphere password documentation and vCenter Server password
documentation.
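As a quick, unofficial pre-check of a candidate password against the
length-and-classes rule above (the exact set of accepted special characters is
defined by the vSphere and vCenter Server documentation, so treat this as a
sketch only):

    # Requires GNU grep with PCRE support (-P).
    echo 'MyPassw0rd!' | grep -qP '^(?=.*[a-z])(?=.*[A-Z])(?=.*[0-9])(?=.*[^A-Za-z0-9]).{8,20}$' \
      && echo 'meets the stated policy' || echo 'does not meet the stated policy'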
For vCenter Server and EVO:RAIL, the username for both user interfaces is
[email protected] and the console username is root. The pre-
configuration password for EVO:RAIL is Passw0rd! and the post-configuration
password for both is the one you set in the EVO:RAIL user interface (Row
26).
Network Configuration Table, Rows 25-26: Check that you know the passwords in
these rows, but for security reasons, we suggest that you do not write them
down.
vSphere vMotion and Virtual SAN each require at least four IP addresses per
appliance. We recommend that you consider allocating additional IP addresses
for future appliances to join your EVO:RAIL cluster. If you have already
configured enough IP addresses for expansion, all you do is supply the
passwords that you created for the first appliance in the cluster.
Network Configuration Table, Rows 27-28: Enter the starting and ending IP
addresses for vSphere vMotion. A continuous IP range is required, with a
minimum of 4 IPs. Routing is not configured for vMotion.

Network Configuration Table, Row 29: Enter the subnet mask for vMotion.

Network Configuration Table, Rows 31-32: Enter the starting and ending IP
addresses for Virtual SAN. A continuous IP range is required, with a minimum
of 4 IPs. Routing is not configured for Virtual SAN.

Network Configuration Table, Row 33: Enter the subnet mask for Virtual SAN.
Solutions
EVO:RAIL supports initial configuration for solutions both from VMware and
from EVO:RAIL partners.
Logging Solution
EVO:RAIL is deployed with VMware vRealize Log Insight. Alternatively, you may
choose to use your own third-party syslog server(s). If you choose to use
vRealize Log Insight, it is always available by pointing a browser to the
configured IP address, with the username admin. (If you ssh to Log Insight
instead of pointing your browser to it, the username is root.) The password,
in either case, is the same password that you specified for vCenter
Server/EVO:RAIL (Row 26).
The IP address for Log Insight must be on the same subnet as EVO:RAIL and
vCenter Server.
Network Configuration Table, Rows 39-40 or Row 41: Enter the hostname and IP
address for vRealize Log Insight, or the hostname(s) of your existing
third-party syslog server(s).

Network Configuration Table, Row 43: Enter the IP address for the Secondary VM
from the HDS Solution (Hitachi Compute Advisor, or HCA).
Workstation/Laptop (for configuration and management)
To access the first appliance in an EVO:RAIL cluster for the first time, you
must use the temporary EVO:RAIL initial IP address that was pre-configured
on the appliance, typically 192.168.10.200/24. You will change this IP address
in the EVO:RAIL initial configuration user interface to your desired permanent
address for your new EVO:RAIL cluster.
If you cannot reach the EVO:RAIL initial IP address, you will need to follow the
instructions in Step 6a to configure a custom IP address, subnet mask, and
gateway.
If you later need to change the EVO:RAIL IP address, contact your selected
EVO:RAIL partner.
Your workstation/laptop will need to be able to reach both the EVO:RAIL initial
IP address and the permanent IP address. The EVO:RAIL user interface will
remind you that you may need to reconfigure your workstation/laptop network
settings to access the new IP address. See the instructions in Step 6b.
Browser Support
Use a browser to access the EVO:RAIL user interface. The latest versions of
Firefox, Chrome, and Internet Explorer 10+ are all supported.
If you use Internet Explorer 10+ and an administrator has set your browser to
“compatibility mode” for all internal websites (local web addresses), you will
get a warning message from EVO:RAIL. Contact your administrator to whitelist
URLs mapping to the EVO:RAIL user interface. Alternatively, connect to the
EVO:RAIL user interface using either an IP address or a fully qualified domain
name (FQDN) configured on the local DNS server.
When VMware EVO:RAIL ships, the BMC ports are preconfigured to use DHCP.
The <ApplianceID> can be found on a pull-out tag located on the front of the
physical appliance. The defaults are as follows:
Username: admin
Password: admin
The BMC interface IP addresses can be assigned manually, and the username and
password can be modified as necessary. (A command-line alternative is sketched
after the procedures below.)
The following are the instructions for BMC network changes on appliances
using a Quanta T41S-2U server:
• To use the serial console and BIOS options (repeat this process for each
node and reconnect using the new management IP address):
1. Press <DEL> or <F2> during power-on to enter setup.
2. Go to the Server Mgmt tab, scroll down to BMC network configuration,
and press <Enter>.
3. Scroll down to Configuration Address source and select either
“Static on next reset” or “Dynamic.” The default is Dynamic, with
the BMC interfaces obtaining their IP address through DHCP. If you
select Static, you need to set the IP address, Netmask, and Gateway
IP.
4. Press <F10> and select save and reset. Your server will reboot with
the new settings.
• To use the BMC Web UI (only if the BMC IP of the node is known):
Repeat this process for each node and reconnect using the new
management IP.
1. Open a browser and type the BMC IP in the browser bar.
2. Type the credentials.
3. Click the Configuration tab and click Network.
4. Under IPv4 Configuration, clear the Use DHCP check box, and type the
IP address, Netmask, and Gateway IP.
5. Click Save.
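If your workstation has the ipmitool utility, the same change can be scripted
over the network while the BMC is still reachable at its DHCP-assigned
address. This is a sketch only: the LAN channel number (1 here) and the sample
addresses are assumptions that vary by platform, and the admin/admin
credentials are the defaults listed above.

    # Show the current BMC LAN settings (channel 1 assumed).
    ipmitool -I lanplus -H <current-bmc-ip> -U admin -P admin lan print 1

    # Switch to a static address; substitute your own values.
    ipmitool -I lanplus -H <current-bmc-ip> -U admin -P admin lan set 1 ipsrc static
    ipmitool -I lanplus -H <current-bmc-ip> -U admin -P admin lan set 1 ipaddr 192.168.10.210
    ipmitool -I lanplus -H <current-bmc-ip> -U admin -P admin lan set 1 netmask 255.255.255.0
    ipmitool -I lanplus -H <current-bmc-ip> -U admin -P admin lan set 1 defgw ipaddr 192.168.10.254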
Various network topologies for 10GbE switch(es) and VLANs are possible with
EVO:RAIL. Complex production environments will have multiple core switches
and VLANs. For high-availability, use two 10GbE switches and connect one
port from each node to each 10GbE switch.
• Access mode – The port accepts only untagged packets and assigns them to
the single VLAN configured on that port. This is typically the default mode
for all ports.
• Trunk mode – When this port receives a tagged packet, it passes the
packet to the VLAN specified in the tag. To accept untagged packets on a
trunk port, you must first configure a single VLAN as a “Native VLAN”: the
one VLAN that carries all untagged traffic. A generic configuration sketch
follows.
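The switch-side commands depend entirely on your vendor. As a rough
illustration only, a Cisco IOS-style trunk configuration for one
EVO:RAIL-facing port might look like the following; the VLAN IDs are the
sample defaults from the EVO:RAIL Network Configuration Table (management
untagged on the native VLAN, 20 for vMotion, 30 for Virtual SAN, 110 for a VM
Network):

    ! Sketch only -- adapt interface names, VLAN IDs, and syntax to your switch.
    interface TenGigabitEthernet1/0/1
     switchport mode trunk
     switchport trunk native vlan 1
     switchport trunk allowed vlan 1,20,30,110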
Network Traffic
Multicast Traffic
IPv4 multicast support is required for the Virtual SAN VLAN. IPv6 multicast is
required for the EVO:RAIL management VLAN. The 10GbE switch(es) that
connect to EVO:RAIL must allow for pass-through of multicast traffic on these
two VLANs.
EVO:RAIL creates very little IPv6 multicast traffic for auto-discovery and
management. Optionally, you can limit this traffic further on your 10GbE
switch by enabling MLD Snooping and MLD Querier.
There are two options to handle Virtual SAN IPv4 multicast traffic. Either limit
multicast traffic by enabling both IGMP Snooping and IGMP Querier or disable
both of these features. We recommend enabling both IGMP Snooping and
IGMP Querier, if your 10GbE switch supports them.
IGMP Querier sends out IGMP group membership queries on a timed interval,
retrieves IGMP membership reports from active members, and allows updates
to group membership tables. By default, most switches enable IGMP Snooping,
but disable IGMP Querier.
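On many switches these features are toggled with one-line global or per-VLAN
commands. The following Cisco IOS-style fragment is a sketch only; command
names and defaults vary by platform and OS version:

    ip igmp snooping            ! IGMP Snooping (commonly on by default)
    ip igmp snooping querier    ! IGMP Querier (commonly off by default)
    ipv6 mld snooping           ! optional: MLD Snooping for IPv6 management traffic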
For vSphere security recommendations, see:
http://pubs.vmware.com/vsphere-60/index.jsp?topic=%2Fcom.vmware.vsphere.security.doc%2FGUID-FA661AE0-C0B5-4522-951D-A3790DBE70B4.html
In particular, if spanning tree is enabled, ensure that physical switch ports
are configured with Portfast. Because VMware virtual switches do not support
STP, physical switch ports connected to an ESXi host must have Portfast
configured to avoid loops within the physical switch network. If Portfast is
not set, potential performance and connectivity issues might arise.
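For example, on a Cisco IOS-style switch, Portfast on a trunked, ESXi-facing
edge port is typically enabled per interface (sketch only; verify against your
switch documentation):

    interface TenGigabitEthernet1/0/1
     spanning-tree portfast trunk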
Using the EVO:RAIL Network Configuration Table:
4. Configure a Virtual SAN VLAN (Row 34) on your 10GbE switch ports and
set it to allow IPv4 multicast traffic to pass through.
5. Configure the VLANs for your VM Networks (Rows 35-38) on your 10GbE
switch ports.
Inter-switch Communication
Disable Link Aggregation

Do not use link aggregation (for example, LACP or EtherChannel) on the switch
ports connected to EVO:RAIL nodes.

Step 4: Cable & On

Do not turn on any nodes in other appliances in an EVO:RAIL cluster until you
have completed the full configuration of the first appliance. Each appliance
must be added to an EVO:RAIL cluster one at a time. See Adding Appliances
to an EVO:RAIL Cluster.
Figure 3. Rear view of one deployment of a Hitachi Unified Compute Platform
1000 for VMware EVO:RAIL appliance connected to one TOR switch.
Figure 4. Generic example of one deployment of EVO:RAIL connected to two
10GbE switches for redundancy.
Step 5: Management VLAN

Log in to each of the four ESXi hosts via the console interface (DCUI):
• Press <F2> to log in, with the username root and the password Passw0rd!
Note: You might need to add Hot Keys on the BMC remote console for <ALT-F1>
and <ALT-F2>.
• Log in to the shell with the username root and the password Passw0rd!
• Execute the following ESXi commands with the <VLAN_ID> from Row 1 in
the EVO:RAIL Network Configuration Table, and then verify that the VLAN ID
was set correctly.
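The exact commands depend on the port group names on the host. A minimal
sketch, assuming the default “Management Network” port group on the standard
vSwitch:

    # Tag management traffic with the VLAN ID from Row 1 (run on each ESXi host).
    esxcli network vswitch standard portgroup set -p "Management Network" -v <VLAN_ID>

    # Verify: the VLAN column should now show <VLAN_ID> for the port group.
    esxcli network vswitch standard portgroup list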
Your workstation/laptop will need to be able to reach both the EVO:RAIL initial
IP address and your selected permanent EVO:RAIL IP address.
Example:

                      EVO:RAIL            Workstation/laptop IP   Subnet mask     Gateway
Initial (temporary)   192.168.10.200/24   192.168.10.150          255.255.255.0   192.168.10.254
If you are in an isolated environment, you will need to use the DNS
server that is built into vCenter Server. To manage EVO:RAIL via your
workstation/laptop, configure your laptop’s network settings to use the
vCenter Server IP address (Row 21) for DNS. EVO:RAIL’s IP addresses and
hostnames are configured for you.
If you are using your corporate DNS server(s) for EVO:RAIL (Row 5), add the
hostnames and IP addresses for EVO:RAIL, vCenter Server, Log Insight, and
each ESXi host (see the naming scheme in Hostnames).
vMotion and Virtual SAN IP addresses are not configured for routing by
EVO:RAIL and there are no hostnames.
esxi-host01.localdomain.local   192.168.10.1
esxi-host02.localdomain.local   192.168.10.2
esxi-host03.localdomain.local   192.168.10.3
esxi-host04.localdomain.local   192.168.10.4
evorail.localdomain.local       192.168.10.100
vcserver.localdomain.local      192.168.10.101
loginsight.localdomain.local    192.168.10.102
http://pubs.vmware.com/vsphere-60/index.jsp#com.vmware.vsphere.avail.doc/GUID-5432CA24-14F1-44E3-87FB-61D937831CF6.html
b. Make sure you can reach your DNS server from the EVO:RAIL, vCenter
Server, and ESXi network addresses. Then update your DNS server with
all EVO:RAIL hostnames and IP addresses.
c. If you have configured Active Directory, NTP servers, proxy servers, or
a third-party syslog server, you must be able to reach them from all of
your configured EVO:RAIL IP addresses.
d. You must determine all IP addresses before you can configure
EVO:RAIL. You cannot easily change the IP addresses after you have
configured EVO:RAIL.
6. Do not try to plug your workstation/laptop directly into a server node on
EVO:RAIL; plug it into your network or 10GbE switch, and make sure that it is
logically configured to reach EVO:RAIL.
7. If you are using SFP+, the NIC and switch connectors and cables must be on
the same wavelength. Contact HDS for the type of SFP+ connector on your
appliance.
3
Deployment and Initial Configuration
This chapter describes the steps to deploy the Hitachi Unified Compute
Platform 1000 for VMware EVO:RAIL appliance and its initial configuration:
How to Configure EVO:RAIL
Initial Configuration User Interface
Before you proceed with the initial configuration of your new EVO:RAIL
appliance:
• Please review the physical power, space, and cooling requirements for the
expected resiliency level of the appliance. This information is available in
the Physical Requirements section of this document.
• Please review the Prerequisites and Checklist section and fill in the
EVO:RAIL Network Configuration Table. This is essential to help ensure
smooth deployment and configuration.
o Confirm that you can ping or browse to the EVO:RAIL initial IP address
(Row 2); a quick check is sketched after this list. If not, return to Step 6.
o Confirm that your DNS server(s) are reachable, unless you are in an
isolated environment (Row 5).
o Confirm that IPv4 multicast and IPv6 multicast are enabled for the
VLANs described in this document.
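A minimal reachability check from the workstation/laptop might look like this
(sample addresses from the Network Configuration Table; the DNS lookup applies
only if you are using a DNS server):

    ping 192.168.10.200                     # EVO:RAIL initial IP address (Row 2)
    nslookup vcenter.localdomain.local      # confirm DNS resolution (skip if isolated)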
There are two ways to configure Hitachi UCP 1000 for VMware EVO:RAIL: through
the initial configuration user interface, or with a JSON configuration file
(see the EVO:RAIL JSON configuration notes near the end of this document). In
either case, EVO:RAIL verifies the configuration data and then builds the
appliance. EVO:RAIL implements data services; creates the new ESXi hosts; and
sets up vCenter Server, EVO:RAIL, vMotion, and Virtual SAN.
Use values from the rows of your EVO:RAIL Network Configuration Table
as follows in the user interface:
• System
• Management
Enter the ESXi host naming scheme from Rows 13-16, vCenter Server
hostname from Row 17, and EVO:RAIL hostname from Row 18. Enter the IP
addresses, subnet mask, and gateway for ESXi, vCenter Server, and EVO:RAIL
from Rows 19-24. Enter the ESXi hosts and vCenter Server/EVO:RAIL
passwords from Rows 25-26.
• vSphere vMotion
Enter the VLAN ID, IP addresses, and subnet mask for vSphere vMotion from
Rows 27-30.
• Virtual SAN
Enter the VLAN ID, IP addresses, and subnet mask for Virtual SAN from Rows
31-34.
• VM Networks
Enter the VLAN IDs and names for the VM Networks from Rows 35-38.
• Solutions
For logging, enter the IP address and hostname for vRealize Log Insight or for
an existing third-party syslog server (optional) in your network (Rows 39-
41). Enter two IP addresses from Rows 42-43 for the two HDS Solution VMs.
Initial Configuration User Interface
This describes the initial configuration procedures for Hitachi Unified Compute
Platform 1000 for VMware EVO:RAIL.
2. Click Get Started. Then if you agree, accept the EVO:RAIL End-User
License Agreement (EULA).
3. In Release 2.x, we ask whether you have read this document and set up your
network correctly. If you have done so, check the following boxes and then
click Next.
• I set up my switch
6. Click the Review First (Release 2.x only) or Validate button. EVO:RAIL
verifies the configuration data, checking for conflicts.
7. After validation is successful, click the Build EVO:RAIL button.
9. After the appliance build process starts, if you close your browser, you will
need to browse to the new IP address (Row 22).
When you see the Hooray! page, click the Manage EVO:RAIL button to
continue to EVO:RAIL management. You should also bookmark this IP
address in your browser for future use.
From now on, refer to the Management & Maintenance section in this
document.
Apply the following configuration ONLY if your system is Hitachi UCP 1000 for
VMware EVO:RAIL ROBO Edition.
Procedure:
1. Log in to the vSphere Web Client using an account with administrator
privileges. You will be directed to vCenter Home.
2. From the Inventory Lists, select Networking.
3. Under Marvin-Datacenter, expand EVO:RAIL Distributed Switch.
4. Right-click the vSphere vMotion distributed port group.
5. Then click Edit Settings.
6. Click Traffic shaping and make the following changes:
7. Click OK.
Adding Appliances to an EVO:RAIL Cluster
Hitachi UCP 1000 for VMware EVO:RAIL can scale out to sixteen appliances for
up to 64 ESXi hosts all on one Virtual SAN datastore, backed by a single
vCenter Server and EVO:RAIL instance. Deployment, configuration, and
management are handled by EVO:RAIL, allowing the compute capacity and the
Virtual SAN datastore to grow automatically. New appliances are automatically
discovered and easily added to an EVO:RAIL cluster.
If you plan to scale out to multiple EVO:RAIL appliances in a cluster over time,
allocate extra IP addresses for each of the ESXi, vMotion, and Virtual SAN IP
pools when you configure the first appliance (twelve extra IP addresses per
appliance). Then when you add appliances to a cluster, you will only need to
enter the ESXi and EVO:RAIL / vCenter Server passwords.
Step 1: Plan
Use the Prerequisites and Checklist to make sure you are ready for
additional appliances. Work with your team to make any decisions that were
not made earlier.
Eight 10GbE ports for each EVO:RAIL appliance on your 10GbE switch(es)
Four contiguous IP addresses on the management VLAN for ESXi hosts for
each appliance
Four contiguous IP addresses on the Virtual SAN VLAN for each appliance
Be sure you know the ESXi host and vCenter Server/EVO:RAIL passwords
that were configured on the first EVO:RAIL appliance in the cluster
Only one appliance can be added at a time. To add multiple appliances, power
on one appliance at a time, making sure that each is properly configured
before powering on the next appliance. Remember that powering on an
appliance consists of powering on all four nodes in the appliance.
The first appliance will not discover any ESXi hosts that are not on the same
management VLAN. Log in to each ESXi host in the new appliance and follow the
management VLAN instructions for the first appliance in the cluster.
Step 6: Add EVO:RAIL Appliance
Go to the EVO:RAIL user interface to see the first EVO:RAIL appliance in the
cluster detect each node in the new appliance. If you have enough IP
addresses, all you do is enter the passwords and click the Add EVO:RAIL
Appliance button. EVO:RAIL will seamlessly configure all services on the new
appliance and fully integrate it into the cluster. Use the procedure below.
Be sure to add any ESXi hostnames that were not previously entered in your
corporate DNS, unless you are in a totally isolated environment.
Procedure
The latest versions of Mozilla Firefox, Google Chrome, and Microsoft Internet
Explorer are all supported. The minimum recommended screen resolution is
1280 × 1024.
2. Click the VMware vSphere Web Client icon in the upper right corner of
the EVO:RAIL Management VMS page
NOTE: From vSphere Web Client, do not use service datastores when
creating or moving VMs. These datastores are only to be used by Hitachi Data
Systems Support.
2. Under the General tab, Choose your Language section, click on your
selection.
Figure 5
c. If you are using a previously uploaded guest operating system, you will
be able to re-use an existing image by clicking on its name.
Figure 6
Figure 7
Manage VMs
Figure 8
Use the Filter By and Sort By menus in the upper right corner to arrange the
virtual machines. Use Search to find virtual machines by name. If you click on
a virtual machine, you have the following options available, depending on the
state of your virtual machine. Click on the VM to view all the options.
The following table describes which options are available when the VM is
powered on and when it is powered off.

Option              Powered On   Powered Off
Suspend/Resume VM   Yes          No
Delete VM           No           Yes
The overall health of the EVO:RAIL cluster is displayed. The status is
displayed for storage IOPS, CPU usage, and memory usage as follows:

Color    Description
Green    Normal
Yellow   Warning
Red      Critical
The total storage information for the EVO:RAIL cluster is also shown: total
capacity, amount used, amount free, and amount provisioned. Each EVO:RAIL
appliance in the cluster is shown with a simple red, yellow, or green status.
You can click the EVO:RAIL appliance tabs at the top of the screen to drill
down to the details for that appliance. You can return to the Overall System
by clicking that tab at the top of the screen.
For each EVO:RAIL appliance you can see the status of the items above and
each node, HDD, SSD, ESXi boot disk and NIC. If everything is normal a
green checkmark is displayed; if there is a warning a yellow triangle is
displayed. A critical alarm is indicated with a red triangle. All alarms and
warnings are from vCenter Server.
View Events
Hi-Track provides remote monitoring and support functionality for Hitachi
products. Hitachi strongly recommends that you install and configure Hi-Track
to monitor the UCP 1000 system at your site.
For more information, download the Hi-Track documentation from the Hitachi
Data Systems Support Portal at https://portal.hds.com. Follow the
registration instructions to access the portal, and search for “UCP 1000 for
VMware EVO:RAIL”.
License EVO:RAIL
To license the Hitachi Unified Compute Platform 1000 for VMware EVO:RAIL
appliance, follow these steps:
1. Get your Partner Activation Code (PAC) from Hitachi Data Systems.
2. Go to your Hitachi Data Systems activation portal to enter your PAC or
PACs.
3. You will receive an email message with your license key or license keys.
4. Click Config on the left menu.
5. On the Licensing tab, enter your license key(s) for each appliance in the
cluster:
6. Click License Appliance.
Figure 11
7. If the license is part of the VMware vSphere Loyalty Program, perform the
following steps:
a. Go to MyVMware to group the required set of licenses into a single
license key.
i. Create a folder within MyVMware with the licenses called EVORAIL.
ii. Send a service request to the VMware license team with the license
keys you need combined into a single license key.
Note: No other licenses are needed. Your EVO:RAIL appliance is now fully
licensed.
Update EVO:RAIL
EVO:RAIL supports vCenter Server, ESXi™, and EVO:RAIL software patches and
upgrades. With a minimum of four independent ESXi hosts in an EVO:RAIL
cluster, updates are non-disruptive and require zero downtime.
Refer to the release specific EVO:RAIL Customer Release Notes for update
instructions.
EVO:RAIL can scale out to sixteen appliances for up to 64 ESXi hosts and one
Virtual SAN datastore, backed by a single vCenter Server and EVO:RAIL
instance. Deployment, configuration, and management are handled by
EVO:RAIL, allowing the compute capacity and the Virtual SAN datastore to
grow automatically when appliances are added to a cluster. New appliances
are automatically discovered and easily added to an EVO:RAIL cluster with a
few mouse clicks.
Follow all the planning steps and procedures in Deployment and Initial
Configuration > Adding Appliances to an EVO:RAIL Cluster.
• Logs
• Hardware Replacement

Logs

Support Logs
EVO:RAIL deploys with VMware vRealize Log Insight. However, you may choose to
use your own third-party syslog server or servers.
The password, in either case, is the one you specified for vCenter Server. See
the VMware vRealize Log Insight documentation.
Hardware Replacement
Appendix A: EVO:RAIL Network Configuration Table

Record your customer values in the last column.

Row | Category | Description | Example
1 | EVO:RAIL Appliance | Management VLAN ID (optionally modify). Set a management VLAN on ESXi before you configure EVO:RAIL; otherwise, management traffic will be untagged on the switch's Native VLAN. | Native VLAN
2 | EVO:RAIL Appliance | EVO:RAIL initial IP address (optionally modify). If you cannot reach the default EVO:RAIL initial IP address (192.168.10.200/24), set an alternate IP address. | 192.168.10.200
3 | System / Global settings | Time zone | UTC
4 | System / Global settings | NTP server(s) |
5 | System / Global settings | DNS server(s) |
6 | System / Active Directory (optional) | Domain |
7 | System / Active Directory (optional) | Username |
8 | System / Active Directory (optional) | Password |
9 | System / HTTP Proxy (optional) | IP address |
10 | System / HTTP Proxy (optional) | Port |
11 | System / HTTP Proxy (optional) | Username |
12 | System / HTTP Proxy (optional) | Password |
13 | Management / Hostnames | ESXi hostname prefix | host
14 | Management / Hostnames | Separator | None
15 | Management / Hostnames | Iterator | 0X
16 | Management / Hostnames | Top-level domain | localdomain.local
17 | Management / Hostnames | vCenter Server hostname | vcenter
18 | Management / Hostnames | EVO:RAIL hostname | evorail
19 | Management / Networking | ESXi starting address for IP pool | 192.168.10.1
20 | Management / Networking | ESXi ending address for IP pool | 192.168.10.4
21 | Management / Networking | vCenter Server IP address | 192.168.10.201
22 | Management / Networking | EVO:RAIL IP address | 192.168.10.200
23 | Management / Networking | Subnet mask | 255.255.255.0
24 | Management / Networking | Gateway | 192.168.10.254
25 | Management / Passwords | ESXi "root" password |
26 | Management / Passwords | vCenter Server & EVO:RAIL "[email protected]" password |
27 | vSphere vMotion | Starting IP address | 192.168.20.1
28 | vSphere vMotion | Ending IP address | 192.168.20.4
29 | vSphere vMotion | Subnet mask | 255.255.255.0
30 | vSphere vMotion | VLAN ID | 20
31 | Virtual SAN | Starting IP address | 192.168.30.1
32 | Virtual SAN | Ending IP address | 192.168.30.4
33 | Virtual SAN | Subnet mask | 255.255.255.0
34 | Virtual SAN | VLAN ID | 30
35 | VM Networks | VM Network name and VLAN ID | Sales / 110
36 | VM Networks | VM Network name and VLAN ID | Marketing / 120
37-38 | VM Networks | Additional VM Network names and VLAN IDs (unlimited number in Release 2.0) |
39 | Solutions / Logging | vRealize Log Insight hostname | loginsight
40 | Solutions / Logging | vRealize Log Insight IP address | 192.168.10.202
41 | Solutions / Logging | Hostname(s) of your existing third-party syslog server(s), if not using Log Insight |
42-43 | Solutions / HDS | IP addresses for the two HDS Solution VMs |
Important Notes:
• The JSON file format may change throughout EVO:RAIL releases. Please
get the sample JSON file that corresponds to the software release with
which your appliance was built at Hitachi Data Systems. Then edit the
sample file for your configuration.
• EVO:RAIL expects the data in the configuration file in a specific format.
Any change to the JSON format results in unexpected behavior and/or crashes.
• There is no built-in capability to generate and export an updated copy of an
EVO:RAIL JSON configuration file from the user interface.
The following is a sample JSON file based on EVO:RAIL Release 2.0. Data is
mapped to the rows in the EVO:RAIL Network Configuration Table.
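The sample file itself must be obtained from Hitachi Data Systems for your
release. Purely as an illustration of the idea, an annotated fragment mapping
table rows to JSON might look like the following; every key name here is
hypothetical, and the comments are reading aids only (real JSON does not allow
comments):

    {
      "network": {
        "esxi_ip_pool": { "start": "192.168.10.1", "end": "192.168.10.4" },        // Rows 19-20
        "netmask": "255.255.255.0",                                                // Row 23
        "gateway": "192.168.10.254"                                                // Row 24
      },
      "vmotion": { "start": "192.168.20.1", "end": "192.168.20.4", "vlan": 20 },   // Rows 27-30
      "vsan":    { "start": "192.168.30.1", "end": "192.168.30.4", "vlan": 30 },   // Rows 31-34
      "vendor":  { "vm_ips": ["<hds-vm-ip-1>", "<hds-vm-ip-2>"] }                  // Rows 42-43
    }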
In the “vendor” section, enter the two IP addresses required for the Hitachi
Solution VMs (Rows 42-43).
Large 60 GB 4 1 8 GB
Large 60 GB 2 1 8 GB
Large 60 GB 2 1 8 GB
Large 60 GB 2 1 8 GB
Large 60 GB 2 1 4 GB
ethernetn.filtern.name
floppyX.present
parallelX.present
pciPassthru*.present
sched.mem.pshare.salt
scsiX:Y.mode
serialX.present
^ethernet[0-9]*.filter[0-9]*.name = null
isolation.bios.bbs.disable = true
isolation.device.connectable.disable = true
isolation.device.edit.disable = true
isolation.ghi.host.shellAction.disable = true
isolation.monitor.control.disable = true
isolation.tools.autoInstall.disable = true
isolation.tools.copy.disable = true
isolation.tools.diskShrink.disable = true
isolation.tools.diskWiper.disable = true
isolation.tools.dispTopoRequest.disable = true
isolation.tools.dnd.disable = true
isolation.tools.getCreds.disable = true
isolation.tools.ghi.autologon.disable = true
isolation.tools.ghi.launchmenu.change = true
isolation.tools.ghi.protocolhandler.info.disable = true
isolation.tools.ghi.trayicon.disable = true
isolation.tools.guestDnDVersionSet.disable = true
isolation.tools.hgfsServerSet.disable = true
isolation.tools.memSchedFakeSampleStats.disable = true
isolation.tools.paste.disable = true
isolation.tools.setGUIOptions.enable = false
isolation.tools.trashFolderState.disable = true
isolation.tools.unity.disable = true
isolation.tools.unity.push.update.disable = true
isolation.tools.unity.taskbar.disable = true
isolation.tools.unity.windowContents.disable = true
isolation.tools.unityActive.disable = true
isolation.tools.unityInterlockOperation.disable = true
isolation.tools.vixMessage.disable = true
isolation.tools.vmxDnDVersionGet.disable = true
logging = false
RemoteDisplay.vnc.enabled = false
Security.AccountUnlockTime = 900
tools.guestlib.enableHostInfo = false
tools.setInfo.sizeLimit = 1048576
Risk Profile 2
These guidelines should be implemented for more sensitive environments. For
example, those handling more sensitive data, those subject to stricter
compliance rules, etc.
^ethernet[0-9]*.filter[0-9]*.name = null
Risk Profile 3
These guidelines should be implemented in all environments not subject to
stricter security.
^ethernet[0-9]*.filter[0-9]*.name = null
isolation.device.connectable.disable = true
isolation.device.edit.disable = true
isolation.tools.copy.disable = true
isolation.tools.diskShrink.disable = true
isolation.tools.diskWiper.disable = true
isolation.tools.dnd.disable = true
isolation.tools.paste.disable = true
isolation.tools.setGUIOptions.enable = false
log.keepOld = 10
log.rotateSize = 100000
RemoteDisplay.vnc.enabled = false
Security.AccountUnlockTime = 900
tools.setInfo.sizeLimit = 1048576
3. Click the VMware vSphere Web Client icon in the top-right corner, and
log in with administrator privileges. You will be in vCenter Home.
4. Disable vSphere DRS and vSphere High Availability with the following steps:
a. Return to vCenter Home.
b. From the Inventory Lists, select Clusters.
c. Select MARVIN-Virtual-SAN-Cluster-<id> in either the left or center pane.
d. Select Manage, then Settings in the center pane.
e. Under Services, select vSphere DRS.
f. If the center pane says vSphere DRS is Turned ON, click Edit.
g. Uncheck Turn ON vSphere DRS.
h. Click OK.
i. Under Services, select vSphere HA.
j. If the center pane says vSphere HA is Turned ON, click Edit.
k. Uncheck Turn ON vSphere HA.
l. Click OK.
5. Migrate all the system VMs (VMware vCenter Server Appliance, vRealize
Log Insight, any EVO:RAIL partner VMs, and the EVO:RAIL Orchestration
Appliance) to the first ESXi host with the following steps:
a. From Inventory Trees, click Virtual Machines.
b. Right-click the virtual machine and select Migrate.
i. Select Migration Type: Change compute resource only
ii. Select a compute resource: the first ESXi host.
iii. Select Network: vCenter Server Network.
iv. Select vMotion Priority: Schedule vMotion with high priority.
Figure 12
Note: All nodes must be turned on within a short time of each other or Virtual
SAN will think that the datastore is not working properly. It’s not a cause for
concern, but we don’t recommend taking a long break in the middle of
powering on the nodes in a cluster.
Note: It may take up to 15 minutes for all services to be fully restored and for
EVO:RAIL Management to be accessible.
3. Click the VMware vSphere Web Client icon , and log in using
administrator privileges.
4. Enable Maintenance Mode on the four ESXi hosts identified in Step 2 with
the following steps:
a. Click vCenter in the left pane and you will be in vCenter Home.
b. From Inventory Trees, select Hosts.
c. For each of the four ESXi hosts, right-click the host and select
Maintenance Mode > Enter Maintenance Mode.
d. Select the check box for Move powered-off and suspended virtual
machines to other hosts in the cluster.
Note: Complete this procedure on one ESXi host at a time. Wait until the
ESXi host has entered maintenance mode before proceeding to the next. Full
Data Migration can take a long time. See Place a Member of Virtual SAN
Cluster in Maintenance Mode and Virtual SAN – Maintenance Mode Monitoring
for more details.
5. Power off the four ESXi hosts with the following steps:
a. From Inventory Trees, select Hosts.
b. Right-click the ESXi host and select Power > Shut Down.
c. Enter the reason for the shutdown and click OK.
d. Repeat for each ESXi host.
EVO:RAIL will detect that the ESXi hosts are available again. vMotion and
Virtual SAN will distribute compute and storage workloads.
You do not need to follow these instructions if you can reach the default
EVO:RAIL initial IP address and merely wish to change the post-configuration
IP address to something else. Instead, use the EVO:RAIL user interface to
enter the new IP address.
It will be easiest to select the IP settings that you want to use permanently for
your EVO:RAIL cluster. Then all you need to do is configure your
workstation/laptop once. Otherwise, just follow the notes in Step 8 of the
Initial Configuration user interface.
3. Open the console and log in as root with the default password Passw0rd!
4. Stop vmware-marvin:
/etc/init.d/vmware-marvin stop
5. After making your network changes, restart the services:
/etc/init.d/vmware-marvin restart
/etc/init.d/vmware-loudmouth restart
Corporate Headquarters
2845 Lafayette Street
Santa Clara, California 95050-2639
U.S.A.
www.hds.com
Asia Pacific
+852 3189 7900
[email protected]
MK-92UCP077-02